% Source: https://arxiv.org/abs/2001.03082
% Title: Obtaining higher-order Galerkin accuracy when the boundary is polygonally approximated
\begin{abstract}
We study two techniques for correcting the geometrical error associated with domain approximation by a polygon. The first was introduced some time ago \cite{bramble1972projection} and leads to a nonsymmetric formulation for Poisson's equation. We introduce a new technique that yields a symmetric formulation and has similar performance. We compare both methods on a simple test problem.
\end{abstract}
\section{Introduction}
When a Dirichlet problem on a smooth domain is approximated by a polygon,
an error is incurred that renders quadratic (and higher-order)
approximation suboptimal \cite{lrsBIBaa,lrsBIBab,lrsBIBae}.
However, this can be corrected by a modification of the variational form
\cite{bramble1972projection}.
Here we review this approach and suggest a new one.
Let $\Omega$ be a smooth, bounded, two-dimensional domain.
Consider the Poisson equation with
Dirichlet boundary conditions:
\begin{equation}\label{eqn:simplpder}
-\Delta u=f\hbox{ in }\Omega,\quad u=g\hbox{ on }\partial\Omega.
\end{equation}
We assume that $f$ and $g$ are sufficiently smooth that $u$ can be extended to be
in $H^{k+1}(\widehat\Omega)$, where $\widehat\Omega$ contains a neighborhood
of the closure of $\Omega$.
One way to discretize \eqref{eqn:simplpder} is to approximate the domain
$\Omega$ by polygons $\Omega_h$, where the edge lengths of $\partial\Omega_h$
are of order $h$ in size.
Then conventional finite elements can be employed, with the Dirichlet boundary
conditions being approximated by the assumption that $u_h=\hat g$ on
$\partial\Omega_h$ \cite{lrsBIBgd}, with $\hat g$ appropriately defined.
For example, let us suppose for the moment that $g\equiv 0$ and we take
$\hat g\equiv 0$ as well.
In particular, we assume that $\Omega_h$ is triangulated with a quasi-uniform
mesh $\mathcal{T}_h$ of maximum triangle size $h$, and the boundary vertices of $\Omega_h$ are in $\partial \Omega$. We define $\mathring{W}_h^k:= H_0^1(\Omega_h) \cap W_h^k$ where
\begin{equation*}
W_h^k=\{ v \in C(\Omega_h): v|_T \in \mathcal{P}_k(T), \forall T \in \mathcal{T}_h\}.
\end{equation*}
Then the standard finite element approximation finds $u_h\in \mathring{W}_h^k$ satisfying
\begin{equation}\label{eqn:polyapprocirkl}
a_h(u_h,v)=(f,v)_{L^2(\Omega_h)},\quad \forall v\in \mathring{W}_h^k,
\end{equation}
where $a_h(u,v):=\int_{\Omega_h} \nabla u\cdot\nabla v\,dx$. Here we assume that $f$ is extended smoothly outside of $\Omega$.
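As a concrete illustration (a minimal sketch of ours, not part of the original formulation), \eqref{eqn:polyapprocirkl} can be realized in the legacy FEniCS interface ({\tt dolfin} with {\tt mshr}); the resolution parameter $M$ and the data anticipate the disc test problem of Section \ref{circle}.
\begin{verbatim}
# Minimal sketch (ours) of the standard polygonal approximation, k = 2,
# using legacy FEniCS (dolfin) + mshr.  Data anticipate the disc test
# problem below: R = 1, u = 1 - (x^2+y^2)^3, f = 36 (x^2+y^2)^2.
from dolfin import *
from mshr import Circle, generate_mesh

M = 16                                   # resolution parameter ("M" in the tables)
mesh = generate_mesh(Circle(Point(0.0, 0.0), 1.0, 5 * M), M)

k = 2
V = FunctionSpace(mesh, "Lagrange", k)
bc = DirichletBC(V, Constant(0.0), "on_boundary")   # u_h = 0 on the polygon

u, v = TrialFunction(V), TestFunction(V)
f = Expression("36.0*pow(x[0]*x[0] + x[1]*x[1], 2)", degree=4)
a = inner(grad(u), grad(v)) * dx         # a_h(u, v)
L = f * v * dx                           # (f, v)_{L^2(Omega_h)}

uh = Function(V)
solve(a == L, uh, bc)

# The tables report u_h - u_I; comparing with u itself is close enough
# to observe the suboptimal O(h^{3/2}) rate in H^1 for k = 2.
u_exact = Expression("1.0 - pow(x[0]*x[0] + x[1]*x[1], 3)", degree=8)
print("H1 error:", errornorm(u_exact, uh, "H1"))
\end{verbatim}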
This approach for $k=1$ (piecewise linear approximation) leads to the error
estimate
$$
\norm{u-u_h}_{H^1(\Omega_h)}\leq C h \norm{u}_{H^2(\hat{\Omega})}.
$$
However, when this approach is applied with piecewise quadratic
polynomials ($k=2$), the best possible error estimate is
\begin{equation}\label{eqn:bestpossib}
\norm{u-u_h}_{H^1(\Omega_h)}\leq C h^{3/2} ,
\end{equation}
which is less than optimal order by a factor of $\sqrt{h}$.
The reason of course is that we have made only a piecewise linear approximation
of $\partial\Omega$.
Table \ref{tabl:jpcircle} summarizes some computational experiments for the
test problem in Section \ref{circle}.
We see a significant improvement for quadratics over linears, but there
is almost no improvement with cubics.
Moreover, we will see that a significant improvement using quadratics can
be obtained using simple approaches that modify the variational form.
There have been many techniques introduced to circumvent the loss of accuracy
with quadratics (and higher-order piecewise polynomials)
\cite{lrsBIBae,ref:stenbergNitscheMethod}.
However, all of them require some modification of the quadrature for the elements
at the boundary.
Here we review an approach by Bramble et al. \cite{bramble1972projection} that solves directly
on $\Omega_h$, but with a modified variational form based on the method of
Nitsche \cite{ref:stenbergNitscheMethod}.
The method \cite{bramble1972projection} has been modified and applied in
many ways \cite{ref:CutFEMbasedonBDT}. However, the method in \cite{bramble1972projection} leads to a nonsymmetric bilinear form. Given this shortcoming, we define a new method that is symmetric, solves the problem on $\Omega_h$, and has similar convergence behavior. As we will see in the next section, one main idea in \cite{bramble1972projection} is to use a Taylor expansion of the solution near the boundary to define appropriate boundary conditions on $\partial \Omega_h$. We should mention that this idea has been used recently (see for example \cite{ Cockburn2012, main}).
\begin{table}
\begin{center}
\begin{tabular}{|c|c||c|c||c|c||c|c|}\hline
$k$ &$M$& L2 err&rate& H1 err&rate&seg&hmax\\
\hline
1& 2&1.84e+00& NA&6.25e+00 & NA & 10& 1.05e+00 \\
1& 4&2.93e-01&2.65 & 1.89e+00 & 1.73 & 20& 4.94e-01 \\
1& 8&9.55e-02& 1.62 &1.06e+00 & 0.83 & 40& 2.61e-01 \\
1& 16&2.47e-02& 1.95 &5.45e-01 & 0.96& 80& 1.35e-01 \\
\hline
2& 2&4.18e-01& NA & 1.41e+00 & NA & 10& 1.05e+00 \\
2& 4&9.44e-02& 2.15 &4.26e-01 & 1.73 &20& 4.94e-01 \\
2& 8&2.30e-02& 2.04 &1.59e-01 & 1.42 & 40& 2.61e-01 \\
2& 16&5.62e-03& 2.03 &5.45e-02 & 1.54 & 80& 1.35e-01 \\
\hline
3& 2&3.17e-01& NA &8.25e-01 & NA & 10& 1.05e+00 \\
3& 4&8.81e-02& 1.85 &2.94e-01 & 1.49 &20& 4.94e-01 \\
3& 8&2.22e-02& 1.99 &1.07e-01 & 1.46& 40& 2.61e-01 \\
3& 16&5.53e-03& 2.01 &3.82e-02 &1.49& 80& 1.35e-01 \\
\hline
\end{tabular}
\end{center}
\caption{Errors $u_h-u_I$ in $L^2(\Omega_h)$ and $H^1(\Omega_h)$,
as a function of the maximum mesh size (hmax) for the polygonal
approximation \eqref{eqn:polyapprocirkl} for test problem
in Section \ref{circle} using various polynomial degrees $k$.
Key: ``$M$'' is the input parameter to the {\tt mshr} function {\tt circle} used
to generate the mesh, and ``seg'' is the number of boundary edges.
The approximate solutions were generated using \eqref{eqn:polyapprocirkl}.}
\label{tabl:jpcircle}
\end{table}
\section{The Bramble-Dupont-Thom\'ee approach}
\label{sec:BDTmeth}
\begin{figure}
\centerline{(a)\includegraphics[width=2.5in]{deltadef.pdf}
\qquad (b) \includegraphics[width=2.5in]{distbdry.pdf}}
\caption{Definitions of (a) $\delta$ and (b) $d$.}
\label{fig:bdryedge}
\end{figure}
The method \cite{bramble1972projection} of Bramble-Dupont-Thom{\' e}e (BDT)
achieves high-order accuracy by modifying Nitsche's
method \cite{ref:stenbergNitscheMethod} applied on $\Omega_h$. We assume that $\Omega_h \subset \Omega$ and we do not necessarily assume that the boundary vertices of $\Omega_h$ belong to $\partial \Omega$. The bilinear form used in \cite{bramble1972projection} is
\begin{equation}\label{eqn:toddformr}
N_h(u,v)=a_h(u,v)-\int_{\partial\Omega_h} \derdir{u}{n} v \,ds
-\int_{\partial\Omega_h}
\Big(u+\delta\derdir{u}{n}\Big)\Big(\derdir{v}{n} -\gamma h^{-1} v\Big) \,ds.
\end{equation}
Here, $n$ denotes the outward-directed normal to $\partial\Omega_h$ and
$$
\delta(x)=\min\set{s>0}{x+sn\in\partial\Omega}.
$$
Contrast the definition of $\delta$ with the closely related function $d$, defined by
$$
d(x)=\min\set{|x-y|}{y\in\partial\Omega}.
$$
For simplicity, we assume that $g=0$. Then the BDT method finds $u_h \in W_h^k$ such that
\begin{equation*}
N_h(u_h,v)=\int_{\Omega_h} fv\,dx \qquad \text{ for all } v \in W^k_h.
\end{equation*}
If $\delta$ were 0, this would be Nitsche's method on $\Omega_h$.
Corrections of arbitrary order, involving terms
$\delta^\ell\, {\frac{\partial^\ell u}{\partial n^\ell}}$ for $\ell>1$
are studied in \cite{bramble1972projection}, but for simplicity we restrict
attention to the first-order correction to Nitsche's method given in \eqref{eqn:toddformr}. The error estimates obtained in \cite{bramble1972projection} are as follows:
$$
\tbnorm{u-u_h}_1
\leq Ch^k\norm{u}_{H^{k+1}(\Omega)}+ Ch^{7/2}\norm{u}_{W^{2}_\infty(\Omega)},
$$
where
$$
\tbnorm{v}_1:=\Big(a_h(v,v) +h^{-1}\int_{\partial\Omega_h}v^2\,ds
+h\int_{\partial\Omega_h}\Big(\derdir{v}{n}\Big)^2\,ds\Big)^{1/2}.
$$
Thus using the variational form \eqref{eqn:toddformr} leads to an approximation
that is optimal-order with quadratics and cubics and is only suboptimal for
quartics by a factor of $\sqrt{h}$.
\begin{figure}
\centerline{(a)\includegraphics[width=3.0in]{plotshortbdtL2.pdf}
(b)\includegraphics[width=3.0in]{plotshortbdtH1.pdf}}
\caption{Errors $u_h-u_I$ in (a) $L^2(\Omega_h)$ and (b) $H^1(\Omega_h)$ as
a function of the maximum mesh size for the BDT method with $\gamma=100$.
The asterisks indicate data for (a) $k=4$ and (b) $k=5$.}
\label{fig:plotshortbdt}
\end{figure}
\subsection{An example of a circle}\label{circle}
We now consider a numerical example in which $\Omega$ is a disc of radius $R$ centered at the origin,
so that $d(x)=R-|x|$.
However, it is more difficult to evaluate $\delta(x)$.
We have $x+\delta(x)n\in\partial\Omega$ for $x\in\partial\Omega_h$,
where $n$ denotes the outward normal to $\Omega_h$.
We can write $x=(x\cdot n)\,n+(x\cdot t)\,t$, and
$(x\cdot t)^2=|x|^2-(x\cdot n)^2$.
Since $|x+\delta(x)n|=R$, we have
$$
R^2=(x\cdot t)^2+(x \cdot n+\delta(x))^2
=|x|^2-(x \cdot n)^2+(x\cdot n+\delta(x))^2.
$$
Then
$$
\delta(x)=\pm\sqrt{R^2-|x|^2+(x\cdot n)^2}-x\cdot n \, .
$$
Note that for $x\in\partial\Omega_h$, $|x|\leq R$ and $x\cdot n>0$.
Since $\delta(x)\geq 0$, we must pick the plus sign, so
$$
\delta(x)=\sqrt{R^2-|x|^2+(x\cdot n)^2}-x\cdot n \, .
$$
It is not hard to see that $d-\delta=\mathcal{O}(h^4)$ in this case.
This problem is simple to implement using the FEniCS Project code
{\tt dolfin} \cite{fenicsbook}.
We take $R=1$, $u(x,y)=1-(x^2+y^2)^3$, and $f=36(x^2+y^2)^2$
in the computational experiments described subsequently.
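As an illustration, the following sketch (ours, not part of the original presentation) shows one way the BDT form \eqref{eqn:toddformr} could be assembled in {\tt dolfin} for this example, using the formula for $\delta$ derived above; taking the local cell diameter for $h$ in the $\gamma h^{-1}$ term is our assumption.
\begin{verbatim}
# Hedged sketch (ours) of the BDT method on the disc, R = 1, gamma = 100,
# with delta(x) = sqrt(R^2 - |x|^2 + (x.n)^2) - x.n on the boundary.
# Using the local cell diameter for h in gamma/h is our assumption.
from dolfin import *
from mshr import Circle, generate_mesh

R, M, k, gamma = 1.0, 16, 2, 100.0
mesh = generate_mesh(Circle(Point(0.0, 0.0), R, 5 * M), M)

V = FunctionSpace(mesh, "Lagrange", k)   # full space W_h^k, no boundary constraint
u, v = TrialFunction(V), TestFunction(V)
x = SpatialCoordinate(mesh)
n = FacetNormal(mesh)
h = CellDiameter(mesh)

xn = dot(x, n)
delta = sqrt(R**2 - dot(x, x) + xn**2) - xn          # formula derived above
dudn, dvdn = dot(grad(u), n), dot(grad(v), n)

N = (inner(grad(u), grad(v)) * dx
     - dudn * v * ds
     - (u + delta * dudn) * (dvdn - gamma / h * v) * ds)

f = Expression("36.0*pow(x[0]*x[0] + x[1]*x[1], 2)", degree=4)
uh = Function(V)
solve(N == f * v * dx, uh)               # nonsymmetric system; default LU solver
\end{verbatim}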
Computational results for this example are given in Table \ref{tabl:rateshortbdt}
where we see optimal order approximation for $k\leq 3$, improvement for
$k=4$ over $k=3$ (suboptimal by a factor $h^{-1/2}$), and no improvement
for quintics. These errors are depicted in Figure \ref{fig:plotshortbdt}.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c||c|c||c|c|}\hline
$k$&$M$ &hmax& L2 error&rate& H1 error&rate\\
\hline
1 & 8 & 0.261 & 0.0947 & 1.61 & 1.06 & 0.82 \\
1 & 16 & 0.135 & 0.0245 & 1.95 & 0.544 & 0.96 \\
1 & 32 & 0.0688 & 0.00639 & 1.94 & 0.277 & 0.97 \\
1 & 64 & 0.0353 & 0.00158 & 2.02 & 0.137 & 1.02 \\
\hline
2 & 8 & 0.261 & 2.81e-03 & 2.61 & 0.103 & 1.57 \\
2 & 16 & 0.135 & 3.70e-04& 2.93 & 0.0277 & 1.89 \\
2 & 32 & 0.0688 & 4.77e-05 & 2.96 & 0.00717 & 1.95 \\
2 & 64 & 0.0353 & 5.91e-06 & 3.01 & 0.00179 & 2.00 \\
\hline
3 & 8 & 0.261 & 1.56e-04 & 3.92 & 5.31e-03 & 2.54 \\
3 & 16 & 0.135 & 9.44e-06 & 4.05 & 7.06e-04& 2.91 \\
3 & 32 & 0.0688 & 5.81e-07 & 4.02 & 9.23e-05 & 2.94 \\
3 & 64 & 0.0353 & 3.57e-08 & 4.02 & 1.15e-05 & 3.00 \\
\hline
4 & 8 & 0.261 & 1.49e-04 & 3.96 & 7.41e-04 & 3.42 \\
4 & 16 & 0.135 & 9.29e-06 & 4.00 & 6.63e-05 & 3.48 \\
4 & 32 & 0.0688 & 5.80e-07 & 4.00 & 5.90e-06 & 3.49 \\
4 & 64 & 0.0353 & 3.63e-08 & 4.00 & 5.22e-07 & 3.50 \\
\hline
5 & 8 & 0.261 & 1.47e-04 & 3.96 & 7.10e-04& 3.41 \\
5 & 16 & 0.135 & 9.27e-06 & 3.99 & 6.44e-05 & 3.46 \\
5 & 32 & 0.0688 & 5.80e-07 & 4.00 &5.77e-06 & 3.48 \\
5 & 64 & 0.0353 & 3.62e-08 & 4.00 & 5.12e-07 & 3.49 \\
\hline
\end{tabular}
\end{center}
\caption{Errors $u_h-u_I$ in $L^2(\Omega_h)$ and $H^1(\Omega_h)$ as a function of
mesh size (hmax) for the BDT approximation in Section \ref{sec:BDTmeth},
with $\gamma=100$, for various polynomial degrees $k$.
Key: $M$ is the value of the {\tt meshsize} input parameter to the {\tt mshr}
function {\tt circle} used to generate the mesh.
The number of boundary edges was set to $5M$, and
hmax is the maximum mesh size.}
\label{tabl:rateshortbdt}
\end{table}
\section{A new method based on a Robin-type approach}
One issue with the BDT method is that the resulting linear system is not symmetric,
although it is possible to symmetrize the method as we discuss in Section \ref{sec:hoasm}.
Here we develop a technique that leads to a symmetric system.
Moreover, this method does not require the parameter(s) from Nitsche's method.
For Nitsche's method to succeed, $\gamma$ must be chosen appropriately
\cite{lrsBIBih}.
We first separate $\partial \Omega$ into its piecewise linear part and its curvilinear part. We assume that $\partial \Omega=\Gamma^0 \cup S_1 \cup \dots \cup S_\ell$, where $\Gamma^0$ is piecewise linear and the $S_i$ are $C^2$ and nowhere linear. We denote the endpoints of $S_i$ by $y_{i-1}, y_{i}$.
For the method in this section we assume that the vertices of $\Omega_h$ belong to $\partial \Omega$, and hence $\Omega_h$ need not be a subdomain of $\Omega$; we must therefore redefine $\delta$. We assume that for every $x \in \partial \Omega_h \backslash \Gamma^0$ there is a unique number $\delta(x)$, smallest in absolute value, such that
\begin{equation*}
x+\delta(x) n(x) \in \partial \Omega.
\end{equation*}
We assume that the approximate domain boundary $\partial\Omega_h$
can be decomposed into three parts, as follows. Let $\mathcal{E}_h$ be the edges of $\partial \Omega_h$.
\begin{equation}\label{eqn:domaindec}
\Gamma^{\pm}= \bigcup\set{e \in \mathcal{E}_h }{\pm\delta|_{e^o}>0},
\end{equation}
where $e^o$ denotes the interior of $e$. Let $\Gamma=\Gamma^{+}\cup\Gamma^{-}$. We assume the following.
\begin{assumption}\label{assum2}
We assume that all the vertices of $\partial \Omega_h$ belong to $\partial \Omega$ and that each $y_i$ (for $0 \le i \le \ell$) is a vertex of $\partial \Omega_h$. Finally, we assume that
\begin{equation*}
\partial \Omega_h=\Gamma^0 \cup \Gamma.
\end{equation*}
\end{assumption}
Our method is based on a Robin-type boundary condition on $\Gamma$. In fact, our method will be based on the closely related problem:
\begin{alignat*}{2}
-\Delta w=&f, \quad && \text{ on } \Omega, \\
w=&0, \quad && \text{ on } \Gamma^0, \\
w+\delta \frac{\partial w}{\partial n}=&\hat{g}, \quad && \text{ on } \Gamma.
\end{alignat*}
Here we define $\hat{g}(x)= g(x+\delta(x) n(x))$ for $x \in \Gamma$ that is not a vertex of $\partial \Omega_h$. The key here is that, using that $u=g$ on $\partial \Omega$ (so that $u(x+\delta(x)n(x))=\hat{g}(x)$) and Taylor expanding, for $x \in \Gamma$ ($x$ not a vertex of $\partial \Omega_h$) we have
\begin{equation}\label{taylor}
u(x)+\delta \frac{\partial u}{\partial n}(x)=\hat{g}(x)-\frac{\delta^2}{2} \partial_{nn} u(z),
\end{equation}
where $z$ lies in the line segment with end points $x$ and $x+\delta(x) n(x)$.
Now we can write the method. We start by defining the finite element space we will use:
\begin{equation*}
V_h^k=\{v \in W_h^k: v=0 \text{ on } \Gamma^0, \ v(x)=0 \text{ for all vertices } x \text{ of } \partial \Omega_h\}.
\end{equation*}
Also define
\begin{equation*}
V_h^k(g)=\{v \in W_h^k: v=g_I \text{ on } \Gamma^0, \ v(x)=g_I(x) \text{ for all vertices } x \text{ of } \partial \Omega_h\},
\end{equation*}
where $g_I \in C(\partial \Omega_h)$ is a suitable approximation of $g$ and is a piecewise polynomial of degree at most $k$ on $\partial \Omega_h$.
The bilinear form is given by
\begin{equation*}
b_h(u,v):=a_h(u,v)+c_h(u,v),
\end{equation*}
where
\begin{equation*}
c_h(u,v)=\int_{\Gamma}\delta^{-1}{u}v \,ds.
\end{equation*}
Then the method solves:
Find $u_h \in V_h^k(g)$ such that
\begin{equation}\label{fem}
b_h(u_h, v)= \int_{\Omega_h} F v\,dx+ \int_{\Gamma}\delta^{-1}{\hat{g}}v \,ds \quad \text{ for all } v \in V_h^k.
\end{equation}
Here
$$
F=\begin{cases} f &\hbox{on}\;\Omega \cap \Omega_h \\
I^1f &\hbox{on}\;\Omega_h \backslash \Omega,\end{cases}
$$
where $I^1$ is the linear interpolant onto $W_h^1$. Note that we can define $I^1 f$ only knowing $f$ on $\Omega$. Alternatively, if we have an analytic representation of $f$ we can define $F$ as a smooth extension of $f$ outside of $\Omega$.
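To see where \eqref{fem} comes from, it may help to record the formal computation (ours; we ignore the mismatch between $\Omega$ and $\Omega_h$). Multiplying $-\Delta w = f$ by $v \in V_h^k$, integrating by parts, and inserting the Robin condition $\derdir{w}{n} = \delta^{-1}(\hat{g}-w)$ on $\Gamma$ (recall that $v$ vanishes on $\Gamma^0$ and at the vertices), we obtain
\begin{equation*}
\int_{\Omega_h} f v \,dx
= a_h(w,v) - \int_{\Gamma} \derdir{w}{n}\, v \,ds
= a_h(w,v) + \int_{\Gamma} \delta^{-1} w v \,ds - \int_{\Gamma} \delta^{-1} \hat{g} v \,ds,
\end{equation*}
that is, $b_h(w,v) = \int_{\Omega_h} f v \,dx + \int_{\Gamma} \delta^{-1} \hat{g} v \,ds$, which is \eqref{fem} with $w$ replaced by $u_h$ and $f$ by $F$.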
\section{Error Analysis}
\label{sec:newproof}
\subsection{Stability Analysis}
Unfortunately, the bilinear form $b_h$ is not positive definite. However, we will be able to prove stability of the method. In order to do so, we need to decompose the space $V_h^k$ into its boundary and interior contributions. More precisely, we can write
\begin{equation*}
V_h^k=\mathring{W}_h^k \oplus \mathcal{B}_h^k,
\end{equation*}
where $\mathcal{B}_h^k =\{ v\in V_h^k: v(x)=0 \text{ for all interior Lagrange points } x \}$.
We will define a norm on $V_h^k$:
\begin{equation*}
\|v\|_a^2:=a_h(v,v)
\end{equation*}
and a semi-norm
\begin{equation*}
|v|_c^2:=\int_{\Gamma} \frac{v^2}{|\delta|} \, ds.
\end{equation*}
Note that $|\cdot |_c$ is in fact a norm on $\mathcal{B}_h^k$.
The following crucial lemma will allow us to prove stability.
\begin{lemma}
\label{lem:lemseven}
There exists a constant $c_1>0$ such that
\begin{equation}\label{lemma1}
\|v\|_{a} \le c_1 \sqrt{h} |v|_c \text{ for all } v\in \mathcal{B}_h^k.
\end{equation}
\end{lemma}
\begin{proof}
Let $\mathcal{E}_h^\Gamma$ be the collection of edges that are a subset of $\Gamma$, and let $\mathcal{T}_h^\Gamma$ be the set of triangles $T$ such that $T$ has an edge in $\mathcal{E}_h^\Gamma$. Then, for $v\in \mathcal{B}_h^k$, using inverse estimates we have
\begin{equation*}
\|v\|_{a}^2=\sum_{T \in \mathcal{T}_h^\Gamma} \|\nabla v\|_{L^2(T)}^2
\le \sum_{T \in \mathcal{T}_h^\Gamma} \frac{C}{h_T^2} \|v\|_{L^2(T)}^2
\le \sum_{e \in \mathcal{E}_h^\Gamma} \frac{C}{h_e} \|v\|_{L^2(e)}^2.
\end{equation*}
The proof is complete once we use that $\max_{x \in e} |\delta(x)| \le C h_e^2$ for $e \in \mathcal{E}_h^\Gamma$, which holds because the endpoints of each such $e$ lie on the $C^2$ parts of $\partial\Omega$.
\end{proof}
We note that $c_h(u,v)$ may not be well defined for all $u, v \in V_h^k$. Therefore, we make an assumption on $\delta$ that rules this out.
\begin{assumption}\label{assumption3}
We assume that $\delta$ is such that
\begin{equation}\label{aux321}
|c_h(u,v)| < \infty \qquad \forall u, v\in V_h^k.
\end{equation}
\end{assumption}
For example, if $\delta$ has a lower bound as follows, then \eqref{aux321} will hold.
Suppose that the end points of $e \in \mathcal{E}_h^\Gamma$ are $x_0$ and $x_1$.
Then we assume that there exists a constant $c>0$ and a $p<3$ such that
\begin{equation*}
|x-x_0|^p |x-x_1|^p \le c |\delta(x)| \quad \text{ for all } x\in e,
\end{equation*}
where $c$ is independent of $e \in \mathcal{E}_h^\Gamma$.
Under these conditions, Assumption \ref{assumption3} holds: since every $v \in V_h^k$ vanishes at $x_0$ and $x_1$, on $e$ we have $|uv| \le C h_e^{-4} \bigl(|x-x_0|\,|x-x_1|\bigr)^{2} \|u\|_{L^\infty(e)}\|v\|_{L^\infty(e)}$, so the integrand of $c_h(u,v)$ is bounded by a multiple of $\bigl(|x-x_0|\,|x-x_1|\bigr)^{2-p}$, which is integrable for $p<3$.
We can now prove the stability result.
\begin{theorem}\label{stability}
We assume that Assumption \ref{assum2} and Assumption \ref{assumption3} hold. Suppose that $G$ is a bounded linear functional on $V_h^k$ and suppose that $u_h \in V_h^k$ solves
\begin{equation*}
b_h(u_h, v)= G(v), \quad \text{ for all } v \in V_h^k.
\end{equation*}
Then, assuming $c_1 \sqrt{h} \le \frac{1}{2}$ we have
\begin{equation*}
\|u_h\|_a \le 2 \left(\sup_{v_h \in \mathring{W}_h^k} \frac{|G(v_h)|}{\|v_h\|_a}\right)+ \frac{11}{3} c_1 \sqrt{h} \left(\sup_{v_h \in \mathcal{B}_h^k} \frac{|G(v_h)|}{|v_h|_c}\right).
\end{equation*}
and
\begin{equation*}
|u_h|_c \le \frac{3}{2}\left(\sup_{v_h \in \mathring{W}_h^k} \frac{|G(v_h)|}{\|v_h\|_a}\right) + \frac{5}{3} \left(\sup_{v_h \in \mathcal{B}_h^k} \frac{|G(v_h)|}{|v_h|_c}\right).
\end{equation*}
\end{theorem}
\begin{proof}
We know we can write $u_h =w_h+ s_h$ where $w_h \in \mathring{W}_h^k$ and $s_h \in \mathcal{B}_h^k$. Define $\phi_h\in \mathcal{B}_h^k$ by
$$
\phi_h=\begin{cases} s_h &\hbox{on}\;\Gamma^{+}\\
-s_h &\hbox{on}\;\Gamma^{-}\\
0 &\hbox{on}\;\Gamma^0 .\end{cases}
$$
Note that $|\phi_h|_c=|s_h|_c$. Now we can estimate $s_h$.
\begin{alignat*}{1}
|s_h|_c^2=c_h(s_h, \phi_h)=b_h(u_h, \phi_h)-a_h(u_h, \phi_h)=G(\phi_h)-a_h(u_h, \phi_h).
\end{alignat*}
Hence, we have
\begin{alignat*}{1}
|s_h|_c^2 \le & \left(\sup_{v_h \in \mathcal{B}_h^k} \frac{|G(v_h)|}{|v_h|_c}\right) |\phi_h|_c+ \|u_h\|_a \|\phi_h\|_a \\
\le & \left(\sup_{v_h \in \mathcal{B}_h^k} \frac{|G(v_h)|}{|v_h|_c}\right) |s_h|_c
+ c_1 \sqrt{h} (\|w_h\|_a + c_1 \sqrt{h} |s_h|_c) |s_h|_c.
\end{alignat*}
Here we used \eqref{lemma1} twice. In particular, we used $\|u_h\|_a \le \|w_h\|_a+\|s_h\|_a \le \|w_h\|_a + c_1 \sqrt{h} |s_h|_c$. Assuming $h c_1^2 \le \frac{1}{4}$ we have
\begin{alignat*}{1}
\frac{3}{4}|s_h|_c^2 \le \left(\sup_{v_h \in \mathcal{B}_h^k} \frac{|G(v_h)|}{|v_h|_c}\right) |s_h|_c
+ \frac{1}{2} \|w_h\|_a |s_h|_c.
\end{alignat*}
Hence,
\begin{equation}\label{341}
|s_h|_c \le \frac{4}{3}\left(\sup_{v_h \in \mathcal{B}_h^k} \frac{|G(v_h)|}{|v_h|_c}\right)
+ \frac{2}{3} \|w_h\|_a.
\end{equation}
Next,
\begin{equation*}
\|w_h\|_a^2= a_h(w_h, w_h)=a_h(u_h, w_h)-a_h(s_h, w_h)=b_h(u_h,w_h)-a_h(s_h, w_h)=G(w_h)-a_h(s_h, w_h).
\end{equation*}
We therefore have
\begin{equation*}
\|w_h\|_a^2 \le \left(\sup_{v_h \in \mathring{W}_h^k} \frac{|G(v_h)|}{\|v_h\|_a}\right) \|w_h\|_a+ \|s_h\|_a \|w_h\|_a.
\end{equation*}
Hence, we obtain using \eqref{341}
\begin{alignat*}{1}
\|w_h\|_a \le & \left(\sup_{v_h \in \mathring{W}_h^k} \frac{|G(v_h)|}{\|v_h\|_a}\right) + \|s_h\|_a \\
& \le \left(\sup_{v_h \in \mathring{W}_h^k} \frac{|G(v_h)|}{\|v_h\|_a}\right) + c_1 \sqrt{h}\,|s_h|_c \\
& \le \left(\sup_{v_h \in \mathring{W}_h^k} \frac{|G(v_h)|}{\|v_h\|_a}\right) + \frac{4}{3} c_1 \sqrt{h} \left(\sup_{v_h \in \mathcal{B}_h^k} \frac{|G(v_h)|}{|v_h|_c}\right) + \frac{1}{3} \|w_h\|_a
\end{alignat*}
Thus we arrive at
\begin{equation*}
\|w_h\|_a \le \frac{3}{2} \left(\sup_{v_h \in \mathring{W}_h^k} \frac{|G(v_h)|}{\|v_h\|_a}\right)+ 2 c_1 \sqrt{h} \left(\sup_{v_h \in \mathcal{B}_h^k} \frac{|G(v_h)|}{|v_h|_c}\right).
\end{equation*}
From this and \eqref{341} we get
\begin{equation*}
|u_h|_c=|s_h|_c \le \frac{3}{2}\left(\sup_{v_h \in \mathring{W}_h^k} \frac{|G(v_h)|}{\|v_h\|_a}\right) + \frac{5}{3} \left(\sup_{v_h \in \mathcal{B}_h^k} \frac{|G(v_h)|}{|v_h|_c}\right).
\end{equation*}
Finally,
\begin{alignat*}{1}
\|u_h\|_a \le & \|w_h\|_a + \|s_h\|_a \le \|w_h\|_a + c_1 \sqrt{h}\,|s_h|_c \\
\le & 2 \left(\sup_{v_h \in \mathring{W}_h^k} \frac{|G(v_h)|}{\|v_h\|_a}\right)
+ \frac{11}{3} c_1 \sqrt{h} \left(\sup_{v_h \in \mathcal{B}_h^k} \frac{|G(v_h)|}{|v_h|_c}\right).
\end{alignat*}
\end{proof}
We can now prove error estimates after we make an assumption more stringent
than Assumption \ref{assumption3}.
\begin{assumption}\label{assumption1}
Suppose that the end points of $e \in \mathcal{E}_h^\Gamma$ are $x_0$ and $x_1$.
Then we assume that there exists a constant $\beta>0$ such that
\begin{equation*}
|x-x_0| |x-x_1| \le \beta |\delta(x)| \quad \text{ for all } x \in e,
\end{equation*}
where $\beta$ is independent of $e \in \mathcal{E}_h^\Gamma$.
\end{assumption}
Note that this assumption does not allow $\partial \Omega$ and $\partial \Omega_h$ to be tangent at the vertices of $\Gamma$.
Assumption \ref{assumption1} implies Assumption \ref{assumption3};
in particular, the example after Assumption \ref{assumption3} holds with $p=1$.
\begin{theorem}\label{errorestimates}
We assume Assumptions \ref{assum2} and \ref{assumption1} hold, that $u$ solves \eqref{eqn:simplpder}, and that $u \in W^{s,\infty}(\Omega)$ where $s =\max \{k+1, 4\}$. We assume that $g_I= u_I|_{\partial \Omega_h}$, where $u_I \in W_h^k$ is the Lagrange interpolant of $u$. Let $u_h \in V_h^k(g)$ solve \eqref{fem}. Then we have
\begin{alignat*}{1}
\|u-u_h\|_{a} \le & C h^{k}\|u\|_{H^{k+1}(\hat{\Omega})} + C h^{k+1/2} \|u\|_{W^{k+1,\infty}(\Gamma)} \\
&+C\left(h^4 \|u\|_{W^{4,\infty}(\hat{\Omega})}+ h^{7/2} \|u\|_{W^{2,\infty}(\hat{\Omega})}\right).
\end{alignat*}
and
\begin{alignat*}{1}
|u-u_h|_{c} \le & C h^{k}\|u\|_{H^{k+1}(\hat{\Omega})} + C h^{k} \|u\|_{W^{k+1,\infty}(\Gamma)} \\
&+C\left(h^4 \|u\|_{W^{4,\infty}(\hat{\Omega})}+ h^{3} \|u\|_{W^{2,\infty}(\hat{\Omega})}\right).
\end{alignat*}
\end{theorem}
\begin{proof}
We let $e_h=u_I-u_h \in V_h^k$. Then we see that
\begin{equation*}
b_h(e_h, v)=-G(v) \quad \text{ for all } v \in V_h^k,
\end{equation*}
where $G(v)=G_1(v)+ G_2(v)$, $G_1(v)=\int_{\Omega_h} F v \,dx + \int_{\Gamma}\delta^{-1}\hat{g}v\,ds - b_h(u, v)$ and $G_2(v)= b_h(u-u_I, v)$; the sign of $G$ is immaterial for applying Theorem \ref{stability}.
Note that using integration by parts we have
\begin{alignat*}{1}
G_1(v)=& \int_{\Omega_h} F v+ \int_{\Gamma}\delta^{-1}{\hat{g}}v \,ds-\int_{\Omega_h} (-\Delta u) v dx - \int_{\Gamma} \left(\frac{\partial u}{\partial n} + \frac{1}{\delta} (u-\hat{g})\right) v \\
=& \int_{\Omega_h \backslash \Omega} (I^1 (-\Delta u)-(-\Delta u) )v dx - \int_{\Gamma} \left(\frac{\partial u}{\partial n} + \frac{1}{\delta} (u-\hat{g})\right) v .
\end{alignat*}
First consider $v \in \mathring{W}_h^k$; then the boundary term vanishes and we have
\begin{alignat*}{1}
|G_1(v)| \le C h^2 \|u\|_{W^{4,\infty}(\hat{\Omega})} \|v\|_{L^1(\Omega_h \backslash \Omega)}.
\end{alignat*}
However, we have
\begin{alignat*}{1}
\|v\|_{L^1(\Omega_h \backslash \Omega)} \le& C h^2 \|v\|_{L^\infty(\Omega_h \backslash \Omega)} \\
\le & C h^3 \|\nabla v\|_{L^\infty(\Omega)} \\
\le &C \, h^2 \|\nabla v\|_{L^2(\Omega)}= C \, h^2 \| v\|_{a}.
\end{alignat*}
Therefore, we get
\begin{equation*}
\sup_{v \in \mathring{W}_h^k} \frac{|G_1(v)|}{\|v\|_a} \le C h^4 \|u\|_{W^{4,\infty}(\hat{\Omega})}.
\end{equation*}
Now consider $v \in \mathcal{B}_h^k$. Then
\begin{alignat*}{1}
|G_1(v)| \le & C h^4 \|u\|_{W^{4,\infty}(\hat{\Omega})} \|v\|_{L^\infty(\Gamma)}+\|\,|\delta|^{-1/2}( \delta \frac{\partial u}{\partial n}+ u-\hat{g})\|_{L^\infty(\Gamma)} |v|_{c} \\
\le & C h^{7/2}\|u\|_{W^{4,\infty}(\hat{\Omega})} \|v\|_{L^2(\Gamma)}+ C h^3 \|u\|_{W^{2,\infty}(\hat{\Omega})} |v|_{c} \\
\le & C h^{9/2}\|u\|_{W^{4,\infty}(\hat{\Omega})} |v|_c+ C h^3 \|u\|_{W^{2,\infty}(\hat{\Omega})} |v|_{c}.
\end{alignat*}
Here we used \eqref{taylor}.
Hence,
\begin{equation*}
\sqrt{h} (\sup_{v \in \mathcal{B}_h^k} \frac{|G_1(v)|}{|v|_c}) \le C \,( h^{7/2} \|u\|_{W^{2,\infty}(\hat{\Omega})}+ h^5 \|u\|_{W^{4,\infty}(\hat{\Omega})}).
\end{equation*}
Let us now consider $G_2$. If $v \in \mathring{W}_h^k$, then $c_h(u-u_I,v)=0$ and
\begin{equation*}
G_2(v)=a_h(u-u_I, v) \le \|u-u_I\|_a \|v\|_a.
\end{equation*}
Hence,
\begin{equation*}
\sup_{v \in \mathring{W}_h^k} \frac{|G_2(v)|}{\|v\|_a} \le C h^{k}\|u\|_{H^{k+1}(\hat{\Omega})}.
\end{equation*}
Now let $v \in \mathcal{B}_h^k$; we then have
\begin{equation*}
G_2(v) \le \|u-u_I\|_a \|v\|_a + |u-u_I|_c |v|_c \le c_1 \sqrt{h} \|u-u_I\|_a |v|_c + |u-u_I|_c |v|_c.
\end{equation*}
Let $e \in \mathcal{E}_h$, $e \subset \Gamma$, with end points $x_0$ and $x_1$. Since $u-u_I$ vanishes at $x_0$ and $x_1$, we have $|(u-u_I)(x)|^2 \le C |x-x_0||x-x_1| \|\partial_t (u-u_I)\|_{L^\infty(e)}^2$. Hence, using Assumption \ref{assumption1} we get
\begin{equation*}
\frac{(u-u_I)^2(x)}{|\delta(x)|} \le C \beta \|\partial_t (u-u_I)\|_{L^\infty(e)}^2.
\end{equation*}
Thus,
\begin{equation*}
\int_e \frac{(u-u_I)^2}{|\delta|} ds \le C \beta |e| \|\partial_t (u-u_I)\|_{L^\infty(e)}^2.
\end{equation*}
We then obtain the following estimate, after summing over all edges $e \subset \Gamma$,
\begin{equation*}
|u-u_I|_c^2 \le C \|\partial_t (u-u_I)\|_{L^\infty(\Gamma)}^2.
\end{equation*}
We get the following inequality after using approximation properties of the Lagrange interpolant:
\begin{equation*}
|u-u_I|_c \le C h^{k} \|u\|_{W^{k+1,\infty}(\Gamma)}.
\end{equation*}
Therefore, we have
\begin{equation*}
\sqrt{h} \sup_{v \in \mathcal{B}_h^k} \frac{|G_2(v)|}{|v|_c} \le C h^{k+1/2} (\|u\|_{W^{k+1,\infty}(\Gamma)}+ \|u\|_{H^{k+1}(\hat{\Omega})}).
\end{equation*}
Combining the above results we get
\begin{equation*}
\sup_{v \in \mathring{W}_h^k} \frac{|G(v_h)|}{\|v_h\|_a} \le C \left(h^{k}\|u\|_{H^{k+1}(\hat{\Omega})} + h^4 \|u\|_{W^{4,\infty}(\hat{\Omega})}\right).
\end{equation*}
\begin{alignat*}{1}
\sqrt{h} \sup_{v \in \mathcal{B}_h^k} \frac{|G(v)|}{|v|_c} \le & C \,\left( h^{7/2} \|u\|_{W^{2,\infty}(\hat{\Omega})}+ h^5 \|u\|_{W^{4,\infty}(\hat{\Omega})}\right) \\
&+ C h^{k+1/2} \left(\|u\|_{W^{k+1,\infty}(\Gamma)}+ \|u\|_{H^{k+1}(\hat{\Omega})}\right).
\end{alignat*}
The result now follows from Theorem \ref{stability}.
\end{proof}
\section{Implementation}
One feature of Nitsche's method that is preserved by BDT is that
one uses the full space $W^k_h$ of piecewise polynomials without restriction
at the boundary. The modification of $W^k_h$ to obtain the space $V_h^k$ of
piecewise polynomials vanishing at boundary vertices is not trivial to
implement in automated systems like FEniCS \cite{fenicsbook}.
It is therefore of interest to consider a simplification of the Robin-type
method \eqref{fem} which removes this constraint.
Thus we define, for $\epsilon>0$,
$$
b_h^\epsilon(u,v)= a_h(u,v)
+c_h^\epsilon(u,v),
$$
where $c_h^\epsilon(u,v):= \int_{\Gamma}(\epsilon\,\hbox{sign}(\delta)+\delta)^{-1}{u}v \,ds$.
We then define $\hat{W}_h^k= \{v \in W_h^k: v=0 \text{ on } \Gamma^0\}$ and $\hat{W}_h^k(g)= \{v \in W_h^k: v=g_I \text{ on } \Gamma^0\}$.
For ease of implementation we find $u_h \in \hat{W}^k_h(g)$ satisfying
\begin{equation}\label{eqn:epsrobimet}
b_h^\epsilon(u_h,v) =\int_{\Omega_h} Fv\,dx+ c_h^\epsilon(\hat{g},v) \quad \forall\, v\in \hat{W}^k_h.
\end{equation}
The computational experiments used this approach.
The answers do not depend on $\epsilon$ for $\epsilon$ small,
as indicated in Table \ref{tabl:epsrobin}. We were even able to take $\epsilon=0$ in \eqref{eqn:epsrobimet} using {\tt dolfin}.
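For reference, here is a minimal sketch (ours) of \eqref{eqn:epsrobimet} in {\tt dolfin} for the disc test problem of Section \ref{circle}, where $\Gamma^0=\varnothing$, $\delta>0$ on all of $\Gamma$, $\hat g=0$, and $\Omega_h\subset\Omega$ so that $F=f$.
\begin{verbatim}
# Hedged sketch (ours) of the eps-regularized Robin method on the disc.
# Gamma^0 is empty, so hat{W}_h^k needs no boundary constraint, and
# sign(delta) = +1 on all of Gamma.
from dolfin import *
from mshr import Circle, generate_mesh

R, M, k, eps = 1.0, 64, 2, 1.0e-9
mesh = generate_mesh(Circle(Point(0.0, 0.0), R, 5 * M), M)

V = FunctionSpace(mesh, "Lagrange", k)
u, v = TrialFunction(V), TestFunction(V)
x = SpatialCoordinate(mesh)
n = FacetNormal(mesh)

xn = dot(x, n)
delta = sqrt(R**2 - dot(x, x) + xn**2) - xn   # positive here

b = inner(grad(u), grad(v)) * dx + u * v / (eps + delta) * ds   # b_h^eps
f = Expression("36.0*pow(x[0]*x[0] + x[1]*x[1], 2)", degree=4)
L = f * v * dx                            # ghat = 0, so c_h^eps(ghat, v) = 0

uh = Function(V)
solve(b == L, uh)                         # symmetric system
\end{verbatim}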
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|c|c|c||c|c|c|}
\hline
$k$ & $M$ & segs & hmax &$\epsilon$& L2 err& H1 err& bdry err \\
\hline
2 & 64 & 320 & 3.5e-02 & 1.0e-04 & 1.1e-03 & 2.1e-03 & 1.3e-01 \\
2 & 64 & 320 & 3.5e-02 & 1.0e-05 & 1.1e-04 & 1.8e-03 & 2.5e-02 \\
2 & 64 & 320 & 3.5e-02 & 1.0e-06 & 1.2e-05 & 1.8e-03 & 3.2e-03 \\
2 & 64 & 320 & 3.5e-02 & 1.0e-07 & 6.0e-06 & 1.8e-03 & 3.2e-04 \\
2 & 64 & 320 & 3.5e-02 & 1.0e-08 & 5.9e-06 & 1.8e-03 & 4.3e-05 \\
2 & 64 & 320 & 3.5e-02 & 1.0e-09 & 5.9e-06 & 1.8e-03 & 3.1e-05 \\
2 & 64 & 320 & 3.5e-02 & 1.0e-10 & 5.9e-06 & 1.8e-03 & 3.1e-05 \\
\hline
2 & 128 & 640 & 1.8e-02 & 1.0e-07 & 1.3e-06 & 4.4e-04 & 6.4e-04 \\
2 & 128 & 640 & 1.8e-02 & 1.0e-08 & 7.3e-07 & 4.4e-04 & 6.5e-05 \\
2 & 128 & 640 & 1.8e-02 & 1.0e-09 & 7.2e-07 & 4.4e-04 & 7.3e-06 \\
2 & 128 & 640 & 1.8e-02 & 1.0e-10 & 7.2e-07 & 4.4e-04 & 3.9e-06 \\
2 & 128 & 640 & 1.8e-02 & 1.0e-11 & 7.2e-07 & 4.4e-04 & 3.9e-06 \\
\hline
2 & 256 & 1280 & 9.0e-03 & 1.0e-09 & 8.9e-08 & 1.1e-04 & 1.3e-05 \\
2 & 256 & 1280 & 9.0e-03 & 1.0e-10 & 8.9e-08 & 1.1e-04 & 1.3e-06 \\
2 & 256 & 1280 & 9.0e-03 & 1.0e-11 & 8.9e-08 & 1.1e-04 & 4.9e-07 \\
2 & 256 & 1280 & 9.0e-03 & 1.0e-12 & 8.9e-08 & 1.1e-04 & 4.9e-07 \\
\hline
\end{tabular}
\end{center}
\vspace{0mm}
\caption{Errors $\norm{u_h-u_I}_{L^2(\Omega_h)}$,
$\norm{u_h-u_I}_{H^1(\Omega_h)}$, and
$\norm{\,|\delta|^{-1/2}(u_h-u_I)}_{L^2(\partial\Omega_h)}$ as a function of $\epsilon$ and
maximum mesh size (hmax) for the Robin-like approximation \eqref{fem}
but modified as in \eqref{eqn:epsrobimet}, for piecewise quadratic polynomials ($k=2$).
Key: $M$ is the value of the {\tt meshsize} input parameter to the {\tt mshr}
function {\tt circle} used to generate the mesh;
segs is the number of boundary edges.}
\label{tabl:epsrobin}
\end{table}
\section{Computational Experiments}
\subsection{Example of a circle}
We return now to the computational test problem described in Section \ref{circle}.
It is not difficult to show that Assumption \ref{assumption1} holds for the meshes we used.
We see from Table \ref{tabl:raterobin} that the $H^{1}(\Omega_h)$ error is
optimal order for $k\leq 3$, consistent with Theorem \ref{errorestimates}.
In these cases, the $L^{2}(\Omega_h)$ error is also optimal order,
and the boundary error is higher order for quadratics. For $k \ge 4$ our numerical experiments suggest the error behavior
$$
\|u-u_h\|_{H^1(\Omega_h)}\approx C\big(h^{7/2}+h^k\big),
$$
which is consistent with Theorem \ref{errorestimates}.
It appears from Table \ref{tabl:raterobin} that the boundary error term satisfies
$$
\norm{\,|\delta|^{-1/2}(u-u_h)}_{L^2(\partial\Omega_h)} \approx Ch^3, \quad \text{ for all } k \ge 2,
$$
which is consistent with Theorem \ref{errorestimates}.
\begin{figure}
\centerline{(a)\includegraphics[width=3.0in]{plotrobincorL2.pdf}
(b)\includegraphics[width=3.0in]{plotrobincorH1.pdf}}
\caption{Errors $u_h-u_I$ in (a) $L^2(\Omega_h)$ and (b) $H^1(\Omega_h)$
as a function of the maximum mesh size for the method \eqref{fem}.
The asterisks indicate data for (a) $k=4$ and (b) $k=5$.}
\label{fig:plotrobincor}
\end{figure}
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|c||c|c||c|c||c|c|}
\hline
$k$&$M$ &hmax& L2 error&rate& H1 error&rate&bdry err & rate\\
\hline
1 & 16 & 0.135 & 0.0264 & 1.95 & 0.545 & 0.96 & 0.292 & 1.04 \\
1 & 32 & 0.0688 & 0.00683 & 1.95 & 0.277 & 0.98& 0.145 & 1.01 \\
1 & 64 & 0.0353 & 0.00169 & 2.01 & 0.137 & 1.02& 0.0724& 1.00 \\
\hline
2 & 16 & 0.135 & 3.71e-04 & 2.88 & 0.0278 & 1.90 & 0.00177 & 2.71 \\
2 & 32 & 0.0688 & 4.80e-05 & 2.95 & 0.00719 & 1.95& 2.52e-04 & 2.81 \\
2 & 64 & 0.0353 & 5.94e-06 & 3.02 & 0.00179 & 2.00& 3.12e-05 & 3.02 \\
\hline
3 & 16 & 0.135 & 8.43e-06 & 3.94 & 7.07e-04 & 2.91& 5.22e-04 &2.98 \\
3 & 32 & 0.0688 & 5.39e-07 & 3.97 & 9.25e-05 & 2.93& 6.52e-05 &3.00 \\
3 & 64 & 0.0353 & 3.35e-08 & 4.00 & 1.15e-05 & 3.01& 8.13e-06 &3.01 \\
\hline
4 & 16 & 0.135 & 8.43e-06 & 3.99 & 7.07e-05 & 3.45& 5.34e-04 & 2.97 \\
4 & 32 & 0.0688 & 5.27e-07 & 4.00 & 6.38e-06 & 3.47& 6.74e-05 & 2.99 \\
4 & 64 & 0.0353 & 3.29e-08 & 4.00 & 5.69e-07 & 3.49& 8.47e-06 & 2.99 \\
\hline
5 & 16 & 0.135 & 8.43e-06 & 3.99 & 6.80e-05 & 3.45 & 5.35e-04 & 2.97 \\
5 & 32 & 0.0688 & 5.27e-07 & 4.00 & 6.11e-06 & 3.48 &6.75e-05 & 2.99 \\
5 & 64 & 0.0353 & 3.30e-08 & 4.00 & 5.45e-07 & 3.49 & 8.47e-06 & 2.99\\
\hline
\end{tabular}
\end{center}
\vspace{0mm}
\caption{Errors $\norm{u_h-u_I}_{L^2(\Omega_h)}$,
$\norm{u_h-u_I}_{H^1(\Omega_h)}$, and
$\norm{\,|\delta|^{-1/2}(u_h-u_I)}_{L^2(\partial\Omega_h)}$ as a function of
mesh size (hmax) for the method \eqref{eqn:epsrobimet}
for various polynomial degrees $k$. The fudge factor $\epsilon$ was taken to be $10^{-13}$.
Results were insignificantly different for smaller values.
Key: $M$ is the value of the {\tt meshsize} input parameter to the {\tt mshr}
function {\tt circle} used to generate the mesh.
The number of boundary edges was set to $5M$, and
hmax is the maximum mesh size.}
\label{tabl:raterobin}
\end{table}
\subsection{An example with $\delta<0$}
\label{sec:testprobis}
Now consider the case where $\Omega$ is a disc of radius $1$ centered at
the origin, having a concentric disc of radius $R<1$ removed. Again, it is not difficult to show that Assumption \ref{assumption1} holds for our meshes.
For the boundary value problem, we take $R={\textstyle{1\over 2}}$ and $-\Delta u=f$, with
$$
u(x,y)=(x^2+y^2) -5(x^2+y^2)^2+4(x^2+y^2)^3,
\qquad f=-4 +80 (x^2+y^2) -144 (x^2+y^2)^2
$$
in the computational experiments described in Table \ref{tabl:rateshortrobin}.
Note that $u$ vanishes on both boundary arcs, and that the errors are consistent with Theorem \ref{errorestimates}.
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}\hline
$k$ & $M$ & hmax & L2 error & H1 error& bdry error\\
\hline
2 & 16 & 0.132 & 8.76e-04 & 6.87e-02 & 1.39e-04 \\
2 & 32 & 0.070 & 1.20e-04 & 1.84e-02 & 9.64e-06 \\
2 & 64 & 0.036 & 1.54e-05 & 4.68e-03 & 6.51e-07 \\
\hline
3 & 16 & 0.132 & 2.90e-05 & 2.29e-03 & 6.59e-05 \\
3 & 32 & 0.070 & 1.89e-06 & 3.07e-04 & 4.13e-06 \\
3 & 64 & 0.036 & 1.17e-07 & 3.93e-05 & 2.47e-07 \\
\hline
4 & 16 & 0.132 & 2.23e-05 & 3.37e-04 & 7.24e-05 \\
4 & 32 & 0.070 & 1.39e-06 & 2.97e-05 & 4.57e-06 \\
4 & 64 & 0.036 & 8.10e-08 & 2.61e-06 & 2.76e-07 \\
\hline
\end{tabular}
\end{center}
\vspace{0mm}
\caption{Errors $u_h-u_I$ measured in $L^2(\Omega_h)$ (L2 error),
$H^1(\Omega_h)$ (H1 error), and $L^2(\partial\Omega_h)$ (bdry error)
as a function of mesh size (hmax) for the Robin approximation
in \eqref{eqn:epsrobimet}, for selected polynomial degrees $k$.
$\epsilon=10^{-9}$.
Key: $M$ is the value of the {\tt meshsize} input parameter to the {\tt mshr}
function {\tt circle} used to generate the mesh.
The number of boundary edges for the outer boundary was set to $4M$, and
the number of boundary edges for the inner boundary was set to $2M$.}
\label{tabl:rateshortrobin}
\end{table}
\section{Boundary layers}
It is natural to expect that the error with various boundary approximations might be
confined to a boundary layer, with the interior error of smaller magnitude.
Our observations indicate something like this, but the behavior is more complex.
In Figure \ref{fig:bdrywhat}, we see two computations done on the same mesh
based on a triangulation of $\Omega_h$ with $\partial\Omega_h$ having 80 segments
and using piecewise-quadratic approximation.
In Figure \ref{fig:bdrywhat}(a), we see the simple polygonal approximation \eqref{eqn:polyapprocirkl}.
In this case, the error is somewhat larger near the boundary, but it does
not decay to zero in the interior.
Thus there is a significant pollution effect away from the boundary.
On the other hand, Figure \ref{fig:bdrywhat}(b) shows what happens when the
Robin-like method \eqref{fem} is used.
Now we see that the error does decay towards zero in the interior, with the
majority of the error concentrated at the boundary.
\begin{figure}
\centerline{(a)\includegraphics[width=3.0in]{jpcircleP2M16new.png} \quad (b)
\hspace{-9pt} \includegraphics[width=3.2in]{epsrobinP2M16new.png}}
\caption{Error with piecewise quadratics
on a mesh with $\partial\Omega_h$ having 80 segments.
The mesh is drawn in the plane corresponding to zero error.
(a) The method \eqref{eqn:polyapprocirkl}, no boundary integral corrections.
The error is uniformly positive.
(b) The Robin-like method \eqref{fem}.
The error oscillates around zero.
Note the factor of ten difference in scales in the error plots.}
\label{fig:bdrywhat}
\end{figure}
\section{Higher order and symmetric methods}
\label{sec:hoasm}
The Robin-type method presented above is accurate to at most $O(h^{7/2})$.
Higher-order methods using the same technique do not lead to symmetric systems.
For simplicity assume that $g \equiv 0$. Using that
$$
\Big|u|_{\partial\Omega_h}+\delta\derdir{u}{n}\big|_{\partial\Omega_h}
+\frac{\delta^2}{2}\derdirtwo{u}{n}\big|_{\partial\Omega_h}\Big|
\leq C\delta^3\norm{u}_{W^3_\infty(\Omega)},
$$
we define
\begin{equation}\label{eqn:presecndordr}
b_h(u,v)= a_h(u,v)+\int_{\partial\Omega_h}\delta^{-1}{u}v \,ds
+\int_{\partial\Omega_h} \frac{\delta}{2}\derdirtwo{u}{n}v \,ds .
\end{equation}
Unfortunately, $b_h$ is not symmetric.
One way to have higher-order, symmetric methods is by symmetrizing
the approach of Bramble-Dupont-Thom\'ee.
Recall that Bramble et al. \cite{bramble1972projection} developed arbitrary order methods,
but that the bilinear forms are not symmetric.
The lowest order method was presented in Section \ref{sec:BDTmeth} where the bilinear $N_h$
is given by \eqref{eqn:toddformr}.
One way to symmetrize $N_h$ while maintaining the same convergence rates is to introduce the
bilinear form:
\begin{equation*}
M_h(u,v)=N_h(u,v)
+ \int_{\partial\Omega_h} \gamma \delta h^{-1} \derdir{v}{n} \Big(u+\delta \derdir{u}{n}\Big) \,ds.
\end{equation*}
This is precisely what is done in \cite[(2.31)]{ref:CutFEMbasedonBDT}.
We see that
\begin{alignat*}{1}
M_h(u,v)=&a_h(u,v)+\int_{\partial\Omega_h} \Big(\gamma \frac{\delta}{h}-1\Big)
\Big( \delta \derdir{u}{n} \derdir{v}{n} + \derdir{u}{n} v+\derdir{v}{n} u\Big) \,ds
+ \frac{\gamma}{h} \int_{\partial\Omega_h} u v \,ds.
\end{alignat*}
Note that $M_h$ is symmetric. We will investigate this and similar methods in the near future.
% --------------------------------------------------------------------------
% Source: https://arxiv.org/abs/2211.09530
% Title: Rainbow even cycles
\begin{abstract}
We prove that every family of (not necessarily distinct) even cycles $D_1, \dotsc, D_{\lfloor 1.2(n-1) \rfloor+1}$ on some fixed $n$-vertex set has a rainbow even cycle (that is, a set of edges from distinct $D_i$'s, forming an even cycle). This resolves an open problem of Aharoni, Briggs, Holzman and Jiang. Moreover, the result is best possible for every positive integer $n$.
\end{abstract}
\section{Introduction} \label{sec:intro}
Let $\mathcal{F}$ be a set family. A rainbow set with respect to $\mathcal{F}$ is a subset $R$ (without repeated elements) of $\bigcup_{F \in \mathcal{F}} F$ such that there exists an injection $\sigma \colon R \to \mathcal{F}$ with $r \in \sigma(r)$ for all $r \in R$. In other words, each element $r \in R$ comes from a distinct $F \in \mathcal{F}$. We think about each set in $\mathcal{F}$ as a different color class, and hence use the term ``rainbow''. An important remark here is that a ``family'' is considered as a ``multiset'', since one element under different colors may appear more than once in $\bigcup_{F \in \mathcal{F}} F$.
Suppose each $F \in \mathcal{F}$ satisfies property $\mathcal{P}$. What is the minimum size of $\mathcal{F}$ such that a rainbow subset of $\bigcup_{F \in \mathcal{F}} F$ with $\mathcal{P}$ always exists? One famous result of this type is the colorful Carath\'{e}odory's theorem due to B\'{a}r\'{a}ny \cite{barany}, which asserts that every family of $n+1$ subsets of $\R^n$, each containing a point $p$ in its convex hull, has a rainbow subset whose convex hull contains $p$ as well. Such problems are also studied in graph theory. Aharoni and Berger \cite{aharoni_berger} proved that any family of $2n - 1$ matchings of size $n$ in a bipartite graph contains a rainbow matching of size $n$. Other results of this type on cycles and triangles can be found in \cite{aharoni_briggs_holzman_jiang,gyori,goorevitch_holzman}.
There are studies of rainbow graphs in a different context: Given an edge-colored graph, what conditions guarantee a certain subgraph whose edges have distinct colors? Due to their relation with Latin squares, rainbow matchings have received extensive attention. See \cite{aharoni_berger_chudnovsky_zerbib,correia_pokrovskiy_sudakov} for recent works. As a starting point for generalizing Tur\'{a}n's theorem, the existence of rainbow triangles is analyzed in \cite{aharoni_devos_gonzalez_montejano_samal,aharoni_devos_holzman}. A rainbow version of Dirac's theorem on Hamiltonian cycles can be found in \cite{joos_kim}.
\smallskip
Throughout this paper, a colored edge $\mathsf{e}$ is a $2$-tuple $(uv, \alpha)$, where $uv \eqdef \{u, v\}$ is its vertex set and $\alpha$ is its color. We thus denote $V(\mathsf{e}) \eqdef \{u,v\}$ and $\chi(\mathsf{e}) \eqdef \alpha$. Two edges $\mathsf{e}_1$ and $\mathsf{e}_2$ are \emph{coincident} if $V(\mathsf{e}_1) = V(\mathsf{e}_2)$. Without further specifications, a graph is an edge-colored simple graph $\mathsf{G} \eqdef (V, \mathsf{E})$, where ``simple'' means no coincident edges exist. Write $V(\mathsf{G}) \eqdef V$ for the vertex set, $\mathsf{E}(\mathsf{G}) \eqdef \mathsf{E}$ for the edge set, and $\chi(\mathsf{G}) \eqdef \{\chi(\mathsf{e}) : \mathsf{e} \in \mathsf{E}\}$ for the color set. For two graphs $\mathsf{G}_1, \mathsf{G}_2$, we call them \emph{coincident} if there exists a bijection $\varphi \colon \mathsf{E}(\mathsf{G}_1) \to \mathsf{E}(\mathsf{G}_2)$ such that $\mathsf{e}$ is coincident to $\varphi(\mathsf{e})$ for all $\mathsf{e} \in \mathsf{E}(\mathsf{G}_1)$.
For any set $\mathsf{E}$ of colored edges, we write $V(\mathsf{E}) \eqdef \bigcup_{\mathsf{e} \in \mathsf{E}} V(\mathsf{e})$, and so $\mathsf{E}$ naturally generates a graph $\mathsf{G}(\mathsf{E}) \eqdef (V(\mathsf{E}), \mathsf{E})$. On the other hand, isolated vertices do not affect our analysis. \textbf{We thus do not distinguish a graph from an edge set.} In this way, $\mathsf{H} \subseteq \mathsf{G}$ means $\mathsf{H}$ is a subgraph of $\mathsf{G}$. We still need uncolored (simple) graphs. If $G = (V, E)$ is an uncolored graph, then $V(G) \eqdef V$, $E(G) \eqdef E$. Note that the letters $\mathsf{G}, \mathsf{E}, \mathsf{e}, \dotsc$, instead of $G, E, e, \dotsc$, highlight the underlying edge-coloring.
This paper is devoted to the existence of a rainbow even cycle in a family of even cycles. A cycle is an edge set $\mathsf{C}$ such that the graph $\mathsf{C}$ (i.e.~$\mathsf{G}(\mathsf{C})$) is a cycle. In other words, $\mathsf{C} = \{v_1v_2, \dotsc, v_{\ell-1}v_{\ell}, v_{\ell}v_1\}$ where $v_1, \dotsc, v_{\ell}$ are distinct and $\ell \ge 3$ is called the \emph{length} of $\mathsf{C}$. For any $A \subseteq \{3, 4, 5, \dotsc\}$, an $A$-\emph{cycle} is a cycle whose length is some number from $A$. For example, an \emph{odd cycle}, a cycle of odd length, is a $\{3, 5, 7, \dotsc\}$-cycle. Similarly, an \emph{even cycle}, a cycle of even length, is a $\{4, 6, 8, \dotsc\}$-cycle. For any integer $k \ge 3$, a $k$-cycle refers to a $\{k\}$-cycle.
Hereafter a family $\mathcal{F} = \{\mathsf{E}_1, \dotsc, \mathsf{E}_m\}$ is a family of cycles. We remark that $\mathcal{F}$ is a family implicitly implies that $\chi(\mathsf{E}_i) = \{\alpha_i\}$ while $\alpha_1, \dotsc, \alpha_m$ are distinct. Since each $\mathsf{E}_i$ is a monochromatic cycle, we view $\mathcal{F}$ as an edge-colored multigraph. A subgraph of $\mathcal{F}$ is then a \textbf{simple} edge-colored graph $(V, \mathsf{E})$ where $V \subseteq \bigcup_{i=1}^m V(\mathsf{E}_i)$ and $\mathsf{E} \subseteq \bigcup_{i=1}^m \mathsf{E}_i$. In \hyperlink{figone}{Figure~1}, the family $\mathcal{D} = \{\mathsf{D}_1, \mathsf{D}_2, \mathsf{D}_3, \mathsf{D}_4\}$ consists of four $4$-cycles on seven vertices where $\mathsf{D}_2$ and $\mathsf{D}_3$ are coincident. Let $\chi(\mathsf{D}_i) = \alpha_i$ for $i = 1, 2, 3, 4$. Then $\mathsf{D} \eqdef \{(v_0v_1, \alpha_1), (v_1v_2, \alpha_2), (v_2v_3, \alpha_3), (v_3v_0, \alpha_4)\}$ is a rainbow $4$-cycle subgraph of the multigraph $\mathcal{D}$.
\begin{center}
\begin{tikzpicture}[x=2cm,y=1.2cm]
\clip(-4,-1.9) rectangle (4,1.4);
\draw [color=red] (-0.6,-1) -- (-1.2,0) -- (-0.6,1) -- (0,0) -- (-0.6,-1);
\draw [color=blue] (0.6,-1) -- (1.2,0) -- (0.6,1) -- (0,0) -- (0.6,-1);
\draw [color=green] (-0.625,-1.05) -- (-0.625,1.05) -- (0.625,1.05) -- (0.625,-1.05) -- (-0.625,-1.05);
\draw [color=violet] (-0.575,-0.95) -- (-0.575,0.95) -- (0.575,0.95) -- (0.575,-0.95) -- (-0.575,-0.95);
\draw [fill=black] (-1.2,0) circle (4pt);
\draw [fill=black] (-0.6,-1) circle (4pt);
\draw [fill=black] (-0.6,1) circle (4pt);
\draw [fill=black] (0,0) circle (4pt);
\draw [fill=black] (0.6,-1) circle (4pt);
\draw [fill=black] (0.6,1) circle (4pt);
\draw [fill=black] (1.2,0) circle (4pt);
\node at (0,-0.3) {$v_0$};
\node at (-0.75,1.15) {$v_1$};
\node at (-0.75,-1.15) {$v_2$};
\node at (0.75,-1.15) {$v_3$};
\node at (0.75,1.15) {$v_4$};
\node at (-1.375,0) {$v_5$};
\node at (1.375,0) {$v_6$};
\begin{footnotesize}
\node at (-1,0.65) {$\textcolor{red}{\mathsf{D}_1}$};
\node at (0,0.775) {$\textcolor{violet}{\mathsf{D}_2}$};
\node at (0,1.225) {$\textcolor{green}{\mathsf{D}_3}$};
\node at (1,0.65) {$\textcolor{blue}{\mathsf{D}_4}$};
\end{footnotesize}
\node at (0,-1.7) {\textbf{\hypertarget{figone}{Figure 1:}} An example family $\mathcal{D}$ viewed as an edge-colored multigraph. };
\end{tikzpicture}
\end{center}
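As a small computational aside (ours, not from the paper), the injection $\sigma$ in the definition of a rainbow set is a system of distinct representatives, so whether a given edge set is rainbow with respect to a family can be checked by standard bipartite matching; the sketch below encodes the example of \hyperlink{figone}{Figure~1}.
\begin{verbatim}
# Hedged sketch (ours): decide whether a set R is rainbow with respect to
# a family F, by searching for the injection sigma (a system of distinct
# representatives) with augmenting-path bipartite matching.
def is_rainbow(R, F):
    match = {}                        # index of a set in F -> element of R
    def augment(r, seen):
        for i, S in enumerate(F):
            if r in S and i not in seen:
                seen.add(i)
                if i not in match or augment(match[i], seen):
                    match[i] = r
                    return True
        return False
    return all(augment(r, set()) for r in R)

# The family of Figure 1, with edges encoded as frozensets of endpoints.
def e(a, b): return frozenset((a, b))
D1 = {e(0, 1), e(1, 5), e(5, 2), e(2, 0)}
D2 = D3 = {e(1, 2), e(2, 3), e(3, 4), e(4, 1)}
D4 = {e(0, 3), e(3, 6), e(6, 4), e(4, 0)}
# The rainbow 4-cycle D of Figure 1: prints True.
print(is_rainbow({e(0, 1), e(1, 2), e(2, 3), e(3, 0)}, [D1, D2, D3, D4]))
\end{verbatim}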
\newpage
\begin{theorem} \label{thm:rainbow_oc}
\emph{(\cite{aharoni_briggs_holzman_jiang})} Every family of $2\left\lceil\frac{n}{2}\right\rceil-1$ odd cycles on $n$ vertices contains a rainbow odd cycle.
\end{theorem}
The tightness of \Cref{thm:rainbow_oc} is witnessed by a family of $2\bigl(\left\lceil\frac{n}{2}\right\rceil-1\bigr)$ many coincident odd cycles on $2\left\lceil\frac{n}{2}\right\rceil-1$ vertices. As for even cycles, Aharoni, Briggs, Holzman and Jiang also deduced in \cite{aharoni_briggs_holzman_jiang} that the maximum size of a family on $n$ vertices containing no rainbow even cycle is between roughly $6n/5$ and $3n/2$, and left the determination of the exact extremal number as an open problem. We answer this question by proving the following result:
\begin{theorem} \label{thm:rainbow_ec}
Every family of $\left\lfloor\frac{6(n-1)}{5}\right\rfloor+1$ even cycles on $n$ vertices contains a rainbow even cycle.
\end{theorem}
The tightness of \Cref{thm:rainbow_ec} for each $n \ge 4$ (no even cycle exists when $n \le 3$) is seen as follows: The families $\mathcal{D}_4, \mathcal{D}_5, \mathcal{D}_6, \mathcal{D}_7, \mathcal{D}_8$ in \hyperlink{figtwo}{Figure~2} are tight examples for $n = 4, 5, 6, 7, 8$, respectively. For larger $n$, we observe that by gluing together $\mathcal{D}_{n-5}$ (a tight example for $n-5$) and $\mathcal{D}_6$ at exactly one vertex (so that they are edge-disjoint), the resulting family $\mathcal{D}_n$ is tight for $n$. We remark that the family $\mathcal{D}_6$ and the inductive argument are already presented in \cite{aharoni_briggs_holzman_jiang}.
\begin{center}
\begin{tikzpicture}[x=1.0cm,y=1cm]
\clip(-1.5,-2) rectangle (1.5,1.5);
\draw [color=red] (-1,-1)--(-1,1);
\draw [color=red] (-1.1,-1)--(-1.1,1);
\draw [color=red] (-0.9,-1)--(-0.9,1);
\draw [color=red] (-1,1)--(1,1);
\draw [color=red] (-1,0.9)--(1,0.9);
\draw [color=red] (-1,1.1)--(1,1.1);
\draw [color=red] (1,1)--(1,-1);
\draw [color=red] (1.1,1)--(1.1,-1);
\draw [color=red] (0.9,1)--(0.9,-1);
\draw [color=red] (1,-1)--(-1,-1);
\draw [color=red] (1,-1.1)--(-1,-1.1);
\draw [color=red] (1,-0.9)--(-1,-0.9);
\draw [fill=black] (-1,-1) circle (5pt);
\draw [fill=black] (-1,1) circle (5pt);
\draw [fill=black] (1,-1) circle (5pt);
\draw [fill=black] (1,1) circle (5pt);
\draw [color=black] (0,-1.7) node {$\mathcal{D}_4$};
\end{tikzpicture}
\begin{tikzpicture}[x=1.0cm,y=1cm]
\clip(-1.5,-2) rectangle (1.5,1.5);
\draw [color=red] (-1.2,-0.95)--(0,-0.95);
\draw [color=red] (-1.2,-1.05)--(0,-1.05);
\draw [color=blue] (0,-0.95)--(1.2,-0.95);
\draw [color=blue] (0,-1.05)--(1.2,-1.05);
\draw [color=red] (-0.6,1.12)--(0.6,1.12);
\draw [color=red] (-0.6,1.04)--(0.6,1.04);
\draw [color=blue] (-0.6,0.96)--(0.6,0.96);
\draw [color=blue] (-0.6,0.88)--(0.6,0.88);
\draw [color=red] (-1.255,-1)--(-0.655,1);
\draw [color=red] (-1.145,-1)--(-0.545,1);
\draw [color=red] (-0.055,-1)--(0.545,1);
\draw [color=red] (0.055,-1)--(0.655,1);
\draw [color=blue] (1.255,-1)--(0.655,1);
\draw [color=blue] (1.145,-1)--(0.545,1);
\draw [color=blue] (-0.055,-1)--(-0.655,1);
\draw [color=blue] (0.055,-1)--(-0.545,1);
\draw [fill=black] (-1.2,-1) circle (5pt);
\draw [fill=black] (0,-1) circle (5pt);
\draw [fill=black] (1.2,-1) circle (5pt);
\draw [fill=black] (-0.6,1) circle (5pt);
\draw [fill=black] (0.6,1) circle (5pt);
\draw [color=black] (0,-1.7) node {$\mathcal{D}_5$};
\end{tikzpicture}
\begin{tikzpicture}[x=1.0cm,y=1cm]
\clip(-1.5,-2) rectangle (1.5,1.5);
\foreach \x in {-0.1,0,0.1}
\foreach \y in {-1,1}
\draw [color=red] (-1.2,\x+\y)--(0,\x+\y);
\foreach \x in {-0.12,0,0.12}
\draw [color=red] (-1.2+\x,-1)--(\x,1);
\foreach \x in {-0.12,0,0.12}
\draw [color=red] (-1.2+\x,1)--(\x,-1);
\foreach \x in {-0.1,0,0.1}
\foreach \y in {0,1.2}
\draw [color=blue] (\x+\y,-1)--(\x+\y,1);
\foreach \x in {-0.12,0,0.12}
\draw [color=blue] (1.2+\x,-1)--(\x,1);
\foreach \x in {-0.12,0,0.12}
\draw [color=blue] (1.2+\x,1)--(\x,-1);
\draw [fill=black] (-1.2,-1) circle (5pt);
\draw [fill=black] (0,-1) circle (5pt);
\draw [fill=black] (1.2,-1) circle (5pt);
\draw [fill=black] (-1.2,1) circle (5pt);
\draw [fill=black] (0,1) circle (5pt);
\draw [fill=black] (1.2,1) circle (5pt);
\draw [color=black] (0,-1.7) node {$\mathcal{D}_6$};
\end{tikzpicture}
\begin{tikzpicture}[x=1.0cm,y=1cm]
\clip(-1.5,-2) rectangle (1.5,1.5);
\foreach \x in {-0.12,0,0.12}
\foreach \y in {-1.2,0}
\foreach \z in {-1,1}
\draw [color=red] (\x+\y,0)--(-0.6+\x,\z);
\foreach \x in {-0.12,0,0.12}
\foreach \y in {1.2,0}
\foreach \z in {-1,1}
\draw [color=blue] (\x+\y,0)--(0.6+\x,\z);
\draw [color=green] (-0.6,-1)--(-0.6,1);
\draw [color=green] (-0.6,1)--(0.6,1);
\draw [color=green] (0.6,1)--(0.6,-1);
\draw [color=green] (0.6,-1)--(-0.6,-1);
\draw [fill=black] (-1.2,0) circle (5pt);
\draw [fill=black] (-0.6,-1) circle (5pt);
\draw [fill=black] (-0.6,1) circle (5pt);
\draw [fill=black] (0,0) circle (5pt);
\draw [fill=black] (0.6,-1) circle (5pt);
\draw [fill=black] (0.6,1) circle (5pt);
\draw [fill=black] (1.2,0) circle (5pt);
\draw [color=black] (0,-1.7) node {$\mathcal{D}_7$};
\end{tikzpicture}
\begin{tikzpicture}[x=1.0cm,y=1cm]
\clip(-1.5,-2) rectangle (1.5,1.5);
\foreach \x in {-0.1,-0.05,0,0.05,0.1}
\foreach \y in {-1.2,-0.4}
\draw [color=red] (\x+\y,-0.3)--(\x+\y,0.3);
\foreach \x in {-0.11,-0.055,0,0.055,0.11}
\draw [color=red] (-1.2+\x,0.3)--(-0.8+\x,1);
\foreach \x in {-0.11,-0.055,0,0.055,0.11}
\draw [color=red] (-0.4+\x,0.3)--(-0.8+\x,1);
\foreach \x in {-0.11,-0.055,0,0.055,0.11}
\draw [color=red] (-1.2+\x,-0.3)--(-0.8+\x,-1);
\foreach \x in {-0.11,-0.055,0,0.055,0.11}
\draw [color=red] (-0.4+\x,-0.3)--(-0.8+\x,-1);
\foreach \x in {-0.07,0,0.07}
\foreach \y in {-1,1}
\draw [color=blue] (-0.8+\x,\y)--(1.2+\x,-\y);
\foreach \x in {-0.05,0,0.05}
\foreach \y in {-1,1}
\draw [color=blue] (-0.8,\x+\y)--(1.2,\x+\y);
\draw [fill=black] (-1.2,-0.3) circle (5pt);
\draw [fill=black] (-1.2,0.3) circle (5pt);
\draw [fill=black] (-0.8,-1) circle (5pt);
\draw [fill=black] (-0.8,1) circle (5pt);
\draw [fill=black] (-0.4,-0.3) circle (5pt);
\draw [fill=black] (-0.4,0.3) circle (5pt);
\draw [fill=black] (1.2,-1) circle (5pt);
\draw [fill=black] (1.2,1) circle (5pt);
\draw [color=black] (0,-1.7) node {$\mathcal{D}_8$};
\end{tikzpicture}
\begin{tikzpicture}
\node at (0, 0) {\textbf{\hypertarget{figtwo}{Figure 2:}} Tight examples of \Cref{thm:rainbow_ec} for small $n$. };
\end{tikzpicture}
\end{center}
\paragraph{Proof strategy.} To explain the strategy of our proof, we begin with an easier version of \Cref{thm:rainbow_ec} whose tightness is witnessed by a family of $n-1$ coincident Hamiltonian cycles.
\begin{theorem} \label{thm:rainbow_c}
\emph{(\cite[Proposition~3.2]{aharoni_briggs_holzman_jiang})} Every family of $n$ cycles on $n$ vertices contains a rainbow cycle.
\end{theorem}
\begin{proof}
Let $\mathcal{F}$ be such a family and $\mathsf{F}$ be a maximal rainbow forest subgraph of $\mathcal{F}$. Then $|\mathsf{F}| \le n-1$, so some cycle's color does not appear in $\mathsf{F}$; since a forest cannot contain a copy of every edge of that cycle, the cycle has an edge $\mathsf{e}$ not coincident to any edge of $\mathsf{F}$. The maximality of $\mathsf{F}$ implies that $\mathsf{e}$ completes a rainbow cycle in $\mathsf{F}+\mathsf{e}$.
\end{proof}
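We note in passing that this proof is constructive and translates into a simple algorithm: grow a rainbow forest greedily with a union--find structure, and an edge of an unused color that is not coincident to the forest closes a rainbow cycle. The following sketch (ours, with hypothetical input conventions) illustrates this.
\begin{verbatim}
# Hedged sketch (ours) of the proof as an algorithm.  'family' lists the
# color classes; each class is a list of (u, v) edges of a cycle on the
# vertices 0..n-1.  With at least n classes, some color is left out of the
# greedy rainbow forest, and one of its edges closes a rainbow cycle.
from collections import defaultdict

def rainbow_cycle(family, n):
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]    # path halving
            a = parent[a]
        return a
    adj = defaultdict(list)                  # forest adjacency: v -> (nbr, color)
    unused = []                              # colors contributing no forest edge
    for color, edges in enumerate(family):
        for (u, v) in edges:
            if find(u) != find(v):           # edge joins two components
                parent[find(u)] = find(v)
                adj[u].append((v, color)); adj[v].append((u, color))
                break                        # at most one edge per color
        else:
            unused.append((color, edges))
    color, edges = unused[0]                 # nonempty when len(family) >= n
    # Pick an edge not coincident to a forest edge; one exists, since a
    # forest cannot contain a copy of every edge of a cycle.
    u, v = next((a, b) for (a, b) in edges
                if all(x != b for (x, c) in adj[a]))
    path, queue = {u: None}, [u]             # BFS through the forest to v
    while v not in path:
        w = queue.pop(0)
        for (x, c) in adj[w]:
            if x not in path:
                path[x] = (w, c); queue.append(x)
    cycle = [(u, v, color)]                  # closing edge plus the tree path
    while path[v] is not None:
        w, c = path[v]
        cycle.append((v, w, c)); v = w
    return cycle

triangle = [(0, 1), (1, 2), (2, 0)]
print(rainbow_cycle([triangle, triangle, triangle], 3))
\end{verbatim}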
All these proofs proceed by first finding a \emph{spanning structure} $S$ (the rainbow forest $\mathsf{F}$ in the proof above) and then analyzing another edge with an absent color in $S$. The proof of \Cref{thm:rainbow_oc} also uses a maximal rainbow forest as $S$. However, to prove \Cref{thm:rainbow_ec} we need some new spanning structure.
It turns out that $5$-cycles play a central role in the $6n/5$ bound. We thus call a cycle \emph{long} if its length is at least $6$. In particular, a rainbow $\{7, 9, 11, \dotsc\}$-cycle is a long rainbow odd cycle. Our spanning structures, which we call \emph{Frankenstein graphs}, are (informally speaking) obtained by recursively gluing together at some single vertex a collection of long rainbow odd cycles, rainbow trees, and another class of graphs named \emph{bad pieces}. We shall formally define and characterize bad pieces and Frankenstein graphs in \Cref{sec:fgraph}. Then we shall prove \Cref{thm:rainbow_ec} in \Cref{sec:proof_ec}.
\section{Frankenstein graphs} \label{sec:fgraph}
Recall that a path graph of length $k$ is a simple graph $P = (V, E)$ in which $V = \{v_0, v_1, \dotsc, v_k\}$ and $E = \{v_0v_1, v_1v_2, \dotsc, v_{k-1}v_k\}$. We call $v_0$ and $v_k$ \emph{terminals} of $P$. Here the definition is for uncolored graphs, and an edge-colored graph is called a \emph{path} if, after dropping the edge colors, its uncolored copy is a path. Similar conventions between colored and uncolored graphs will be directly applied.
For graphs $\mathsf{G}_1 \eqdef (V_1, \mathsf{E}_1)$ and $\mathsf{G}_2 \eqdef (V_2, \mathsf{E}_2)$, let $\mathsf{G}_1 \bullet \mathsf{G}_2 \eqdef (V_1 \bullet V_2, \mathsf{E}_1 \bullet \mathsf{E}_2)$ where $\bullet$ is $\cup$ or $\cap$. Note that $\mathsf{G}_1 \cup \mathsf{G}_2$ is not necessarily a graph, as coincident edges may appear in $\mathsf{G}_1 \cup \mathsf{G}_2$. Indeed, for $\mathsf{e} \eqdef (uv, \alpha)$ and $\mathsf{f} \eqdef (uv, \beta)$, we have $\{\mathsf{e}\} \cup \{\mathsf{f}\} = \{\mathsf{e}\}$ if $\alpha = \beta$, and $\{\mathsf{e}\} \cup \{\mathsf{f}\} = \{\mathsf{e}, \mathsf{f}\}$ if $\alpha \neq \beta$. Note that these notations are natural since we identify $\mathsf{E}_i$ with $\mathsf{G}_i$, as mentioned in \Cref{sec:intro}.
A \emph{theta graph} is a union of $3$ paths that share and only share their terminals. In other words, $\mathsf{G}$ is a theta graph if $\mathsf{G} = \mathsf{P}_1 \cup \mathsf{P}_2 \cup \mathsf{P}_3$ where $\mathsf{P}_1, \mathsf{P}_2, \mathsf{P}_3$ are paths with the same terminals $s, t$ such that
\begin{align*}
V(\mathsf{P}_1 \cap \mathsf{P}_2) = V(\mathsf{P}_2 \cap \mathsf{P}_3) &= V(\mathsf{P}_3 \cap \mathsf{P}_1) = \{s, t\}, \\
\mathsf{P}_1 \cap \mathsf{P}_2 = \mathsf{P}_2 \cap \mathsf{P}_3 &= \mathsf{P}_3 \cap \mathsf{P}_1 = \varnothing.
\end{align*}
We use the name ``theta'' because one natural drawing of such a graph looks exactly like the Greek letter $\theta$. See \hyperlink{figthree}{Figure~3} below for an illustration.
\vspace{-0.5em}
\begin{center}
\begin{tikzpicture}
[x=0.8cm,y=0.8cm]
\clip(-5,-2.05) rectangle (11,1.5);
\draw [fill=black] (0,0) circle (1pt);
\draw [fill=black] (1.5,0) circle (1pt);
\draw [fill=black] (3,0) circle (1pt);
\draw [fill=black] (4.5,0) circle (1pt);
\draw [fill=black] (6,0) circle (1pt);
\draw [fill=black] (2,1) circle (1pt);
\draw [fill=black] (4,1) circle (1pt);
\draw [fill=black] (1.2,-1) circle (1pt);
\draw [fill=black] (2.4,-1) circle (1pt);
\draw [fill=black] (3.6,-1) circle (1pt);
\draw [fill=black] (4.8,-1) circle (1pt);
\draw (0,0) -- (2,1);
\draw (2,1) -- (4,1);
\draw (4,1) -- (6,0);
\draw (6,0) -- (4.8,-1);
\draw (4.8,-1) -- (3.6,-1);
\draw (3.6,-1) -- (2.4,-1);
\draw (2.4,-1) -- (1.2,-1);
\draw (1.2,-1) -- (0,0);
\draw (0,0) -- (1.5,0);
\draw (1.5,0) -- (3,0);
\draw (3,0) -- (4.5,0);
\draw (4.5,0) -- (6,0);
\node at (3, -1.8) {\textbf{\hypertarget{figthree}{Figure 3:}} A theta graph on paths of lengths $3,4,5$, respectively. };
\end{tikzpicture}
\end{center}
\begin{observation} \label{obs:theta_ec}
Every rainbow theta graph has a rainbow even cycle subgraph.
\end{observation}
\begin{proof}
Suppose $\mathsf{P}_1 \cup \mathsf{P}_2 \cup \mathsf{P}_3$ is a theta graph where $\mathsf{P}_1, \mathsf{P}_2, \mathsf{P}_3$ are paths with common terminals. Then two of the paths, say $\mathsf{P}_1$ and $\mathsf{P}_2$, have lengths of the same parity, and so $\mathsf{P}_1 \cup \mathsf{P}_2$ is an even cycle, which is rainbow because the whole theta graph is.
\end{proof}
We call a graph $\mathsf{G}$ \emph{almost rainbow} if $|\chi(\mathsf{G})| = |\mathsf{G}|-1$. That is, exactly two edges receive the same color, and the color of every other edge is unique. We call $\mathsf{B}$ a \emph{bad piece} if $\mathsf{B}$ is an almost rainbow theta graph on $3$ rainbow paths (sharing terminals) such that $|V(\mathsf{B})| = |\chi(\mathsf{B})| \ge 6$.
\begin{center}
\begin{tikzpicture}[x=1.0cm,y=0.8cm]
\clip(-2.5,-2) rectangle (2.5,4.5);
\draw (-1.8,1.5)-- (0,4);
\draw (0,4)-- (1.8,1.5);
\draw (1.8,1.5)-- (0,-1);
\draw (0,-1)-- (-1.8,1.5);
\draw (-1.8,1.5)-- (1.8,1.5);
\node at (0, -1.7) {\textbf{\hypertarget{figfourA}{Figure 4.A}}};
\draw [fill=black] (-1.8,1.5) circle (1.0pt);
\draw[color=black] (-2.05,1.5) node {$v_1$};
\draw [fill=black] (1.8,1.5) circle (1.0pt);
\draw[color=black] (2.05,1.5) node {$v_3$};
\draw [fill=black] (0,-1) circle (1.0pt);
\draw[color=black] (0,-1.21) node {$v_2$};
\draw [fill=black] (0,4) circle (1.0pt);
\draw[color=black] (0,4.2) node {$v_4$};
\draw[color=black] (0,1.7) node {$a$};
\draw[color=black] (0.8,0.4) node {$c$};
\draw[color=black] (-0.8,0.45) node {$b$};
\draw[color=black] (-0.825,2.6) node {$c$};
\draw[color=black] (0.75,2.6) node {$d$};
\end{tikzpicture}
\qquad
\begin{tikzpicture}[x=1.0cm,y=0.8cm]
\clip(-2.5,-2) rectangle (2.5,4.5);
\draw (0.,2.)-- (-2.,1.);
\draw (-2.,1.)-- (-1.,-1.);
\draw (-1.,-1.)-- (1.,-1.);
\draw (1.,-1.)-- (2.,1.);
\draw (2.,1.)-- (0.,2.);
\draw (0.,4.)-- (0.,2.);
\draw (0.,4.)-- (2.,1.);
\draw [fill=black] (-1.,-1.) circle (1.0pt);
\draw[color=black] (-0.8,-0.8) node {$v_3$};
\draw [fill=black] (1.,-1.) circle (1.0pt);
\draw[color=black] (0.8,-0.8) node {$v_4$};
\draw [fill=black] (-2.,1.) circle (1.0pt);
\draw[color=black] (-1.7,0.9) node {$v_2$};
\draw [fill=black] (2.,1.) circle (1.0pt);
\draw[color=black] (1.7,0.9) node {$v_5$};
\draw [fill=black] (0.,2.) circle (1.0pt);
\draw[color=black] (0,1.75) node {$v_1$};
\draw[color=black] (-0.9,1.35) node {$a$};
\draw[color=black] (-1.325,0.075) node {$b$};
\draw[color=black] (0,-0.8) node {$c$};
\draw[color=black] (1.325,0.075) node {$d$};
\draw[color=black] (0.9, 1.35) node {$e$};
\draw [fill=black] (0.,4.) circle (1.0pt);
\draw[color=black] (0.,4.2) node {$v_6$};
\draw[color=black] (-0.175,3.) node {$a$};
\draw[color=black] (1.175,2.65) node {$g$};
\node at (0, -1.7) {\textbf{\hypertarget{figfourB}{Figure 4.B}}};
\end{tikzpicture}
\qquad
\begin{tikzpicture}[x=1.0cm,y=0.8cm]
\clip(-2.5,-2) rectangle (2.5,4.5);
\draw (0.,2.)-- (-2.,1.);
\draw (-2.,1.)-- (-1.,-1.);
\draw (-1.,-1.)-- (1.,-1.);
\draw (1.,-1.)-- (2.,1.);
\draw (2.,1.)-- (0.,2.);
\draw (0.,4.)-- (0.,2.);
\draw (0.,4.)-- (2.,1.);
\draw [fill=black] (-1.,-1.) circle (1.0pt);
\draw[color=black] (-0.8,-0.8) node {$v_3$};
\draw [fill=black] (1.,-1.) circle (1.0pt);
\draw[color=black] (0.8,-0.8) node {$v_4$};
\draw [fill=black] (-2.,1.) circle (1.0pt);
\draw[color=black] (-1.7,0.9) node {$v_2$};
\draw [fill=black] (2.,1.) circle (1.0pt);
\draw[color=black] (1.7,0.9) node {$v_5$};
\draw [fill=black] (0.,2.) circle (1.0pt);
\draw[color=black] (0,1.75) node {$v_1$};
\draw[color=black] (-0.9,1.35) node {$a$};
\draw[color=black] (-1.325,0.075) node {$b$};
\draw[color=black] (0,-0.8) node {$c$};
\draw[color=black] (1.35,0.075) node {$b$};
\draw[color=black] (0.9, 1.35) node {$e$};
\draw [fill=black] (0.,4.) circle (1.0pt);
\draw[color=black] (0.,4.2) node {$v_6$};
\draw[color=black] (-0.175,3.) node {$f$};
\draw[color=black] (1.175,2.65) node {$g$};
\node at (0, -1.7) {\textbf{\hypertarget{figfourC}{Figure 4.C}}};
\end{tikzpicture}
\end{center}
For example, \hyperlink{figfourA}{Figure~4.A} is not a bad piece because it contains only $4$ vertices; \hyperlink{figfourB}{Figure~4.B} is a bad piece on $6$ vertices and $7$ edges consisting of rainbow paths $v_1v_5$, $v_1v_2v_3v_4v_5$ and $v_1v_6v_5$; \hyperlink{figfourC}{Figure~4.C} is not a bad piece because $v_1v_2v_3v_4v_5$ is not rainbow (as witnessed by $(v_2v_3, b)$ and $(v_4v_5, b)$).
\begin{observation} \label{obs:bestimate}
If $\mathsf{B}$ is a bad piece, then $|\chi(\mathsf{B})| \le \frac{6}{5}(|V(\mathsf{B})|-1)$.
\end{observation}
\begin{proof}
By definition we have $n \eqdef |\chi(\mathsf{B})| = |V(\mathsf{B})| \ge 6$, and hence $\frac{|\chi(\mathsf{B})|}{|V(\mathsf{B})|-1} = \frac{n}{n-1} \le \frac{6}{5}$.
\end{proof}
\begin{observation} \label{obs:bflip}
If $\mathsf{B}$ is a bad piece, then for any distinct $v_1, v_2 \in V(\mathsf{B})$, there exists in $\mathsf{B}$ a rainbow path subgraph whose terminals are $v_1$ and $v_2$.
\end{observation}
\begin{proof}
Since $|\chi(\mathsf{B})| = |\mathsf{B}|-1$, it suffices to show that $v_1, v_2$ are vertices of a cycle in $\mathsf{B}$: the two arcs of such a cycle between $v_1$ and $v_2$ are edge-disjoint, so at least one of them misses an edge of the unique repeated-color pair and is therefore rainbow. Suppose $\mathsf{B}$ consists of three rainbow paths $\mathsf{P}_1, \mathsf{P}_2, \mathsf{P}_3$. If $v_1$ and $v_2$ are on the same path, say $\mathsf{P}_1$, then $\mathsf{P}_1 \cup \mathsf{P}_2$ is such a cycle. If $v_1$ and $v_2$ are on different paths, say $\mathsf{P}_1$ and $\mathsf{P}_2$, then $\mathsf{P}_1 \cup \mathsf{P}_2$ is such a cycle.
\end{proof}
We call $\mathcal{P}(\mathsf{G}) = \{\mathsf{G}_1, \dotsc, \mathsf{G}_m\}$ a \emph{partition} of the graph $\mathsf{G}$ if $\mathsf{G} = \bigcup_{i=1}^m \mathsf{G}_i$, and $|V(\mathsf{G}_i) \cap V(\mathsf{G}_j)| \le 1$, $\chi(\mathsf{G}_i) \cap \chi(\mathsf{G}_j) = \varnothing$ for every distinct $\mathsf{G}_i, \mathsf{G}_j$. We call $\mathfrak{F}$ a \emph{Frankenstein graph} if $\mathfrak{F}$ admits a partition
\[
\mathcal{P}(\mathfrak{F}) = \{\mathsf{C}_1, \dotsc, \mathsf{C}_c, \mathsf{B}_1, \dotsc, \mathsf{B}_b, \mathsf{T}_1, \dotsc, \mathsf{T}_t\} \qquad (c \ge 0, \, b \ge 0, \, t \ge 0, \, c+b+t \ge 1)
\]
where $\mathsf{C}$'s are long rainbow odd cycles, $\mathsf{B}$'s are bad pieces, $\mathsf{T}$'s are rainbow trees, such that
\begin{enumerate}[label=(F\arabic*), ref=(F\arabic*)]
\item\label{Fr_tree} $V(\mathsf{T}_p) \cap V(\mathsf{T}_q) = \varnothing$ for any distinct $p, q$, and
\item\label{Fr_ecfree} no rainbow even cycle subgraph exists in $\mathfrak{F}$.
\end{enumerate}
\begin{theorem} \label{thm:fsteps}
For any Frankenstein graph $\mathfrak{F}$ with $\mathcal{P}(\mathfrak{F}) = \{\mathsf{G}_1, \dotsc, \mathsf{G}_m\}$, there exists a permutation $\sigma$ on $[m]$ such that $\mathfrak{F}_i \eqdef \mathsf{G}_{\sigma(1)} \cup \dotsb \cup \mathsf{G}_{\sigma(i)}$ satisfies $|V(\mathfrak{F}_i) \cap V(\mathsf{G}_{\sigma(i+1)})| \le 1$ for each $i \in [m-1]$.
\end{theorem}
\Cref{thm:fsteps} suggests the following way to think about a connected Frankenstein graph $\mathfrak{F}$: Suppose the partition of $\mathfrak{F}$ is $\mathcal{P}(\mathfrak{F}) = \{\mathsf{G}_1, \dotsc, \mathsf{G}_m\}$. Then one can order the parts as $\mathsf{G}_1', \mathsf{G}_2', \dotsc, \mathsf{G}_m'$, set $\mathfrak{F}_1 \eqdef \mathsf{G}_1'$, and recursively glue together $\mathsf{G}_{i+1}'$ and the $i$'th graph $\mathfrak{F}_i$ at a single vertex to form the $(i+1)$'th graph $\mathfrak{F}_{i+1}$, so that eventually $\mathfrak{F}_m$ is exactly $\mathfrak{F}$. To prove \Cref{thm:fsteps}, we need some preparations.
\begin{lemma} \label{lem:ectest}
Let $\mathsf{X}$ be a rainbow cycle or a bad piece. Suppose $\mathsf{e}$ is an edge not coincident to any edge of $\mathsf{X}$, and $\mathsf{C}$ is a rainbow cycle containing $\mathsf{e}$ such that $\mathsf{X}$ and $\mathsf{C} \setminus \mathsf{X}$ are color-disjoint. If $|V(\mathsf{C}) \cap V(\mathsf{X})| \ge 2$, then $\mathsf{C} \cup \mathsf{X}$ contains a rainbow even cycle subgraph.
\end{lemma}
Informally speaking, this technical result tells us that a rainbow cycle sharing at least two vertices with a long rainbow odd cycle or a bad piece forces a rainbow even cycle.
\begin{proof} We first locate $\mathsf{P}_0$, a subpath (i.e.~a path subgraph) of $\mathsf{C}$ containing $\mathsf{e}$ with terminals $s_0, t_0 \in V(\mathsf{C})$ such that $V(\mathsf{P}_0) \cap V(\mathsf{X}) = \{s_0, t_0\}$. Indeed, $\mathsf{P}_0$ can be found as follows: starting from $\mathsf{e}$ and moving along $\mathsf{C}$ in opposite directions, the first vertices of $\mathsf{X}$ that we meet are $s_0$ and $t_0$; they are distinct thanks to $|V(\mathsf{C}) \cap V(\mathsf{X})| \ge 2$. We claim the existence of a rainbow theta subgraph in $\mathsf{P}_0 \cup \mathsf{X}$, whence \Cref{obs:theta_ec} guarantees a rainbow even cycle subgraph in $\mathsf{C} \cup \mathsf{X}$.
If $\mathsf{X}$ is a rainbow cycle, then $\mathsf{X} \cup \mathsf{P}_0$ is a rainbow theta graph.
If $\mathsf{X}$ is a bad piece consisting of rainbow paths $\mathsf{P}_1, \mathsf{P}_2, \mathsf{P}_3$ that share terminals $s$ and $t$, then $\mathsf{X} \cup \mathsf{P}_0$ is almost rainbow. Indeed, we can always remove a subpath containing one of the repeated-color edges from one of $\mathsf{P}_1, \mathsf{P}_2, \mathsf{P}_3$ to get a rainbow theta graph. To be more specific, we assume without loss that the repeated color appears on $\mathsf{P}_1$ and $\mathsf{P}_3$. For $x,y \in V(\mathsf{P}_i)$ with $i \in [3]$ fixed, we denote by $\mathsf{P}_{x, y}$ the subpath of $\mathsf{P}_i$ with terminals $x$ and $y$.
\begin{itemize}
\item If $s_0$ and $t_0$ lie on the same $\mathsf{P}_i$, then one of $\mathsf{P}_1$ and $\mathsf{P}_3$ is disjoint from $\mathsf{P}_0$, say $\mathsf{P}_1$. By removing $\mathsf{P}_1$ from $\mathsf{X} \cup \mathsf{P}_0$, the remaining $\mathsf{P}_2 \cup \mathsf{P}_3 \cup \mathsf{P}_0$ is a rainbow theta graph.
\item Otherwise, at least one of $s_0$ and $t_0$ lies on $\mathsf{P}_1 \cup \mathsf{P}_3$, say $s_0 \in V(\mathsf{P}_1)$. We further assume that the repeated-color edge appears on $\mathsf{P}_{s, s_0}$ rather than $\mathsf{P}_{t, s_0}$ in $\mathsf{P}_1$. (See \hyperlink{figfive}{Figure~5}.)
\begin{itemize}
\item If $t_0 \in V(\mathsf{P}_2)$, then by removing $\mathsf{P}_3$ from $\mathsf{X} \cup \mathsf{P}_0$ we are left with a rainbow theta graph.
\item If $t_0 \in V(\mathsf{P}_3)$, then by removing $\mathsf{P}_{s, s_0}$ from $\mathsf{X} \cup \mathsf{P}_0$ we are left with a rainbow theta graph.
\end{itemize}
\end{itemize}
\begin{center}
\begin{tikzpicture}
[x=1.0cm,y=1.0cm]
\clip(-0.5,-1.6) rectangle (6.5,1.6);
\draw [color=blue] (1.5,0) -- (1.5,0.5) -- (2.2,0.5) -- (2.9,0.5) -- (3.6,0.5) -- (3.6,1);
\draw [fill=black] (0,0) circle (1pt);
\draw [fill=black] (1.5,0) circle (1pt);
\draw [fill=black] (3,0) circle (1pt);
\draw [fill=black] (4.5,0) circle (1pt);
\draw [fill=black] (6,0) circle (1pt);
\draw [fill=black] (1.2,1) circle (1pt);
\draw [fill=black] (2.4,1) circle (1pt);
\draw [fill=black] (3.6,1) circle (1pt);
\draw [fill=black] (4.8,1) circle (1pt);
\draw [fill=black] (1.2,-1) circle (1pt);
\draw [fill=black] (2.4,-1) circle (1pt);
\draw [fill=black] (3.6,-1) circle (1pt);
\draw [fill=black] (4.8,-1) circle (1pt);
\draw [fill=black] (1.5,0.5) circle (1pt);
\draw [fill=black] (2.2,0.5) circle (1pt);
\draw [fill=black] (2.9,0.5) circle (1pt);
\draw [fill=black] (3.6,0.5) circle (1pt);
\draw (0,0) -- (1.2,1) -- (2.4,1) -- (3.6,1) -- (4.8,1) -- (6,0);
\draw[dash pattern=on 3pt off 3pt] (0,0) -- (1.2,-1) -- (2.4,-1) -- (3.6,-1) -- (4.8,-1) -- (6,0);
\draw (0,0) -- (1.5,0);
\draw (1.5,0) -- (3,0);
\draw (3,0) -- (4.5,0);
\draw (4.5,0) -- (6,0);
\node at (-0.2,0) {$s$};
\node at (6.2,0) {$t$};
\node at (3.6,1.2) {$s_0$};
\node at (1.5,-0.2) {$t_0$};
\node at (3,1) {$\mathsf{P}_1$};
\node at (3,0) {$\mathsf{P}_2$};
\node at (3,-1) {$\mathsf{P}_3$};
\node at (2.55,0.5) {\textcolor{blue}{$\mathsf{P}_0$}};
\node at (1.8,1.15) {$*$};
\node at (4.2,-0.85) {$*$};
\end{tikzpicture}
\begin{tikzpicture}
[x=1.0cm,y=1.0cm]
\clip(-0.5,-1.6) rectangle (7.5,1.6);
\draw [color=blue] (3.6,1) -- (5.3,1.5) -- (7,1) -- (7,-1) -- (5.3,-1.5) -- (3.6,-1);
\draw [fill=black] (0,0) circle (1pt);
\draw [fill=black] (1.5,0) circle (1pt);
\draw [fill=black] (3,0) circle (1pt);
\draw [fill=black] (4.5,0) circle (1pt);
\draw [fill=black] (6,0) circle (1pt);
\draw [fill=black] (1.2,1) circle (1pt);
\draw [fill=black] (2.4,1) circle (1pt);
\draw [fill=black] (3.6,1) circle (1pt);
\draw [fill=black] (4.8,1) circle (1pt);
\draw [fill=black] (1.2,-1) circle (1pt);
\draw [fill=black] (2.4,-1) circle (1pt);
\draw [fill=black] (3.6,-1) circle (1pt);
\draw [fill=black] (4.8,-1) circle (1pt);
\draw [fill=black] (5.3,1.5) circle (1pt);
\draw [fill=black] (7,1) circle (1pt);
\draw [fill=black] (7,-1) circle (1pt);
\draw [fill=black] (5.3,-1.5) circle (1pt);
\draw[dash pattern=on 3pt off 3pt] (0,0) -- (1.2,1) -- (2.4,1) -- (3.6,1);
\draw (3.6,1) -- (4.8,1) -- (6,0);
\draw (0,0) -- (1.2,-1) -- (2.4,-1) -- (3.6,-1) -- (4.8,-1) -- (6,0);
\draw (0,0) -- (1.5,0);
\draw (1.5,0) -- (3,0);
\draw (3,0) -- (4.5,0);
\draw (4.5,0) -- (6,0);
\node at (-0.2,0) {$s$};
\node at (6.2,0) {$t$};
\node at (3.6,1.2) {$s_0$};
\node at (3.6,-1.2) {$t_0$};
\node at (3,1) {$\mathsf{P}_1$};
\node at (3,0) {$\mathsf{P}_2$};
\node at (3,-1) {$\mathsf{P}_3$};
\node at (7,0) {\textcolor{blue}{$\mathsf{P}_0$}};
\node at (1.8,1.15) {$*$};
\node at (4.2,-0.85) {$*$};
\end{tikzpicture}
\begin{tikzpicture}
\node at (0,0) {\textbf{\hypertarget{figfive}{Figure 5:}} Path-removal operations where $*$ indicates the repeated color. };
\end{tikzpicture}
\end{center}
The casework above verifies our claim, and so the proof is complete.
\end{proof}
Let $\mathfrak{F}$ be a Frankenstein graph with $\mathcal{P}(\mathfrak{F}) = \{\mathsf{G}_1, \dotsc, \mathsf{G}_m\}$. To understand its structure better, we associate with it an auxiliary uncolored bipartite graph $G(\mathfrak{F}) \eqdef (V_1 \cup V_2, E)$, in which
\begin{itemize}
\item $V_1 \eqdef \{\mathsf{G}_1, \dotsc, \mathsf{G}_m\}$, $V_2 \eqdef \bigl\{v : \{v\} = V(\mathsf{G}_i) \cap V(\mathsf{G}_j) \text{ for some } i \neq j\bigr\}$ (the vertices shared by two distinct parts), and
\item $E \eqdef \{(\mathsf{G}, v) \in V_1 \times V_2 : v \in V(\mathsf{G})\}$.
\end{itemize}
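As a quick illustration (our own, with hypothetical names; each part is represented simply by its vertex set), the sets $V_2$ and $E$ can be computed as follows.
\begin{verbatim}
from itertools import combinations

def auxiliary_graph(parts):
    """parts: dict mapping a part label G_i to its vertex set V(G_i)."""
    shared = set()                     # V_2: vertices common to two parts
    for (a, Va), (b, Vb) in combinations(parts.items(), 2):
        common = Va & Vb
        assert len(common) <= 1        # a partition shares at most one vertex
        shared |= common
    edges = {(label, v) for label, V in parts.items() for v in V & shared}
    return shared, edges               # (V_2, E); V_1 is parts.keys()

# Two parts glued at vertex 3 yield the path G1 - 3 - G2 in G(F):
V2, E = auxiliary_graph({"G1": {1, 2, 3}, "G2": {3, 4, 5}})
assert V2 == {3} and E == {("G1", 3), ("G2", 3)}
\end{verbatim}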
\begin{lemma} \label{lem:Fr_aux}
For any Frankenstein graph $\mathfrak{F}$, $G(\mathfrak{F})$ is acyclic, and hence a forest.
\end{lemma}
\begin{proof}
Assume to the contrary that, after relabeling, $v_1 - \mathsf{G}_1 - v_2 - \mathsf{G}_2 - \dotsb - v_k - \mathsf{G}_k - v_1$ forms a cycle in $G(\mathfrak{F})$. From \Cref{obs:bflip} (its analogues for rainbow cycles and rainbow trees are clear) we deduce that for each $i \in [k]$ there exists a rainbow path $\mathsf{P}_i$ with terminals $v_i, v_{i+1}$ in $\mathsf{G}_i$ (where $v_{k+1} = v_1$). Since different parts in $\mathfrak{F}$ are edge-disjoint and color-disjoint, the union $\mathsf{Q} \eqdef \mathsf{P}_1 \cup \dotsb \cup \mathsf{P}_k$ is a rainbow circuit, and so there exists a rainbow cycle $\widetilde{\mathsf{C}} \subseteq \mathsf{Q}$. Since $\widetilde{\mathsf{C}}$ cannot be a subgraph of any single part of $\mathfrak{F}$, it takes edges from at least two consecutive ones among $\mathsf{G}_1, \dotsc, \mathsf{G}_k$, say $\mathsf{G}_j$ and $\mathsf{G}_{j+1}$ ($j \in [k], \, \mathsf{G}_{k+1} = \mathsf{G}_1$). It follows from \ref{Fr_tree} that either $\mathsf{G}_j$ or $\mathsf{G}_{j+1}$ (say $\mathsf{G}_j$) is not a rainbow tree. However, \Cref{lem:ectest} then implies the existence of a rainbow even cycle subgraph in $\widetilde{\mathsf{C}} \cup \mathsf{G}_j$, which contradicts \ref{Fr_ecfree}.
\end{proof}
\Cref{lem:ectest,lem:Fr_aux} will be applied not only in the proof of \Cref{thm:fsteps}, but also in many places later on.
\begin{proof}[Proof of \Cref{thm:fsteps}]
We induct on $m$. The theorem is vacuously true when $m = 1$. Suppose $m \ge 2$ and take a leaf vertex $w$ of $G(\mathfrak{F})$. (If no leaf exists, then $E = \varnothing$ and any permutation $\sigma$ satisfies the theorem.) It is easily seen from the definition that no leaf exists in $V_2$, and so we assume without loss that $w = \mathsf{G}_m$. Since the partition $\{\mathsf{G}_1, \dotsc, \mathsf{G}_{m-1}\}$ also defines a Frankenstein graph, the inductive hypothesis on $m-1$ yields a permutation $\sigma$ on $[m-1]$ satisfying $|V(\mathfrak{F}_i) \cap V(\mathsf{G}_{\sigma(i+1)})| \le 1$ for all $i \in [m-2]$. Since $\mathsf{G}_m$ is a leaf of $G(\mathfrak{F})$, it shares at most one vertex with the rest of $\mathfrak{F}$, that is, $|V(\mathfrak{F}_{m-1}) \cap V(\mathsf{G}_m)| \le 1$. So, by extending $\sigma$ with $\sigma(m) \eqdef m$, the inductive proof is complete.
\end{proof}
The following corollaries of \Cref{thm:fsteps} will be useful in the proof of \Cref{thm:rainbow_ec}.
\begin{corollary} \label{coro:fgraph}
If $\mathfrak{F}$ is a Frankenstein graph, then $|\chi(\mathfrak{F})| \leq \frac{6}{5}(|V(\mathfrak{F})|-1)$.
\end{corollary}
\begin{corollary} \label{coro:cycleinpart}
If $\mathfrak{F}$ is a Frankenstein graph with $\mathcal{P}(\mathfrak{F}) = \{\mathsf{G}_1, \dotsc, \mathsf{G}_m\}$ and $\mathsf{C} \subseteq \mathfrak{F}$ is a cycle, then there exists $i \in [m]$ such that $\mathsf{C} \subseteq \mathsf{G}_i$.
\end{corollary}
\begin{proof}
Write $V \eqdef V(\mathfrak{F})$. We prove \Cref{coro:fgraph,coro:cycleinpart} by induction on $m$.
If $m=1$, then \Cref{coro:cycleinpart} is trivially true. To see that \Cref{coro:fgraph} holds, it suffices to check the cases in which $\mathfrak{F}$ is a long rainbow odd cycle, a bad piece, or a rainbow tree. Indeed, we have
\[
\begin{cases}
|\chi(\mathfrak{F})| = |V| < \frac{6}{5}(|V|-1) \quad &\text{when $\mathfrak{F}$ is a long rainbow odd cycle (hence $|V| \ge 7$)}, \\
|\chi(\mathfrak{F})| \le \frac{6}{5}(|V|-1) \quad &\text{when $\mathfrak{F}$ is a bad piece (by \Cref{obs:bestimate})}, \\
|\chi(\mathfrak{F})| = |V|-1 < \frac{6}{5}(|V|-1) \quad &\text{when $\mathfrak{F}$ is a rainbow tree}.
\end{cases}
\]
Now suppose $m \ge 2$. Assume without loss that the identity permutation $\sigma(i) \eqdef i$ satisfies \Cref{thm:fsteps}. Then
\[
|\chi(\mathfrak{F})| = |\chi(\mathfrak{F}_{m-1} \cup \mathsf{G}_m)| = |\chi(\mathfrak{F}_{m-1})|+|\chi(\mathsf{G}_m)| \le \frac{6}{5}(|V(\mathfrak{F}_{m-1})|+|V(\mathsf{G}_m)|-2) \le \frac{6}{5}(|V|-1),
\]
by applying the inductive hypothesis to $\mathfrak{F}_{m-1}$ and noticing that $|V(\mathfrak{F}_{m-1}) \cap V(\mathsf{G}_m)| \le 1$. Also, we have $\mathsf{C} \subseteq \mathfrak{F}_{m-1}$ or $\mathsf{C} \subseteq \mathsf{G}_m$ because the shared vertex of $\mathfrak{F}_{m-1}$ and $\mathsf{G}_m$, if it exists, is a cut vertex of $\mathfrak{F}$. By applying the inductive hypothesis to $\mathfrak{F}_{m-1}$ (or taking $i = m$ directly), we can find some $i \in [m]$ such that $\mathsf{C} \subseteq \mathsf{G}_i$.
\end{proof}
To prove \Cref{thm:rainbow_ec}, we need another technical result on Frankenstein graphs.
\begin{proposition} \label{prop:fpath}
Suppose $\mathfrak{F}$ is a Frankenstein graph and $\mathsf{P} \subseteq \mathfrak{F}$ is a path with terminals $s$ and $t$. Then there exists a rainbow path $\mathsf{P}' \subseteq \mathfrak{F}$ with the same terminals $s$ and $t$.
\end{proposition}
\begin{proof}
The existence of $\mathsf{P}$ implies that $s, t$ are in the same connected component of $\mathfrak{F}$. We thus assume without loss that $\mathfrak{F}$ is connected. Then there exists a path in the uncolored graph $G(\mathfrak{F})$ such that
\[
s \in \mathsf{G}_{i_1} - v_1 - \mathsf{G}_{i_2} - \dotsb - v_{\ell-1} -\mathsf{G}_{i_{\ell}} \ni t
\]
where $\ell \ge 1$. It then follows from \Cref{obs:bflip} (and its clear analogues for rainbow cycles and rainbow trees) that, by concatenating rainbow paths inside the consecutive parts, there exists a rainbow trail $\mathsf{Q}$ joining $s$ and $t$. Any path $\mathsf{P}' \subseteq \mathsf{Q}$ with terminals $s$ and $t$ then satisfies \Cref{prop:fpath}.
\end{proof}
For a Frankenstein graph $\mathfrak{F}$ given by the partition $\mathcal{P}(\mathfrak{F}) = \{\mathsf{C}_1, \dotsc, \mathsf{C}_c, \mathsf{B}_1, \dotsc, \mathsf{B}_b, \mathsf{T}_1, \dotsc, \mathsf{T}_t\}$, we associate with it the counting parameters $c(\mathfrak{F}) \eqdef c$ and $b(\mathfrak{F}) \eqdef b$. We also need a depth parameter.
For any tree $\mathsf{T}$ with $V(\mathsf{T}) \subset \N_+$, let its \emph{root} be $r \eqdef \min V(\mathsf{T})$. For any vertex $v \in V(\mathsf{T})$, define its \emph{relative depth} in $\mathsf{T}$ as $\depth_{\mathsf{T}}(v) \eqdef \dist_{\mathsf{T}}(r, v)$, which is the length of the unique path with terminals $r$ and $v$. We then define for any forest $\mathsf{F}$ with $V(\mathsf{F}) \subset \N_+$ its \emph{total depth} as
\[
\Depth(\mathsf{F}) \eqdef \sum_{i=1}^t \sum_{v \in V(\mathsf{T}_i)} \depth_{\mathsf{T}_i}(v)
\]
where $\mathsf{T}_1, \dotsc, \mathsf{T}_t$ are the connected components of $\mathsf{F}$. For any Frankenstein graph $\mathfrak{F}$ with $V(\mathfrak{F}) \subset \N_+$, we refer to its \emph{total depth} as the total depth of its forest part, i.e.~$\Depth(\mathfrak{F}) \eqdef \Depth(\mathsf{T}_1 \cup \dotsb \cup \mathsf{T}_t)$.
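For concreteness, here is a minimal Python sketch of these depth parameters. It is our illustration only; the function names and the edge-list input format are assumptions.
\begin{verbatim}
from collections import defaultdict, deque

def relative_depths(tree_edges):
    """{v: depth_T(v)} for a tree given by its edge list; root = min vertex.
    (Trees here are assumed to have at least one edge; an isolated vertex
    would contribute zero total depth anyway.)"""
    adj = defaultdict(list)
    for u, v in tree_edges:
        adj[u].append(v)
        adj[v].append(u)
    root = min(adj)                    # r = min V(T)
    depth, queue = {root: 0}, deque([root])
    while queue:                       # BFS computes dist_T(r, v) in a tree
        x = queue.popleft()
        for y in adj[x]:
            if y not in depth:
                depth[y] = depth[x] + 1
                queue.append(y)
    return depth

def total_depth(components):
    """Depth(F): sum the relative depths over every component tree."""
    return sum(sum(relative_depths(t).values()) for t in components)

# The path 1 - 2 - 3 has depths 0, 1, 2, so its total depth is 3.
assert total_depth([[(1, 2), (2, 3)]]) == 3
\end{verbatim}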
Later in practice, we shall often construct a Frankenstein graph by a ``partition''
\[
\mathcal{P}(\mathfrak{F}) = \{\mathsf{C}_1, \dotsc, \mathsf{C}_c, \mathsf{B}_1, \dotsc, \mathsf{B}_b, \mathsf{F}\}
\]
where $\mathsf{C}$'s are long rainbow odd cycles, $\mathsf{B}$'s are bad pieces, $\mathsf{F} = \mathsf{T}_1 \cup \dotsb \cup \mathsf{T}_t$ is a vertex-disjoint and color-disjoint union of rainbow trees, such that $\chi(\mathsf{G}_i) \cap \chi(\mathsf{G}_j) = \varnothing$ for any distinct $\mathsf{G}_i, \mathsf{G}_j \in \mathcal{P}(\mathfrak{F})$. Indeed, this $\mathcal{P}(\mathfrak{F})$ is formally not a partition, since $\mathsf{F}$ may share more than one vertex with some $\mathsf{C}_i$ or $\mathsf{B}_j$. However, \ref{Fr_tree} implies that, up to a relabeling of the rainbow tree parts of $\mathfrak{F}$, there is no difference between listing the trees $\mathsf{T}_1, \dotsc, \mathsf{T}_t$ and listing the forest $\mathsf{F}$.
\section{Proof of \texorpdfstring{\Cref{thm:rainbow_ec}}{Theorem~\ref{thm:rainbow_ec}}} \label{sec:proof_ec}
We prove \Cref{thm:rainbow_ec} by contradiction. Suppose $\mathcal{D} = (\mathsf{D}_1, \dotsc, \mathsf{D}_m)$ is a family of $m \eqdef \left\lfloor\frac{6(n-1)}{5}\right\rfloor+1 > \frac{6(n-1)}{5}$ even cycles on the ambient vertex set $[n]$ without any rainbow even cycle subgraph.
\smallskip
Let $\mathfrak{F}_*$ be a Frankenstein subgraph of the family $\mathcal{D}$ satisfying the following maximality conditions:
\begin{enumerate}[label=(M\arabic*), ref=(M\arabic*)]
\item\label{max:cycle} The number of long rainbow odd cycles $c(\mathfrak{F}_*)$ is maximized.
\item\label{max:bpiece} The number of bad pieces $b(\mathfrak{F}_*)$ is maximized under \ref{max:cycle}.
\item\label{max:edges} The number of edges $|\mathfrak{F}_*|$ is maximized under \ref{max:bpiece}.
\item\label{min:depth} The total depth $\Depth(\mathfrak{F}_*)$ is minimized under \ref{max:edges}.
\end{enumerate}
Suppose the partition of $\mathfrak{F}_*$ is
\[
\mathcal{P}(\mathfrak{F}_*) = \{\mathsf{C}_1, \dotsc, \mathsf{C}_c, \mathsf{B}_1, \dotsc, \mathsf{B}_b, \mathsf{T}_1, \dotsc, \mathsf{T}_t\} \quad \text{with} \quad \mathsf{F} \eqdef \mathsf{T}_1 \cup \dotsb \cup \mathsf{T}_t,
\]
where $\mathsf{C}$'s are long rainbow odd cycles, $\mathsf{B}$'s are bad pieces, $\mathsf{T}$'s are vertex-disjoint rainbow trees.
\subsection{Outer edges and outer cycles}
From \Cref{coro:fgraph} we deduce that $|\chi(\mathfrak{F}_*)| \le \frac{6}{5}(n-1) < |\mathcal{D}|$. Assume without loss that the even cycle $\mathsf{D}_{\lambda}$ is of color $\lambda$. Then there exists a nonempty color set $\Lambda \subset [m]$ such that $\chi(\mathfrak{F}_*) \cap \Lambda = \varnothing$. In other words, every edge of the multigraph $\mathcal{D}_{\Lambda} \eqdef \bigcup_{\lambda \in \Lambda} \mathsf{D}_{\lambda}$ is absent from $\mathfrak{F}_*$.
We call $\sf$ in $\mathcal{D}_{\Lambda}$ an \emph{outer edge} if no coincident edge of $\sf$ is in $\mathfrak{F}_*$. We call a rainbow $\{3, 5\}$-cycle containing the outer edge $\sf$ in the graph $\mathfrak{F}_* + \sf$ an \emph{outer cycle} of $\sf$. Hereafter $\mathsf{G}+\mathsf{e}$ is the graph generated by adding $\mathsf{e}$ to $\mathsf{G}$ (i.e.~$\mathsf{G} + \mathsf{e} \eqdef \mathsf{G} \cup \{\mathsf{e}\}$). Similarly, $\mathsf{G} - \mathsf{e}$ (assuming $\mathsf{e} \in \mathsf{G}$) refers to the graph obtained by deleting $\mathsf{e}$ from $\mathsf{G}$. Recall that we do not distinguish a graph from its edge set.
\smallskip
The next propositions are devoted to the existence of outer edges and outer cycles.
\begin{proposition} \label{prop:outeredge}
For any $\lambda \in \Lambda$, an outer edge exists in $\mathsf{D}_{\lambda}$.
\end{proposition}
\begin{proof}
Assume for the sake of contradiction that $\mathsf{D}_{\lambda}$ is covered by $\mathfrak{F}_*$. In other words, each $\mathsf{e} \in \mathsf{D}_{\lambda}$ has exactly one coincident edge $\mathsf{e}^* \in \mathfrak{F}_*$. Define $\mathsf{D}_{\lambda}^* \eqdef \{\mathsf{e}^* : \mathsf{e} \in \mathsf{D}_{\lambda}\} \subseteq \mathfrak{F}_*$. Since long rainbow odd cycles and rainbow trees contain no even cycle, it follows from \Cref{coro:cycleinpart} that $\mathsf{D}_{\lambda}^* \subseteq \mathsf{B}_j$ is contained in some bad piece, and so $|\mathsf{D}_{\lambda}^*| - |\chi(\mathsf{D}_{\lambda}^*)| \in \{0, 1\}$. Since no rainbow even cycle exists in $\mathcal{D}$ (hence $\mathfrak{F}_*$), we obtain $|\mathsf{D}_{\lambda}^*| - |\chi(\mathsf{D}_{\lambda}^*)| = 1$. So, there exists a unique pair of distinct edges $\mathsf{e}_1^*, \mathsf{e}_2^*$ in $\mathsf{D}_{\lambda}^*$ such that $\chi(\mathsf{e}_1^*) = \chi(\mathsf{e}_2^*)$. Thus, $\mathsf{D}_{\lambda}^* - \mathsf{e}_1^* + \mathsf{e}_1$, where $\mathsf{e}_1 \in \mathsf{D}_{\lambda}$ is the edge coincident to $\mathsf{e}_1^*$, is a rainbow even cycle in $\mathcal{D}$, a contradiction.
\end{proof}
\begin{proposition} \label{prop:outercycle}
For any outer edge $\sf$, an outer cycle of $\sf$ exists.
\end{proposition}
\begin{proof}
Let $V(\sf) \eqdef \{u, v\}$. Observe that $u, v$ are in the same connected component of $\mathfrak{F}_*$, for otherwise
\[
\mathcal{P}(\mathfrak{F}_* + \sf) \eqdef \{ \mathsf{C}_1, \dotsc, \mathsf{C}_c, \mathsf{B}_1, \dotsc, \mathsf{B}_b, \mathsf{F}+\sf \}
\]
gives another Frankenstein subgraph of $\mathcal{D}$ with one more edge than $\mathfrak{F}_*$, which contradicts \ref{max:edges}. It follows from \Cref{prop:fpath} that $\sf$ completes a rainbow (hence odd) cycle $\mathsf{C}_{\sf}$ in $\mathsf{F}+\sf$.
It then suffices to show that $\mathsf{C}_{\sf}$ is not long. Assume to the contrary that $\mathsf{C}_{\sf}$ is long. Since $\sf \notin \mathfrak{F}_*$, from \Cref{lem:ectest} we deduce that $|V(\mathsf{C}_{\sf}) \cap V(\mathsf{C}_i)| \le 1$ for each $i \in [c]$. So, $\mathcal{P}(\mathfrak{F}_{+}) \eqdef \{\mathsf{C}_1, \dotsc, \mathsf{C}_c, \mathsf{C}_{\sf}\}$ presents another Frankenstein subgraph of $\mathcal{D}$ with $c(\mathfrak{F}_{+}) > c(\mathfrak{F}_*)$, which contradicts \ref{max:cycle}.
\end{proof}
For any tree $\mathsf{T}$ with $v \in V(\mathsf{T}) \subseteq [n]$, we define
\[
\child_{\mathsf{T}}(v) \eqdef \bigl\{ w \in V(\mathsf{T}) : \text{$(vw,\alpha) \in \mathsf{T}$ for some $\alpha$ and $\depth_{\mathsf{T}}(w) = \depth_{\mathsf{T}}(v)+1$} \bigr\}.
\]
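In the same illustrative notation as the sketch above, $\child_{\mathsf{T}}(v)$ can be read off from the relative depths; the function name is again ours.
\begin{verbatim}
def children(tree_edges, v):
    """child_T(v): neighbors of v whose relative depth is depth_T(v) + 1."""
    depth = relative_depths(tree_edges)
    nbrs = {y for x, y in tree_edges if x == v} | \
           {x for x, y in tree_edges if y == v}
    return {w for w in nbrs if depth[w] == depth[v] + 1}

# On the path 1 - 2 - 3 (rooted at 1), vertex 2 has the single child 3.
assert children([(1, 2), (2, 3)], 2) == {3}
\end{verbatim}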
The following propositions characterize the behavior of outer $3$-cycles.
\begin{proposition} \label{prop:outerthree}
Let $\sf$ be an outer edge with $V(\sf) = \{u, v\}$. If $\mathsf{C}$ is an outer $3$-cycle of $\sf$ on vertices $u, v, w$, then there exists $k \in [t]$ such that $u, v, w \in V(\mathsf{T}_k)$ and $u, v \in \child_{\mathsf{T}_k}(w)$.
\end{proposition}
\begin{proof}
Since $\sf \notin \mathfrak{F}_*$, \Cref{lem:ectest} implies that $\mathsf{C}$ shares at most one vertex with each long rainbow odd cycle and each bad piece of $\mathcal{P}(\mathfrak{F}_*)$. Hence the two edges of $\mathsf{C} - \sf$ lie in rainbow tree parts; as the trees are vertex-disjoint and both edges contain $w$, we obtain $u, v, w \in V(\mathsf{T}_k)$ for some $k$.
We then prove that $u, v \in \child_{\mathsf{T}_k}(w)$. Suppose $\mathsf{e}_1 \eqdef (uw, \alpha)$ and $\mathsf{e}_2 \eqdef (vw, \beta)$ are the edges in $\mathsf{T}_k$. The existence of $\mathsf{e}_1, \mathsf{e}_2$ tells us that $\abs{\depth_{\mathsf{T}_k}(u) - \depth_{\mathsf{T}_k}(v)}$ is either $0$ or $2$; in the former case both $u$ and $v$ are children of $w$, since $w$ has at most one neighbor of smaller depth. If $\depth_{\mathsf{T}_k}(u) \neq \depth_{\mathsf{T}_k}(v)$, then we assume $\depth_{\mathsf{T}_k}(u) = \depth_{\mathsf{T}_k}(v)+2$ without loss. Note that $\mathsf{T}'_k \eqdef \mathsf{T}_k+\sf-\mathsf{e}_1$ is another tree with $\Depth(\mathsf{T}'_k) < \Depth(\mathsf{T}_k)$, because $\depth_{\mathsf{T}_k'}(u) < \depth_{\mathsf{T}_k}(u)$ and $\depth_{\mathsf{T}'_k}(x) \le \depth_{\mathsf{T}_k}(x)$ for all $x \in V(\mathsf{T}_k) = V(\mathsf{T}'_k)$. Then the partition
\[
\mathcal{P}(\mathfrak{F}') \eqdef \mathcal{P}(\mathfrak{F}_*+\sf-\mathsf{e}_1) = \{ \mathsf{C}_1, \dotsc, \mathsf{C}_c, \mathsf{B}_1, \dotsc, \mathsf{B}_b, \mathsf{T}_1, \dotsc, \mathsf{T}_k', \dotsc, \mathsf{T}_t \}
\]
gives another Frankenstein subgraph of $\mathcal{D}$ with $\Depth(\mathfrak{F}') < \Depth(\mathfrak{F}_*)$,
which contradicts \ref{min:depth}.
\end{proof}
\begin{proposition} \label{prop:onlyouterthree}
Suppose no outer $5$-cycle exists. If $\sf = (uv, \alpha)$ is an outer edge with outer cycle $\mathsf{C}$ on vertices $u, v, w \in V(\mathsf{T}_k)$ (by \Cref{prop:outerthree}), then $\mathsf{D}_{\alpha}$, the even cycle of color $\alpha$ from $\mathcal{D}$ containing $\sf$, satisfies $V(\mathsf{D}_{\alpha}) \subseteq \{w\} \cup \child_{\mathsf{T}_k}(w)$. (See \hyperlink{figsix}{Figure~6} for an illustration.)
\end{proposition}
\begin{center}
\begin{tikzpicture}
\draw [fill=black] (-2,1) circle (1pt);
\draw [fill=black] (-4,0) circle (1pt);
\draw [fill=black] (0,0) circle (1pt);
\draw [fill=black] (-5,-1) circle (1pt);
\draw [fill=black] (-4,-1) circle (1pt);
\draw [fill=black] (-3,-1) circle (1pt);
\draw [fill=black] (-1.5,-1) circle (1pt);
\draw [fill=black] (-0.5,-1) circle (1pt);
\draw [fill=black] (0.5,-1) circle (1pt);
\draw [fill=black] (1.5,-1) circle (1pt);
\draw [fill=black] (-5.4,-2) circle (1pt);
\draw [fill=black] (-5,-2) circle (1pt);
\draw [fill=black] (-4.6,-2) circle (1pt);
\draw [fill=black] (-3.2,-2) circle (1pt);
\draw [fill=black] (-2.8,-2) circle (1pt);
\draw [fill=black] (-1.7,-2) circle (1pt);
\draw [fill=black] (-1.3,-2) circle (1pt);
\draw [fill=black] (0.1,-2) circle (1pt);
\draw [fill=black] (0.5,-2) circle (1pt);
\draw [fill=black] (0.9,-2) circle (1pt);
\draw [fill=black] (1.5,-2) circle (1pt);
\draw (-2,1) -- (-4,0);
\draw (-2,1) -- (0,0);
\draw (-4,0) -- (-5,-1);
\draw (-4,0) -- (-4,-1);
\draw (-4,0) -- (-3,-1);
\draw (0,0) -- (-1.5,-1);
\draw (0,0) -- (-0.5,-1);
\draw (0,0) -- (0.5,-1);
\draw (0,0) -- (1.5,-1);
\draw (-5,-1) -- (-5.4,-2);
\draw (-5,-1) -- (-5,-2);
\draw (-5,-1) -- (-4.6,-2);
\draw (-3,-1) -- (-3.2,-2);
\draw (-3,-1) -- (-2.8,-2);
\draw (-1.5,-1) -- (-1.7,-2);
\draw (-1.5,-1) -- (-1.3,-2);
\draw (0.5,-1) -- (0.1,-2);
\draw (0.5,-1) -- (0.5,-2);
\draw (0.5,-1) -- (0.9,-2);
\draw (1.5,-1) -- (1.5,-2);
\draw[color=blue] (-0.5,-1) -- (0.5,-1);
\draw[color=black] (-0.7,-1) node {$u$};
\draw[color=black] (0.7,-1) node {$v$};
\draw[color=black] (0.2,0.1) node {$w$};
\draw[color=blue] (0,-0.8) node {$\sf$};
\draw[dash pattern=on 3pt off 3pt] (-1.7,-1.25) rectangle (1.7,0.25);
\draw[dash pattern=on 3pt off 3pt] (-5.9,-1.5) rectangle (1.9,0.5);
\draw[dash pattern=on 3pt off 3pt] (-5.9,1.2) -- (-5.9,0.5);
\draw[dash pattern=on 3pt off 3pt] (1.9,1.2) -- (1.9,0.5);
\draw[dash pattern=on 3pt off 3pt] (-5.9,-2.2) -- (-5.9,-1.5);
\draw[dash pattern=on 3pt off 3pt] (1.9,-2.2) -- (1.9,-1.5);
\node at (-5.6,0.75) {$A^-$};
\node at (-5.65,0.25) {$A'$};
\node at (-5.6,-1.7) {$A^+$};
\node at (-1.5,0.05) {$A$};
\node at (-2, -2.7) {\textbf{\hypertarget{figsix}{Figure 6:}} $V(\sf) \subseteq \child_{\mathsf{T}_k}(w)$ implies $V(\mathsf{D}_{\alpha}) \subseteq A$. };
\end{tikzpicture}
\end{center}
\begin{proof}
We first show that $V(\mathsf{D}_{\alpha}) \subseteq V(\mathsf{T}_k)$. Define $\tau \colon \mathsf{D}_{\alpha} \to \mathcal{P}(\mathfrak{F}_*)$ as follows: For any edge $\mathsf{e} \in \mathsf{D}_{\alpha}$,
\begin{itemize}
\item if $\mathsf{e}$ is an outer edge, then $V(\mathsf{e}) \subseteq V(\mathsf{T}_{\ell})$ for some $\ell$ (by \Cref{prop:outerthree}), and we set $\tau(\mathsf{e}) \eqdef \mathsf{T}_{\ell}$;
\item if $\mathsf{e}$ is coincident to $\mathsf{e}' \in \mathfrak{F}_{*}$, then we set $\tau(\mathsf{e}) \eqdef \mathsf{X}$ where $\mathsf{X}$ is the part of $\mathfrak{F}_{*}$ that contains $\mathsf{e}'$.
\end{itemize}
By applying $\tau$ on $\mathsf{D}_{\alpha}$, we locate a circuit $Q \subseteq G(\mathfrak{F}_*)$ as follows:
\begin{enumerate}
\item[(1)] Put the edges of $\mathsf{D}_{\alpha}$ on a circle $\mathcal{O}$ in order. Replace $\mathsf{e}$ by $\tau(\mathsf{e})$ for each $\mathsf{e} \in \mathsf{D}_{\alpha}$.
\item[(2)] If two consecutive objects on $\mathcal{O}$ are the same, then remove one of them. Repeat.
\item[(3)] If $\mathsf{G}_i, \mathsf{G}_j \in \mathcal{P}(\mathfrak{F}_*)$ are adjacent on $\mathcal{O}$, then we plug in $v_{ij} \in V(\mathsf{G}_i) \cap V(\mathsf{G}_j)$ between them.
\end{enumerate}
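Step (2) is the only mechanical one; as a minimal illustration of our own (with hypothetical string labels standing for the parts), its cyclic run-compression can be sketched in Python as follows.
\begin{verbatim}
def compress_cyclic(labels):
    """One pass of step (2): drop each object equal to its predecessor on
    the circle (index -1 wraps around), which collapses all cyclic runs."""
    out = [x for i, x in enumerate(labels) if x != labels[i - 1]]
    return out if out else labels[:1]  # a constant circle -> one object

# The running example below compresses to G3, G2, G1, G3, G4, G5:
assert compress_cyclic(["G3", "G3", "G2", "G2", "G1", "G3", "G4", "G5"]) \
       == ["G3", "G2", "G1", "G3", "G4", "G5"]
\end{verbatim}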
The resulting arrangement on $\mathcal{O}$ forms a circuit $Q \subseteq G(\mathfrak{F}_*)$, since the cycle $\mathsf{D}_{\alpha}$ passes through each $v_{ij}$ at most once, and hence the $v_{ij}$'s are pairwise distinct. For instance, suppose $\mathsf{D}_{\alpha}$ consists of $\mathsf{e}_1, \dotsc, \mathsf{e}_8$ in order such that
\[
\tau(\mathsf{e}_1, \mathsf{e}_2, \mathsf{e}_3, \mathsf{e}_4, \mathsf{e}_5, \mathsf{e}_6, \mathsf{e}_7, \mathsf{e}_8) = (\mathsf{G}_3, \mathsf{G}_3, \mathsf{G}_2, \mathsf{G}_2, \mathsf{G}_1, \mathsf{G}_3, \mathsf{G}_4, \mathsf{G}_5).
\]
Then steps (1) through (3) generate
\[
\begin{tikzpicture}[scale = 0.5]
\clip (-2.5,-2.7) rectangle (2.5,2.7);
\draw (0,0) circle (2);
\node at (0,0) {$\mathcal{O}$};
\node at (0,2) {$\mathsf{G}_3$};
\node at (-1,1.732) {$v_{23}$};
\node at (-1.732,1) {$\mathsf{G}_2$};
\node at (-2,0) {$v_{12}$};
\node at (-1.732,-1) {$\mathsf{G}_1$};
\node at (-1,-1.732) {$v_{13}$};
\node at (0,-2) {$\mathsf{G}_3$};
\node at (1,-1.732) {$v_{34}$};
\node at (1.732,-1) {$\mathsf{G}_4$};
\node at (2,0) {$v_{45}$};
\node at (1.732,1) {$\mathsf{G}_5$};
\node at (1,1.732) {$v_{35}$};
\end{tikzpicture}
\quad
\begin{tikzpicture}[scale = 0.5]
\clip(-1.2,-2.7) rectangle (1.2,2.7);
\draw[line width = 0.8pt][-stealth] (-1,0) -- (1,0);
\end{tikzpicture}
\quad
\begin{tikzpicture}[scale = 0.5]
\draw (0,0) -- (-1,1.732) -- (-3,1.732) -- (-4,0) -- (-3,-1.732) -- (-1,-1.732) -- (0,0) -- (1,1.732) -- (3,1.732) -- (4,0) -- (3,-1.732) -- (1,-1.732) -- (0,0);
\clip (-6,-2.7) rectangle (5.2,2.7);
\foreach \x in {-3,-1,1,3}
\foreach \y in {-1.732,1.732}
\draw[fill=black] (\x,\y) circle (0.1);
\foreach \x in {-4,0,4}
\draw[fill=black] (\x,0) circle (0.1);
\node at (0.6,0) {$\mathsf{G}_3$};
\node at (-1,-2.2) {$v_{13}$};
\node at (-1,2.2) {$v_{23}$};
\node at (1,-2.2) {$v_{34}$};
\node at (1,2.2) {$v_{35}$};
\node at (-3.3,0) {$v_{12}$};
\node at (4.7,0) {$v_{45}$};
\node at (-3,2.2) {$\mathsf{G}_2$};
\node at (-3,-2.2) {$\mathsf{G}_1$};
\node at (3,2.2) {$\mathsf{G}_5$};
\node at (3,-2.2) {$\mathsf{G}_4$};
\node at (-5.2,0) {$Q =$};
\end{tikzpicture}
\]
\vspace{-0.75em}
\noindent However, \Cref{lem:Fr_aux} asserts that $G(\mathfrak{F}_*)$ is acyclic, and so $Q$ has to be degenerate; that is, $\tau$ is constant on $\mathsf{D}_{\alpha}$. Since $\tau(\sf) = \mathsf{T}_k$ by \Cref{prop:outerthree}, every edge of $\mathsf{D}_{\alpha}$ has both endpoints in $V(\mathsf{T}_k)$. Thus, $V(\mathsf{D}_{\alpha}) \subseteq V(\mathsf{T}_k)$.
We use the abbreviations $V \eqdef V(\mathsf{T}_k)$ and $d(t) \eqdef \depth_{\mathsf{T}_k}(t)$. Partition $V$ into $A \eqdef \{w\} \cup \child_{\mathsf{T}_k}(w)$, $A^+ \eqdef \{t \in V: d(t) > d(v)\}$, $A^- \eqdef \{t \in V: d(t) < d(w)\}$, and $A' \eqdef V \setminus (A \cup A^+ \cup A^-)$. Let $T_k$ be the uncolored copy of $\mathsf{T}_k$. For any $z \in V(T_k)$ and any distinct $x, y \in \child_{T_k}(z)$, we add the edge $xy$ to $T_k$; the resulting graph is denoted $\overline{T}_k$. Due to the absence of outer $5$-cycles, from \Cref{prop:outerthree} we deduce that $D_{\alpha}$, the uncolored copy of $\mathsf{D}_{\alpha}$, is a subgraph of $\overline{T}_k$. We identify $V(T_k)$, $V(\overline{T}_k)$, and $V$.
Observe that any subpath of $T_k$ with one terminal in $A$ and the other in $A'$ must go through $A^+$ or $A^-$. It then suffices to show that $V(D_{\alpha})$ and $A^+ \cup A^-$ are disjoint. This boils down to excluding the situation that $d(z_+) \ge d(z_-)+2$ for some $z_+, z_- \in V(D_{\alpha})$. If such $z_+, z_-$ exist, then $D_{\alpha}$ consists of two subpaths $P_1, P_2$ with terminals $z_+$ and $z_-$. Let $z_i$ be the vertex on $P_i$ with $d(z_i) = d(z_+)-1$ that is nearest to $z_+$. The crucial observation is that $z_i$ is the parent of $z_+$, that is, the unique vertex in $V$ with $z_+ \in \child_{T_k}(z_i)$. Indeed, this follows from the fact that $z_i$ is a cut vertex of $\overline{T}_k$ separating $z_+$ from all vertices of smaller depth. However, the observation implies that $z_1 = z_2$, which is absurd. We conclude that $V(\mathsf{D}_{\alpha}) = V(D_{\alpha}) \subseteq A$.
\end{proof}
\subsection{Finishing the proof}
\begin{lemma} \label{lem:f'_5cycle}
There exists a Frankenstein subgraph $\mathfrak{F}_0$ of $\mathcal{D}$ whose partition is given by
\[
\mathcal{P}(\mathfrak{F}_0) = \{\mathsf{C}_1, \dotsc, \mathsf{C}_c, \mathsf{B}_1, \dotsc, \mathsf{B}_b, \mathsf{F}_0\} \quad \text{with} \quad |\mathsf{F}_0| = |\mathsf{F}|,
\]
and an edge $\sf_0$ in $\mathcal{D}$ such that
$\chi(\sf_0) \notin \chi(\mathfrak{F}_0)$ and $\sf_0$ completes a rainbow $5$-cycle in $\mathfrak{F}_0+\sf_0$.
\end{lemma}
\begin{proof}
According to \Cref{prop:outeredge,prop:outercycle}, we assume that $\sf$ is an outer edge and $\mathsf{C}_{\sf}$ is an outer cycle of $\sf$. If $\mathsf{C}_{\sf}$ happens to be a $5$-cycle, then $(\mathfrak{F}_0, \sf_0) \eqdef (\mathfrak{F}_*, \sf)$ with $\mathsf{F}_0 \eqdef \mathsf{F}$ satisfies \Cref{lem:f'_5cycle}.
We thus assume that no outer $5$-cycle exists. Suppose $\sf \eqdef (uv, \alpha)$, and let $\mathsf{D}_{\alpha}$ be the monochromatic even cycle from $\mathcal{D}$ that contains $\sf$. Assume $V(\mathsf{C}_{\sf}) \eqdef \{u, v, w\}$. It follows from \Cref{prop:outerthree} that $u, v, w$ lie in the same rainbow tree $\mathsf{T}_k \in \mathcal{P}(\mathfrak{F}_*)$ and $u, v \in \child_{\mathsf{T}_k}(w)$. From \Cref{prop:onlyouterthree} we deduce that $V(\mathsf{D}_{\alpha}) \subseteq \{w\} \cup \child_{\mathsf{T}_k}(w)$. Since $\mathsf{D}_{\alpha}$ consists of at least $4$ edges, at least one of the two edges adjacent to $\sf$ on $\mathsf{D}_{\alpha}$ avoids the vertex $w$. Assume without loss that $\sf' \eqdef (uv', \alpha)$ is such an edge, and hence $v' \in \child_{\mathsf{T}_k}(w)$. In particular, $v$ and $v'$ play symmetric roles, despite the asymmetry of our notation.
\begin{center}
\begin{tikzpicture}[scale = 1.8]
\draw (-2,0.025) -- (0,1.025) -- (2,0.025);
\draw (-1,0) -- (0,1);
\draw (1,0) -- (0,1);
\draw[color=green] (0,0) -- (-1,0) -- (-2,0) -- (0,1) -- (2,0) -- (1,0) -- (0,0);
\draw[color=blue] (0,0) -- (0,1) -- (-1,1.5) -- (-2,1.5) -- (-3,1) -- (-3,0) -- (-2,-0.5) -- (-1,-0.5) -- (0,0);
\draw[color=red] (-3,1) -- (0,1);
\draw[color=red] (-3,0) arc[start angle=210,end angle=330,radius=1.732cm];
\draw[fill=black] (-2,0) circle (1.5pt);
\draw[fill=black] (-1,0) circle (1.5pt);
\draw[fill=black] (0,0) circle (1.5pt);
\draw[fill=black] (1,0) circle (1.5pt);
\draw[fill=black] (2,0) circle (1.5pt);
\draw[fill=black] (0,1) circle (1.5pt);
\draw[fill=black] (-1,1.5) circle (1.5pt);
\draw[fill=black] (-2,1.5) circle (1.5pt);
\draw[fill=black] (-3,1) circle (1.5pt);
\draw[fill=black] (-3,0) circle (1.5pt);
\draw[fill=black] (-2,-0.5) circle (1.5pt);
\draw[fill=black] (-1,-0.5) circle (1.5pt);
\node at (0.27,-0.15) {$x_0 = u$};
\node at (0.45,1.1) {$x_{2q+1} = w$};
\node at (-1,-0.125) {$v'$};
\node at (1,-0.15) {$v$};
\node at (-1,-0.65) {$x_1$};
\node at (-1,1.65) {$x_{2q}$};
\node at (-2,-0.65) {$\cdots$};
\node at (-2,1.65) {$\cdots$};
\node at (-3.2,-0.125) {$x_{\lambda-1}$};
\node at (-3.15,1.125) {$x_{\lambda}$};
\node at (0.5,0.1) {\textcolor{green}{$\sf$}};
\node at (-0.47,0.11) {\textcolor{green}{$\sf'$}};
\node at (1,0.4) {\textcolor{green}{$\mathsf{D}_{\alpha}$}};
\node at (-0.075,0.5) {\textcolor{blue}{$\mathsf{g}$}};
\node at (0.4,0.5) {$\mathsf{k}$};
\node at (-2.925,0.5) {\textcolor{blue}{$\mathsf{h}$}};
\node at (-1.5,1.38) {\textcolor{blue}{$\mathsf{D}_{\beta}$}};
\node at (-1.5,0.9) {$\textcolor{red}{\mathsf{P}_w}$};
\node at (-1.5,-0.75) {$\textcolor{red}{\mathsf{P}_u}$};
\draw[color=violet,dash pattern=on 3pt off 3pt] (-3.6,0.35) -- (-2.4,0.35) -- (-2.4,-0.25) -- (-0.6,-0.25) -- (-0.6,0.35) -- (0.6,0.35) -- (0.6,-0.9);
\node at (-3.4,0.55) {\textcolor{violet}{$\mathsf{S}_w$}};
\node at (-3.4,0.15) {\textcolor{violet}{$\mathsf{S}_u$}};
\node at (-0.8,-1.2) {\textbf{\hypertarget{figseven}{Figure 7:}} An illustration of the proof of \Cref{lem:f'_5cycle}. };
\end{tikzpicture}
\end{center}
Let $\mathsf{g} \eqdef (uw, \beta)$ be the edge of $\mathfrak{F}_*$ on vertices $u$ and $w$. Suppose
\[
\mathsf{D}_{\beta} = \mathsf{g} + (x_0x_1, \beta) + (x_1x_2, \beta) + \dotsb + (x_{2q}x_{2q+1}, \beta) \qquad (x_0 \eqdef u, \, x_{2q+1} \eqdef w, \, q \in \N_+)
\]
is the monochromatic even cycle from $\mathcal{D}$ containing $\mathsf{g}$. From \Cref{lem:Fr_aux} we deduce that there are two connected components $\mathsf{S}_u$ and $\mathsf{S}_w$ in the graph $\mathfrak{F}_* - \mathsf{g}$ such that $u \in V(\mathsf{S}_u)$ and $w \in V(\mathsf{S}_w)$. Define $\lambda$ as the smallest index such that $x_{\lambda} \notin V(\mathsf{S}_u)$ and write $\mathsf{h} \eqdef (x_{\lambda-1}x_{\lambda}, \beta)$. Then $\mathsf{h} \notin \mathfrak{F}_*$.
We claim that $x_{\lambda} \in V(\mathsf{S}_w)$. If not, then $\mathsf{h}$ cannot complete any cycle in $\widehat{\mathsf{F}} \eqdef \mathsf{F} + \sf - \mathsf{g} + \mathsf{h}$, and so $\widehat{\mathsf{F}}$ is a rainbow forest. Observe that the trees in $\widehat{\mathsf{F}}$ containing $\sf$ or $\mathsf{h}$ (possibly one and the same) share at most one vertex with any of $\mathsf{C}_1, \dotsc, \mathsf{C}_c, \mathsf{B}_1, \dotsc, \mathsf{B}_b$. This implies that the partition
\[
\mathcal{P}(\mathfrak{F}_* + \sf - \mathsf{g} + \mathsf{h}) \eqdef \{\mathsf{C}_1, \dotsc, \mathsf{C}_c, \mathsf{B}_1, \dotsc, \mathsf{B}_b, \widehat{\mathsf{F}}\}
\]
presents another Frankenstein subgraph of $\mathcal{D}$ on $|\mathfrak{F}_*|+1$ edges, which contradicts \ref{max:edges}.
By \Cref{prop:fpath}, we can find a rainbow path $\mathsf{P}_u \subseteq \mathsf{S}_u$ with terminals $u, x_{\lambda-1}$ and a rainbow path $\mathsf{P}_{w} \subseteq \mathsf{S}_w$ with terminals $w, x_{\lambda}$. Here we allow $\mathsf{P}_u$ to be empty if $x_0 = x_{\lambda-1}$, and allow $\mathsf{P}_w$ to be empty if $x_{2q+1} = x_{\lambda}$. Note that $\mathsf{P}_u = \mathsf{P}_w = \varnothing$ cannot happen, since $|\mathsf{D}_{\beta}| \ge 4$. Assume further that the length of $\mathsf{P}_w$ is minimized, so that $v \notin V(\mathsf{P}_w)$ or $v' \notin V(\mathsf{P}_w)$; say $v \notin V(\mathsf{P}_w)$. Thus, $\widetilde{\mathsf{C}} \eqdef \sf + \mathsf{P}_u +\mathsf{h} + \mathsf{P}_w + \mathsf{k}$, where $\mathsf{k}$ denotes the edge of $\mathsf{T}_k$ with $V(\mathsf{k}) = \{v, w\}$, is a rainbow odd cycle with $|\widetilde{\mathsf{C}}| \ge 5$.
Since $V(\mathsf{T}_k+\sf-\mathsf{g}) = V(\mathsf{T}_k)$, we can define another Frankenstein subgraph $\mathfrak{F}_0 \eqdef \mathfrak{F}_*+\sf-\mathsf{g}$ by
\[
\mathcal{P}(\mathfrak{F}_0) \eqdef \{\mathsf{C}_1, \dotsc, \mathsf{C}_c, \mathsf{B}_1, \dotsc, \mathsf{B}_b, \mathsf{T}_1, \dotsc, \mathsf{T}_k+\sf-\mathsf{g}, \dotsc, \mathsf{T}_t\}.
\]
We claim that $\sf_0 \eqdef \mathsf{h}$ satisfies \Cref{lem:f'_5cycle}. It suffices to show that $\widetilde{\mathsf{C}}$ is a rainbow $5$-cycle. Since $\mathsf{h} \in \widetilde{\mathsf{C}}$ and $\beta \notin \chi(\mathfrak{F}_0)$, \Cref{lem:ectest} tells us that $|V(\widetilde{\mathsf{C}}) \cap V(\mathsf{C}_i)| \le 1$ for each $i \in [c]$. If $\widetilde{\mathsf{C}}$ is long, then $\mathcal{P}(\mathfrak{F}') \eqdef \{\mathsf{C}_1, \dotsc, \mathsf{C}_c, \widetilde{\mathsf{C}}\}$ gives another Frankenstein subgraph of $\mathcal{D}$ with $c(\mathfrak{F}') > c$, which contradicts \ref{max:cycle}. This together with $|\widetilde{\mathsf{C}}| \ge 5$ implies that $\widetilde{\mathsf{C}}$ is a rainbow $5$-cycle in $\mathfrak{F}_0+\mathsf{h}$. The proof of \Cref{lem:f'_5cycle} is complete.
\end{proof}
\begin{lemma} \label{lem:5cyclegrowth}
Suppose the rainbow $5$-cycle found in \Cref{lem:f'_5cycle} is $\widetilde{\mathsf{C}} \eqdef \{(v_iv_{i+1}, \alpha_i) : i \in [5]\}$, with the conventions $v_{\ell+5} = v_{\ell}$ and $\alpha_{\ell+5} = \alpha_{\ell}$. Write $\mathsf{e}_i \eqdef (v_iv_{i+1}, \alpha_i)$. Then there exists a shifting parameter $j \in \{0, 1, 2, 3, 4\}$, a set of five edges $\mathsf{e}_i' \eqdef (v_iv_{i+1}, \alpha_{i+j})$ from $\mathcal{D}$, a vertex $v^* \in [n] \setminus \{v_1, \dotsc, v_5\}$, and an index $k \in [5]$, such that at least one of the edges $(v^*v_k, \alpha_{k+j-1})$ and $(v^*v_k, \alpha_{k+j})$ appears in $\mathcal{D}$.
\end{lemma}
Informally speaking, \Cref{lem:5cyclegrowth} is dedicated to ``growing'' one more edge from the $5$-cycle guaranteed by \Cref{lem:f'_5cycle}. That is, after a possible cyclic shift of the colors on $\widetilde{\mathsf{C}}$, we would like to find another edge, on one of the monochromatic even cycles in $\mathcal{D}$, that ``leaves'' $\widetilde{\mathsf{C}}$ (i.e.~is incident to some $v^* \notin \{v_1, \dotsc, v_5\}$). Such a configuration will then help us locate another bad piece in $\mathcal{D}$, which contradicts \ref{max:bpiece}. For ease of notation, we write $(a, b, c, d, e) \eqdef (\alpha_1, \alpha_2, \alpha_3, \alpha_4, \alpha_5)$ in the coming example and in the proof of \Cref{lem:5cyclegrowth}. \hyperlink{figeight}{Figure~8} illustrates one possible output of \Cref{lem:5cyclegrowth}, in which $(j, k) = (4, 3)$.
\begin{center}
\begin{tikzpicture}[x=1.0cm,y=1.0cm]
\clip(-2.5,-1.5) rectangle (2.5,2.2);
\draw (0.,2.)-- (-2.,1.);
\draw (-2.,1.)-- (-1.,-1.);
\draw (-1.,-1.)-- (1.,-1.);
\draw (1.,-1.)-- (2.,1.);
\draw (2.,1.)-- (0.,2.);
\draw [fill=black] (-1.,-1.) circle (3pt);
\draw[color=black] (-0.8,-0.8) node {$v_3$};
\draw [fill=black] (1.,-1.) circle (3pt);
\draw[color=black] (0.8,-0.8) node {$v_4$};
\draw [fill=black] (-2.,1.) circle (3pt);
\draw[color=black] (-1.7,0.9) node {$v_2$};
\draw [fill=black] (2.,1.) circle (3pt);
\draw[color=black] (1.7,0.9) node {$v_5$};
\draw [fill=black] (0.,2.) circle (3pt);
\draw[color=black] (0,1.75) node {$v_1$};
\draw[color=black] (-0.9,1.35) node {$a$};
\draw[color=black] (-1.325,0.075) node {$b$};
\draw[color=black] (0,-0.8) node {$c$};
\draw[color=black] (1.325,0.075) node {$d$};
\draw[color=black] (0.9, 1.35) node {$e$};
\end{tikzpicture}
\begin{tikzpicture}
\clip(-1.2,-1.2) rectangle (1.2,2.2);
\draw[line width = 1pt][-stealth] (-1,0.5) -- (1,0.5);
\node at (-0.05,0.9) {\Cref{lem:5cyclegrowth}};
\end{tikzpicture}
\begin{tikzpicture}[x=1.0cm,y=1.0cm]
\clip(-3.2,-1.5) rectangle (2.5,2.2);
\draw (0,1.95) -- (-1.95,1) -- (-0.95,-0.95) -- (-0.95,-0.97) -- (0.95,-0.97) -- (0.95,-0.95) -- (1.95,1) -- (0,1.95);
\draw (0,2.05) -- (-1.95,1.1) -- (-2.05,1) -- (-1.05,-0.95) -- (-1.05,-1.05) -- (1.05,-1.05) -- (1.05,-0.95) -- (2.05,1) -- (1.95,1.1) -- (0,2.05);
\draw (-1,-1) -- (-3,-1);
\draw [fill=black] (-1.,-1.) circle (3pt);
\draw[color=black] (-0.8,-0.8) node {$v_3$};
\draw [fill=black] (1.,-1.) circle (3pt);
\draw[color=black] (0.8,-0.8) node {$v_4$};
\draw [fill=black] (-2.,1.) circle (3pt);
\draw[color=black] (-1.7,0.9) node {$v_2$};
\draw [fill=black] (2.,1.) circle (3pt);
\draw[color=black] (1.7,0.925) node {$v_5$};
\draw [fill=black] (0.,2.) circle (3pt);
\draw[color=black] (0.025,1.75) node {$v_1$};
\draw [fill=black] (-3.,-1.) circle (3pt);
\draw[color=black] (-2.925,-0.7) node {$v^*$};
\draw[color=black] (-0.9,1.35) node {$a$};
\draw[color=black] (-1.3,0.1) node {$b$};
\draw[color=black] (0,-0.8) node {$c$};
\draw[color=black] (1.3,0.1) node {$d$};
\draw[color=black] (0.9, 1.35) node {$e$};
\draw[color=black] (-1.1,1.7) node {$e$};
\draw[color=black] (-1.675,-0.075) node {$a$};
\draw[color=black] (0,-1.25) node {$b$};
\draw[color=black] (1.65,-0.075) node {$c$};
\draw[color=black] (1.125, 1.75) node {$d$};
\draw[color=black] (-2,-0.85) node {$a$};
\end{tikzpicture}
\begin{tikzpicture}
\node at (0, -1.7) {\textbf{\hypertarget{figeight}{Figure 8}:} The ``growth'' of a rainbow $5$-cycle.};
\end{tikzpicture}
\end{center}
\begin{proof}[Proof of \Cref{lem:5cyclegrowth}]
Assume without loss that $\mathsf{e}_i \in \mathsf{D}_i \in \mathcal{D}$. Suppose $\mathsf{e}_i^+$ and $\mathsf{e}_i^-$ are the edges in $\mathsf{D}_i$ satisfying $V(\mathsf{e}_i) \cap V(\mathsf{e}_i^+) = \{v_{i+1}\}$ and $V(\mathsf{e}_i) \cap V(\mathsf{e}_i^-) = \{v_i\}$, respectively.
Write $V \eqdef \{v_1, \dotsc, v_5\}$ for brevity. If there exists $v \in V(\mathsf{e}_i^{\bullet}) \setminus V$ for some $i \in [5]$ and $\bullet \in \{+, -\}$, say $i = 1$ and $\bullet = -$, then by choosing $v^* \eqdef v$ and $(j,k) = (0,1)$ the proof is done.
We thus assume that $V(\mathsf{e}_i^{\bullet}) \subseteq V$ for all $i \in [5]$ and $\bullet \in \{+, -\}$, and claim that this is impossible. The following observation is quite useful:
\hypertarget{fact}{\textbf{Fact.}} $V(\mathsf{e}_i^-) = \{v_{i-1}, v_i\}$ or $\{v_i, v_{i+3}\}$, and $V(\mathsf{e}_i^+) = \{v_{i+1}, v_{i+2}\}$ or $\{v_{i+1}, v_{i+3}\}$.
\emph{Proof.} Let $V(\mathsf{e}_i^-) \eqdef \{v_i, v'\}$; then $v'$ is forced to be one of $v_{i-1}, v_{i+2}, v_{i+3}$. However, $v' \neq v_{i+2}$, for otherwise $\mathsf{e}_i^-, \mathsf{e}_{i+2}, \mathsf{e}_{i+3}, \mathsf{e}_{i+4}$ form a rainbow $4$-cycle in $\mathcal{D}$. The $V(\mathsf{e}_i^+)$ case is similar. \qed
\smallskip
If $\mathsf{e}_i^+$ and $\mathsf{e}_{i+1}$ are coincident for all $i \in [5]$, then we cyclically shift the colors by increasing $j$ by $1$. This does not change the situation, and so we may assume without loss that $V(\mathsf{e}_1^+) \neq \{v_2, v_3\}$. It follows from the \hyperlink{fact}{\underline{fact}} that $V(\mathsf{e}_1^+) = \{v_2, v_4\}$, which forces $V(\mathsf{e}^-_1) = \{v_1, v_5\}$, as shown in \hyperlink{fignineA}{Figure~9.A}.
We next look at $\mathsf{e}^{\pm}_2$. If $V(\mathsf{e}^-_2) = \{v_1,v_2\}$, then $\mathsf{e}_2^-, \mathsf{e}_1^+, \mathsf{e}_4, \mathsf{e}_5$ form a rainbow even cycle, a contradiction. So, $V(\mathsf{e}^-_2) = \{v_2,v_5\}$, and hence $V(\mathsf{e}^+_2) = \{v_3, v_4\}$ by the \hyperlink{fact}{\underline{fact}}, as shown in \hyperlink{fignineB}{Figure~9.B}.
We then turn to $\mathsf{e}^{\pm}_3$ and $\mathsf{e}^{\pm}_5$. At this point, the configuration is symmetric in $(a, e)$ and $(b, c)$ (as seen in \hyperlink{fignineB}{Figure~9.B}). If $V(\mathsf{e}^-_3) = \{v_1, v_3\}$, then $\mathsf{e}_3^-, \mathsf{e}_2^+, \mathsf{e}_4, \mathsf{e}_5$ form a rainbow even cycle, a contradiction. So, the \hyperlink{fact}{\underline{fact}} implies $V(\mathsf{e}^-_3) = \{v_2,v_3\}$. By symmetry, $V(\mathsf{e}^+_5) = \{v_1, v_2\}$. We thus arrive at \hyperlink{fignineC}{Figure~9.C}. If $V(\mathsf{e}^+_3) = \{v_1, v_4\}$, then $\mathsf{e}_1, \mathsf{e}_2^-, \mathsf{e}_4, \mathsf{e}_3^+$ form a rainbow $4$-cycle, which is impossible. It then follows from the \hyperlink{fact}{\underline{fact}} and symmetry that $V(\mathsf{e}^+_3) = V(\mathsf{e}^-_5) = \{v_4, v_5\}$, as illustrated in \hyperlink{fignineD}{Figure~9.D}.
\begin{center}
\begin{tikzpicture}[x=0.8cm,y=0.8cm]
\clip(-2.5,-2.3) rectangle (2.5,2.5);
\draw (0.,2.)-- (-2.,1.);
\draw (-2.,1.)-- (-1.,-1.);
\draw (-1.,-1.)-- (1.,-1.);
\draw (1.,-1.)-- (2.,1.);
\draw (2.,1.)-- (0.,2.);
\draw (-2,1) -- (1,-1);
\draw [fill=black] (-1.,-1.) circle (1.0pt);
\draw[color=black] (-1,-1.2) node {$v_3$};
\draw [fill=black] (1.,-1.) circle (1.0pt);
\draw[color=black] (1,-1.2) node {$v_4$};
\draw [fill=black] (-2.,1.) circle (1.0pt);
\draw[color=black] (-2.25,1) node {$v_2$};
\draw [fill=black] (2.,1.) circle (1.0pt);
\draw[color=black] (2.25,1) node {$v_5$};
\draw [fill=black] (0.,2.) circle (1.0pt);
\draw[color=black] (0,2.2) node {$v_1$};
\draw[color=black] (-1.05,1.65) node {$a$};
\draw[color=black] (-1.75,-0.05) node {$\cancel{a}b$};
\draw[color=black] (0,-1.15) node {$c$};
\draw[color=black] (1.625,-0.025) node {$d$};
\draw[color=black] (1.2, 1.65) node {$ae$};
\draw[color=black] (-0.6,-0.125) node {$a$};
\node at (0,-2) {\textbf{\hypertarget{fignineA}{Figure 9.A}}};
\end{tikzpicture}
\begin{tikzpicture}[x=0.8cm,y=0.8cm]
\clip(-2.5,-2.3) rectangle (2.5,2.5);
\draw (0.,2.)-- (-2.,1.);
\draw (-2.,1.)-- (-1.,-1.);
\draw (-1.,-1.)-- (1.,-1.);
\draw (1.,-1.)-- (2.,1.);
\draw (2.,1.)-- (0.,2.);
\draw (-2,1) -- (1,-1);
\draw (-2,1) -- (2,1);
\draw [fill=black] (-1.,-1.) circle (1.0pt);
\draw[color=black] (-1,-1.2) node {$v_3$};
\draw [fill=black] (1.,-1.) circle (1.0pt);
\draw[color=black] (1,-1.2) node {$v_4$};
\draw [fill=black] (-2.,1.) circle (1.0pt);
\draw[color=black] (-2.25,1) node {$v_2$};
\draw [fill=black] (2.,1.) circle (1.0pt);
\draw[color=black] (2.25,1) node {$v_5$};
\draw [fill=black] (0.,2.) circle (1.0pt);
\draw[color=black] (0,2.2) node {$v_1$};
\draw[color=black] (-1.2,1.7) node {$a\cancel{b}$};
\draw[color=black] (-1.75,-0.05) node {$\cancel{a}b$};
\draw[color=black] (0,-1.2) node {$bc$};
\draw[color=black] (1.625,-0.025) node {$d$};
\draw[color=black] (1.2, 1.65) node {$ae$};
\draw[color=black] (-0.6,-0.125) node {$a$};
\draw[color=black] (0,1.2) node {$b$};
\node at (0,-2) {\textbf{\hypertarget{fignineB}{Figure 9.B}}};
\end{tikzpicture}
\begin{tikzpicture}[x=0.8cm,y=0.8cm]
\clip(-2.5,-2.3) rectangle (2.5,2.5);
\draw (0.,2.)-- (-2.,1.);
\draw (-2.,1.)-- (-1.,-1.);
\draw (-1.,-1.)-- (1.,-1.);
\draw (1.,-1.)-- (2.,1.);
\draw (2.,1.)-- (0.,2.);
\draw (-2,1) -- (1,-1);
\draw (-2,1) -- (2,1);
\draw[dash pattern=on 3pt off 3pt] (0,2) -- (-1,-1);
\draw [fill=black] (-1.,-1.) circle (1.0pt);
\draw[color=black] (-1,-1.2) node {$v_3$};
\draw [fill=black] (1.,-1.) circle (1.0pt);
\draw[color=black] (1,-1.2) node {$v_4$};
\draw [fill=black] (-2.,1.) circle (1.0pt);
\draw[color=black] (-2.25,1) node {$v_2$};
\draw [fill=black] (2.,1.) circle (1.0pt);
\draw[color=black] (2.25,1) node {$v_5$};
\draw [fill=black] (0.,2.) circle (1.0pt);
\draw[color=black] (0,2.2) node {$v_1$};
\draw[color=black] (-1.2,1.65) node {$ae$};
\draw[color=black] (-1.7,-0.025) node {$bc$};
\draw[color=black] (0,-1.2) node {$bc$};
\draw[color=black] (1.625,-0.025) node {$d$};
\draw[color=black] (1.2, 1.65) node {$ae$};
\draw[color=black] (-0.6,-0.125) node {$a$};
\draw[color=black] (0,1.2) node {$b$};
\draw[color=black] (-0.7,0.65) node {$\cancel{c}\cancel{e}$};
\node at (0,-2) {\textbf{\hypertarget{fignineC}{Figure 9.C}}};
\end{tikzpicture}
\begin{tikzpicture}[x=0.8cm,y=0.8cm]
\clip(-2.5,-2.3) rectangle (2.5,2.5);
\draw (0.,2.)-- (-2.,1.);
\draw (-2.,1.)-- (-1.,-1.);
\draw (-1.,-1.)-- (1.,-1.);
\draw (1.,-1.)-- (2.,1.);
\draw (2.,1.)-- (0.,2.);
\draw (-2,1) -- (1,-1);
\draw (-2,1) -- (2,1);
\draw[dash pattern=on 3pt off 3pt] (0,2) -- (1,-1);
\draw[dash pattern=on 3pt off 3pt] (-1,-1) -- (2,1);
\draw [fill=black] (-1.,-1.) circle (1.0pt);
\draw[color=black] (-1,-1.2) node {$v_3$};
\draw [fill=black] (1.,-1.) circle (1.0pt);
\draw[color=black] (1,-1.2) node {$v_4$};
\draw [fill=black] (-2.,1.) circle (1.0pt);
\draw[color=black] (-2.25,1) node {$v_2$};
\draw [fill=black] (2.,1.) circle (1.0pt);
\draw[color=black] (2.25,1) node {$v_5$};
\draw [fill=black] (0.,2.) circle (1.0pt);
\draw[color=black] (0,2.2) node {$v_1$};
\draw[color=black] (-1.2,1.65) node {$ae$};
\draw[color=black] (-1.7,-0.025) node {$bc$};
\draw[color=black] (0,-1.2) node {$bc$};
\draw[color=black] (1.825,-0.025) node {$cde$};
\draw[color=black] (1.2, 1.65) node {$ae$};
\draw[color=black] (-0.6,-0.125) node {$a$};
\draw[color=black] (0,1.2) node {$b$};
\draw[color=black] (0.65,0.625) node {$\cancel{c}$};
\draw[color=black] (0.45,-0.175) node {$\cancel{e}$};
\node at (0,-2) {\textbf{\hypertarget{fignineD}{Figure 9.D}}};
\end{tikzpicture}
\end{center}
Finally, we focus on $\mathsf{e}^{\pm}_4$. Indeed, we have $V(\mathsf{e}_4^-) = \{v_2, v_4\}$ or $\{v_3, v_4\}$ by the \hyperlink{fact}{\underline{fact}}. \hyperlink{fignineD}{Figure~9.D} shows that the former case generates a rainbow $4$-cycle on $\mathsf{e}_1, \mathsf{e}_4^-, \mathsf{e}_3^+, \mathsf{e}_5$ while the latter generates a rainbow $4$-cycle on $\mathsf{e}_2^-, \mathsf{e}_3^-, \mathsf{e}_4^-, \mathsf{e}_5^-$. We thus obtain the desired contradiction.
\end{proof}
Assume $\mathfrak{F}_0$ and $\sf_0$ satisfy \Cref{lem:f'_5cycle}. Let $\mathsf{F}_0$ be the forest part of $\mathfrak{F}_0$. That is,
\[
\mathcal{P}(\mathfrak{F}_0) = \{\mathsf{C}_1, \dotsc, \mathsf{C}_c, \mathsf{B}_1, \dotsc, \mathsf{B}_b, \mathsf{F}_0\} \quad \text{with} \quad |\mathsf{F}_0| = |\mathsf{F}|.
\]
From \Cref{lem:5cyclegrowth} we can find a subgraph of $\mathcal{D}$ on six vertices $v_1, \dotsc, v_5$ and $v^*$. Assuming without loss of generality that $(j, k) = (0, 1)$ and that $(v^*v_1, \alpha_1)$ is an edge in $\mathcal{D}$, this subgraph consists of $\widetilde{\mathsf{C}}$ and $\sp$, in which
\begin{itemize}
\item $\widetilde{\mathsf{C}} \eqdef \{\mathsf{e}_i = (v_iv_{i+1}, \alpha_i) : i \in [5]\}$ is a rainbow $5$-cycle in $\mathfrak{F}_0+\sf_0$, and
\item $\sp \eqdef (v^*v_1, \alpha_1)$ is an edge of color $\alpha_1$ on vertices $v^*$ and $v_1$.
\end{itemize}
Since $\sf_0 \notin \mathfrak{F}_0$, it follows from \Cref{lem:ectest} that $\widetilde{\mathsf{C}}$ is edge-disjoint from $\mathsf{C}_1, \dotsc, \mathsf{C}_c$ and $\mathsf{B}_1, \dotsc, \mathsf{B}_b$. In particular, $\widetilde{\mathsf{C}} - \sf_0 \subseteq \mathsf{F}_0$. We claim that $\sp \notin \mathfrak{F}_0$. If $\sf_0 = \mathsf{e}_1$, then \Cref{lem:f'_5cycle} shows $\chi(\sp) = \chi(\sf_0) \notin \chi(\mathfrak{F}_0)$, and so $\sp \notin \mathfrak{F}_0$. If $\sf_0 \in \{\mathsf{e}_2, \mathsf{e}_3, \mathsf{e}_4, \mathsf{e}_5\}$, then $\mathsf{e}_1 \in \mathsf{F}_0$. This implies that $\sp \notin \mathsf{F}_0$ since $\mathsf{F}_0$ is rainbow, and that $\sp \notin \mathsf{C}_i$, $\sp \notin \mathsf{B}_j$ since $\mathsf{C}_i, \mathsf{B}_j$ are color-disjoint from $\mathsf{F}_0$. We conclude that $\sp \notin \mathfrak{F}_0$.
Let $\mathsf{T}_k \in \mathcal{P}(\mathfrak{F}_0)$ be the part containing $\widetilde{\mathsf{C}} - \sf_0$. Set $\mathfrak{F}_0' \eqdef \mathfrak{F}_0 + \sf_0 - \mathsf{e}_5$ and $\mathsf{F}_0' \eqdef \mathsf{F}_0 + \sf_0 - \mathsf{e}_5$. Note that $\mathsf{F}_0'$ differs from $\mathsf{F}_0$ only at $\mathsf{T}_k'$, the rainbow tree from $\mathcal{P}(\mathfrak{F}_0')$ containing $\widetilde{\mathsf{C}}-\mathsf{e}_5$. Since $V(\mathsf{T}_k') = V(\mathsf{T}_k)$,
\[
\mathcal{P}(\mathfrak{F}_0') \eqdef \{\mathsf{C}_1, \dotsc, \mathsf{C}_c, \mathsf{B}_1, \dotsc, \mathsf{B}_b, \mathsf{F}_0'\}
\]
shows that $\mathfrak{F}_0'$ is a Frankenstein subgraph of $\mathcal{D}$ with $|\mathsf{F}_0'| = |\mathsf{F}_0| = |\mathsf{F}|$. We remark that $\sp \notin \mathfrak{F}_0'$.
Let $\mathsf{D}_{\alpha_1}$ be the monochromatic even cycle from $\mathcal{D}$ that contains $\mathsf{e}_1$ and $\sp$. Then
\[
\mathsf{D}_{\alpha_1} \eqdef \mathsf{e}_1 + (y_1y_2, \alpha_1) + \dotsb + (y_{2\ell+1}y_{2\ell+2}, \alpha_1) \qquad (y_1 \eqdef v_2, \, y_{2\ell+1} \eqdef v^*, \, y_{2\ell+2} \eqdef v_1, \, \ell \in \N_+)
\]
By \Cref{lem:Fr_aux}, there are two connected components $\mathsf{S}_1$ and $\mathsf{S}_2$ in $\mathfrak{F}_0' - \mathsf{e}_1$ such that $v_1 \in V(\mathsf{S}_1)$ and $v_2 \in V(\mathsf{S}_2)$. Define~$\mu$ as the smallest index with $y_{\mu} \notin V(\mathsf{S}_2)$. For similar reasons as ``$x_{\lambda-1} \in \mathsf{S}_u$ and $x_{\lambda} \in \mathsf{S}_w$'' in the proof of \Cref{lem:f'_5cycle}, we have that $y_{\mu-1} \in \mathsf{S}_2$ and $y_{\mu} \in \mathsf{S}_1$. Indeed, if $y_{\mu} \notin \mathsf{S}_1$, then
\[
\mathcal{P}(\mathfrak{F}_0'+\mathsf{e}_5-\mathsf{e}_1+\mathsf{q}) \eqdef \{\mathsf{C}_1, \dotsc, \mathsf{C}_c, \mathsf{B}_1, \dotsc, \mathsf{B}_b, \mathsf{F}_0'+\mathsf{e}_5-\mathsf{e}_1+\mathsf{q}\}
\]
where $\mathsf{q} \eqdef (y_{\mu-1}y_{\mu}, \alpha_1)$ is another Frankenstein subgraph on $|\mathfrak{F}_*|+1$ edges, which contradicts \ref{max:edges}.
\begin{center}
\begin{tikzpicture}[x=1.0cm,y=1.0cm]
\clip(-6,-2.25) rectangle (4,5.5);
\draw[color=blue] (0.,2.)-- (-2.,1.);
\draw (-2.,1.)-- (-1.,-1.);
\draw (-1.,-1.)-- (1.,-1.);
\draw (1.,-1.)-- (2.,1.);
\draw (2.,1.)-- (0.,2.);
\draw[color=blue] (0.,4.)-- (0.,2.);
\draw[color=blue] (-2,1) -- (-4,2) -- (-4,4) -- (-3,5) -- (-1,5) -- (0,4);
\draw[color=red] (-3,5) -- (0,2);
\draw[color=red] (-4,4) arc[start angle=152,end angle=270,radius=3.4cm];
\draw [fill=black] (-1.,-1.) circle (1.0pt);
\draw[color=black] (-0.8,-0.8) node {$v_3$};
\draw [fill=black] (1.,-1.) circle (1.0pt);
\draw[color=black] (0.8,-0.8) node {$v_4$};
\draw [fill=black] (-2.,1.) circle (1.0pt);
\draw[color=black] (-1.25,0.9) node {$y_1 = v_2$};
\draw [fill=black] (2.,1.) circle (1.0pt);
\draw[color=black] (1.7,0.9) node {$v_5$};
\draw [fill=black] (0.,2.) circle (1.0pt);
\draw[color=black] (0.95,2.1) node {$y_{2\ell+2} = v_1$};
\draw[color=blue] (-0.85,1.35) node {$\mathsf{e}_1$};
\draw[color=blue] (-3.6,4.65) node {$\mathsf{q}$};
\draw[color=black] (0.95,1.35) node {$\mathsf{e}_5$};
\draw[color=black] (-1.3,0.09) node {$\mathsf{e}_2$};
\draw[color=black] (0,-0.8) node {$\mathsf{e}_3$};
\draw[color=black] (1.3, 0.09) node {$\mathsf{e}_4$};
\draw[color=black] (0,-1.25) node {$\widetilde{\mathsf{C}}$};
\draw [fill=black] (0.,4.) circle (1.0pt);
\draw[color=black] (0.95,4.05) node {$y_{2\ell+1} = v^*$};
\draw[color=black] (-1,5.2) node {$\cdots$};
\draw[color=black] (-4.2,2.15) node {$\vdots$};
\draw[color=black] (-3,5.25) node {$y_{\mu}$};
\draw[color=black] (-4.45,4) node {$y_{\mu-1}$};
\draw[color=blue] (0.15,3.) node {$\sp$};
\draw [fill=black] (-4,2) circle (1.0pt);
\draw [fill=black] (-4,4) circle (1.0pt);
\draw [fill=black] (-3,5) circle (1.0pt);
\draw [fill=black] (-1,5) circle (1.0pt);
\draw[color=blue] (-2,4.75) node {$\mathsf{D}_{\alpha_1}$};
\draw[color=red] (-1.7,3.5) node {$\mathsf{P}_1$};
\draw[color=red] (-3.825,1) node {$\mathsf{P}_2$};
\draw[color=violet,dash pattern = on 3pt off 3pt] (-3.8,5.1) -- (0.8,0.5) -- (3,2.7);
\draw[color=violet] (2.4,2.7) node {$\mathsf{S}_1$};
\draw[color=violet] (3,2.1) node {$\mathsf{S}_2$};
\node at (-1, -2) {\textbf{\hypertarget{figten}{Figure 10:}} An illustration of the proof of \Cref{thm:rainbow_ec}. };
\end{tikzpicture}
\end{center}
According to \Cref{prop:fpath}, we can find a rainbow path $\mathsf{P}_1 \subseteq \mathsf{S}_1$ with terminals $y_{\mu}$ and $v_1$. We can also find a rainbow path $\mathsf{P}_2 \subseteq \mathsf{S}_2$ whose terminals are $y_{\mu-1}$ and some $v_t \in \{v_2, v_3, v_4, v_5\}$. Assume further that $\mathsf{P}_2$ is of minimum length. Note that $\mathsf{P}_1 = \varnothing$ if $v_1 = y_{\mu}$, and $\mathsf{P}_2 = \varnothing$ if $v_t = y_{\mu-1}$.
\newpage
\hypertarget{claim}{\textbf{Claim.}} $\mathsf{P}_1, \mathsf{P}_2, \widetilde{\mathsf{C}}$ are pairwise color-disjoint.
\emph{Proof.} Recall that $\widetilde{\mathsf{C}}-\mathsf{e}_5 \subseteq \mathsf{T}_k'$. If $\mathsf{P}_1$ and $\mathsf{P}_2$ intersect the same part $\mathsf{G} \in \mathcal{P}(\mathfrak{F}_0')$, then the definitions of $\mathsf{S}_1, \mathsf{S}_2$ imply that $\mathsf{G} = \mathsf{T}_k'$. Since $\mathsf{T}_k'$ is rainbow, and different parts in $\mathcal{P}(\mathfrak{F}_0')$ are color-disjoint, we conclude that $\chi(\mathsf{P}_1), \, \chi(\mathsf{P}_2), \, \chi(\widetilde{\mathsf{C}})$ are pairwise disjoint. \qed
\smallskip
Decompose $\widetilde{\mathsf{C}}$ into two rainbow paths $\widetilde{\mathsf{P}}_1, \widetilde{\mathsf{P}}_2$ with terminals $v_1$ and $v_t$ such that $\mathsf{e}_1 \in \widetilde{\mathsf{P}}_1$. For instance, in \hyperlink{figten}{Figure~10} we have $v_t = v_3$, $\widetilde{\mathsf{P}}_1 = \mathsf{e}_1+\mathsf{e}_2$ and $\widetilde{\mathsf{P}}_2 = \mathsf{e}_3+\mathsf{e}_4+\mathsf{e}_5$. Set $\widetilde{\mathsf{P}} \eqdef \mathsf{P}_1 \cup \mathsf{P}_2 \cup \{\mathsf{q}\}$, and define $\widetilde{\mathsf{B}} \eqdef \widetilde{\mathsf{P}} \cup \widetilde{\mathsf{P}}_1 \cup \widetilde{\mathsf{P}}_2$. We are going to verify that $\widetilde{\mathsf{B}}$ is a bad piece in three steps.
Firstly, we show that $\widetilde{\mathsf{B}} = \widetilde{\mathsf{P}} \cup \widetilde{\mathsf{P}}_1 \cup \widetilde{\mathsf{P}}_2$ is a theta graph with common terminals $v_1$ and $v_t$. Let $\widetilde{C}, P_1, P_2, q$ be the uncolored copies of $\widetilde{\mathsf{C}}, \mathsf{P}_1, \mathsf{P}_2, \mathsf{q}$, respectively. It suffices to show that $\widetilde{C}, P_1, P_2, \{q\}$ are pairwise disjoint. The definitions of $\mathsf{S}_1, \mathsf{S}_2$ indicate $q \notin P_1$, $q \notin P_2$ and $P_1 \cap P_2 = \varnothing$, $P_1 \cap \widetilde{C} = \varnothing$. The minimum-length assumption on $\mathsf{P}_2$ implies $P_2 \cap \widetilde{C} = \varnothing$. To see that $q \notin \widetilde{C}$, we argue indirectly. If $q \in \widetilde{C}$, then $V(\mathsf{q}) \subseteq V(\widetilde{\mathsf{C}})$ and $y_{\mu} = v_1$. This implies $\mathsf{q}=\sp$, hence $v^* \in \{v_1, \dotsc, v_5\}$, a contradiction.
Secondly, we prove that $\widetilde{\mathsf{P}}$, $\widetilde{\mathsf{P}}_1$, $\widetilde{\mathsf{P}}_2$ are all rainbow, and that $\widetilde{\mathsf{B}}$ is almost rainbow. Indeed, $\widetilde{\mathsf{P}}_1, \widetilde{\mathsf{P}}_2$ are rainbow because $\widetilde{\mathsf{C}}$ is rainbow. Since $\chi(\mathsf{q}) = \chi(\mathsf{e}_1) = \alpha_1$ and $\mathsf{e}_1 \in \widetilde{\mathsf{C}}$, the \hyperlink{claim}{\underline{claim}} then implies that $\widetilde{\mathsf{P}}$ is rainbow and $\widetilde{\mathsf{B}}$ is almost rainbow.
Thirdly, we check that $|\widetilde{\mathsf{B}}| \ge 7$. Since $\mathsf{q} \in \widetilde{\mathsf{B}}$, $\widetilde{\mathsf{C}} \subseteq \widetilde{\mathsf{B}}$ and $\mathsf{q} \notin \widetilde{\mathsf{C}}$, we obtain $|\widetilde{\mathsf{B}}| \ge 6$. If $|\widetilde{\mathsf{B}}| = 6$, then $V(\mathsf{q}) \subseteq \{v_1, \dotsc, v_5\}$ and hence $y_{\mu-1} = v_t, \, y_{\mu} = v_1$, which contradicts $v^* \notin \{v_1, \dotsc, v_5\}$. So, $|\widetilde{\mathsf{B}}| \ge 7$.
\smallskip
If $|V(\widetilde{\mathsf{B}}) \cap V(\mathsf{X})| \le 1$ for all $\mathsf{X} \in \{\mathsf{C}_1, \dotsc, \mathsf{C}_c, \mathsf{B}_1, \dotsc, \mathsf{B}_b\}$, then the partition
\[
\mathcal{P}(\widetilde{\mathfrak{F}}) \eqdef \{\mathsf{C}_1, \dotsc, \mathsf{C}_c, \mathsf{B}_1, \dotsc, \mathsf{B}_b, \widetilde{\mathsf{B}}\}
\]
exposes a Frankenstein subgraph of $\mathcal{D}$ with $c(\widetilde{\mathfrak{F}}) = c$ and $b(\widetilde{\mathfrak{F}}) > b$, which contradicts \ref{max:bpiece}. So, there exists $\mathsf{X}_0 = \mathsf{C}_i$ or $\mathsf{B}_j$ such that $|V(\widetilde{\mathsf{B}}) \cap V(\mathsf{X}_0)| \ge 2$.
Consider the rainbow cycle $\widehat{\mathsf{C}} \eqdef \widetilde{\mathsf{P}} \cup \widetilde{\mathsf{P}}_2$. Since $\mathsf{q} \in \widehat{\mathsf{C}}$, \Cref{lem:ectest} then implies that $|\widehat{\mathsf{C}} \cap \mathsf{X}_0| \le 1$. Similarly, from $\mathsf{e}_1 \in \widetilde{\mathsf{C}}$ and \Cref{lem:ectest} we see that $|\widetilde{\mathsf{C}} \cap \mathsf{X}_0| \le 1$. We thus obtain $|V(\widetilde{\mathsf{B}}) \cap V(\mathsf{X}_0)| = 2$ and $\widetilde{\mathsf{B}} \cap \mathsf{X}_0 = \varnothing$ by noticing $\widetilde{\mathsf{B}} = \widehat{\mathsf{C}} \cup \widetilde{\mathsf{C}}$.
Suppose $V(\widetilde{\mathsf{B}}) \cap V(\mathsf{X}_0) \eqdef \{u, u_1\}$ with $u \in V(\widetilde{\mathsf{P}}) \setminus V(\widetilde{\mathsf{P}}_2)$ and $u_1 \in V(\widetilde{\mathsf{P}}_1) \setminus V(\widetilde{\mathsf{P}}_2)$. Denote by $\mathsf{P}_{u, v_t}$ the subpath of $\widetilde{\mathsf{P}}$ with terminals $u, v_t$, and by $\mathsf{P}_{u_1, v_t}$ the subpath of $\widetilde{\mathsf{P}}_1$ with terminals $u_1, v_t$. Write $\widehat{\mathsf{P}} \eqdef \mathsf{P}_{u, v_t} \cup \mathsf{P}_{u_1, v_t}$. Then $\widehat{\mathsf{P}}$ is a rainbow path because $\widehat{\mathsf{P}} \subseteq \widetilde{\mathsf{B}}$ and $\mathsf{e}_1 \notin \widehat{\mathsf{P}}$. Since $\widetilde{\mathsf{B}} \cap \mathsf{X}_0 = \varnothing$, from \Cref{lem:ectest} we deduce that $\widehat{\mathsf{P}} \cup \mathsf{X}_0$ contains a rainbow even cycle, a contradiction.
The proof of \Cref{thm:rainbow_ec} is complete.
\section{Concluding remarks}
Write $\bracket{n} \eqdef \{3, 4, \dotsc, n\}$. For any positive integer $n$ and any $A \subseteq \bracket{n}$, let $f(n, A)$ be the minimum positive integer $N$ such that a rainbow $A$-cycle is guaranteed in every family of $N$ many $A$-cycles. It then follows from \Cref{thm:rainbow_c,thm:rainbow_oc,thm:rainbow_ec} that
\[
f(n, A) = \begin{cases}
n \qquad &\text{when $A = \bracket{n}$}, \\
2\left\lceil\frac{n}{2}\right\rceil-1 \qquad &\text{when $A = \bracket{n} \cap (2\Z+1)$}, \\
\left\lfloor\frac{6(n-1)}{5}\right\rfloor+1 \qquad &\text{when $A = \bracket{n} \cap 2\Z$}.
\end{cases}
\]
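For instance, for $n = 10$ these formulas give $f\big(10, \bracket{10}\big) = 10$, $f\big(10, \{3,5,7,9\}\big) = 2\left\lceil\tfrac{10}{2}\right\rceil - 1 = 9$, and $f\big(10, \{4,6,8,10\}\big) = \left\lfloor\tfrac{6 \cdot 9}{5}\right\rfloor + 1 = 11$.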
We were unable to determine $f(n, A)$ when $A = \bracket{n} \cap (a\Z+b)$ in general. Another nice problem is to estimate $f(n, \{k\})$. It was proved independently by Gy\H{o}ri \cite{gyori} and by Goorevitch and Holzman \cite{goorevitch_holzman} that $f(n, \{3\}) \approx \frac{n^2}{8}$. In particular, the value of $f(n, \{n\})$, concerning Hamiltonian cycles, seems mysterious.
\vspace{-0.75em}
\section*{Acknowledgments}
The first author is grateful to Boris Bukh, Ting-Wei Chao and Zilin Jiang for fruitful discussions. The second author would like to thank Peking University for a pre-admission in his tenth grade, and to thank Beijing National Day School (high school) for allowing him to skip all regular classes in the academic year 2021--2022. These privileges resulted in plenty of free time to study all kinds of exciting new mathematics, especially to work on this problem on rainbow even cycles.
\vspace{-0.75em}
\bibliographystyle{plain}
| {
"timestamp": "2022-11-18T02:12:21",
"yymm": "2211",
"arxiv_id": "2211.09530",
"language": "en",
"url": "https://arxiv.org/abs/2211.09530",
"abstract": "We prove that every family of (not necessarily distinct) even cycles $D_1, \\dotsc, D_{\\lfloor 1.2(n-1) \\rfloor+1}$ on some fixed $n$-vertex set has a rainbow even cycle (that is, a set of edges from distinct $D_i$'s, forming an even cycle). This resolves an open problem of Aharoni, Briggs, Holzman and Jiang. Moreover, the result is best possible for every positive integer $n$.",
"subjects": "Combinatorics (math.CO)",
"title": "Rainbow even cycles",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9790357561234474,
"lm_q2_score": 0.8080672181749422,
"lm_q1q2_score": 0.7911266999444753
} |
https://arxiv.org/abs/2202.01296 | Sidon sets in a union of intervals | We study the maximum size of Sidon sets in unions of integers intervals. If $A\subseteq\mathbb{N}$ is the union of two intervals and if $\left| A \right|=n$ (where $\left| A \right|$ denotes the cardinality of $A$), we prove that $A$ contains a Sidon set of size at least $0, 876\sqrt{n}$. On the other hand, by using the small differences technique, we establish a bound of the maximum size of Sidon sets in the union of $k$ intervals. | \section{Introduction}
A Sidon set of integers is a subset of $\mathbb{N}$ with the property that all sums of two elements are distinct. Working on Fourier series, Simon Sidon \cite{Simon_Sidon} was the first to take an interest in these sets. He sought to bound the size of the largest Sidon set in $\left\llbracket1,n\right\rrbracket$. The question has been intensively studied and today it is well known (see \cite{HalberstamRoth}) that the maximum size of a Sidon set in an interval of size $n$ is asymptotically equivalent to $\sqrt{n}$. We denote by $F\big(\left\llbracket 1,n\right\rrbracket\big)$ this maximum size. The lower bound was obtained independently by Chowla \cite{Chowla} and Erd\H{o}s \cite{ErdosTuran} who established
$$\liminf\limits_{\substack{n\rightarrow +\infty}}\dfrac{F\big(\left\llbracket 1,n\right\rrbracket\big)}{\sqrt{n}}\geqslant 1.$$
For the upper bound, Erd\H{o}s and Tur\'an \cite{ErdosTuran} proved that $F\big(\left\llbracket 1,n\right\rrbracket\big)<\sqrt{n}+O\left( n^{1/4}\right)$. This was sharpened by Lindström \cite{Lindstrom_ameliore_E-T} who proved that $F\big(\left\llbracket 1,n\right\rrbracket\big)<n^{1/2}+n^{1/4}+1$.
Finally, very recently, Balogh, Füredi and Roy \cite{BaloghRoyFurediArXiv} obtained
$$F\big(\left\llbracket 1,n\right\rrbracket\big)<\sqrt{n}+0.998n^{1/4} .$$
In this paper we are interested in the size of the largest Sidon set contained in the union of two intervals. For $A\subseteq\mathbb{N}$ we denote by $F(A)$ the maximal cardinality of a Sidon set in $A$. Erd\H{o}s conjectured that $F(A)\geqslant \sqrt{n}$ for all sets $A$ of size $n$. For readability, if $f$ and $g$ are two functions such that $f(n)\geqslant (1-o(1))g(n)$, we will write $f(n)\gtrsim g(n)$. In the same way, if $f(n)\leqslant (1+o(1))g(n)$, we will write $f(n)\lesssim g(n)$.
Abbott \cite{Abbott} proved that $F(A)\gtrsim 0.0805\sqrt{n}$ and so, if $I_1$ and $I_2$ are two intervals of respective cardinalities $n_1$ and $n_2$,
$$F( I_1\cup I_2 ) \gtrsim 0.0805\sqrt{n_1+n_2}.$$
We will prove (Theorem \ref{S_dans2intervalles}) that
$$F( I_1\cup I_2 ) \gtrsim 0.876\sqrt{n_1+n_2}.$$
Conversely, we will show that $F( I_1\cup I_2 )\lesssim\sqrt{n_1+n_2}$ and, more generally, we will give a bound for the maximum cardinality of a Sidon set in a union of $k$ intervals (Theorem \ref{S_dans_intervalles}). In our result the number $k$ of intervals can grow with the size of $A$.
\section{Lower bound for the size of the largest Sidon set contained in the union of two intervals}
Previous works by Singer \cite{Singer}, Chowla \cite{Chowla}, Erd\H{o}s and Tur\'an \cite{ErdosTuran}, lead to
\begin{equation}\label{Sdans(1,n)}
F\left(\left\llbracket 1,n\right\rrbracket\right)\sim \sqrt{n}.
\end{equation}
Since Sidon's property is stable by translation, if $A$ is an interval of size $n$, \eqref{Sdans(1,n)} proves that $F(A)\sim \sqrt{n}$. We shall study the case where $A$ is the union of two intervals. We could simply choose a Sidon set in the larger of the two intervals.
Therefore if $A=I_1\cup I_2$ where $I_1$ and $I_2$ are disjoint intervals of size $n_1$ and $n_2$ such that $n_1\geqslant n_2$, by \eqref{Sdans(1,n)} we get a Sidon set of size $\sqrt{n_1}$ in $I_1$, which yields
$$F(A)\geqslant\sqrt{n_1}=\frac{1}{\sqrt{2}}\sqrt{2n_1}\geqslant\frac{1}{\sqrt{2}}\sqrt{n_1+n_2}> 0.707\sqrt{n_1+n_2}.$$
We shall get a more precise result in the following statement.
\begin{theorem}\label{S_dans2intervalles}
Let $I_1$ and $I_2$ be two disjoint intervals of respective cardinalities $n_1$ and $n_2$. We have
$$F( I_1\cup I_2 ) \gtrsim 0,876\sqrt{n_1+n_2}.$$
\end{theorem}
\begin{proof}
Let $A$ be the union of two disjoint intervals of respective cardinalities $n_1$ and $n_2$. Since Sidon's property is stable by translation and symmetry, after translating and, if necessary, replacing $A$ by $A' = \max A-A+1$, we can assume that $ A = I_1 \sqcup I_2 $ where $I_1=\left\llbracket 1,n_1 \right\rrbracket$, and $ I_2 $ is an interval of cardinality $n_2\leqslant n_1$. \\
$$\begin{minipage}[l]{17cm}
\includegraphics[height=1.4cm]{S_2intervalles_I1etI_2}
\end{minipage}$$
\underline{Strategy}: We will argue according to two parameters: the size of $I_2$ compared to $I_1$, and the distance between $I_1$ and $I_2$. For that we will consider
$$\alpha=\frac{n_2}{n_1} \ \text{ and } \ \beta=\frac{\min I_2-n_1}{n_1}.$$
We will distinguish several cases. First of all, if $\alpha$ is below a certain threshold $\alpha_0$ (which we will have to optimize at the end of the proof), then it will suffice to choose a large Sidon set in $I_1$. Indeed, if $\alpha$ is small, then $I_2$ is small compared to $I_1$, and we will not need its contribution to choose our Sidon set. If, on the other hand, $\alpha$ is greater than $\alpha_0$, then we will distinguish two more cases depending on the size of $\beta$. If $\beta$ is below a certain threshold $\beta_0$ (which we will also have to optimize at the end of the proof), then $I_2$ is sufficiently close to $I_1$. To get a large Sidon set in $I_1 \cup I_2$, we will take a large Sidon set in $\left\llbracket 1, \max I_2 \right\rrbracket$ and remove from it the middle elements, namely those in $\left\llbracket n_1+1, \min I_2-1 \right\rrbracket$. We will use Singer's famous theorem \cite{Singer} (see also \cite{HalberstamRoth}, Chapter II) to find a large Sidon set in $\left\llbracket 1, \max I_2 \right\rrbracket$ with few elements in $\left\llbracket n_1+1, \min I_2-1 \right\rrbracket$. Finally, if $\beta$ is greater than $\beta_0$, we will transform a Sidon set in $\left\llbracket 1, n_1 + n_2 \right\rrbracket$ to obtain a large Sidon set in $I_1 \cup I_2$.
\vspace{0.5cm}
Let $\alpha_0\in \left]0,1\right]$, $\beta_0\in\mathbb{R}_+$, $\alpha=\frac{n_2}{n_1}$ and $\beta=\frac{\min I_2-n_1}{n_1}$. \\
\vspace{0.3cm}
$$\begin{minipage}[l]{17cm}
\includegraphics[height=2cm]{S_2intervalles_I1procheI_2}
\end{minipage}$$
\textbf{i)\underline{ If $\alpha\leqslant\alpha_0$}.}
\vspace{0.5cm}
Write $n_2=\alpha n_1$. It suffices then to choose a Sidon set $ S $ in $ I_1 $ of size $ \sqrt{n_1} $. In this way, we have
$$ F(A) \gtrsim \sqrt{n_1} \gtrsim \frac{1}{\sqrt{1+\alpha}}\sqrt{n_1+n_2} \gtrsim \frac{1}{\sqrt{1+\alpha_0}}\sqrt{n_1+n_2} .$$
Finally since $\left| A \right|=n_1+n_2$, in this case we get
\begin{equation}\label{alpha<alpha_0}
F(A)\gtrsim \frac{1}{\sqrt{1+\alpha_0}}\sqrt{\left| A \right|}.
\end{equation}
\vspace{0.5cm}
\textbf{ii)\underline{ If $\alpha\geqslant\alpha_0$ and $\beta\leqslant\beta_0$}.}
\vspace{0.5cm}
We write $n_2=\alpha n_1$ again and we recall that $\beta=\frac{\min I_2-n_1}{n_1}$. In this way, if $n=\max A$, we have
\begin{equation}\label{n=1+alpha+beta)n1}
n= (1+\alpha+\beta)n_1.
\end{equation}
\vspace{0.3cm}
$$\begin{minipage}[l]{17cm}
\includegraphics[height=2.5cm]{S_2intervalles_n=1+alpha+betan1}
\end{minipage}$$
As explained before, we want to use Singer's theorem.
\begin{thm}[Singer, \cite{Singer}]\label{thm_Singer}
Let $p$ be a prime. Then there exist $p+1$ Sidon sets $S_1,...,S_{p+1}$, each of size $p+1$, such that
$$\bigcup\limits_{\substack{i=1}}^{p+1} S_i=\left\lbrace 1,...,p^2+p+1 \right\rbrace.$$
\end{thm}
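For instance, for $p=2$ one may take the translates $\{1,2,4\}$, $\{3,4,6\}$ and $\{4,5,7\}$ of the perfect difference set $\{1,2,4\}$ modulo $7$: each of these $p+1=3$ sets is a Sidon set of size $3$, and their union is $\{1,\dots,7\}=\{1,\dots,p^2+p+1\}$.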
We want to use it in $\left\llbracket 1,n \right\rrbracket$, so we need to approximate $ n $ by $p^2+p+1$ where $ p $ is a prime number. Let $ p $ and $ p '$ be the two consecutive prime numbers such that
$$p^2+p+1\leqslant n <p'^2+p'+1.$$
Since $p$ and $p'$ are consecutive, it is known that $ p'-p = O(p^{5/8}) $ (see \cite{p-p'=O(p5/8)}). (Better results exist on the gap between consecutive primes, see \cite{p-p'=O(best)}, but this bound is enough for us.)
We have $p^2+p+1\leqslant n$. According to Singer's theorem (Theorem \ref{thm_Singer}), there exist $p+1$ Sidon sets $S_i$ ($i=1,...,p+1$), each of size $p+1$, whose union is $\left\llbracket 1,p^2+p+1 \right\rrbracket$.
Since $n=p^2+O(p'^2-p^2+p'-p)=p^2+O(p^{13/8})$, we have
$$\min I_2=(1+\beta)n_1=\frac{1+\beta}{1+\alpha+\beta}n=\frac{1+\beta}{1+\alpha+\beta}p^2+O(p^{13/8})\leqslant p^2+p+1,$$
for sufficiently large $n$. Therefore $\left\rrbracket n_1,\min I_2 \right\llbracket\subset\left\llbracket 1,p^2+p+1 \right\rrbracket$, thus
$$\bigcup\limits_{\substack{i=1}}^{p+1}\big( S_i\cap\left\rrbracket n_1,\min I_2 \right\llbracket\big)=\left\rrbracket n_1,\min I_2 \right\llbracket,$$ and
$$\sum\limits_{\substack{i=1}}^{p+1}\big| S_i\cap\left\rrbracket n_1,\min I_2 \right\llbracket\big|=\beta n_1+o(n_1) .$$
So there exists $i\in\left\llbracket 1,p+1\right\rrbracket$ such that $S=S_i$ satisfies
$$\big| S\cap\left\rrbracket n_1,\min I_2 \right\llbracket\big|\leqslant \dfrac{\beta}{p+1}n_1+o\left(\frac{n_1}{p}\right).$$
Finally, with $S'=S\setminus \left\rrbracket n_1,\min I_2 \right\llbracket$, we have $S'\subset A$ and
$$\left| S' \right|\geqslant p+1-\dfrac{\beta}{p+1}n_1+o\left(\frac{n_1}{p}\right).$$
Now $p\sim\sqrt{n}$, and so by \eqref{n=1+alpha+beta)n1} we get
\begin{align*}
F(A) & \geqslant \left| S' \right| \\ & \geqslant\frac{1+\alpha}{1+\alpha+\beta}\sqrt{n}+o(\sqrt{n}) \\ & \geqslant\sqrt{\frac{1+\alpha}{1+\alpha+\beta}}\sqrt{n_1+n_2}+o(\sqrt{n_1+n_2})\\ & \geqslant\sqrt{\frac{1+\alpha}{1+\alpha+\beta_0}}\sqrt{n_1+n_2}+o(\sqrt{n_1+n_2}).
\end{align*}
Moreover the function $x\mapsto\sqrt{\dfrac{1+x}{1+x+\beta_0}}$ is increasing and we are in the case $\alpha\geqslant \alpha_0$, so finally
\begin{equation}\label{alpha>_beta<}
F(A)\gtrsim\sqrt{\frac{1+\alpha_0}{1+\alpha_0+\beta_0}}\sqrt{\left| A \right|}.
\end{equation}
\vspace{0.5cm}
\textbf{iii)\underline{ If $\alpha\geqslant\alpha_0$ and $\beta\geqslant\beta_0$}.}
\vspace{0.5cm}
Here, we will distinguish between the cases $\beta_0\geqslant 1$ and $\beta_0<1$. Observe also that the argument of case \textbf{iii.a} below only uses $\beta\geqslant 1$ and yields the strongest possible bound, so that in case \textbf{iii.b} we may further assume $\beta<1$ (which guarantees $S_1\subseteq I_1$ there).
\vspace{0.5cm}
\textbf{iii.a. If $\beta_0\geqslant 1$.}
\vspace{0.5cm}
Then $ \beta \geqslant 1 $ and therefore $ I_1 $ and $ I_2 $ are sufficiently far apart. \\
$$\begin{minipage}[l]{17cm}
\includegraphics[height=1.5cm]{S_2intervalles_I1loinI_2}
\end{minipage}$$
We then choose a Sidon set $ S $ in $\left\llbracket 1,n_1+n_2 \right\rrbracket$ and we define the new set $ S '$ by
$$S'=S_1\sqcup S_2,$$
where $S_1=S\cap\left\llbracket 1,n_1 \right\rrbracket$ and $S_2=S\cap\left\rrbracket n_1,n_1+n_2 \right\rrbracket+\min I_2-n_1$. So $S'\subseteq A$ and we will see that $ S' $ is a Sidon set. First note that $ S_1 $ and $ S_2 $ are Sidon sets, $\max S_1\leqslant n_1$ and $\min S_2>\min I_2\geqslant 2n_1$. Therefore for $a,b\in S'$, $a\neq b$, we have
\begin{equation}\label{disjonction1}
a,b\in S_1\Leftrightarrow a+b<2n_1,
\end{equation}
\begin{equation}\label{disjonction2}
a,b\in S_2\Leftrightarrow a+b>4n_1.
\end{equation}
Let $ a, b, c, d \in S '$ be such that $ a + b = c + d $. We will distinguish between the following three cases: $ a $ and $ b $ both belong to $ S_1 $, both to $ S_2 $, or one belongs to $ S_1 $ and the other to $ S_2 $.
\begin{itemize}
\item If $a,b\in S_1$, then by \eqref{disjonction1} $c,d\in S_1$ and since $ S_1 $ is a Sidon set, $\left\lbrace a,b \right\rbrace=\left\lbrace c,d \right\rbrace$.
\item If $a,b\in S_2$, then by \eqref{disjonction2} $c,d\in S_2$ and $ S_2 $ is a Sidon set, so $\left\lbrace a,b \right\rbrace=\left\lbrace c,d \right\rbrace$.
\item If $a\in S_1$ and $b\in S_2$, then as seen in the previous arguments, necessarily $ c $ and $ d $ cannot both belong to $ S_1 $ nor both to $ S_2 $. Suppose therefore without loss of generality that $ c \in S_1 $ and $ d \in S_2 $. So we have
$$a+b=c+d \Leftrightarrow a+(b-\min I_2+n_1)=c+(d-\min I_2+n_1),$$
and $a,(b-\min I_2+n_1),c,(d-\min I_2+n_1)\in S$ by construction of $S_1$ and $S_2$. So $\left\lbrace a,(b-\min I_2+n_1) \right\rbrace=\left\lbrace c,(d-\min I_2+n_1) \right\rbrace$ because $S$ is a Sidon set. Moreover, since $a,c\in S_1$ and $b,d\in S_2$, we have $a,c\in \left\llbracket 1,n_1 \right\rrbracket$ and $(b-\min I_2+n_1),(d-\min I_2+n_1)\in \left\rrbracket n_1,n_1+n_2 \right\rrbracket$. Hence $a=c$ and $(b-\min I_2+n_1)=(d-\min I_2+n_1)$, so finally $a=c$ and $b=d$.
\end{itemize}
In conclusion, in any case, we get $\left\lbrace a,b \right\rbrace=\left\lbrace c,d \right\rbrace$, which means that $ S ' $ is a Sidon set. It then suffices to notice that $ \left| S' \right|= \left| S \right| $ and to recall that $ \left\llbracket 1, n_1 + n_2 \right\rrbracket $ contains Sidon sets of size $\sim\sqrt{ n_1 + n_2} $, to conclude that when $ \min I_2-n_1 \geqslant n_1 $, we have
\begin{equation}\label{alpha>_beta>_a}
F(A)\gtrsim \sqrt{\left| A \right|}.
\end{equation}
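The splitting argument above is also easy to test numerically. The following sketch (with purely illustrative parameters; the greedily constructed Sidon set merely stands in for a near-optimal one) builds a Sidon set in $\left\llbracket 1,n_1+n_2 \right\rrbracket$, shifts its upper part into $I_2$, and checks that the Sidon property is preserved when $\min I_2\geqslant 2n_1$.
\begin{verbatim}
# Illustrative numerical check of the splitting argument of case iii.a.
def greedy_sidon(limit):
    # Greedily build a Sidon set inside [1, limit].
    s, sums = [], set()
    for x in range(1, limit + 1):
        new_sums = {x + y for y in s} | {2 * x}
        if not (new_sums & sums):
            s.append(x)
            sums |= new_sums
    return s

def is_sidon(s):
    # All sums x + y with x <= y must be distinct.
    all_sums = [x + y for i, x in enumerate(s) for y in s[i:]]
    return len(all_sums) == len(set(all_sums))

n1, n2 = 100, 80
min_I2 = 2 * n1 + 5                          # so that beta >= 1
S = greedy_sidon(n1 + n2)                    # Sidon set in [1, n1 + n2]
S1 = [x for x in S if x <= n1]
S2 = [x - n1 + min_I2 for x in S if x > n1]  # shift the tail into I2
assert is_sidon(S1 + S2)
\end{verbatim}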
\vspace{0.5cm}
\textbf{iii.b. If $\beta_0< 1$ and $\beta>2\alpha-1$.}
\vspace{0.5cm}
Let $S$ be a Sidon set in $\left\llbracket 1,\left\lfloor\frac{1+\beta}{2}n_1\right\rfloor+n_2 \right\rrbracket$, and define $S_1=S\cap\left\llbracket 1,\left\lfloor\frac{1+\beta}{2}n_1\right\rfloor \right\rrbracket$,
$$ S_2=\left( S\cap\left\rrbracket \left\lfloor\frac{1+\beta}{2}n_1\right\rfloor,\left\lfloor\frac{1+\beta}{2}n_1\right\rfloor+n_2 \right\rrbracket\right) +\left\lceil\frac{1+\beta}{2}n_1\right\rceil$$
and $S'=S_1\sqcup S_2$. Then $S_1\subseteq I_1$ and $S_2\subseteq I_2$, so $ S ' \subseteq A $, and we will see that $ S ' $ is a Sidon set. First note that $ S_1 $ and $ S_2 $ are Sidon sets. Moreover, we have $\max S_1\leqslant\frac{1+\beta}{2}n_1$, \\
$\min S_2\geqslant \min I_2+1=(1+\beta)n_1+1$, and since $\beta >2\alpha-1$,
$$\max S_1+\max S_2\leqslant \left(\frac{3}{2}(1+\beta)+\alpha\right)n_1<\left( 2+2\beta\right) n_1.$$
So we get as in \textbf{iii.a}, for $a,b\in S'$
$$a,b\in S_1 \Leftrightarrow a+b\leqslant (1+\beta)n_1,$$
$$a,b\in S_2 \Leftrightarrow a+b> (2+2\beta)n_1.$$
Therefore if $\beta>2\alpha-1$, as in the previous case, we prove that $S'$ is a Sidon set. So if $\beta>2\alpha-1$, $F(A)\geqslant \left| S' \right| =\left| S \right|$ and, writing $n=n_1+n_2$, we know that we can choose $S$ such that
$$
\left| S \right| \gtrsim \sqrt{\frac{1+\beta}{2}n_1+n_2} \gtrsim \sqrt{\frac{1+\beta}{2(1+\alpha)}n+\frac{\alpha}{1+\alpha}n} \gtrsim \sqrt{\dfrac{1+2\alpha+\beta}{2(1+\alpha)}}\sqrt{n},
$$
so finally, if $\beta>2\alpha-1$, we have
\begin{equation}\label{alpha>_beta>_b}
F(A)\gtrsim \sqrt{\dfrac{1+2\alpha_0+\beta_0}{2(1+\alpha_0)}}\sqrt{\left| A \right|}.
\end{equation}
\vspace{0.5cm}
\textbf{iii.c. If $\beta_0< 1$ and $\beta\leqslant 2\alpha-1$.}
\vspace{0.5cm}
This time we choose a Sidon set $S$ in $\left\llbracket 1,\left\lfloor\frac{2}{3}(1+\alpha+\beta)n_1\right\rfloor \right\rrbracket$, and define ${S'=S_1\sqcup S_2}$ where $S_1=S\cap\left\llbracket 1,\left\lfloor\frac{1+\alpha+\beta}{3}n_1\right\rfloor \right\rrbracket$, and
$$S_2=\left( S\cap\left\rrbracket \left\lfloor\frac{1+\alpha+\beta}{3}n_1\right\rfloor,\left\lfloor\frac{2}{3}(1+\alpha+\beta)n_1\right\rfloor \right\rrbracket\right) +\left\lceil\frac{1+\alpha+\beta}{3}n_1\right\rceil .$$
Since $\alpha\leqslant 1$ and, in the current case, $\beta\leqslant 2\alpha-1\leqslant 1$, we have $1+\alpha+\beta\leqslant 3$ and so $S_1\subseteq I_1$. Moreover, $\left\lfloor\frac{1+\alpha+\beta}{3}n_1\right\rfloor+1+\left\lceil\frac{1+\alpha+\beta}{3}n_1\right\rceil\geqslant \frac{2}{3}(1+\alpha+\beta)n_1$ and here $\beta\leqslant 2\alpha-1$, so $\frac{2}{3}(1+\alpha+\beta)\geqslant 1+\beta$ and so $S_2\subseteq I_2$. Therefore $ S ' \subseteq A $ and, as in the two previous cases, we prove that $ S ' $ is a Sidon set. Finally, in this case, we get the bound
\begin{equation}\label{alpha>_beta>_b_2}
F(A)\gtrsim \sqrt{2\dfrac{1+\alpha_0+\beta_0}{3(1+\alpha_0)}}\sqrt{\left| A \right|}.
\end{equation}
\newpage
\textbf{iv)\underline{ Conclusion}.}
\vspace{0.5cm}
Whatever the case we are in, by \eqref{alpha<alpha_0}, \eqref{alpha>_beta<}, \eqref{alpha>_beta>_a}, \eqref{alpha>_beta>_b} and \eqref{alpha>_beta>_b_2}, we have
$$F(A)\gtrsim \min\big( m_1(\alpha_0,\beta_0),m_2(\alpha_0,\beta_0) \big)\sqrt{\left| A \right|} ,$$
where
$$m_1(\alpha_0,\beta_0)=\min\limits_{\substack{\beta_0>2\alpha_0-1}}\max \left( \frac{1}{\sqrt{1+\alpha_0}},\sqrt{\frac{1+\alpha_0}{1+\alpha_0+\beta_0}},\sqrt{\dfrac{1+2\alpha_0+\beta_0}{2(1+\alpha_0)}} \right),$$
and
$$m_2(\alpha_0,\beta_0)=\min\limits_{\substack{\beta_0\leqslant2\alpha_0-1}}\max \left( \frac{1}{\sqrt{1+\alpha_0}},\sqrt{\frac{1+\alpha_0}{1+\alpha_0+\beta_0}},\sqrt{2\dfrac{1+\alpha_0+\beta_0}{3(1+\alpha_0)}} \right) .$$
Optimizing the choices of $ \alpha_0 $ and $ \beta_0 $, we get
$$m_1(\alpha_0,\beta_0)\geqslant \sqrt{\dfrac{\sqrt{13}+1}{6}}\geqslant 0.876,$$
attained at $(\alpha_0,\beta_0)=\left( \frac{\sqrt{13}-3}{2},4-\sqrt{13} \right)$, and
$$m_2(\alpha_0,\beta_0)\geqslant \left(\frac{2}{3}\right)^{1/4}\geqslant 0.903,$$
attained at $(\alpha_0,\beta_0)=\left( \frac{1+\sqrt{6}}{5},\frac{2\sqrt{6}-3}{5} \right)$.
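As a quick verification of the first optimization, at $(\alpha_0,\beta_0)=\left( \frac{\sqrt{13}-3}{2},4-\sqrt{13} \right)$ the three quantities appearing in $m_1$ coincide:
$$\frac{1}{1+\alpha_0}=\frac{2}{\sqrt{13}-1}=\frac{\sqrt{13}+1}{6},\qquad \frac{1+\alpha_0}{1+\alpha_0+\beta_0}=\frac{\sqrt{13}-1}{7-\sqrt{13}}=\frac{\sqrt{13}+1}{6},\qquad \frac{1+2\alpha_0+\beta_0}{2(1+\alpha_0)}=\frac{2}{\sqrt{13}-1}=\frac{\sqrt{13}+1}{6},$$
so each of the three maximands equals $\sqrt{(\sqrt{13}+1)/6}$.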
Finally $m_1(\alpha_0,\beta_0)<m_2(\alpha_0,\beta_0)$, so the minimum of the two bounds is $m_1(\alpha_0,\beta_0)\geqslant 0.876$, which ends the proof.
\end{proof}
\section{Upper bound for the maximum size of a Sidon set in a union of intervals}
In the previous section we gave a lower bound for the maximum size of a Sidon set in a union of two intervals. Conversely, we seek in this section an upper bound for the maximum size of a Sidon set in a union of intervals. If we consider two intervals of size $n/2$ for example, in each of these two intervals, we can only choose at most (asymptotically) $\sqrt{n/2}$ elements because otherwise it would contradict \eqref{Sdans(1,n)}. A trivial asymptotic bound would therefore be $2\sqrt{n/2}=\sqrt{2}\sqrt{n}$. Using the Erd\H{o}s-Tur\'an small difference technique \cite{HalberstamRoth}, we can go down to $\sqrt{n}$.
Actually, we can prove the result for a fixed number of intervals and even for an increasing number of intervals if it remains $o\left( \sqrt{n}\right) $. This is the content of the following theorem.
\begin{theorem}\label{S_dans_intervalles}
If $E$ is a set of cardinality $n\in\mathbb{N}^*$ and $E$ is a union of $k$ intervals, then any Sidon set included in $E$ has size at most \\
i) $\left(\alpha+\sqrt{2+\alpha^2} \right)\sqrt{n} +o(\sqrt{n})$ if $\limsup\limits_{\substack{n\rightarrow +\infty}}\dfrac{k}{\sqrt{n}}=\alpha > 0$ \\
ii) $\sqrt{n}+o(\sqrt{n})$ if $k=o(\sqrt{n})$ \\
iii) $\sqrt{n}+\sqrt{k}n^{1/4}+o(n^{1/4})$ if $k=o(n^{1/4})$.
\end{theorem}
\begin{proof}
Let $n,k\in\mathbb{N}^*$ be such that $k\leqslant n$, and
$$E=\bigsqcup\limits_{\substack{i=1}}^k \left\llbracket n^-_i,n^+_i-1 \right\rrbracket ,$$
where $n^-_1<n^+_1<n^-_2<n^+_2<...<n^-_k<n^+_k$ and $\sum\limits_{\substack{i=1}}^k \big( n^+_i-n^-_i\big) =n$. Let $S\subseteq E$ be a Sidon set. For $u$ an integer such that $u<n$, we define the set $\mathcal{M}$ by
$$\mathcal{M}=E+\left\llbracket 1,u \right\rrbracket=\left(\bigsqcup\limits_{\substack{i=1}}^k\left\llbracket n^-_i,n^+_i-1 \right\rrbracket\right)+\left\llbracket 1,u \right\rrbracket.$$
We have $\left|\mathcal{M}\right|\leqslant\sum\limits_{\substack{i=1}}^k \big( u+n^+_i-n^-_i\big)=n+ku$.
For $m\in\mathcal{M}$, we consider the intervals $I_m$ defined by
$$I_m=\left\llbracket m-u,m-1 \right\rrbracket.$$
Let $r=\left| S \right|$. Since each element of $S$ occurs in exactly $u$ intervals of type $I_m$, we have
\begin{equation}\label{II.1.1.1}
\sum\limits_{\substack{m\in \mathcal{M}}}\left|I_m\cap S\right|=ru.
\end{equation}
Thus by the Cauchy-Schwarz inequality, we obtain
\begin{align*}
(ru)^2 & \leqslant\left(\sum\limits_{\substack{m\in \mathcal{M}}} 1\right)\left(\sum\limits_{\substack{m\in \mathcal{M}}}\left|I_m\cap S\right|^2\right) \\ & \leqslant (n+ku)\left(\sum\limits_{\substack{m\in \mathcal{M}}}\left|I_m\cap S\right|^2\right),
\end{align*}
and so
\begin{equation}\label{II.1.1.2}
\sum\limits_{\substack{m\in \mathcal{M}}}\left|I_m\cap S\right|^2\geqslant\dfrac{(ru)^2}{n+ku}.
\end{equation}
For $u<n$ and $m\in \mathcal{M}$, we define
$$T_u(m)=\left|\left\lbrace (s_1,s_2) \ \vert \ s_1,s_2\in \big(S\cap I_m\big) \ , \ s_1<s_2 \right\rbrace\right| ,$$
and
$$T_u=\sum\limits_{\substack{m\in \mathcal{M}}}T_u(m).$$
On the one hand, by \eqref{II.1.1.1} and \eqref{II.1.1.2}, we have
$$ T_u = \sum\limits_{\substack{m\in \mathcal{M}}}T_u(m) = \sum\limits_{\substack{m\in \mathcal{M}}}\binom{\left|I_m\cap S\right|}{2} \geqslant \dfrac{1}{2}\left(\dfrac{(ru)^2}{n+ku}-ru\right),$$
which yields
\begin{equation}\label{min_T_u}
T_u\geqslant \dfrac{ru}{2}\left(\dfrac{ru}{n+ku}-1\right).
\end{equation}
On the other hand, for any pair $(s_1,s_2)$ counted in $T_u$, $s_2-s_1$ is an integer $d$ satisfying $0<d<u$. Moreover, since $S$ is a Sidon set, for each $d$ there is at most one such pair $(s_1, s_2)$. Finally, a pair $(s_1, s_2)$ corresponding to a given $d$ appears in exactly $u-d$ intervals $I_m$. Thereby
$$ T_u \leqslant \sum\limits_{\substack{d=1}}^{u-1}(u-d) =\dfrac{u(u-1)}{2} .$$
Using \eqref{min_T_u}, we get
$$\dfrac{ru}{2}\left(\dfrac{ru}{n+ku}-1\right) \leqslant \dfrac{u(u-1)}{2} ,$$
which leads to
$$ r^2u-(n+ku)r\leqslant (u-1)(n+ku),$$
and finally
\begin{equation}\label{avant_choix_u}
r\leqslant\sqrt{\frac{u-1}{u}(n+ku)+\dfrac{(n+ku)^2}{4u^2}}+\dfrac{n+ku}{2u}.
\end{equation}
We just have to choose different values for $u$ according to the relative size of $k$ compared to $n$ in order to conclude.
\begin{itemize}
\item If $\limsup\limits_{\substack{n\rightarrow +\infty}}\dfrac{k}{\sqrt{n}}=\alpha\neq 0$, then we choose $u=\left\lceil\sqrt{n}/\alpha\right\rceil$ and \eqref{avant_choix_u} gives
\begin{align*}
r & \leqslant \dfrac{\alpha\sqrt{n}}{2}+\frac{k}{2}+\sqrt{n+\frac{k\sqrt{n}}{\alpha}+\left(\frac{\alpha\sqrt{n}}{2}+\frac{k}{2} \right)^2+o(n)} \\ & \leqslant \sqrt{n}\left(\alpha+\sqrt{2+\alpha^2} \right) +o(\sqrt{n}).
\end{align*}
\begin{Remarque}\label{S_dans_intervalles_alpha=1/sqrt(2)}
As we want $ r $ to be $ O (\sqrt{n}) $, in \eqref{avant_choix_u}, on the one hand the term $ k (u-1) $ under the root forces us to choose $ u = O (\sqrt{n}) $, and on the other hand the term $ \frac{n}{2u} $ outside the root leads us to choose $ \sqrt{n} = O (u) $. So necessarily, our choice will be of the form $u=\gamma \sqrt{n}$.
The choice $u=\left\lceil\sqrt{n}/\alpha\right\rceil$ is the simplest one giving a good bound for all $ \alpha $, but at this stage, if $ \alpha $ is known precisely, it is possible to do a little better (a numerical sketch of this minimization is given right after the proof). For example if $\alpha=1/\sqrt{2}$, then choosing $u=\left\lceil\beta\sqrt{n}\right\rceil$ and injecting into \eqref{avant_choix_u}, we obtain a function of $\beta$ to be minimized. For $\beta=1.79$, we get
$$r\leqslant 2.266\sqrt{n},$$
whereas our general choice $u=\left\lceil\sqrt{n}/\alpha\right\rceil$ only yields
$$r\leqslant 2.29\sqrt{n}.$$
Similarly if $\alpha=1$, we are led to choose $\beta=\frac{1+\sqrt{5}}{2}$, which gives
$$r\leqslant \frac{3+\sqrt{5}}{2}\sqrt{n}\leqslant 2.62\sqrt{n}.$$
\end{Remarque}
\item If $k=o(\sqrt{n})$, then we choose $u=\left\lceil \dfrac{n^{3/4}}{\sqrt{k}}\right\rceil$ and \eqref{avant_choix_u} gives
\begin{align}\label{cas2versCas3}
r & \leqslant \sqrt{k}\dfrac{n^{1/4}}{2}+\dfrac{k}{2}+\sqrt{n+\sqrt{k}n^{3/4}+\left(\sqrt{k}\dfrac{n^{1/4}}{2}+\dfrac{k}{2} \right)^2} \\ & \leqslant \sqrt{n}\sqrt{1+\frac{\sqrt{k}}{n^{1/4}}+o(1)}+o(\sqrt{n}) \notag \\ & \leqslant \sqrt{n}+o(\sqrt{n}). \notag
\end{align}
If $k=o(n^{1/4})$, we can make the error term more precise.
\item If $k=o(n^{1/4})$, \eqref{cas2versCas3} gives
\begin{align*}
r & \leqslant \sqrt{k}\dfrac{n^{1/4}}{2}+\dfrac{k}{2}+\sqrt{n+\sqrt{k}n^{3/4}+\left(\sqrt{k}\dfrac{n^{1/4}}{2}+\dfrac{k}{2} \right)^2} \\ & \leqslant \sqrt{k}\dfrac{n^{1/4}}{2}+\dfrac{k}{2}+\sqrt{n}\sqrt{1+\frac{\sqrt{k}}{n^{1/4}}+\frac{k}{4\sqrt{n}}+\frac{k^{3/2}}{2n^{3/4}}+\frac{k^2}{4n}} \\ & \leqslant \sqrt{n}+\sqrt{k}n^{1/4}+o(n^{1/4}),
\end{align*}
where the last line comes from the Taylor expansion $\sqrt{1+x}=1+\frac{x}{2}+o(x)$.
\end{itemize}
\end{proof}
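The minimization announced in Remark \ref{S_dans_intervalles_alpha=1/sqrt(2)} is elementary to carry out numerically: dividing \eqref{avant_choix_u} by $\sqrt{n}$, with $u=\beta\sqrt{n}$ and $k\sim\alpha\sqrt{n}$, and letting $n\to\infty$ leaves the function
$$g_\alpha(\beta)=\frac{1+\alpha\beta}{2\beta}+\sqrt{1+\alpha\beta+\Big(\frac{1+\alpha\beta}{2\beta}\Big)^2}.$$
The following sketch (illustrative only, not part of the proof) minimizes $g_\alpha$ on a grid.
\begin{verbatim}
# Grid minimization of g_alpha(beta) (illustrative sketch).
from math import sqrt

def g(alpha, beta):
    a = (1 + alpha * beta) / (2 * beta)
    return a + sqrt(1 + alpha * beta + a * a)

for alpha in (1 / sqrt(2), 1.0):
    betas = [i / 1000 for i in range(100, 5000)]
    best = min(betas, key=lambda b: g(alpha, b))
    print(alpha, best, g(alpha, best))
# alpha = 1/sqrt(2): minimum ~ 2.266, attained near beta ~ 1.79;
# alpha = 1:         minimum ~ (3+sqrt(5))/2, near beta ~ (1+sqrt(5))/2.
\end{verbatim}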
\section{Conclusion and Remarks}
Theorems \ref{S_dans2intervalles} and \ref{S_dans_intervalles} prove that if $A$ is the union of two intervals of respective sizes $n_1$ and $n_2$, the maximum cardinality of a Sidon set in $A$ is (asymptotically) between $0.876\sqrt{n_1+n_2}$ and $\sqrt{n_1+n_2}$. Erd\H{o}s' conjecture claims that it should be equivalent to $\sqrt{n_1+n_2}$. Therefore, it would be very interesting to improve Theorem \ref{S_dans2intervalles} in order to bring the constant $0.876$ closer to $1$.
It is also surely possible to improve the first point of Theorem \ref{S_dans_intervalles} but we will never be able to reach $\sqrt{n}$. Indeed, it is easy to build Sidon sets with a cardinality larger than $\sqrt{n}$ under the hypothesis of Theorem \ref{S_dans_intervalles}.
\begin{prop}\label{S_dans_n_intervalles}
Let $n\in\mathbb{N}^*$. There exists a Sidon set of size $2n-1$ in a union of $n$ intervals each of size $n$.
\end{prop}
\begin{proof}
Let $n\in\mathbb{N}^*$, $S_1=\left\lbrace 2^{n+k} \ \vert \ k=1,...,n \right\rbrace$, $S_2=\left\lbrace 2^{n+k}+k \ \vert \ k=1,...,n-1 \right\rbrace$ and $S=S_1\sqcup S_2$. Since $\left| S \right|=2n-1$ and
$$S\subseteq \bigsqcup\limits_{\substack{k=1}}^{n}\left\llbracket 2^{n+k},2^{n+k}+n-1 \right\rrbracket,$$
we just have to check that $S$ is a Sidon set. Let $a,b,c,d\in S$ be such that $a+b=c+d$.
Since two sums of the form $2^{n+k_1}+2^{n+k_2}$ corresponding to distinct multisets $\left\lbrace k_1,k_2 \right\rbrace$ differ by at least $2^{n+1}>2(n-1)$, while every element of $S$ lies within $n-1$ of the corresponding power $2^{n+k}$, the equality $a+b=c+d$ implies that there exist $k_1$ and $k_2$ in $\left\lbrace 1,...,n \right\rbrace$ such that $\left\lbrace a,b \right\rbrace$ and $\left\lbrace c,d \right\rbrace$ are both contained in $\left\llbracket 2^{n+k_1},2^{n+k_1}+n-1 \right\rrbracket\cup\left\llbracket 2^{n+k_2},2^{n+k_2}+n-1 \right\rrbracket$. We can assume without loss of generality that $a,c\in \left\llbracket 2^{n+k_1},2^{n+k_1}+n-1 \right\rrbracket$ and $b,d\in\left\llbracket 2^{n+k_2},2^{n+k_2}+n-1 \right\rrbracket$. In this way, we have
\begin{align*}
a+b=c+d & \Rightarrow a-c=d-b \\ & \Rightarrow a-c \in\left\lbrace 0,\pm k_2 \right\rbrace.
\end{align*}
But since $a,c\in \left\llbracket 2^{n+k_1},2^{n+k_1}+n-1 \right\rrbracket$, we also have $a-c \in\left\lbrace 0,\pm k_1 \right\rbrace$. Therefore either $a=c$, and so $\left\lbrace a,b \right\rbrace=\left\lbrace c,d \right\rbrace$, or $k_1=k_2$, in which case $a,b,c,d\in\left\lbrace 2^{n+k_1},2^{n+k_1}+k_1 \right\rbrace$ and $a+b=c+d$ again forces $\left\lbrace a,b \right\rbrace=\left\lbrace c,d \right\rbrace$, which ends the proof.
\end{proof}
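With the base $2^{n+k}$ used above, the construction is also easy to verify by brute force for small $n$; the following sketch is purely illustrative.
\begin{verbatim}
# Brute-force check of the construction for small n (illustrative).
def is_sidon(s):
    # All sums x + y with x <= y must be distinct.
    all_sums = [x + y for i, x in enumerate(s) for y in s[i:]]
    return len(all_sums) == len(set(all_sums))

for n in range(1, 12):
    S1 = [2 ** (n + k) for k in range(1, n + 1)]
    S2 = [2 ** (n + k) + k for k in range(1, n)]
    assert is_sidon(S1 + S2) and len(S1 + S2) == 2 * n - 1
\end{verbatim}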
This proposition proves that the condition $ k = o(\sqrt{n}) $ in point $ ii) $ of Theorem \ref{S_dans_intervalles} is, in a sense, optimal. To improve this theorem, it would be necessary to reduce the constant $\left(\alpha+\sqrt{2+\alpha^2} \right)$ in front of $ \sqrt{n} $ in the bound of point $ i) $. However, Proposition \ref{S_dans_n_intervalles} implies that we cannot go below the constant $ 2 $ for $ \alpha = 1 $.
\newpage
\bibliographystyle{plain}
| {
"timestamp": "2022-02-04T02:04:02",
"yymm": "2202",
"arxiv_id": "2202.01296",
"language": "en",
"url": "https://arxiv.org/abs/2202.01296",
"abstract": "We study the maximum size of Sidon sets in unions of integers intervals. If $A\\subseteq\\mathbb{N}$ is the union of two intervals and if $\\left| A \\right|=n$ (where $\\left| A \\right|$ denotes the cardinality of $A$), we prove that $A$ contains a Sidon set of size at least $0, 876\\sqrt{n}$. On the other hand, by using the small differences technique, we establish a bound of the maximum size of Sidon sets in the union of $k$ intervals.",
"subjects": "Combinatorics (math.CO); Number Theory (math.NT)",
"title": "Sidon sets in a union of intervals",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9790357585701874,
"lm_q2_score": 0.8080672135527632,
"lm_q1q2_score": 0.7911266973963272
} |
https://arxiv.org/abs/1702.04915 | Scaling limit of the uniform prudent walk | We study the 2-dimensional uniform prudent self-avoiding walk, which assigns equal probability to all nearest-neighbor self-avoiding paths of a fixed length that respect the prudent condition, namely, the path cannot take any step in the direction of a previously visited site. The uniform prudent walk has been investigated with combinatorial techniques in [Bousquet-Mélou, 2010], while another variant, the kinetic prudent walk has been analyzed in detail in [Beffara, Friedli and Velenik, 2010]. In this paper, we prove that the $2$-dimensional uniform prudent walk is ballistic and follows one of the $4$ diagonals with equal probability. We also establish a functional central limit theorem for the fluctuations of the path around the diagonal. | \section{Introduction}
The prudent walk was introduced in \cite{TD87b,TD87a} and \cite{SSK01} as a simplified version of the self-avoiding walk. It has attracted the attention of the combinatorics community in recent years, see e.g., \cite{B10,BI15,DG08}, and also the probability
community, see e.g., \cite{BFV10} and \cite{PT16}.
\smallskip
In dimension $2$, for a given $L\in \mathbb{N}$, the set $\Omega_L$ of $L$-step prudent paths on $\mathbb{Z}^2$ contains all nearest-neighbor self-avoiding paths starting from the origin which never take a step in the direction of a previously visited site, i.e.,
\begin{align}\label{defPP}
\nonumber \Omega_L:=\big\{(\pi_i)_{i=0}^L\in (\mathbb{Z}^2)^{L+1}\colon\, &\pi_0=(0,0), \pi_{i+1}-\pi_i\in \{\leftarrow,\rightarrow,\downarrow,\uparrow\} \quad \forall i\in \{0,\dots, L-1\}, \\
& \big(\pi_i+\mathbb{N} (\pi_{i+1}-\pi_i)\big) \cap \pi_{[0,i]}=\emptyset \quad \, \forall i\in \{0,\dots, L-1\}\big\}
\end{align}
where $\pi_{[0,i]}$ is the range of $\pi$ at time $i$, i.e., $\pi_{[0,i]}=\{\pi_j: 0\leq j\leq i\}$.
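Purely as an illustration of the definition (and not as a tool used below), $\Omega_L$ can be enumerated by brute force for small $L$; a minimal sketch:
\begin{verbatim}
# Brute-force enumeration of L-step prudent paths (illustrative sketch).
from itertools import product

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def is_prudent(path):
    # No step may point in the direction of a previously visited site.
    for i in range(len(path) - 1):
        dx = path[i + 1][0] - path[i][0]
        dy = path[i + 1][1] - path[i][1]
        visited = set(path[: i + 1])
        x, y = path[i]
        for m in range(1, i + 2):  # the ray pi_i + m (pi_{i+1} - pi_i)
            if (x + m * dx, y + m * dy) in visited:
                return False
    return True

def count_prudent(L):
    total = 0
    for steps in product(STEPS, repeat=L):
        path = [(0, 0)]
        for dx, dy in steps:
            path.append((path[-1][0] + dx, path[-1][1] + dy))
        total += is_prudent(path)
    return total

print([count_prudent(L) for L in range(1, 5)])  # [4, 12, 36, 100]
\end{verbatim}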
\smallskip
Two natural laws can be considered on $\Omega_L$:
\begin{enumerate}
\item The \emph{uniform} law $\bP_{\text{unif},L}$, also referred to as the uniform prudent walk, under which every path in $\Omega_L$ is assigned equal probability $1/|\Omega_L|$;
\item The \emph{kinetic} law $\bP_{\text{kin},L}$, also referred to as the kinetic prudent walk, under which each step of the path is chosen uniformly among all the admissible steps. Note that the first step is in one of the $4$ directions with equal probability. Subsequently,
if a step increases either the width or the height of its range, then the next step has $3$ admissible choices; otherwise there are only $2$ admissible choices. Let ${\ensuremath{\mathcal H}} (\pi_{[0, L-1]})$ and ${\ensuremath{\mathcal W}} (\pi_{[0, L-1]})$ denote the height and width of the range of $\pi_{[0, L-1]}$. Then, for $L\in \mathbb{N}$
and $\pi \in \Omega_L$, we note that
\begin{align}\label{linkpruki}
\bP_{\text{kin},L}(\pi)&=\tfrac{1}{4}\big(\tfrac{1}{2}\big)^{L-1-{\ensuremath{\mathcal H}} (\pi_{[0, L-1]})-{\ensuremath{\mathcal W}} (\pi_{[0, L-1]})} \, \big(\tfrac{1}{3}\big)^{{\ensuremath{\mathcal H}} (\pi_{[0, L-1]})+{\ensuremath{\mathcal W}} (\pi_{[0, L-1]})}.
\end{align}
\end{enumerate}
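As a sanity check of \eqref{linkpruki}, take $L=2$: the range $\pi_{[0,1]}$ satisfies ${\ensuremath{\mathcal H}} +{\ensuremath{\mathcal W}} =1$, so the formula gives $\tfrac{1}{4}\cdot\tfrac{1}{3}=\tfrac{1}{12}$, which matches the direct computation: the first step is uniform over the $4$ directions and, having increased the range, leaves $3$ admissible choices for the second step.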
\cite{BFV10} proved that the scaling limit of the kinetic prudent walk is given by
$Z_{u}=\int_{0}^{3u/7}\big( \sigma_1 {\sf 1}_{\{W_s\geq 0\}} {1 \choose 0} +\sigma_2 {\sf 1}_{\{W_s < 0\}} {0 \choose 1}\big)\dd s$, where $W$ is a Brownian motion and $\sigma_1,\sigma_2\in \{-1,1\}$ are random signs (independent of $W$), cf. \cite[Theorem 1]{BFV10}.
\smallskip
In this paper, we identify rigorously the scaling limit of the 2-dimensional uniform prudent walk, proving a conjecture
raised in several papers, e.g., \cite[Section 5]{BFV10}, and \cite[Proposition 8]{B10} where partial answers were provided
for the \emph{2-sided} and \emph{3-sided} versions of the 2-dimensional prudent walk using combinatorial techniques. The conjecture, supported by numerical simulations, was that when space and time are rescaled by the length $L$, the 2-dimensional uniform prudent walk converges to a straight line in one of the 4 diagonal directions chosen with equal probability. This is in stark contrast to the kinetic prudent walk.
\smallskip
\section{Main results}
\begin{definition}\label{def-pi-tilde}
For every $\pi\in \Omega_L$, let $\tilde \pi^L:[0,1]\mapsto \mathbb{R}^2$ be the rescaled and interpolated version of $\pi$, i.e.,
$$\tilde \pi^L_t=\frac{1}{L} \big(\pi_{\lfloor t L\rfloor}+(tL-\lfloor tL\rfloor) (\pi_{\lfloor tL\rfloor+1}-\pi_{\lfloor t L\rfloor})\big), \quad t\in [0,1].$$
\end{definition}
We also denote $\vec{e}_1:=(1,1)$, $\vec{e}_2:=(-1,1)$, $\vec{e}_3:=(-1,-1)$ and $\vec{e}_4:=(1,-1)$.
\smallskip
Our first result shows that the scaling limit of the uniform prudent walk is a straight line segment.
\begin{theorem}[Concentration along the diagonals]\label{unifscal}
There exists a $c>0$ such that for every $\gep >0$
\be{concentr}
\lim_{L\to \infty} \bP_{{\rm unif},L}\bigg(\exists\, i \in \{1,\dots ,4\} \ s.t.\ \sup_{t\in [0,1]}\big|\tilde \pi^L_t-c t\, \vec{e}_i \big|\leq \gep \bigg)=1.
\end{equation}
\end{theorem}
Furthermore, we can identify the fluctuation of the prudent walk around the diagonal. More precisely, let $\sigma_L=1, 2, 3, 4$, depending
on whether $\tilde \pi^L_1$ lies in the interior of the 1st, 2nd, 3rd, or the 4th quadrant, and let $\sigma_L=0$ otherwise. Then we have
\begin{theorem}[Fluctuations around the diagonal]\label{unifscal2}
Under $\bP_{{\rm unif},L}$, the law of $\sigma_L$ converges to the uniform distribution on $\{1, 2, 3, 4\}$, and
\be{concentrclt}
\big(\sqrt L(\tilde \pi^L_t-ct \vec{e}_{\sigma_L})\big)_{t\in [0,1]} \Rightarrow (B_t)_{t\in [0,1]} \qquad \mbox{as } L\to\infty,
\end{equation}
where $\Rightarrow$ denotes weak convergence, and $(B_t)_{t\geq 0}$ is a two-dimensional Brownian motion with a non-degenerate covariance matrix, cf.\ \eqref{covar}.
\end{theorem}
The proof of Theorem \ref{unifscal} follows the strategy used by \cite{BFV10}.
We consider the so-called \emph{uniform 2-sided prudent walk} (cf. Section \ref{tspw}), a sub-family of prudent walks with a fixed diagonal direction.
First we prove that the scaling limit of the uniform 2-sided prudent walk is a straight line, cf. Theorem \ref{scalingtwosided}.
A weaker version of this result
was already proven by \cite[Proposition 6]{B10}. We reinforce it by using an alternative probabilistic approach.
We decompose a path into a sequence of excursions, which leads to an \emph{effective} one-dimensional random walk with geometric increments, see e.g., Figure \ref{fig1}.
Then we show that under the uniform measure, a typical path of length $L$ crosses its range from one end to the other at most $\log L$ times
and the total length of the first $\log L$ excursions also grows at most logarithmically in $L$. This result refines the upper bound obtained by \cite{PT16}. The excursions crossing the range of the walk disappear in the scaling limit, while the remaining part of the path is nothing but a uniform 2-sided prudent walk (in one of the four diagonal directions), for which we have identified the correct scaling limit.
\smallskip
Theorem \ref{unifscal2} can be proved using the same strategy. Once it is shown to hold for the 2-sided uniform prudent walk, cf. Theorem \ref{scalingtwosided2}, then it also holds for the uniform prudent walk thanks to control on the number of excursions crossing the range of the walk.
\subsection{Organization of the paper}
The article is organized as follows: In Section \ref{tspw}, we introduce the uniform 2-sided prudent walk and identify its scaling limit.
In Section \ref{pw}, we analyze the uniform prudent walk and prove some technical results needed to control the excursions crossing
the range of the walk. Lastly, we prove our main results Theorems \ref{unifscal} and \ref{unifscal2} in Section \ref{sec5}.
\section{Uniform 2-sided prudent walk}\label{tspw}
Let $\Omega_L^{+}$ be the subset of $\Omega_L$ containing the so-called \emph{2-sided} prudent paths (in the north-east direction), that is, those paths $\pi\in \Omega_L$ satisfying three additional geometric constraints:
\begin{enumerate}
\item $\pi$ cannot take any step in the direction of any site in the quadrant $(-\infty, 0]^2$;
\item The endpoint $\pi_L$ is located at the top-right corner of the smallest rectangle containing $\pi$;
\item $\pi$ starts
with an east step ($\rightarrow$), i.e., $\pi_1=(1,0)$.
\end{enumerate}
We denote by
$ \bP_{\text{unif},L}^+$ the uniform measure on $\Omega_L^+$.
Theorems \ref{scalingtwosided} and \ref{scalingtwosided2} below are the counterparts of Theorems \ref{unifscal} and \ref{unifscal2} for the uniform 2-sided prudent walk. Recall that $\vec e_1=(1,1)$.
\begin{theorem}
\label{scalingtwosided}
There exists a $c>0$ such that for every $\gep > 0$,
\be{concentr2sided}
\lim_{L\to \infty} \bP_{{\rm unif},L}^+\bigg(\sup_{t\in [0,1]}\big|\tilde \pi^L_t-c t \, \vec{e}_1 \big|\le \gep \bigg)=1.
\end{equation}
\end{theorem}
\begin{theorem}
\label{scalingtwosided2}
Under $\bP^+_{{\rm unif},L}$,
\be{concentr2sidedclt}
\big(\sqrt L(\tilde \pi^L_t-ct \vec{e}_1)\big)_{t\in [0,1]} \Rightarrow (B_t)_{t\in [0,1]} \qquad \mbox{as } L\to\infty,
\end{equation}
where $B$ is the same two-dimensional Brownian motion as in Theorem \ref{unifscal2}.
\end{theorem}
\subsection{Decomposition of a 2-sided prudent path into excursions}\label{decexcu}
Every path $\pi\in \Omega_L^+$ can be decomposed in a unique manner into a sequence of horizontal and vertical excursions (see Figure \ref{fig1}).
First we introduce some notation. For $\pi\in \Omega_L^+$ and $i\leq L$, denote $\pi_i=(\pi_{i,1}, \pi_{i, 2})$.
Let $\tau_0:=0$ and
\begin{align}
&\tau_1(\pi) :=\min\{ i>0 \, :\, \pi_{i,2}>0\}-1,
&\tau_2(\pi) :=\min\{ i>\tau_1 \, :\, \pi_{i,1}>\pi_{\tau_1,1}\}-1,
\end{align}
which are the times when the first horizontal, resp.\, vertical excursion ends.
For $k\in \mathbb{N}$, define
\begin{align*}
&\tau_{2k+1}(\pi) :=\inf\{ i>\tau_{2k}\, :\, \pi_{i,2}>\pi_{\tau_{2k},2}\}-1,
& \tau_{2k+2}(\pi) :=\inf\{ i>\tau_{2k+1}\, :\, \pi_{i,1}>\pi_{\tau_{2k+1},1}\}-1.
\end{align*}
Let $\gamma_L(\pi):=\min\{j\geq 1\colon \, \tau_j(\pi) =\infty \}$ be the number of excursions in $\pi$. Note that each horizontal excursion starts with an east step, and each vertical excursion with a north step. Since the endpoint $\pi_L$ lies at the top-right corner of the smallest rectangle containing $\pi$, the last excursion of $\pi$ can be made complete by adding an extra north step if it is a horizontal excursion, or adding an extra east step if it is a vertical excursion. Therefore, with a slight abuse of notation, we redefine $\tau_{\gamma_L}:=L$. We can thus decompose $\pi$ into the excursions $\big((\pi_{\tau_{k-1}},\dots,\pi_{\tau_{k}})\big)_{k=1}^{\gamma_L}$, which are horizontal for odd $k$ and vertical for even $k$.
\subsection{Effective random walk excursion} \label{sec:ERW}
Let ${\ensuremath{\mathcal I}} _t$ denote the set of horizontal excursions of length $t$, flipped above the $x$-axis, i.e.,
\be{defIt}
{\ensuremath{\mathcal I}} _t:=\big\{\pi=(\pi_0,\pi_1,\dots,\pi_t)\colon \pi_0=(0,0),\, \pi_1=(1,0),\ \pi_{i, 2} \ge 0\ \forall\, i \in\{1,\dots, t\},\, \pi_{t,2}= 0 \big\}.
\end{equation}
Recall from Section \ref{decexcu} that each path $\pi\in \Omega_L^+$ can be decomposed uniquely into $\gamma_L(\pi)$ excursions of length $\tau_i-\tau_{i-1}$, $i=1,\dots,\gamma_L(\pi)$. These excursions are alternatingly horizontal and vertical, with the first excursion being horizontal, see Figure \ref{fig1}. We can thus partition $\Omega_L^+$ according to the value of $r:=\gamma_L(\pi)$ and the excursion lengths $t_1,\dots,t_r$.
Defining
\begin{equation}\label{defKbis}
K(t):=\frac{1}{2^t}\big | {\ensuremath{\mathcal I}} _{t}\big |,
\end{equation}
we have that
\begin{equation}\label{two-sided1}
\frac{1}{2^{L}}\, |\Omega_L^{+}|=\sum_{r\geq 1} \sum_{t_1+\dots+t_r=L} \prod_{i=1}^r \Big | {\ensuremath{\mathcal I}} _{t_i}\Big |\, \frac{1}{2^{t_i}}
=\sum_{r\geq 1} \sum_{t_1+\dots+t_r=L} \prod_{i=1}^r K(t_i).
\end{equation}
We now follow the idea introduced in \cite{BFV10} and rewrite \eqref{defKbis} in terms of a one-dimensional \emph{effective random walk} $V=(V_i)_{i=0}^\infty$. The walk $V$ starts from $0$, has law $\mathbf{P}$, and its increments
$(U_i)_{i=1}^\infty$ are i.i.d. and follow a discrete Laplace distribution, i.e.,
\begin{equation}\label{lawP}
\mathbf{P}(U_1 = x)=\frac{1}{3}\, \frac{1}{2^{|x|}}, \quad x\in \mathbb{Z}.
\end{equation}
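Note that the prefactor $\frac13$ in \eqref{lawP} is exactly the normalizing constant of the two-sided geometric weights, since
$$\sum_{x\in \mathbb{Z}} \frac{1}{2^{|x|}}=1+2\sum_{x\geq 1} \frac{1}{2^{x}}=3 .$$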
\begin{lemma}
Given the walk $V$ and $t\in \mathbb{N}$, let $\eta_t:=\min\big\{i\geq 1\colon \, i+\sum_{j=1}^{i} |U_j| \geq t\big\}$, then
\be{defK}
K(t)=\mathbf{E}\Big[e^{\log(\frac{3}{2})\, \eta_t} \, {\sf 1}_{\{V_i\geq 0\ \forall i\leq \eta_t, \, V_{\eta_t}=0, \ \eta_t+\sum_{j=1}^{\eta_t} |U_j|=t\}}\Big].
\end{equation}
\end{lemma}
\begin{proof}
For each $\pi\in {\ensuremath{\mathcal I}} _t$ (cf. \eqref{defIt}), let $n(\pi):=|\pi_{t,1}-\pi_{0,1}|$ be the number of horizontal steps. Each horizontal step is followed by a stretch of vertical steps, and for $1\leq i\leq n$, let $\ell_i\in\mathbb{Z}$ denote the vertical displacement after the $i$-th horizontal step.
This gives a bijection between ${\ensuremath{\mathcal I}} _t$ and $\bigcup_{n=1}^t {\ensuremath{\mathcal L}} _{n,t}$, where
\begin{equation}
\label{stretches}
{\ensuremath{\mathcal L}} _{n,t} := \bigg\{\underline \ell=(\ell_1,\dots,\ell_n)\in\mathbb{Z}^n \colon \sum_{k=1}^j \ell_k\geq 0\,\, \forall\, j=1,\dots, n,\, \sum_{k=1}^n \ell_k= 0,\, n+\sum_{j=1}^n |\ell_j|=t
\bigg\}.
\end{equation}
At this stage we note that
\begin{equation}\label{excursion2t}
\frac{1}{2^t} \big | {\ensuremath{\mathcal I}} _{t}\big |=\sum_{\pi\in {\ensuremath{\mathcal I}} _t} \frac{1}{2^{t-n(\pi)}}\frac{1}{3^{n(\pi)}}\, \Big(\frac{3}{2}\Big)^{n(\pi)}= \sum_{n=1}^t \sum_{\underline \ell\in {\ensuremath{\mathcal L}} _{n,t}} \frac{1}{3^{n}}\frac{1}{2^{\sum_{j=1}^n|\ell_j|}} e^{n\, \log(\frac{3}{2})}.
\end{equation}
By identifying $\underline \ell=(\ell_1,\dots,\ell_n)$ in \eqref{excursion2t} with the increments of $V$, we get \eqref{defK}.
\end{proof}
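As a numerical sanity check of the lemma (ours, not part of the proof), one can compute $K(t)$ exactly by enumerating ${\ensuremath{\mathcal L}} _{n,t}$ with a small dynamic program, and compare it with a Monte Carlo evaluation of the right-hand side of \eqref{defK}; the two agree, e.g.\ $K(1)=1/2$, $K(2)=1/4$, $K(3)=K(4)=1/8$.
\begin{verbatim}
import random
from functools import lru_cache

def K_exact(t):
    # K(t) = |I_t|/2^t via the bijection with L_{n,t}: count sequences
    # (l_1,...,l_n) with nonnegative partial sums and total sum 0, where
    # each step l costs 1 + |l| and the total cost is exactly t
    @lru_cache(None)
    def ways(budget, h):
        if budget == 0:
            return 1 if h == 0 else 0
        return sum(ways(budget - 1 - abs(l), h + l)
                   for l in range(-h, budget) if 1 + abs(l) <= budget)
    return ways(t, 0) / 2**t

def K_mc(t, n_samples=200_000, rng=random.Random(0)):
    # Monte Carlo for E[(3/2)^{eta_t} 1{V >= 0 up to eta_t, V_{eta_t} = 0,
    # eta_t + sum|U_j| = t}] with discrete Laplace increments
    def sample_U():
        s = rng.randrange(3)
        if s == 0:
            return 0
        m = 1
        while rng.random() < 0.5:
            m += 1
        return m if s == 1 else -m
    acc = 0.0
    for _ in range(n_samples):
        h, cost, steps, ok = 0, 0, 0, True
        while cost < t:              # stop at eta_t, first time cost >= t
            u = sample_U()
            h += u; cost += 1 + abs(u); steps += 1
            if h < 0:                # indicator is 0 once V goes negative
                ok = False
                break
        if ok and cost == t and h == 0:
            acc += 1.5**steps
    return acc / n_samples

for t in range(1, 8):
    print(t, K_exact(t), round(K_mc(t), 4))
\end{verbatim}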
\begin{figure}
\includegraphics[scale=2]{2SPW_excursions.pdf}
\caption{We decompose a path $\pi\in \Omega_L^+$ into a sequence of horizontal and vertical excursions $\big((\pi_{\tau_{k-1}},\dots,\pi_{\tau_{k}})\big)_{k=1}^{4}$, each associated with an effective one-dimensional random walk excursion.}
\label{fig1}
\end{figure}
\subsection{Representation of the law of a uniform 2-sided prudent walk}\label{secEL}
\begin{lemma}\label{lemma:Klamba}
Let $K$ be as in \eqref{defK}, then there exists $\lambda^*>0$ such that $\widehat K(\lambda^*):=\sum_{t=1}^\infty K(t) e^{-\lambda^* t} =1$.
\end{lemma}
\begin{remark}\label{remKK}
{\rm We will denote by $K^{*}$ the probability measure on $\mathbb{N}$ defined by
\begin{equation}
K^{*}(t)=K(t) e^{-\lambda^* t}, \qquad t\in\mathbb{N}.
\end{equation}
The proof of Lemma \ref{lemma:Klamba} below shows that there exists $\hat\lambda<\lambda^*$ such that $1<\widehat K(\hat \lambda)<\infty$. Therefore $K^*$ has an exponential tail, i.e., there exist $c_1,c_2>0$ such that $K^{*}(n)\leq c_1 e^{-c_2 n}$ for every $n\in \mathbb{N}$.}
\end{remark}
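Numerically, $\lambda^*$ can be approximated by computing $K(t)$ exactly for small $t$ (as in the sketch after the previous lemma) and bisecting on $\lambda$; the following Python sketch (ours; truncating the series makes this a crude estimate only) illustrates the procedure.
\begin{verbatim}
import math
from functools import lru_cache

@lru_cache(None)
def ways(budget, h):
    # count increment sequences with nonnegative partial sums ending at 0
    # whose total cost (1 + |l| per step) is exactly `budget`
    if budget == 0:
        return 1 if h == 0 else 0
    return sum(ways(budget - 1 - abs(l), h + l)
               for l in range(-h, budget) if 1 + abs(l) <= budget)

T = 60                                   # truncation level (crude)
Kvals = [ways(t, 0) / 2**t for t in range(1, T + 1)]

def K_hat(lam):
    return sum(k * math.exp(-lam * t) for t, k in zip(range(1, T + 1), Kvals))

lo, hi = 0.0, 1.0                        # K_hat is decreasing in lambda
while hi - lo > 1e-9:
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if K_hat(mid) > 1 else (lo, mid)
print("lambda* is approximately", (lo + hi) / 2)
\end{verbatim}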
\smallskip
The proof of Lemma \ref{lemma:Klamba} will be given at the end of the present section. We first explain how the law $K^*$ can be used to express the law $\bP^*$ of the excursions of the uniform two-sided prudent walk. Continuing from Section \ref{sec:ERW}, let ${\ensuremath{\mathcal V}} _\infty$ be the set of all
non-negative excursions of the effective walk, i.e.,
\be{Vinf}
{\ensuremath{\mathcal V}} _\infty:=\bigcup_{N\geq 1} \Big\{(V_i)_{i=0}^N\colon\, V_0=0, V_i\geq 0\ \forall i\leq N,\, V_N=0\Big\}.
\end{equation}
By \eqref{defK} and Lemma \ref{lemma:Klamba}, we obtain the following probability law $\bP^*$ on ${\ensuremath{\mathcal V}} _\infty$, with Radon-Nikodym derivative
\be{defPstar}
\frac{\dd \bP^*}{\dd \bP}\big((V_i)_{i=0}^N\big)=e^{\log(\frac{3}{2}) N-\lambda^{*} (N+\sum_{i=1}^N |U_i|)}.
\end{equation}
We will show that $\bP^*$ is in fact the law of a uniform 2-sided prudent walk excursion. To that end, consider a sequence $(t_i,n_i)_{i=1}^r\in \mathbb{N}^r\times\mathbb{N}^r$ satisfying $t_1+\dots+t_r=L$ and $n_i\leq t_i$ for every $i\leq r$. Let $\Omega_L^+ \big( (t_i,n_i)_{i=1}^r \big)$ denote the
set of 2-sided prudent paths consisting of $r$ excursions, where the $i$-th excursion has total length $t_i$, with $n_i$ horizontal (resp. vertical) steps if it is a horizontal (resp.\ vertical) excursion. By the reasoning leading to \eqref{two-sided1}, with $\alpha^*:=\log(3/2)-\lambda^*$, we obtain
\begin{equation}
\label{probeve1}
\frac{1}{2^L}|\Omega_L^+\big((t_i,n_i)_{i=1}^r\big)| e^{-\lambda^*L}=\prod_{i=1}^r \bE\bigg[e^{\alpha^* n_i-\lambda^*(t_i-n_i)} \, {\sf 1}_{\big\{V_j \geq 0 \, \forall
j \leq n_i,\, V_{n_i}=0,\, n_i+\sum_{j=1}^{n_i} |U_j|=t_i \big\}}\bigg].
\end{equation}
If $(\tilde T_i,\tilde N_i)_{i\in\mathbb{N}}$ denotes an i.i.d.\ sequence such that $\tilde N_1=N$ and $\tilde T_1=N+\sum_{i=1}^N |U_i|$ for a random walk excursion $(V_i)_{i=0}^N$ following the law $\bP^*$ in \eqref{defPstar}, and
\be{gammaLV}
\tilde \gamma_L :=\min\{i\geq 1 \, :\, \tilde T_1+\dots +\tilde T_{i} \geq L\},
\end{equation}
then by \eqref{two-sided1} and \eqref{probeve1}, for any set of paths $A$ which is a union of some $\Omega_L^+\big((t_i,n_i)_{i=1}^r\big)$, we have
\be{P+unif}
\bP_{\text{unif},L}^+ (A)=\frac{|A| }{|\Omega_L^+|}
= \frac{\bE^*\bigg[{\sf 1}_A {\sf 1}_{\{\tilde T_1+\dots +\tilde T_{\tilde \gamma_L}=L\}}\bigg]}
{\bP^*\bigg[ \tilde T_1+\dots +\tilde T_{\tilde \gamma_L}=L\bigg]},
\end{equation}
where we also used $\bP^*$ to denote the joint law of the i.i.d.\ sequence of effective random walk excursions that give rise to $(\tilde T_i,\tilde N_i)_{i\in\mathbb{N}}$. This representation will be the basis of our analysis.
\medskip
\begin{proof}[Proof of Lemma \ref{lemma:Klamba}]
The existence of
$\lambda^{*}$ is guaranteed if $\lambda^{**}:=\inf\{\lambda>0\colon\, \widehat K(\lambda) <\infty\}$ satisfies $\widehat K(\lambda^{**})>1$.
To show this, let $\tau$ be the first time the walk $V$ returns to or crosses the origin, i.e.,
\be{deftau}
\tau =\begin{dcases*}
1 & if $V_1=0$,\\
\min\{i\geq 2\colon\, V_{i-1} V_i\leq 0\} & otherwise.
\end{dcases*}
\end{equation}
Let $\alpha:=\log(3/2)-\lambda$. By \eqref{defK} and decomposing $V\in {\ensuremath{\mathcal V}} _\infty$ into positive excursions, we can write
\begin{align}\label{dived}
\nonumber \widehat K(\lambda)&=\sum_{t\geq 1} \bE\Big[e^{(\log(\frac{3}{2})-\lambda ) \eta_t-\lambda (t-\eta_t)}\, {\sf 1}_{\{V_i\geq 0\ \forall i\leq \eta_t, V_{\eta_t}=0,\, \eta_t+\sum_{i=1}^{\eta_t} |U_i|=t\}}\Big]\\
\nonumber &= \sum_{t\geq 1} \sum_{N\leq t} \bE\Big[e^{ \alpha N-\lambda (t-N)} \, {\sf 1}_{\{V_i\geq 0\ \forall i\leq N, \, V_N=0,\, N+\sum_{i=1}^{N} |U_i|=t\}}\Big]\\
\nonumber &= \sum_{N=1}^{\infty} \bE\Big[e^{\alpha N} \, e^{-\lambda \sum_{i=1}^N |U_i|} {\sf 1}_{\{V_i\geq 0\ \forall i\leq N, \, V_N=0\}}\Big]\\
\nonumber &= \sum_{N=1}^\infty \sum_{r=1}^{\infty} \sum_{n_1+\dots+n_r=N} \prod_{i=1}^r \bE\Big[e^{\alpha \tau} \, e^{-\lambda \sum_{j=1}^{\tau} |U_j|} \, {\sf 1}_{\{V_1\geq 0,\, \tau=n_i,\, V_{n_i}=0\}}\Big]\\
\nonumber &= \sum_{r=1}^{\infty} \Big(\sum_{n=1}^\infty \bE\Big[e^{\alpha \tau -\lambda \sum_{i=1}^{\tau} |U_i|} \, {\sf 1}_{\{V_1\geq 0, \, \tau=n,\, V_{\tau}=0\}}\Big]\Big)^r\\
&=
\sum_{r=1}^{\infty} \left( \bE\Big[e^{\alpha \tau-\lambda \sum_{i=1}^{\tau} |U_i|}\, {\sf 1}_{\{V_1\geq 0, V_{\tau}=0\}}\Big]\right)^r
=: \sum_{r=1}^\infty G(\lambda)^r.
\end{align}
Therefore $\lambda^{**}=\inf \{\lambda>0\colon\, G(\lambda)< 1\}$, and it suffices to show that $G(\lambda^{**})>1/2$. Note that
\begin{equation}\label{G1}
\bE\Big[e^{\alpha \tau-\lambda \sum_{i=1}^{\tau} |U_i|}\, {\sf 1}_{\{V_1= 0\}}\Big]=\frac{e^\alpha}{3},
\end{equation}
and
\begin{equation}\label{G2}
\bE\Big[e^{\alpha \tau-\lambda \sum_{i=1}^{\tau} |U_i|}\, {\sf 1}_{\{V_1>0, \tau=n\}}\Big] = \bE\Big[e^{\alpha \tau-\lambda \sum_{i=1}^{\tau} |U_i|}\, {\sf 1}_{\{V_1>0, \tau=n, V_\tau=0\}}\Big] \frac{1}{1-e^{-\lambda}/2} ,
\end{equation}
because given $(V_i)_{i=0}^{n-1}$ with $V_1>0$, the events $\{\tau=n, V_n=0\}$ and $\{\tau=n\}$ differ only in that the first event requires $U_n=-V_{n-1}$, while the second event requires $U_n\leq -V_{n-1}$, and by \eqref{lawP} the ratio of the corresponding weighted contributions (taking the factor $e^{-\lambda |U_n|}$ into account) is precisely
$\sum_{k=0}^\infty \frac{e^{-k\lambda}}{2^k} = \frac{1}{1-e^{-\lambda}/2}$. Summing over $n$ in \eqref{G2}, using the symmetry of $V$ and \eqref{G1} then gives
\begin{align}\label{Glambda}
G(\lambda)&= \frac{e^\alpha}{3}\Big(\frac{1}{2}+\frac{e^{-\lambda}}{4}\Big)+\frac{1}{2} \Big(1-\frac{e^{-\lambda}}{2}\Big)
\bE\Big[ e^{\alpha \tau-\lambda \sum_{i=1}^{\tau} |U_i|}\Big].
\end{align}
Now let $\hat \lambda$ be the unique solution of
$$
\log \bE[e^{-\lambda |U_1|}]=-\alpha=\lambda-\log(3/2), \qquad \lambda \in [0, \infty).
$$
Then $(M_n^{\hat\lambda})_{n\geq 0}:=(e^{\alpha n-\hat \lambda \sum_{i=1}^n |U_i|})_{n\geq 0}$ is a positive martingale. We will show that $\bE[M_\tau^{\hat\lambda}]=1$, which then gives $G(\hat \lambda)= \frac{1}{2}+\frac{e^{-2\hat\lambda}}{8}\in (1/2,1)$. By definition, we have $\hat\lambda> \lambda^{**}$. Since $\lambda\mapsto G(\lambda)$ is strictly decreasing, we conclude that $G(\lambda^{**})> G(\hat\lambda)>1/2$.
\smallskip
It remains to prove that $\bE[M_\tau^{\hat\lambda}]=1$. Note that $\tau$ is an almost surely finite stopping time, so that $M^{\hat\lambda}_{n\wedge \tau}$ converges almost surely to $M_\tau^{\hat\lambda}$. Since $\bE[M^{\hat\lambda}_{n\wedge\tau}]=1$ for every $n$ by optional stopping, Fatou's lemma implies $\bE[M_{\tau}^{\hat\lambda}]\leq 1$. On the other hand,
\begin{align}\label{compuG}
\bE[M_{\tau}^{\hat\lambda}] = \lim_{n\to \infty} \bE[M_{\tau}^{\hat\lambda} \, {\sf 1}_{\{\tau \leq n\}}]
= \lim_{n\to \infty} \big(1-\bE[M_{n\wedge \tau}^{\hat\lambda} \, {\sf 1}_{\{\tau>n\}}]\big).
\end{align}
It remains to prove that $\lim_{n\to \infty} \bE[M_{n}^{\hat\lambda} {\sf 1}_{\{\tau>n\}}]=0$. Let $(\widetilde U_i)_{i\geq 1}$ be i.i.d.\ with law $\widetilde \bP$ such that
$$\widetilde \bP(\widetilde U_1=x)= \frac{1}{\bE[e^{-\hat \lambda |U_1|}]} \, e^{-\hat \lambda |x|} \, \bP(U_1=x), \quad x\in \mathbb{Z}.$$
We observe that
\begin{align}\label{compudd}
\bE[M_{n}^{\hat\lambda} \, {\sf 1}_{\{\tau>n\}}]=e^{(\log(\frac{3}{2})-\hat\lambda +\log \bE[ e^{-\hat \lambda |U_1|}])\, n} \, \widetilde \bP(\tau>n)
=\widetilde \bP(\tau>n).
\end{align}
Under $\widetilde \bP$, the random walk increments $(\widetilde U_i)_{i\geq 1}$ are symmetric and integrable. Thus, $\tau$ is finite $\widetilde \bP$-a.s.\ and the right hand side in \eqref{compudd} converges to $0$ as
$n$ tends to $\infty$. We conclude that $\bE[M_{\tau}^{\hat\lambda}]=1$.
\end{proof}
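The quantities in this proof are easy to evaluate numerically: $\bE[e^{-\lambda|U_1|}]=\tfrac13+\tfrac23\,\tfrac{e^{-\lambda}}{2-e^{-\lambda}}$ in closed form, so $\hat\lambda$ can be found by bisection, and one can check that $G(\hat\lambda)=\tfrac12+\tfrac18 e^{-2\hat\lambda}$ indeed lies in $(1/2,1)$. A Python sketch (ours, for illustration only):
\begin{verbatim}
import math

def mgf(lam):
    # E[exp(-lam|U_1|)] for P(U_1 = x) = 2^{-|x|}/3, summed in closed form
    q = math.exp(-lam)
    return 1/3 + (2/3) * q / (2 - q)

# hat-lambda solves log E[e^{-lam|U_1|}] = lam - log(3/2); the left side
# decreases in lam and the right side increases, so bisection applies
lo, hi = 0.0, 2.0
while hi - lo > 1e-12:
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if math.log(mgf(mid)) > mid - math.log(1.5) else (lo, mid)
hl = (lo + hi) / 2
G = 0.5 + math.exp(-2 * hl) / 8
print(hl, G, 0.5 < G < 1)   # G(hat-lambda) lies strictly inside (1/2, 1)
\end{verbatim}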
\subsection{Scaling limit of the uniform 2-sided prudent walk}
In this section we prove Theorems \ref{scalingtwosided} and \ref{scalingtwosided2}.
\begin{proof}[Proof of Theorems \ref{scalingtwosided} and \ref{scalingtwosided2}]
Let $\bP^*$ be the law of the i.i.d.\ sequence of effective random walk excursions as in \eqref{defPstar}, and let $(\tilde T_i,\tilde N_i)_{i\in\mathbb{N}}$ and $\tilde \gamma_L$ be as introduced after \eqref{probeve1}. Then by the law of large numbers, as $L\to\infty$, almost surely we have $ \frac{\tilde \gamma_L}{L} \to \frac{1}{\bE^*[\widetilde T_1]}>0$, since $\tilde T_1$ has an exponential tail by Remark \ref{remKK}. Let $\tilde\tau_k =\sum_{i=1}^k \tilde T_i$, which defines a renewal process. For any $t_0< 1/\bE^*[\widetilde T_1]$, note that by the renewal theorem, cf. \cite[Appendix A]{GB07}, the law of $(\tilde T_i, \tilde N_i)_{1\leq i\leq t_0L}$ conditioned on $L\in \tilde \tau$ is equivalent to its law under $\bP^*$ without conditioning; in fact, their total variation distance tends to $0$ as $L$ tends to infinity, since $L-\sum_{i=1}^{t_0L} \tilde T_i\to\infty$ in probability. Therefore, to identify the scaling limit of $(\pi_i)_{i=1}^{t_0L}$ under $\bP_{\text{unif},L}^+$, by \eqref{P+unif} it suffices to consider $\bP^*$ in place of $\bP_{\text{unif},L}^+$.
\medskip
Recall that the 2-sided uniform prudent walk $\pi$ is constructed by concatenating alternatingly eastward horizontal excursions and northward vertical excursions, where modulo rotation, the excursions have a one-to-one correspondence with the effective random walk excursions. Therefore if we let $X_n:=(X_{n,1}, X_{n, 2})$ be a random walk on $\mathbb{Z}^2$ with
\begin{equation}\label{Xn12}
X_{n, 1} = \sum_{i=1}^n (\tilde N_{2i-1}-c(\tilde T_{2i-1}+\tilde T_{2i})), \ \ X_{n,2} = \sum_{i=1}^n (\tilde N_{2i}-c(\tilde T_{2i-1}+\tilde T_{2i})), \quad \mbox{where} \ c= \frac{\bE^*[\widetilde N_1]}{2\bE^*[\widetilde T_1]},
\end{equation}
then $X_n = \pi_{\phi(n)}-c\phi(n)\vec e_1$, with $\phi(n) =\sum_{i=1}^{2n} \tilde T_{i}$ playing the role of time change. By the strong law of large numbers, $\bP^*$-a.s., we have
\begin{equation}\label{phiconv}
\Big(\frac{1}{L} X_{tL}\Big)_{t\geq 0} \to 0 \qquad \mbox{and} \qquad \Big(\frac{\phi(tL)}{L}\Big)_{t\geq 0} \to \big(2t\bE^*[\tilde T_1]\big)_{t\geq 0}.
\end{equation}
It is then easily seen that, with $I:=\{\frac{1}{L}\sum_{i=1}^{2k} \widetilde T_i : 1\leq k\leq t_0L/2\}$, the rescaled path $\tilde \pi^L$ satisfies
\begin{equation}\label{supconv}
\sup_{t\in I}\big|\tilde \pi^L_t-c t \, \vec{e}_1 \big| = \sup_{t\in I} \Big|\frac{1}{L}X_{\phi^{-1}(tL)} \Big|
\to 0 \qquad \bP^*\mbox{-}a.s.\ \mbox{ as } L\to\infty.
\end{equation}
In fact \eqref{supconv} still holds if the supremum is taken over all $0\leq t\leq \frac{1}{L} \sum_{i=1}^{t_0L} \widetilde T_i$, since during the $i$-th excursion the prudent path deviates from the endpoints of the excursion by at most $\widetilde T_i$, which has an exponential tail by Remark \ref{remKK}. Moreover, it is easily seen that
\begin{equation}\label{Lmax}
\frac{1}{\sqrt{L}} \max_{1\leq i\leq L} \widetilde T_i \to 0 \qquad \bP^*\mbox{-}a.s.\ \mbox{ as } L\to\infty.
\end{equation}
Therefore \eqref{supconv} holds with $\sup$ taken over $t\in [0,\tilde t_0]$, with $\tilde t_0:=\lim_{L\to\infty}\frac{1}{L} \sum_{i=1}^{t_0L} \widetilde T_i = t_0\bE^*[\tilde T_1]<1$, and $(\tilde \pi^L_t)_{t\in [0,\tilde t_0]}$ converges in probability to $(ct\vec e_1)_{t\in [0, \tilde t_0]}$ under $\bP^*$ as well as under $\bP_{{\rm unif},L}^+$. We can now deduce \eqref{scalingtwosided} by letting $\tilde t_0\uparrow 1$, using that modulo time reversal, translation and rotation, $(\pi_i)_{i=(1-\epsilon) L}^{L}$ has the same law as $(\pi_i)_{i=1}^{\epsilon L}$ under $\bP_{\text{unif},L}^+$, and hence is negligible in the scaling limit as $\epsilon\downarrow 0$.
\medskip
The proof of Theorem \ref{scalingtwosided2} is similar. By \eqref{Lmax}, it suffices to consider $\pi_t-ct\vec e_1$ along the sequence of times $(\phi(n))_{n\in\mathbb{N}}$, which is a time change of $(X_n)_{n\in\mathbb{N}}$. It is clear that $(X_{tL}/\sqrt{L})_{t\geq 0}$ converges to a Brownian motion $(\tilde B_t)_{t\geq 0}$ with covariance matrix $\bE[\tilde B_{1, i} \tilde B_{1, j}] = \bE^*[X_{1, i} X_{1, j}]$. Undoing the time change $\phi$, which becomes asymptotically deterministic by \eqref{phiconv}, we find that under $\bP^*$, hence also under $\bP_{\text{unif},L}^+$,
$$
\sqrt{L}(\tilde \pi^L_t - ct\vec e_1)_{t\in [0, \tilde t_0]} \Rightarrow (B_t)_{t\in [0, \tilde t_0]},
$$
where $B$ is a Brownian motion with covariance matrix
\begin{equation}\label{covar}
\bE[B_{1, i} B_{1, j} ] = \frac{\bE^*\big[\big(2\tilde N_i\bE^*[\tilde T_1] -\bE^*[\tilde N_1](\tilde T_1 +\tilde T_2)\big)\big(2\tilde N_j\bE^*[\tilde T_1] -\bE^*[\tilde N_1](\tilde T_1 +\tilde T_2)\big)\big]}{8\bE^*[\tilde T_1]^3}, \quad i, j=1, 2.
\end{equation}
Letting $\tilde t_0\uparrow 1$ and applying the same reasoning as before then gives \eqref{concentr2sidedclt}.
\end{proof}
\section{Uniform prudent walk}\label{pw}
By symmetry, we may and do assume from now on, without loss of generality, that the prudent walk starts with an east step and that its first vertical step is a north step.
\subsection{Decomposition of a prudent path into excursions in its range}\label{decppe}
We now decompose each prudent path $\pi\in \Omega_L$ into a sequence of excursions within its range (see Figure \ref{fig2}). We use the same decomposition as in \cite[Section 2]{BFV10}, which is slightly different from our decomposition for the 2-sided prudent path.
\smallskip
For every $t\leq L$, let ${\ensuremath{\mathcal A}} _t$ (resp. ${\ensuremath{\mathcal B}} _t$) denote the projection of the range of $\pi$ onto the $x$-axis (resp. $y$-axis), i.e.,
\begin{equation}
{\ensuremath{\mathcal A}} _t =\big\{ \pi_{i, 1}\in \mathbb{Z}: 0\leq i\leq t\big\} \qquad \text{and}\qquad
{\ensuremath{\mathcal B}} _t =\big\{ \pi_{i, 2}\in \mathbb{Z}: 0\leq i\leq t\big\}.
\end{equation}
Let ${\ensuremath{\mathcal W}} _t=|{\ensuremath{\mathcal A}} _t|$ and ${\ensuremath{\mathcal H}} _t=|{\ensuremath{\mathcal B}} _t|$ denote respectively the width and height of the range $\pi_{[0,t]}$. Define ${\ensuremath{\mathcal H}} _0={\ensuremath{\mathcal W}} _0=1$, and set $\rho_0=\upsilon_0=0$. For $k\geq 0$, define
\begin{align}
&\rho_{k+1}=\min\{t>\upsilon_{k} \, :\, {\ensuremath{\mathcal H}} _t>{\ensuremath{\mathcal H}} _{t-1}\}-1,
&\upsilon_{k+1}=\min\{t>\rho_{k+1}\,:\, {\ensuremath{\mathcal W}} _t>{\ensuremath{\mathcal W}} _{t-1}\}-1.
\end{align}
We say that on each interval $[\rho_k,\upsilon_k]$ (resp.\ $[\upsilon_k,\rho_{k+1}]$) $\pi$ performs a vertical (resp.\ horizontal) excursion in its range, and the path is monotone in the vertical (resp.\ horizontal) direction. Note that each excursion ends by exiting through one of two sides of the smallest rectangle containing the range of $\pi$ up to that time, at a corner of this rectangle.
\smallskip
Let $\gamma_L(\pi)$ be the number of complete excursions contained in $\pi$, where the last excursion is considered complete if adding an extra horizontal or vertical step can make it complete. Let $T_i$ denote the length of the $i$-th excursion, $N_i$ its horizontal (resp.\ vertical) extension if it is a horizontal (resp.\ vertical) excursion, and let ${\ensuremath{\mathcal E}} _i=1$ if the excursion crosses the range and ${\ensuremath{\mathcal E}} _i=0$ otherwise. More precisely, a horizontal excursion on the interval $[\upsilon_k, \rho_{k+1}]$ crosses the range if $|\pi_{\rho_{k+1}, 2}-\pi_{\upsilon_k, 2}|= {\ensuremath{\mathcal H}} _{\rho_{k+1}}$. We can thus associate with every $\pi\in \Omega_L$ the sequence $(T_i,N_i,{\ensuremath{\mathcal E}} _i)_{i=1}^{\gamma_L(\pi)}$.
Note that the $i$-th excursion is a horizontal excursion if $i$ is odd, and vertical excursion if $i$ is even.
For $i\in\mathbb{N}$, let $R_{i-1}$ denote the width (resp.\ height) of the range of $\pi$ before the start of the $i$-th excursion if it is a vertical (resp.\ horizontal) excursion. It can be seen that $R_{i}=R_{i-2}+N_i$ for $i\geq 1$, with $R_{-1}=R_0=0$.
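The stopping times $\rho_k,\upsilon_k$ are easy to compute from a given path; the following Python sketch (ours, illustrative only; all names are our own) extracts the excursion boundaries $\rho_1,\upsilon_1,\rho_2,\dots$ of a short prudent path given as a list of lattice points.
\begin{verbatim}
def excursion_bounds(path):
    # path: lattice points of a prudent walk starting with an east step;
    # returns [rho_1, upsilon_1, rho_2, ...] per the stopping times above
    seen_x, seen_y = {path[0][0]}, {path[0][1]}
    W, H = [1], [1]                       # width/height of the range up to time t
    for x, y in path[1:]:
        seen_x.add(x); seen_y.add(y)
        W.append(len(seen_x)); H.append(len(seen_y))
    bounds, prev, watch_H = [], 0, True   # first excursion is horizontal,
    while True:                           # so we first wait for H to grow
        rec = H if watch_H else W
        t = next((t for t in range(prev + 1, len(path)) if rec[t] > rec[t-1]), None)
        if t is None:
            return bounds                 # the remaining piece is incomplete
        bounds.append(t - 1)
        prev, watch_H = t - 1, not watch_H

# example: steps E,E,N,N,E give a horizontal excursion [0,2], a vertical
# excursion [2,4], and an incomplete tail
path = [(0,0), (1,0), (2,0), (2,1), (2,2), (3,2)]
print(excursion_bounds(path))             # [2, 4]
\end{verbatim}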
\begin{figure}
\includegraphics[scale=2.5]{PW_excursions.pdf}
\caption{We decompose a path $\pi\in \Omega_L$ into a sequence of excursions.
The $i$-th excursion is a horizontal excursion if $i$ is odd, and vertical excursion if $i$ is even.
The $1$-st excursion corresponds to the sub-path $\pi_{[0,\rho_1]}$, the $2$-nd to the sub-path $\pi_{[\rho_1,\upsilon_1]}$ and so on. At the end of the $i$-th excursion, if $i$ is odd (resp. if $i$ is even) we set $R_i$ to be the width (resp. the height) of the range.
The last excursion is incomplete.
}
\label{fig2}
\end{figure}
\subsection{Effective random walk excursion in a slab}\label{effslab}
The one-to-one correspondence in Section \ref{sec:ERW} between the excursion paths (which are partially directed) and the effective random walk paths can be extended to the current setting, except that now the effective random walk lies in a slab corresponding to the range of the path at the start of the excursion, and the excursion may end on either side of the slab. As a consequence, we define a measure $L_R$ on $\mathbb{N}\times \mathbb{N}\times \{0,1\}$ by
\begin{equation}\label{LR}
\begin{aligned}
L_{R}(t,n,0)&=\bE\Big[e^{\log(\frac{3}{2}) n-\lambda^*t} \, {\sf 1}_{\{V_i\in \{0,\dots,R\}\ \forall i\leq n,\, V_n=0,\, \sum_{i=1}^n |U_i|=t-n\}}\Big],\\
L_{R}(t,n,1)&=\bE\Big[e^{\log(\frac{3}{2}) n-\lambda^*t} \, {\sf 1}_{\{V_i\in \{0,\dots,R\}\ \forall i\leq n,\, V_n=R,\, \sum_{i=1}^n |U_i|=t-n\}}\Big].
\end{aligned}
\end{equation}
When $R=0$, define $L_0(t,n, 1)$ as above and define $L_0(t, n, 0)=0$. Let $\widehat L_R$ be a variant of $L_R$ that accounts for an incomplete excursion (cf. Figure \ref{fig2}), i.e.,
\begin{align}\label{hatLR}
\widehat L_R(t,n)&= \bE\Big[e^{\log(\frac{3}{2}) n-\lambda^*t} \, {\sf 1}_{\{V_i\in \{0,\dots,R\}\ \forall i\leq n,\, 0< V_n<R, \, \sum_{i=1}^n |U_i|=t-n\}}\Big],
\end{align}
where $\lambda^*$ is as in Lemma \ref{lemma:Klamba}. We also set $\widehat{L}_R(t)=\sum_{n\geq 1}\widehat L_R(t,n)$ and $\widehat{L}_R(0)=1$.
\smallskip
Let $\alpha^*=\log(\frac{3}{2})-\lambda^*$, and let $(t_i, n_i, \gep_i)\in \mathbb{N}^2 \times \{0,1\}$, $1\leq i\leq r$, be such that $t_1+\dots+t_r\leq L$ and $n_i\leq t_i$. Let $\Omega_L \big( (t_i,n_i,\gep_i)_{i=1}^r \big)$ be the set of prudent paths containing $r$ complete excursions, with
$(T_i,N_i,{\ensuremath{\mathcal E}} _i)_{i=1}^r=(t_i,n_i,\gep_i)_{i=1}^r$, and recall $(R_{i-1})_{i\in\mathbb{N}}$ from the end of Section \ref{decppe}. Reasoning as for \eqref{probeve1}, we then have
\begin{align}
\frac{1}{2^L} |\Omega_L \big( (t_i,n_i,\gep_i)_{i=1}^r \big)| e^{-\lambda^* L}=&
\prod_{i=1}^r \bE\bigg[e^{\alpha^* n_i-\lambda^*(t_i-n_i)} \, {\sf 1}_{\big\{V_j\in [0,R_{i-1}] \, \forall
j \leq n_i,\, V_{n_i}=\gep_i R_{i-1},\, n_i+\sum_{j=1}^{n_i} |U_j|=t_i \big\}}\bigg] \nonumber \\
& \quad \times \, \widehat{L}_{R_r}\big(L-(t_1+\dots+t_r)\big) \label{tug}\\
=&\bigg[\prod_{i=1}^r L_{R_{i-1}}(t_i,n_i,\gep_i)\bigg]\, \widehat{L}_{R_r}\big(L-(t_1+\dots+t_r)\big), \nonumber
\end{align}
where $\widehat L_{R_r}(L-(t_1+\dots+t_r))$ accounts for the last incomplete excursion in $\pi$.
\subsection{Representation of the law of a uniform prudent walk}
\label{sec:sampling}
We now show how to represent the law of the uniform prudent walk in terms of the excursions of the effective random walk $V$.
\smallskip
For $R\in \mathbb{N}$, let ${\ensuremath{\mathcal V}} _R$ be the set of effective random walk paths in a slab of width $R$ and ending at either $0$ or $R$. Namely,
\be{sample}
{\ensuremath{\mathcal V}} _R:=\bigcup_{N\geq 1}\Big[ {\ensuremath{\mathcal V}} ^{\, 1}_{N,R} \cup {\ensuremath{\mathcal V}} ^{\, 0}_{N,R}\Big],
\end{equation}
where for $a=0, 1$,
\begin{align}\label{VR}
{\ensuremath{\mathcal V}} _{N,R}^{\, a}:= \Big\{(V_i)_{i=0}^N\colon\, V_0=0, V_i\in \{0,\dots,R\}\ \forall i\in \{0,\dots,N\}, V_N=aR\Big\}.
\end{align}
Recall the effective random walk excursion measure $\bP^*$ from \eqref{defPstar}. We will define a probability law $\bP_R^{*}$ on ${\ensuremath{\mathcal V}} _R$ by sampling a path under $\bP^*$ and truncating it when it first reaches level $R+1$ or above. More precisely, define the truncation $T_R:{\ensuremath{\mathcal V}} _\infty\to {\ensuremath{\mathcal V}} _{R}$ as follows. Given $V:=(V_i) _{i=0}^N\in {\ensuremath{\mathcal V}} _\infty$, let $T_R V := V$ if $V_i\leq R$ for every $i\leq N$. Otherwise, let $\tau_R:=\inf\{i\geq 1\colon V_i\geq R+1\}$ and set
\be{deftauR}
(T_R V)_i=V_i \quad \text{for $i\leq \tau_R-1$ and}\ (T_RV)_{\tau_R}=R.
\end{equation}
Then define $\bP_R^*$ as the image measure of $\bP^*$ under $T_R$. With each trajectory $V\in {\ensuremath{\mathcal V}} _R$ we associate a triple $(T,N,{\ensuremath{\mathcal E}} )$ such that $N$ is the number of increments $(U_i)_{i=1}^N$ of $V$, $T=N+\sum_{i=1}^N |U_i|$, and ${\ensuremath{\mathcal E}} =1$ if $V_N=R$ and ${\ensuremath{\mathcal E}} =0$ if $V_N=0$ (if $R=0$, set ${\ensuremath{\mathcal E}} =1$). Let $L^*_{R}$ denote the law of $(T,N,{\ensuremath{\mathcal E}} )$ when $V$ is sampled from $\bP_R^*$; we observe that $L_R^*$ and $L_R$ (cf. \eqref{LR}) coincide when $\gep=0$, i.e.,
\be{L0=L*0}
L_R(t,n,0)=L_R^{*}(t,n,0),\quad (t,n)\in \mathbb{N}\times \mathbb{N}.
\end{equation}
\smallskip
Let $(\widetilde V^{(i)})_{i\geq 1}$ be an i.i.d.\ sequence of effective walk excursions with law $\bP^{*}$, and for each $i\in\mathbb{N}$, let
$(\widetilde T_i, \widetilde N_i)$ denote the total length and the number of increments of $\widetilde V^{(i)}$. We now construct a sequence $(T_i,N_i,{\ensuremath{\mathcal E}} _i)_{i\geq 1}$ from $(\widetilde V^{(i)})_{i\geq 1}$ inductively, using the truncation map $T_R$. First set $R_{-1}=R_0:=0$. For each $i\geq 1$, set
\be{defTNV}
V^{(i)}=T_{R_{i-1}}\widetilde V^{(i)},\qquad (N_i,T_i,{\ensuremath{\mathcal E}} _i)=(N, T, {\ensuremath{\mathcal E}} )(V^{(i)}),\quad \text{and} \quad R_i=R_{i-2}+N_i,
\end{equation}
where $(N,T,{\ensuremath{\mathcal E}} )(V^{(i)})$ is the triple $(N, T, {\ensuremath{\mathcal E}} )$ associated with $V^{(i)}\in {\ensuremath{\mathcal V}} _{R_{i-1}}$. For every $i\geq 1$, we have $N_i\leq \widetilde N_i$ and $T_i\leq \widetilde T_i$, and conditioned on $(T_j,N_j,{\ensuremath{\mathcal E}} _j)_{j=1}^{i-1}$, the law of $(N_i,T_i,{\ensuremath{\mathcal E}} _i)$ is $\bP_{R_{i-1}}^{*}$. Note that the excursion decomposition of a prudent path in Section \ref{decppe} gives exactly a sequence of excursions of the form $(T_{R_{i-1}}\widetilde V^{(i)})_{i\geq 1}$.
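The truncation bookkeeping in \eqref{defTNV} is mechanical; the following Python sketch (ours) runs a few hand-made input excursions (hypothetical values, not genuine samples from $\bP^*$, which would require an extra tilting step) through the map $T_R$ and the recursion $R_i=R_{i-2}+N_i$.
\begin{verbatim}
def truncate(V, R):
    # the map T_R of (deftauR): cut V when it first reaches level R+1 or above
    for i in range(1, len(V)):
        if V[i] >= R + 1:
            return V[:i] + [R]
    return V

def triple(V, R):
    # the triple (T, N, E) associated with a truncated excursion V in V_R
    N = len(V) - 1
    T = N + sum(abs(V[i+1] - V[i]) for i in range(N))
    E = 1 if (R == 0 or V[-1] == R) else 0
    return T, N, E

tilde_V = [[0, 0], [0, 2, 1, 0], [0, 1, 3, 0], [0, 2, 0]]   # hand-made inputs
R_prev2, R_prev1 = 0, 0                                     # R_{-1} = R_0 = 0
for i, tv in enumerate(tilde_V, start=1):
    V = truncate(tv, R_prev1)
    T, N, E = triple(V, R_prev1)
    R_prev2, R_prev1 = R_prev1, R_prev2 + N                 # R_i = R_{i-2} + N_i
    print(i, V, (T, N, E), R_prev1)
\end{verbatim}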
\medskip
For a set of prudent paths $A\subset \Omega_L$ depending only on $(T_i(\pi), N_i(\pi), {\ensuremath{\mathcal E}} _i(\pi))_{i=1}^{\gamma_L(\pi)}$, where
\begin{equation}
\gamma_L=\min\{i\geq 1\colon\, T_1+\dots+T_i> L\}-1,
\end{equation}
let $(t_i, n_i, \gep_i)_{i=1}^r\sim A$ denote compatibility with $A$. By \eqref{tug}, we then have
\begin{align}\label{probeve}
& \frac{1}{2^L} |A| e^{-\lambda^* L} = \sum_{(t_i, n_i, \gep_i)_{i=1}^r \sim A} \bigg[\prod_{i=1}^r L_{R_{i-1}}(t_i,n_i,\gep_i)\bigg]\, \widehat{L}_{R_r}\big(L-(t_1+\dots+t_r)\big) \nonumber\\
= \ & \bE^*\Bigg[{\sf 1}_{\{( T_i, N_i, {\ensuremath{\mathcal E}} _i)_{i=1}^{\gamma_L}\sim A\}} \prod_{i=1}^{\gamma_L}\ \frac{L_{R_{i-1}}( T_i, N_i, {\ensuremath{\mathcal E}} _i)}{L^*_{R_{i-1}}( T_i, N_i, {\ensuremath{\mathcal E}} _i)} \, \cdot \,
\frac{\widehat{L}_{R_{\gamma_L}}(L-( T_1+\dots+ T_{\gamma_L}))}{\bP_{R_{\gamma_L}}^*\big( T> L-( T_1+\dots+T_{\gamma_L})\big)}\Bigg],
\end{align}
where $\bE^*$ is expectation over the i.i.d.\ excursions $(\widetilde V^{(i)})_{i\geq 1}$, and hence $( T_i, N_i, {\ensuremath{\mathcal E}} _i)_{i\geq 1}$.
\medskip
We conclude this section with two technical lemmas needed to control the ratios inside the expectation in \eqref{probeve}.
For ease of notation, let us denote
\begin{equation}
L_R(t,\epsilon):=\sum_{n\geq 1} L_R(t,n,\epsilon) \qquad \mbox{and} \qquad L_R(t):=L_R(t,0)+L_R(t,1).
\end{equation}
\begin{lemma}\label{Cbonud}
There exists $C>0$ such that
\be{rnd}
\frac{L_R(t)}{L_R^*(t)}\leq C\, t \, {\sf 1}_{\{t\geq R\}}\, +\, {\sf 1}_{\{t<R\}} \qquad \mbox{for all } t\in \mathbb{N}.
\end{equation}
\end{lemma}
\begin{proof}
First, observe that for $t<R$, a path of length $t$ cannot reach level $R$. Therefore, $L_{R}(t,n,1)=L^*_R(t,n,1)=0$, while $L_{R}(t,n,0)=L^*_R(t,n,0)$ by \eqref{L0=L*0}. It only remains to consider $t\geq R$, and it suffices to show that $L_R(t,1)\leq C t\, L_{R}(t,0)= Ct\, L_R^{*}(t,0)\leq Ct\,L_R^*(t)$. For simplicity we only consider the case $R\in 2\mathbb{N}$; the case $R\in 2\mathbb{N}+1$ can be treated in a similar manner. Let
\begin{align}\label{ABR}
{\ensuremath{\mathcal B}} _{n,t}^{R}:=& \Big\{(V_i)_{i=0}^n\colon\, V_0=0, V_i\in \{0,\dots,R\}\ \forall i\in \{0,\dots,n\}, V_n=R, \sum_{i=1}^n|U_i|=t-n\Big\},\\\label{HR1}
{\ensuremath{\mathcal A}} _{n,t}^{R}:=& \Big\{(V_i)_{i=0}^n\colon\, V_0=0, V_i\in \{0,\dots,R\}\ \forall i\in \{0,\dots,n\}, V_n=0, \sum_{i=1}^n|U_i|=t-n\Big\}.
\end{align}
We define a map $G_{n,t}^R: {\ensuremath{\mathcal B}} _{n,t}^R\to {\ensuremath{\mathcal A}} _{n,t}^R\cup {\ensuremath{\mathcal A}} _{n+2,t}^R$ as follows. For $V\in {\ensuremath{\mathcal B}} _{n,t}^R$, let $\tau_{R/2}:=\min\{i\geq 1\colon\, V_i\geq R/2\}$. We distinguish between two cases (see Figure \ref{fig3}):
\begin{enumerate}
\item If $V_{\tau_{R/2}}=R/2$, then define $G_{n,t}^R(V)$ by simply reflecting $V$ across $R/2$ from $\tau_{R/2}$ onward, i.e.,
$G_{n,t}^R (V)_i=V_i$ for $i\leq \tau_{R/2}$ and $G_{n,t}^R(V)_i=R-V_i$ for $i\in \{\tau_{R/2},\dots,n\}$. Then, $G_{n,t}^R (V)\in
{\ensuremath{\mathcal A}} _{n,t}^{R}$.\\
\item If $V_{\tau_{R/2}}=R/2+y$ with $y\in \{1,\dots,\frac{R}{2}\}$, then let $G_{n,t}^R(V)_i=V_i$ for $i\leq \tau_{R/2}-1$,
$G_{n,t}^R (V)_{\tau_{R/2}}=\frac{R}{2}-1$, $G_{n,t}^R(V)_i=R-V_{i-1}$ for $i\in \{\tau_{R/2}+1,\dots,n+1\}$ and $G_{n,t}^R(V)_{n+2}:=0$. Then, $G_{n,t}^R(V)\in {\ensuremath{\mathcal A}} _{n+2,t}^R$.
\end{enumerate}
Note that under $G_{n, t}^R$, every $V\in G_{n,t}^R({\ensuremath{\mathcal B}} _{n,t}^R)\cap {\ensuremath{\mathcal A}} _{n,t}^R$ has a unique pre-image in ${\ensuremath{\mathcal B}} _{n,t}^R$, and every $V\in G_{n,t}^R({\ensuremath{\mathcal B}} _{n,t}^R)\cap {\ensuremath{\mathcal A}} _{n+2,t}^R$ has at most $n\leq t$ pre-images in ${\ensuremath{\mathcal B}} _{n,t}^R$, one for each time that $V$ is at level $\frac{R}{2}-1$. Finally, we note that in the second case, $G_{n,t}^R(V)$ has two fewer vertical steps and two more horizontal steps than $V$. This allows us to write
\begin{align}\label{grandb}
L_R(t,1)&=\sum_{n=1}^t \bE\Big[ e^{\log(\frac{3}{2}) n-\lambda^* t} \ {\sf 1}_{{\ensuremath{\mathcal B}} _{n,t}^R}(V)\Big]\\
\nonumber &\leq \sum_{n=1}^t \bE\Big[ e^{\log(\frac{3}{2}) n-\lambda^* t} \ {\sf 1}_{{\ensuremath{\mathcal A}} _{n,t}^R}(V)\Big]+
t\, \bE\Big[ e^{\log(\frac{3}{2}) (n+2)-\lambda^* t} \ {\sf 1}_{{\ensuremath{\mathcal A}} _{n+2,t}^R}(V)\Big].
\end{align}
Observe that the r.h.s.\ in \eqref{grandb} is at most $\sum_{n=1}^t \big(L_R(t,n,0)+t\, L_R(t,n+2,0)\big)$, which implies
\begin{align}\label{grandb2}
L_R(t,1)&\leq 2 t L_R(t,0).
\end{align}
This concludes the proof of the lemma.
\end{proof}
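Since every slab path of total cost $t$ carries the same weight $2^{-t}e^{-\lambda^* t}$ in \eqref{LR} (the factors $3^{-n}$, $2^{-(t-n)}$ and $e^{\log(\frac32)n}$ combine to $2^{-t}$), the inequality \eqref{grandb2} is really a counting statement: the number of confined paths of cost $t$ ending at $R$ is at most $2t$ times the number ending at $0$. The following Python sketch (ours, not part of the proof) checks this by brute force for small even $R$, the case treated above.
\begin{verbatim}
from functools import lru_cache

def count_slab(t, R, end):
    # paths V_0 = 0 confined to {0,...,R} with total cost exactly t
    # (each increment l costs 1 + |l|), ending at height `end`
    @lru_cache(None)
    def ways(budget, h):
        if budget == 0:
            return 1 if h == end else 0
        return sum(ways(budget - 1 - abs(l), h + l)
                   for l in range(-h, R - h + 1) if 1 + abs(l) <= budget)
    return ways(t, 0)

for R in (2, 4, 6):
    for t in range(1, 16):
        B, A = count_slab(t, R, R), count_slab(t, R, 0)
        assert B <= 2 * t * A, (R, t)
print("checked: L_R(t,1) <= 2t L_R(t,0) on the tested range")
\end{verbatim}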
\begin{figure}[t]
\begin{subfigure}{1\textwidth}
\centering
\includegraphics[scale=1.4]{fig41-1.pdf}
\label{subfig31}
\caption{}
\end{subfigure}
\vspace{0.7cm}
\begin{subfigure}{1\textwidth}
\centering
\includegraphics[scale=1.4]{fig41-2.pdf}
\caption{}
\label{subfig32}
\end{subfigure}
\caption{The transformation $G_{n,t}^R(V)$. We let $\tau_{R/2}:=\min\{i\ge 1\colon V_i \ge R/2\}$.
In $(\textrm a)$ we draw the case in which $V_{\tau_{R/2}}=R/2$. In this case we define
$G_{n,t}^R(V)$ by simply reflecting $V$ across $R/2$ from $\tau_{R/2}$ onward (in blue, dotted).
In $(\textrm b)$ we draw the case in which $V_{\tau_{R/2}}> R/2$. In this second case we let $G_{n,t}^R (V)_{\tau_{R/2}}=\frac{R}{2}-1$ (in red, dotted) and we concatenate the reflection of $V$ across $R/2$ from $\tau_{R/2}$ onward. We add a final point $G_{n,t}^R(V)_{n+2}:=0$ (in blue, dotted).
}
\label{fig3}
\end{figure}
\medskip
\begin{figure}
\centering
\includegraphics[scale=1.5]{fig42.pdf}
\caption{The transformation $H_{n,t}^{R,x}(V)$: We fix $x\le R$ even and we consider a path $V$ ending at $V_n=x$. We let
$\sigma_{x/2} :=\max\{i\geq 0\colon\, V_i<\tfrac{x}{2}\}$ and $\tilde \sigma_{x} :=\min\{i\geq \sigma_{x/2}+1 \colon\, V_i\geq x\}.$
In the figure we draw the transformation $H_{n,t}^{R,x}(V)$ when $V_{\sigma_{x/2}+1}>x/2$ and $V_{\widetilde \sigma_{x}}>x$.
In this case we define $H_{n,t}^{R,x}(V)_{\sigma_{x/2}+1}:=x/2$. Then we take the piece of $V$ on the interval $[\widetilde \sigma_x, n]$
lowered by $x/2$ and we insert it at time $\sigma_{x/2}+2$ (blue). Finally the piece of $V$ on the interval $[\sigma_{x/2}+1, \widetilde \sigma_x-1]$ is reflected across $x/2$ and reattached at the end (violet). We add a final point $H_{n,t}^{R,x}(V)_{n+2}:=0$.}
\label{fig4}
\end{figure}
To bound the last ratio in \eqref{probeve}, we will bound $\widehat{L}_R(t)/ \bP_R^*\big( T > t\big)$, which arises from the last incomplete excursion in the excursion decomposition. Recall that $\widehat{L}_R(0):=1=\bP_R^*(T> 0)$.
\begin{lemma}\label{Cbonudlast}
There exists $C>0$ such that
\be{rnd2}
\frac{\widehat L_R(t)}{\bP_R^*( T> t)}\leq CRt^2 \quad \mbox{ for all } R, t\in \mathbb{N}.
\end{equation}
\end{lemma}
\begin{proof}
Recall $\widehat L_R(t)$ from \eqref{hatLR}. It suffices to show that there exists $C>0$ such that
\be{supnu}
\widehat L_R(t)\leq C R\, t^2 L_R(t+2,0),
\end{equation}
since
$$
\bP_R^*\big( T > t\big)=\sum_{j> t}L^*_R(j)\geq \sum_{j> t} L_R^*(j,0)=\sum_{j> t} L_R(j,0)\geq L_R(t+2,0).
$$
For $x\in \{1,\dots,R-1\}$ and $n\leq t$, we consider the set of effective random walk trajectories
\begin{align}
{\ensuremath{\mathcal D}} _{n,t}^{R,x}=& \Big\{(V_i)_{i=0}^n\colon\, V_0=0, V_i\in \{0,\dots,R\}\ \forall i\in \{0,\dots,n\}, V_n=x, \sum_{i=1}^n|U_i|=t-n\Big\}.
\end{align}
For simplicity, we assume that $x$ is even, but the case $x$ odd can be treated similarly. Let
\be{defsigma}
\sigma_{x/2} :=\max\{i\geq 0\colon\, V_i<\tfrac{x}{2}\}\quad \text{and}\quad
\tilde \sigma_{x} :=\min\{i\geq \sigma_{x/2}+1 \colon\, V_i\geq x\}.
\end{equation}
We define a map $H_{n,t}^{R,x}:{\ensuremath{\mathcal D}} _{n,t}^{R,x}\to {\ensuremath{\mathcal A}} _{n+2,t+2}^{R}$ (cf. \eqref{HR1}) as follows. Let $V\in{\ensuremath{\mathcal D}} _{n,t}^{R,x}$. We distinguish between four cases:
\begin{enumerate}
\item $V_{\sigma_{x/2}+1}>x/2$ and $V_{\widetilde \sigma_{x}}>x$,
\item $V_{\sigma_{x/2}+1}>x/2$ and $V_{\widetilde \sigma_{x}}=x$,
\item $V_{\sigma_{x/2}+1}=x/2$ and $V_{\widetilde \sigma_{x}}>x$,
\item $V_{\sigma_{x/2}+1}=x/2$ and $V_{\widetilde \sigma_{x}}=x$.
\end{enumerate}
We will treat case 1 only, where $H_{n,t}^{R,x}$ maps $V$ to a path in ${\ensuremath{\mathcal A}} _{n+2,t+2}^{R}$ (see Figure \ref{fig4}). Cases 2--4 are similar and even simpler, and to ensure
that $H_{n,t}^{R,x}(V) \in {\ensuremath{\mathcal A}} _{n+2,t+2}^{R}$, we can add extra horizontal steps if needed. Roughly speaking, under $H_{n,t}^{R,x}$, the piece of $V$ on the interval $[\widetilde \sigma_x, n]$ is lowered by $x/2$ and inserted at time $\sigma_{x/2}+2$, while the piece of $V$ on the interval $[\sigma_{x/2}+1, \widetilde \sigma_x-1]$ is reflected across $x/2$ and reattached at the end. More precisely, set
\begin{align*}
& H_{n,t}^{R,x}(V)_i:=V_i \quad \text{for}\quad i\leq \sigma_{x/2},\\
& H_{n,t}^{R,x}(V)_{\sigma_{x/2}+1}:=x/2,\\
& H_{n,t}^{R,x}(V)_{\sigma_{x/2}+1+i}:=V_{\widetilde \sigma_x +i-1}-x/2 \quad \text{for}\quad i=1,\dots,n+1-\widetilde \sigma_x,\\
& H_{n,t}^{R,x}(V)_{n+2-(\widetilde \sigma_x-\sigma_{x/2})+i}:=x-V_{\sigma_{x/2}+i} \quad \text{for} \quad
i=1,\dots,\widetilde \sigma_x-\sigma_{x/2}-1,\\
& H_{n,t}^{R,x}(V)_{n+2}:=0.
\end{align*}
We note that the sum of absolute increments of $H_{n,t}^{R,x}(V)$ equals that of $V$, and $H_{n,t}^{R,x}(V)$ is confined to $[0, R]$.
Therefore $H_{n,t}^{R,x}(V) \in {\ensuremath{\mathcal A}} _{n+2,t+2}^R$. It remains to bound the number of pre-images of every $V\in {\ensuremath{\mathcal A}} _{n+2,t+2}^R\cap H_{n,t}^{R,x}({\ensuremath{\mathcal D}} _{n,t}^{R,x})$ under $H_{n,t}^{R,x}$. Note that to undo $H_{n,t}^{R,x}$, we only need to recover the two times $\sigma_{x/2}+2$
and $\sigma_{x/2}+n+2-\widetilde \sigma_x$ at which the original segments of $V$ are glued together. Since there are at most $n^2\leq t^2$ such choices, combining this with similar estimates for cases 2--4, we have
\begin{align}
\widehat L_R(t,n)&= \sum_{x=1}^{R-1} \bE\Big[e^{\log(\frac{3}{2}) n-\lambda^*t} \, {\sf 1}_{\, {\ensuremath{\mathcal D}} _{n,t}^{R,x}} (V)\Big]\\
\nonumber & \leq 4 (R-1)\, t^2 \Big( 3^2 e^{-2 \log(\frac{3}{2})+2\lambda^*}\bE\Big[e^{\log(\frac{3}{2}) (n+2)-\lambda^*(t+2)} \, {\sf 1}_{\, {\ensuremath{\mathcal A}} _{n+2,t+2}^{R}} (V)\Big]\Big)\\
\nonumber & \leq C R\, t^2 L_{R}(t+2,n+2,0),
\end{align}
which establishes \eqref{supnu} and hence the lemma.
\end{proof}
\smallskip
As a corollary of Lemma \ref{Cbonudlast}, we have the following bound on the last ratio in \eqref{probeve}:
\be{nrdcor}
\frac{\widehat L_{R_{\gamma_L}}(L-(T_1+\cdots +T_{\gamma_L}))}{\bP_{R_{\gamma_L}}^*\big( T> L-(T_1+\cdots+T_{\gamma_L})\big)}\leq C\, L(L-(T_1+\cdots+T_{\gamma_L}))^2.
\end{equation}
\section{Proof of Theorems \ref{unifscal} and \ref{unifscal2}}\label{sec5}
We will use the excursion decomposition developed in Section \ref{pw}, in particular, the representation in \eqref{probeve}.
First we show that, for large $L$, a uniform prudent walk typically does not cross its range after the first $\delta\log L$ excursions. Namely,
\begin{lemma}\label{finiteexc}
There exists $\delta>0$ such that
\be{excufi}
\lim_{L\to\infty} \bP_{{\rm unif},L}\big[\exists\, i\in \{\delta \log L,\dots,\gamma_L(\pi)\}\colon\, {\ensuremath{\mathcal E}} _i(\pi)=1\big]=0.
\end{equation}
\end{lemma}
Then we show that the total length of the first $\delta \log L$ excursions is typically at most of order $(\log L)^2$.
\begin{lemma}\label{finitesteps}
For every $\delta>0$, there exists $\kappa>0$ such that
\be{excufi2}
\lim_{L\to \infty} \bP_{{\rm unif},L}\big[T_1(\pi)+\dots+T_{\delta \log L}(\pi)\geq \kappa \, (\log L)^2 \big]=0.
\end{equation}
\end{lemma}
Finally, we show that the last incomplete excursion of the walk typically has length at most $\alpha \log L$.
\begin{lemma}\label{lastExc}
There exists $\alpha>0$ such that
\be{excufi3}
\lim_{L\to \infty} \bP_{{\rm unif},L}\big[L-(T_1+\dots+T_{\gamma_L})\geq \alpha \log L \big]=0.
\end{equation}
\end{lemma}
We next prove Theorems \ref{unifscal} and \ref{unifscal2} using Lemmas \ref{finiteexc}--\ref{lastExc}, whose proofs are postponed to Sections \ref{dec:excu}--\ref{dec:excu3}.
\subsection{Proof of Theorems \ref{unifscal} and \ref{unifscal2}}\label{Theo1}
Let $\delta,\kappa,\alpha>0$, and define ${\ensuremath{\mathcal G}} _L\subset \Omega_L$ by
$$
{\ensuremath{\mathcal G}} _{L}:=\Big\{{\ensuremath{\mathcal E}} _i=0\, \forall\, i\in \{\delta \log L,\dots, \gamma_L\}, \,
T_1+\dots+T_{\delta \log L}\leq \kappa (\log L)^2, \ L-(T_1+\dots+T_{\gamma_L})\leq \alpha \log L \Big\} .
$$
By Lemmas \ref{finiteexc}--\ref{lastExc}, we can choose $\delta, \kappa$ and $\alpha$ such that
$\lim_{L\to\infty}\bP_{\text{unif},L}\big( {\ensuremath{\mathcal G}} _L \big)=1$.
\smallskip
We introduce a little more notation. Let ${\ensuremath{\mathcal O}} :=\{\text{NE,NW,SE,SW}\}$ be the set of possible directions of a 2-sided prudent path. For $o\in {\ensuremath{\mathcal O}} $ let $\Omega_L^{\, o}$ be the set of $L$-step 2-sided paths with orientation $o$ (e.g.
$\Omega_L^{\text{NE}}=\Omega_L^+$). Pick $\pi \in \Omega_L$ and recall that the endpoint of each excursion of $\pi$ lies at one of the
4 corners (indexed in ${\ensuremath{\mathcal O}} $) of the smallest rectangle containing the range of $\pi$ up to that endpoint.
Thus, for $\pi \in {\ensuremath{\mathcal G}} _L$, we denote by $\theta(\pi)\in {\ensuremath{\mathcal O}} $ the corner at which the endpoint of the
$\delta \log L$-th excursion lies.
\smallskip
For a path $\pi\in {\ensuremath{\mathcal G}} _L$, let $\sigma_1:=T_1+\dots+ T_{\delta\log L}$ be the length of the first $\delta\log L$ excursions, and
let $\sigma_2:=L-(T_1+\dots + T_{\gamma_L})$ be the length of the last incomplete excursion. Note that $(\pi_i)_{i=\sigma_1}^{L-\sigma_2}$ is a 2-sided prudent path of orientation $\theta(\pi)$ because ${\ensuremath{\mathcal E}} _i=0$ for $\delta \log L<i \leq \gamma_L(\pi)$. Therefore, we can safely enlarge ${\ensuremath{\mathcal G}} _L$ slightly into
$$
\widetilde {\ensuremath{\mathcal G}} _{L}:=\Big\{(\pi_i)_{i=\sigma_1}^{L-\sigma_2} \in \Omega_{L-\sigma_1-\sigma_2}^{\theta(\pi)}, \,
T_1+\dots+T_{\delta \log L}\leq \kappa (\log L)^2, \ L-(T_1+\dots+T_{\gamma_L})\leq \alpha \log L \Big\} .
$$
Note that conditioned on $\pi\in \widetilde {\ensuremath{\mathcal G}} _L$, $\sigma_1(\pi)=m$, $\sigma_2(\pi)=n$, and $\theta(\pi) = o$, the law of $(\pi_i)_{i=m}^{L-n}$ under $\bP_{\text{unif},L}$ (modulo translation and rotation) is exactly that of a uniform 2-sided prudent walk of total length $L-m-n$, for which we have proved the law of large numbers in Theorem \ref{scalingtwosided} and the invariance principle in Theorem \ref{scalingtwosided2}. Since $\bP_{\text{unif},L}\big( \widetilde {\ensuremath{\mathcal G}} _L \big)\to 1$, we only need to consider $m\leq \kappa (\log L)^2$ and $n\leq \alpha \log L$. As $m/\sqrt{L}$ and $n/\sqrt{L}$ then tend to $0$ uniformly as $L$ tends to infinity, $(\pi_i)_{i=1}^m$ and $(\pi_i)_{i=L-n}^{L}$ are negligible in the scaling limit, and hence Theorems \ref{unifscal} and \ref{unifscal2} follow from their counterparts for the uniform 2-sided prudent walk, with the direction $o$ distributed uniformly on ${\ensuremath{\mathcal O}} $ by symmetry.
\qed
\subsection{Proof of Lemma \ref{finiteexc}}\label{dec:excu}
Let $M=M(L)$ be an increasing function of $L$ that will be specified later. We set
\be{fracretur}
\alpha_L:=\bP_{\text{unif},L}\big( \exists\, i\in [M,\gamma_L] \, s.t.\, {\ensuremath{\mathcal E}} _i(\pi)=1\big)=\frac{\big|\{\pi\in \Omega_L: \,\exists\, i\in [M,\gamma_L]\, s.t.\, {\ensuremath{\mathcal E}} _i(\pi) =1\} \big|}{|\Omega_L|}.
\end{equation}
Multiplying both the numerator and the denominator by $2^{-L} e^{-\lambda^* L}$, we can apply \eqref{probeve} together with \eqref{nrdcor} to obtain
\begin{align}
\alpha_L & \leq C L^3
\frac{\, \sum_{j=M}^L \bE^*\Big[{\sf 1}_{\{{\ensuremath{\mathcal E}} _j=1\}}\, \prod_{i=1}^{\gamma_L} \frac{L_{R_{i-1}}(T_i,N_i,{\ensuremath{\mathcal E}} _i)}{L^*_{R_{i-1}}(T_i,N_i,{\ensuremath{\mathcal E}} _i)}
\Big]}{\bE^*\Big[\prod_{i=1}^{\gamma_L} \frac{L_{R_{i-1}}(T_i,N_i,{\ensuremath{\mathcal E}} _i)}{L^*_{R_{i-1}}(T_i,N_i,{\ensuremath{\mathcal E}} _i)} {\sf 1}_{\{T_1+\dots+T_{\gamma_L}=L\}}
\Big]} \nonumber \\
& \leq C L^3
\frac{\,\sum_{j\geq M} \bE^*\Big[ {\sf 1}_{\{T_j \geq \frac{j}{2}\}} \, \prod_{i=1}^{\gamma_L} \frac{L_{R_{i-1}}(T_i,N_i,{\ensuremath{\mathcal E}} _i)}{L^*_{R_{i-1}}(T_i,N_i,{\ensuremath{\mathcal E}} _i)}
\Big]}{\bE^*\Big[\prod_{i=1}^{\gamma_L} \frac{L_{R_{i-1}}(T_i, N_i,{\ensuremath{\mathcal E}} _i)}{L^*_{R_{i-1}}(T_i, N_i,{\ensuremath{\mathcal E}} _i)} {\sf 1}_{\{T_1+\dots+T_{\gamma_L}=L\}}
\Big]}=: CL^3 \frac{\Psi_1(L,M)}{D_L}, \label{fracreturrr}
\end{align}
where we used that ${\ensuremath{\mathcal E}} _i=1$ only if $T_i\geq 1+R_{i-1}$, and $R_i\geq \frac{i-1}{2}$ for every $i\in \mathbb{N}$ (cf.\ Section \ref{decppe}).
Lemma \ref{finiteexc} then follows immediately from \eqref{fracreturrr} and Claims \ref{c2} and \ref{c1} below.
\begin{claim}\label{c2}
There exist $c_1,c_2>0$ such that
$\Psi_1(L,M) \leq c_1\, e^{-c_2 M}$ for every $M\in \mathbb{N}$ and $L\geq M$.
\end{claim}
\begin{claim}\label{c1}
There exists $c_3>0$ such that $D_L\geq c_3$ for every $L\in \mathbb{N}$.
\end{claim}
\medskip
{\bf Proof of Claim \ref{c2}.} Recall from Section \ref{sec:sampling} how $(T_i,N_i,{\ensuremath{\mathcal E}} _i)_{i\geq 1}$ is constructed from the i.i.d.\ sequence $(\widetilde V_i, \widetilde T_i,\widetilde N_i)_{i\geq 1}$ with law $\bP^*$, with $\tilde T_i\geq T_i\, \forall\, i\in \mathbb{N}$. We first state and prove a key lemma.
\begin{lemma}\label{lemmaKey}
Let $L\in \mathbb{N}$, and let $\Phi:\mathbb{R}^L_+\to \mathbb{R}_+$ be any function that is non-decreasing in each of its $L$ arguments. Then there exists $c>0$ independent of $L$ and $\Phi$, such that
\be{eqKey1}
\bE^*\bigg[\!\Phi(T_1,\dots, T_L) \prod_{i=1}^{\gamma_L} \frac{L_{R_{i-1}}(T_i,N_i,{\ensuremath{\mathcal E}} _i)}{L^*_{R_{i-1}}(T_i,N_i,{\ensuremath{\mathcal E}} _i)} \bigg]
\leq
\bE^*\bigg[\Phi(\tilde T_1,\dots,\tilde T_L) \prod_{i=1}^{L} \big(1 + c \tilde T_i\, {\sf 1}_{\{\tilde T_i\geq \frac{i-1}{2}\}}\big) \bigg].
\end{equation}
\end{lemma}
\begin{proof}
For $n\in \mathbb{N}$, let ${\ensuremath{\mathcal F}} _n$ be the $\sigma$-algebra generated by $(\widetilde T_i, T_i,N_i,{\ensuremath{\mathcal E}} _i)_{i\leq n}$. For ease of notation, let $A_L$ denote the l.h.s.\ of \eqref{eqKey1}. Note that
\begin{align}\label{LL*0}
A_L & \leq \bE^*\bigg[ \Phi( T_1,\dots, T_L) \prod_{i=1}^{L} \max \bigg\{ \frac{L_{R_{i-1}}(T_i,N_i,{\ensuremath{\mathcal E}} _i)}{L^*_{R_{i-1}}(T_i,N_i,{\ensuremath{\mathcal E}} _i)}, 1\bigg\} \bigg] \\
& = \bE^*\bigg[\prod_{i=1}^{L-1} \max \Big\{ \frac{L_{R_{i-1}}(T_i,N_i,{\ensuremath{\mathcal E}} _i)}{L^*_{R_{i-1}}(T_i,N_i,{\ensuremath{\mathcal E}} _i)}, 1\Big\} \, \, H_L\bigg], \label{LL*02}
\end{align}
with
\begin{equation} \label{defHL}
\begin{split}
H_L & := \bE^*\bigg[\Phi( T_1,\dots, T_L) \max \Big\{ \frac{L_{R_{L-1}}(T_L,N_L,{\ensuremath{\mathcal E}} _L)}{L^*_{R_{L-1}}(T_L,N_L,{\ensuremath{\mathcal E}} _L)}, 1\Big\}\Big| {\ensuremath{\mathcal F}} _{L-1}\bigg] \\
&\ = \sum_t \Phi(T_1,\dots,T_{L-1}, t) \sum_{n\leq t}\sum_{\epsilon=0,1} \max \big\{ {L_{R_{L-1}}(t,n,\gep)}, L^*_{R_{L-1}}(t,n,\gep) \big\}.
\end{split}
\end{equation}
When $t<R_{L-1}$, we have ${L_{R_{L-1}}(t,n,1)}= L^*_{R_{L-1}}(t,n,1)=0$, and ${L_{R_{L-1}}(t,n,0)}= L^*_{R_{L-1}}(t,n,0)$ by \eqref{L0=L*0}, so that
\begin{equation}
\sum_{n\leq t}\sum_{\epsilon=0,1} \max \big\{ {L_{R_{L-1}}(t,n,\gep)}, L^*_{R_{L-1}}(t,n,\gep) \big\} = L^*_{R_{L-1}}(t).
\end{equation}
When $t\geq R_{L-1}$, we have
\begin{equation}
\begin{split}
& \sum_{n\leq t}\sum_{\epsilon=0,1} \max \big\{ {L_{R_{L-1}}(t,n,\gep)}, L^*_{R_{L-1}}(t,n,\gep) \big\} \\
& \quad \leq \ \sum_{n\leq t}\sum_{\epsilon=0,1} ({L_{R_{L-1}}(t,n,\gep)} + L^*_{R_{L-1}}(t,n,\gep))
= L_{R_{L-1}}(t) + L^*_{R_{L-1}}(t) \leq (1+ct) L^*_{R_{L-1}}(t),
\end{split}
\end{equation}
where we applied Lemma \ref{Cbonud}. Therefore we have
\begin{equation}\label{rtg}
\begin{split}
H_L&\leq \sum_{t} \Phi(T_1,\dots, T_{L-1},t) \big(1+ct {\sf 1}_{\{ t\geq R_{L-1}\}}\big) L^*_{R_{L-1}}(t),
\\
&=
\bE^*\left[ \Phi(T_1,\dots,T_{L-1}, T_L) \big( 1+cT_L {\sf 1}_{\{ T_L\geq R_{L-1}\}} \big) \, \big|\, {\ensuremath{\mathcal F}} _{L-1}\right].
\end{split}
\end{equation}
Since $R_{L-1}\geq \frac{L-1}{2}$ and $\widetilde T_L\geq T_L$, we can replace $R_{L-1}$ by $\frac{L-1}{2}$ and $T_L$ by $\tilde T_L$ in the r.h.s. of \eqref{rtg}. Moreover, note that $\tilde T_L$ does not depend on $R_{L-1}$, and hence we can plug \eqref{rtg} into \eqref{LL*0} to obtain
\be{LL*2}
\nonumber \begin{split}
A_L \leq
\bE^*\bigg[ \bigg( \prod_{i=1}^{L-1} \max \big\{ \frac{L_{R_{i-1}}(T_i,N_i,{\ensuremath{\mathcal E}} _i)}{L^*_{R_{i-1}}(T_i,N_i,{\ensuremath{\mathcal E}} _i)}, 1\big\} \bigg)
\Phi(T_1,\dots, T_{L-1},\tilde T_L)\big(1+ c \tilde T_L\, {\sf 1}_{\{\tilde T_L\geq \frac{L-1}{2}\}}\big)\bigg].
\end{split}
\end{equation}
We can now iterate the argument to deduce \eqref{eqKey1}.
\end{proof}
To prove Claim \ref{c2}, we now apply Lemma \ref{lemmaKey} with $\Phi(t_1,\dots,t_L)={\sf 1}_{\{t_j\geq \frac{j}{2} \}}$,
for each fixed $j\geq M$, to obtain
\begin{align}
\Psi_1(L,M) & \leq \sum_{j\geq M} \bE^*\bigg[ {\sf 1}_{\{\tilde T_j\geq \frac{j}{2} \}} \prod_{i=1}^{L} \big(1 + c \tilde T_i\, {\sf 1}_{\{\tilde T_i\geq \frac{i-1}{2}\}}\big) \bigg] \nonumber\\
& \leq \sum_{j\geq M} \bE^*\bigg[ (1+c\tilde T_j) {\sf 1}_{\{\tilde T_j\geq \frac{j}{2} \}} \prod_{i\neq j\leq L} \Big(1 + c\, \tilde T_i\, {\sf 1}_{\{\tilde T_i\geq \frac{i-1}{2}\}}\Big)
\bigg] \nonumber \\
& = \sum_{j\geq M} \bE^*\bigg[ (1+c\tilde T_1) {\sf 1}_{\{\tilde T_1\geq \frac{j}{2} \}}\bigg] \prod_{i\neq j\leq L} \Big(1 + c \bE^*[\tilde T_1 {\sf 1}_{\{\tilde T_1\geq \frac{i-1}{2}\}}]\Big)
\end{align}
Since $\widetilde T_1$ has exponential tail under $\bP^*$ (cf. Remark \ref{remKK}), there exist $C_1, C_2>0$ such that
\be{Estar}
\bP^*(\tilde T_1\geq \ell) \leq \bE^*\big[\tilde T_1 {\sf 1}_{\{\tilde T_1\geq \ell\}}\big] \leq C_1\, e^{-C_2 \ell} \qquad \mbox{for all } \ell \in \mathbb{N}.
\end{equation}
This implies that
\begin{equation}\label{defA3}
\Psi_1(L,M) \leq (1+c) \sum_{j\geq M} C_1 e^{-C_2\, \frac{j}{2}} \prod_{i=1}^\infty \Big(1+c\, C_1 e^{-C_2\, \frac{i-1}{2}}\Big)
\leq c_1 e^{-c_2\, M},
\end{equation}
which concludes the proof of Claim \ref{c2}.
\qed
\medskip
{\bf Proof of Claim \ref{c1}.}
The claim is essentially a consequence of the renewal theorem. Note that by construction, we have $R_0=0$, ${\ensuremath{\mathcal E}} _1=1$, and $L_0(T_1, N_1, 1)= L^*_0(T_1, N_1, 1)$, and when $R_{i-1}\geq 1$ and $T_i=1$, or when $T_i<R_{i-1}$, we must have ${\ensuremath{\mathcal E}} _i=0$ and $L_{R_{i-1}}(T_i, N_i,0)=L^*_{R_{i-1}}(T_i, N_i,0)$. Therefore, with $A>0$ to be chosen later, we can bound
\be{loBdeno}
\begin{split}
M_L:=\ &{\bE^*\bigg[\prod_{i=1}^{\gamma_L} \frac{L_{R_{i-1}}(T_i, N_i,{\ensuremath{\mathcal E}} _i)}{L^*_{R_{i-1}}(T_i, N_i,{\ensuremath{\mathcal E}} _i)} {\sf 1}_{\{T_1+\dots+T_{\gamma_L}=L\}}
\bigg]} \\
\geq\ &
\bP^*\big( T_1+\dots+T_{\gamma_L}=L,\ T_i=1\, \forall\, i\in [1, A], \ T_j < R_{j-1} \, \forall\, j\in [A+1, \gamma_L]
\big).
\end{split}
\end{equation}
Recall that $(T_i, N_i, {\ensuremath{\mathcal E}} _i)_{i\in\mathbb{N}}$ is constructed from $(\widetilde V_i, \widetilde T_i, \widetilde N_i)_{i\in\mathbb{N}}$ with law $\bP^*$ such that $\tilde T_i\geq T_i$ a.s., and when $\widetilde T_i=1$ or $\widetilde T_i \leq R_{i-1}$, we have $T_i=\tilde T_i$ (cf.\ Section \ref{sec:sampling}). Since $R_1, R_2\geq 1$ and $R_i\geq \frac{i-1}{2}$ for $i\geq 3$, we can bound the r.h.s.\ of \eqref{loBdeno} by
\begin{align}\label{thuiboun}
\nonumber M_L&\geq \bP^*\big(\tilde T_1+\dots+ \tilde T_{\tilde \gamma_L}=L,\ \tilde T_i=1\ \forall \, i\in [1, A], \
\tilde T_j < \tfrac{j-1}{2}\ \forall\, j>A\big),\\
&=\bP^*(\tilde T_1=1)^{A} \ \bP^*\Big(\tilde T_1+\dots+ \tilde T_{\, \tilde \gamma_{L-A}}=L-A, \ \tilde T_i < \tfrac{A+i-1}{2}\ \forall\, i\geq 1 \Big),
\end{align}
where $\tilde \gamma_L$ is the counterpart of $\gamma_L$ for $(\widetilde T_i)_{i\in \mathbb{N}}$ (recall \eqref{gammaLV}). Since the $(\tilde T_j)_{j\in\mathbb{N}}$ are i.i.d.\ with an exponential tail, we may pick $A\in \mathbb{N}$ large enough such that
\be{guar1}
\bP^*\Big(\tilde T_i < \frac{A+i-1}{2}\ \forall\, i\geq 1 \Big) \geq 1-\frac{1}{4\bE^*[\tilde T_1]}.
\end{equation}
Having chosen $A$, the renewal theorem then ensures that there exists $L_0\in \mathbb{N}$ such that
\be{guar2}
\bP^*(\tilde T_1+\dots+ \tilde T_{\, \tilde \gamma_{L-A}}=L-A)\geq \frac{1}{2\bE^*[\tilde T_1]} \quad \forall\ L>L_0.
\end{equation}
Combining \eqref{guar1} and \eqref{guar2} then shows that the r.h.s.\ in \eqref{thuiboun} is bounded from below by a positive constant uniformly in $L\geq L_0$. The proof is then complete.
\qed
\subsection{Proof of Lemma \ref{finitesteps}}
The proof is similar to that of Lemma \ref{finiteexc}. Let $\delta>0$ and $\kappa>0$ and set
$$
\beta_L:=\bP_{\text{unif},L}\big( T_1(\pi)+\dots+T_{\delta \log L}(\pi) \geq \kappa (\log L)^2 \big)=
\frac{|\{\pi\in \Omega_L: T_1+\cdots+T_{\delta \log L} \geq \kappa (\log L)^2\}|}{|\Omega_L| }.
$$
Since $T_1(\pi)+\cdots +T_{\delta \log L} \geq \kappa (\log L)^2$ implies that $T_i\geq \kappa\delta^{-1} \log L$ for some
$1\leq i\leq \delta \log L$, similar to \eqref{fracreturrr}, we have
\be{fracretur22}
\beta_L\leq C L^3 \,
\frac{\sum_{j=1}^{\delta \log L} \bE^*\Big[ {\sf 1}_{\{T_j\geq {\kappa \delta^{-1} \log L}\}} \prod_{i=1}^{\gamma_L} \frac{L_{R_{i-1}}(T_i, N_i,{\ensuremath{\mathcal E}} _i)}{L^*_{R_{i-1}}(T_i, N_i, {\ensuremath{\mathcal E}} _i)}
\Big]}{\bE^*\Big[\prod_{i=1}^{\gamma_L} \frac{L_{R_{i-1}}(T_i, N_i, {\ensuremath{\mathcal E}} _i)}{L^*_{R_{i-1}}(T_i, N_i,{\ensuremath{\mathcal E}} _i)} {\sf 1}_{\{T_1+\dots+T_{\gamma_L}=L\}}
\Big]}=: C L^3 \frac{\Psi_2(L)}{D_L}.
\end{equation}
By Claim \ref{c1}, $D_L$ is bounded away from $0$ uniformly in $L$. Using Lemma \ref{lemmaKey} and \eqref{Estar}, we obtain
\begin{align}\label{B1}
\nonumber \Psi_2(L)
&\leq
\sum_{j=1}^{ \delta \log L}
{\bE^*\bigg[ {\sf 1}_{\{\tilde T_j\geq {\kappa \delta^{-1} \log L}\}} \prod_{i=1}^{L} \left(1+ c \tilde T_i{\sf 1}_{\{ \tilde T_i \geq \frac{i-1}{2}\}}\right)
\bigg]} \nonumber\\
&\leq
(\delta \log L)\, \bE^*\Big[(1+c\tilde T_1) {\sf 1}_{\{\tilde T_1\geq \kappa {\delta^{-1} \log L}\}}\Big] \prod_{i=1}^{\infty}
\Big(1+ c\bE^*[\tilde T_1{\sf 1}_{\{ \tilde T_1 \geq \frac{i-1}{2}\}}]\Big) \nonumber\\
&\leq (\delta \log L) c_1 e^{-c_2 \kappa {\delta^{-1} \log L}},
\end{align}
which, together with \eqref{fracretur22}, shows that $\beta_L$ tends to $0$ as $L$ tends to infinity if $\kappa$ is chosen large enough.
\qed
\subsection{Proof of Lemma \ref{lastExc}}\label{dec:excu3}
As in the proofs of Lemmas \ref{finiteexc} and \ref{finitesteps}, given $\alpha>0$, we set
$$
\rho_L:=\bP_{\text{unif},L}\big( L-(T_1(\pi)+\dots+T_{\gamma_L}(\pi)) \geq \alpha \log L \big)=
\frac{|\{\pi\in\Omega_L: L-(T_1+\cdots+T_{\gamma_L}) \geq \alpha \log L\}|}{|\Omega_L| }.
$$
Similar to \eqref{fracreturrr}, we have
\be{thu}
\rho_L \leq
CL^3\, \frac{\bE^*\bigg[ {\sf 1}_{\{L-(T_1+\dots+T_{\gamma_L}) \geq \alpha \log L \}} \prod_{i=1}^{\gamma_L} \frac{L_{R_{i-1}}(T_i, N_i,{\ensuremath{\mathcal E}} _i)}{L^*_{R_{i-1}}(T_i, N_i, {\ensuremath{\mathcal E}} _i)}
\bigg]}{\bE^*\bigg[\prod_{i=1}^{\gamma_L} \frac{L_{R_{i-1}}(T_i, N_i, {\ensuremath{\mathcal E}} _i)}{L^*_{R_{i-1}}(T_i, N_i,{\ensuremath{\mathcal E}} _i)} {\sf 1}_{\{T_1+\dots+T_{\gamma_L}=L\}}
\bigg]}=:C L^3 \frac{\Psi_3(L)}{D_L}.
\end{equation}
By Claim \ref{c1}, $D_L$ is bounded away from $0$ uniformly in $L$.
Since $L-(T_1+\dots+T_{\gamma_L})\geq \alpha \log L$ implies $\max\{T_1,\dots,T_L\}\geq \alpha \log L$, again by Lemma \ref{lemmaKey} and \eqref{Estar}, we have
\begin{align}\label{trio}
\nonumber \Psi_3(L)&\leq \, \bE^*\bigg[ {\sf 1}_{\{ \max\{T_1,\dots,T_L\} \geq \alpha \log L \}} \prod_{i=1}^{\gamma_L} \frac{L_{R_{i-1}}(T_i, N_i,{\ensuremath{\mathcal E}} _i)}{L^*_{R_{i-1}}(T_i, N_i, {\ensuremath{\mathcal E}} _i)}
\bigg]\\
&\leq
\bE^*\bigg[ {\sf 1}_{\{ \max\{\tilde T_1,\dots,\tilde T_L\} \geq \alpha \log L \}} \prod_{i=1}^L \big(1 + c\, \tilde T_i\, {\sf 1}_{\{\tilde T_i\geq \frac{i-1}{2}\}}\big)\bigg] \nonumber \\
& \leq L \bE^*\Big[(1+c\tilde T_1) {\sf 1}_{\{\tilde T_1\geq \alpha \log L\}}\Big] \prod_{i=1}^{\infty}
\Big(1+ c\bE^*[\tilde T_1{\sf 1}_{\{ \tilde T_1 \geq \frac{i-1}{2}\}}]\Big) \leq c_1 L e^{-c_2 \alpha \log L},
\end{align}
which, together with \eqref{thu}, shows that $\rho_L$ tends to $0$ as $L$ tends to infinity if $\alpha$ is chosen large enough.
\qed
\bibliographystyle{imsart-nameyear}
| {
"timestamp": "2017-09-08T02:05:20",
"yymm": "1702",
"arxiv_id": "1702.04915",
"language": "en",
"url": "https://arxiv.org/abs/1702.04915",
"abstract": "We study the 2-dimensional uniform prudent self-avoiding walk, which assigns equal probability to all nearest-neighbor self-avoiding paths of a fixed length that respect the prudent condition, namely, the path cannot take any step in the direction of a previously visited site. The uniform prudent walk has been investigated with combinatorial techniques in [Bousquet-Mélou, 2010], while another variant, the kinetic prudent walk has been analyzed in detail in [Beffara, Friedli and Velenik, 2010]. In this paper, we prove that the $2$-dimensional uniform prudent walk is ballistic and follows one of the $4$ diagonals with equal probability. We also establish a functional central limit theorem for the fluctuations of the path around the diagonal.",
"subjects": "Probability (math.PR)",
"title": "Scaling limit of the uniform prudent walk",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9790357585701875,
"lm_q2_score": 0.8080672089305841,
"lm_q1q2_score": 0.7911266928710486
} |
https://arxiv.org/abs/2211.14807 | Universal convex covering problems under translation and discrete rotations | We consider the smallest-area universal covering of planar objects of perimeter 2 (or equivalently closed curves of length 2) allowing translation and discrete rotations. In particular, we show that the solution is an equilateral triangle of height 1 when translation and discrete rotation of $\pi$ are allowed. Our proof is purely geometric and elementary. We also give convex coverings of closed curves of length 2 under translation and discrete rotations of multiples of $\pi/2$ and $2\pi/3$. We show a minimality of the covering for discrete rotation of multiples of $\pi/2$, which is an equilateral triangle of height smaller than 1, and conjecture that the covering is the smallest-area convex covering. Finally, we give the smallest-area convex coverings of all unit segments under translation and discrete rotations $2\pi/k$ for all integers $k\ge 3$. | \section{Introduction}
Given a (possibly infinite) set $S$ of planar objects and a group $G$
of geometric transformations, a $G$-covering\xspace $K$ of $S$ is a region such that
every object in $S$ can be contained in $K$ by transforming the object with a suitable transformation $g \in G$.
Equivalently, every object of $S$ is contained in $g^{-1}K$
for a suitable transformation $g \in G$.
That is,
\[ \forall \gamma \in S,\; \exists g \in G \:\text{such that}\: g \gamma \subseteq K.\]
We denote the group of planar translations by $T$ and that of planar translations and rotations by $TR$. Mathematically, $TR = T \rtimes R$ is the semidirect product of $T$ and the rotation group $R = SO(2,\mathbb{R})$.
We often simply write coverings\xspace for $G$-coverings\xspace when $G$ is clear from the context.
The problem of finding a smallest-area covering\xspace is a classical problem in mathematics,
and such a covering is often called a {\em universal covering\xspace}.
In the literature, the cases where $G = T$ or $G=TR$ have been widely studied.
The universal covering\xspace problem has attracted many mathematicians.
Henri Lebesgue (in his letter to J. P\'{a}l in 1914) proposed a problem to find
the smallest-area convex $TR$-covering\xspace of all objects of unit diameter (see \cite{BMP2005,BBG2015,G2018} for its history).
Soichi Kakeya considered in 1917 the $T$-covering\xspace of the set $\ensuremath{S_{\textsf{seg}}}$ of all unit line segments (called needles)~\cite{Kakeya}.
Precisely, his formulation is to find the smallest-area region in which
a unit-length needle can be turned round, but it is equivalent to the covering\xspace problem if the covering\xspace is convex~\cite{Bae2018}.
Originally, Kakeya considered the convex covering\xspace, and Fujiwara conjectured that the equilateral triangle of height 1 is the solution. The conjecture was affirmatively solved by P\'{a}l in 1920 \cite{Pal}.
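For a quick numerical illustration of the covering property (not of minimality): a unit segment of direction $u$ fits in a convex body $K$ under translation if and only if $u$ lies in the difference body $K+(-K)$, and for the equilateral triangle of height $1$ this difference body is a regular hexagon of inradius exactly $1$. The following Python sketch (ours; function names are our own) verifies this, so every unit needle fits under translation, and in the directions realizing the inradius the fit is tight.
\begin{verbatim}
import math
from itertools import product

def convex_hull(pts):
    # Andrew's monotone chain; returns hull vertices in counterclockwise order
    pts = sorted(set(pts))
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and ((h[-1][0]-h[-2][0])*(p[1]-h[-2][1])
                                   - (h[-1][1]-h[-2][1])*(p[0]-h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(pts[::-1])
    return lower[:-1] + upper[:-1]

def dist_to_segment(a, b):
    # distance from the origin to the segment with endpoints a and b
    ax, ay = a; bx, by = b
    dx, dy = bx - ax, by - ay
    t = max(0.0, min(1.0, -(ax*dx + ay*dy) / (dx*dx + dy*dy)))
    return math.hypot(ax + t*dx, ay + t*dy)

s = 2 / math.sqrt(3)               # side of the equilateral triangle of height 1
tri = [(0.0, 0.0), (s, 0.0), (s/2, 1.0)]
diffs = [(px - qx, py - qy) for (px, py), (qx, qy) in product(tri, tri)]
hull = convex_hull(diffs)
inr = min(dist_to_segment(hull[i], hull[(i+1) % len(hull)])
          for i in range(len(hull)))
print(len(hull), inr)              # 6 vertices; inradius = 1 up to rounding
\end{verbatim}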
For the nonconvex covering\xspace, Besicovitch~\cite{B1928} gave a construction such that the area can be arbitrarily small.
Generalizing P\'{a}l's result, for any set of $n$ segments, there is a triangle that is a smallest-area convex $T$-covering\xspace of the set, and this triangle can be computed
efficiently in $O( n \log n )$ time~\cite{Ahn2014}.
It is further conjectured that the smallest-area convex $TR$-covering\xspace of a family of triangles is a triangle,
which is shown to be true for some families~\cite{Park2021}.
The problem of finding the smallest-area covering\xspace and convex covering\xspace of the set of all curves of unit length, for $G =TR$, was posed as an open problem by Leo Moser in 1966 \cite{Moser66}.
The problem is still unsolved, and the best lower bound on the smallest area of the convex covering\xspace is
slightly larger than $0.23$, while the best upper bound was about $0.27$ for a long time~\cite{CFG1991,BMP2005}.
Wetzel informally conjectured in 1970 (formally published in~\cite{Wetzel1}) that the $30^\circ$ circular fan of unit radius, which has area $\pi/12 \approx 0.2618$, is a convex $TR$-covering\xspace of all unit-length curves,
and this was recently proved by Paraksa and Wichiramala~\cite{PW2021}.
However, when only translations are allowed,
the equilateral triangle of height $1$ is the smallest-area convex covering\xspace (Corollary~\ref{cor:t.container.worm}), which is the same as the case of
considering the unit line segments.
Corollary~\ref{cor:t.container.worm} is folklore, and the authors could not find a concrete proof of it in the literature.
The fact can be confirmed analytically, since it suffices to consider polyline worms with two legs. Alternatively, we can prove it directly by applying a reflection argument similar to the proof of Theorem~\ref{thm:G2.main}, which is given in the Appendix.
There are many variants of Moser's problem, and they are called {\it Moser's worm problem}.
The history of progress on the topic can be found in an article~\cite{Moser91} by
William Moser (Leo's younger brother), in Chapter D18 in~\cite{CFG1991},
and in Chapter 11.4 in~\cite{BMP2005}.
It is interesting to find a new case of Moser's worm problem with a clean
mathematical solution.
From now on, we focus on the convex $G$-covering\xspace of the set $\ensuremath{S_{\textsf{c}}}$ of all closed curves $\gamma$ of length $2$. Here, we follow the tradition of previous works on this problem to consider length $2$ instead of $1$, since a unit line segment
can be considered as a degenerate convex closed curve of length $2$.
Since the boundary curve of the convex hull $C(\gamma)$ of any closed curve $\gamma$
is not longer than $\gamma$, it suffices to consider only convex curves.
This problem is known to be an interesting but hard variant of Moser's worm problem, and it remains unsolved for $T$ and $TR$ despite substantial efforts in the literature~\cite{FW2011,Wetzel1,Wetzel2,CFG1991,BMP2005}.
As far as the authors know, the smallest-area
convex $TR$-covering\xspace known so far is a hexagon, obtained by clipping two corners of a rectangle,
given by Wichiramala; its area is slightly less than $0.441$~\cite{W2018}. It is also shown that the smallest area is at least $0.39$~\cite{GS2020a}, which has recently been improved to $0.4$~\cite{GS2020b} with the help of computer programs.
The smallest area of the convex $T$-covering\xspace is known to be between $0.620$ and $0.657$~\cite{BMP2005}.
There are some works on coverings\xspace of restricted shapes.
In particular, for triangular coverings\xspace, Wetzel~\cite{Wetzel2,Wetzel3} gave a
complete description: an acute triangle with side lengths $a$, $b$, $c$ and area $X$ is a $T$-covering\xspace (resp. $TR$-covering\xspace) of $\ensuremath{S_{\textsf{c}}}$
if and only if $2 \le \frac{8X^2}{abc}$ (resp. $2 \le \frac{2\pi X}{a+b+c}$). As a consequence, the equilateral triangle of side length $4/3$ (resp. $\frac{2\sqrt{3}}{\pi}$)
is the smallest triangular $T$-covering\xspace (resp. $TR$-covering\xspace) of $\ensuremath{S_{\textsf{c}}}$. Unfortunately, their areas are larger than those of the known smallest-area convex coverings\xspace.
If $H$ is a subgroup of $G$, an $H$-covering\xspace is a $G$-covering\xspace.
Since $T \subset TR$, it is quite reasonable to consider groups $G$
lying between them, that is $T \subset G \subset TR$.
The group $R = SO(2, \mathbb{R})$ is an abelian group, and its finite subgroups
are $Z_k = \{ e^{2i\pi \sqrt{-1} /k} \mid 0 \le i \le k-1 \}$ for $k=1,2,\ldots,$ where $e^{\theta \sqrt{-1}}$ denotes the rotation by angle $\theta$.
In this paper, we consider the coverings\xspace under the action of the group $G _k= T \rtimes Z_k$.
We show that the smallest-area convex $G_2$-covering\xspace of $\ensuremath{S_{\textsf{c}}}$ is the equilateral triangle of height $1$, whose area is $\frac{\sqrt{3}}{3}\approx 0.577.$
A nice feature is that the proof is purely geometric and elementary, assuming P\'{a}l's result mentioned above.
Then, we show that the equilateral triangle with height $\beta = \cos (\pi/12) \approx 0.966$ is a $G_4$-covering\xspace of $\ensuremath{S_{\textsf{c}}}$.
Its area is $\frac{2\sqrt{3} +3 }{12} \approx 0.538675$, and we
conjecture that it is the smallest-area convex $G_4$-covering\xspace.
It is a pleasant surprise that the equilateral triangular covering\xspace becomes the optimal convex covering\xspace
if we consider rotation by $\pi$ ($G_2$-covering\xspace),
and it also seems to be
true if we consider rotations by $\pi/2$ ($G_4$-covering\xspace).
However, the minimum area convex $G_3$-covering\xspace is no longer an equilateral triangle.
We give a convex $G_3$-covering\xspace of $\ensuremath{S_{\textsf{c}}}$
which has area not larger than $0.568$.
Our $G_3$-covering\xspace has area strictly smaller than
the area of the $G_2$-covering\xspace $\triangle_1$, and a bit larger than the area of
the $G_4$-covering\xspace $\triangle_\beta$. Unlike the $G_2$- and $G_4$-coverings\xspace,
the $G_3$-covering\xspace is not \emph{regular} under rotation.
We show that any convex $G_3$-covering\xspace of $\ensuremath{S_{\textsf{c}}}$ which is regular
under rotation by $\pi/2$ or $2\pi/3$ has area strictly larger than
the area of our $G_3$-covering\xspace.
For triangles of perimeter $2$, we give a smaller convex $G_3$-covering\xspace
of them that has area smaller than $0.5634$.
We also determine the set of all
smallest-area convex $G_k$-coverings\xspace of $\ensuremath{S_{\textsf{seg}}}$, which are all triangles.
\medskip
We use $\peri{C}$ to denote the perimeter of a compact set $C$,
and $\peri{\gamma}$ to denote the length of a curve $\gamma$.
The \emph{slope} of a line is the angle swept from the $x$-axis in a counterclockwise direction
to the line, and it is thus in $[0,\pi)$. The slope of a segment is the slope of the line that
extends it.
For two points $p$ and $q$, we use $pq$ to denote
the line segment connecting $p$ and $q$, and by $|pq|$ the length of $pq$.
We use $\ell_{pq}$ to denote the line passing through $p$ and $q$.
\section{Covering under rotation by 180 degrees}
In this section, we show that the smallest-area convex $G_2$-covering\xspace of the set $\ensuremath{S_{\textsf{c}}}$
of all closed curves of length $2$ is the equilateral triangle of height $1$, denoted by $\triangle_1$,
whose area is $\sqrt{3}/3$.
\subsection{The smallest-area covering\xspace and related results}
First, we recall a famous result mentioned in the introduction.
\begin{theorem} [P\'{a}l's theorem for the convex Kakeya problem]
\label{thm:pal.kakeya}
The equilateral triangle $\triangle_1$ is the smallest-area convex $T$-covering\xspace of the set
of all unit line segments.
\end{theorem}
\begin{corollary}\label{cor:G2.area.bound}
The area of a convex $G_2$-covering\xspace of $\ensuremath{S_{\textsf{c}}}$ is at least $\sqrt{3}/3$.
\end{corollary}
\begin{proof}
Observe that all unit line segments are in $\ensuremath{S_{\textsf{c}}}$, and
line segments are stable under the action of rotation by $\pi$. Thus, any convex
$G_2$-covering\xspace of $\ensuremath{S_{\textsf{c}}}$ must be a $T$-covering\xspace of all unit line segments, and the corollary follows from Theorem~\ref{thm:pal.kakeya} (P\'{a}l's theorem).
\end{proof}
By Corollary~\ref{cor:G2.area.bound}, to prove the following main theorem of this section, it suffices to show that any closed curve $\gamma$ in $\ensuremath{S_{\textsf{c}}}$ can be contained in $\triangle_1$ by applying an action of $G_2$.
\begin{theorem}
\label{thm:G2.main}
The equilateral triangle $\triangle_1$ is the smallest-area convex $G_2$-covering\xspace of $\ensuremath{S_{\textsf{c}}}$.
\end{theorem}
Before proving the theorem, we give some direct implications of it.
An object $P$ is centrally symmetric if $-P = P$, where $-P = \{-x\mid x \in P\}$. Let $\ensuremath{S_{\textsf{sym}}}$ be the set of all centrally symmetric closed curves of length $2$.
\begin{corollary}
\label{cor:equi.tcover.sym}
The equilateral triangle $\triangle_1$ is the smallest-area convex $T$-covering\xspace of $\ensuremath{S_{\textsf{sym}}}$.
\end{corollary}
\begin{proof}
$\ensuremath{S_{\textsf{sym}}}$ contains all unit segments. Thus, from Theorem~\ref{thm:pal.kakeya} (P\'{a}l's theorem),
the smallest convex $T$-covering\xspace of $\ensuremath{S_{\textsf{sym}}}$ has area at least
the area of $\triangle_1$ as in the proof of Corollary~\ref{cor:G2.area.bound}.
On the other hand, $\ensuremath{S_{\textsf{sym}}} \subset \ensuremath{S_{\textsf{c}}}$, and by Theorem~\ref{thm:G2.main}, any curve in $\ensuremath{S_{\textsf{sym}}}$ can be contained in $\triangle_1$ by applying a suitable transformation in $G_2$.
Since a centrally symmetric object is stable under the action of $Z_2$, it can then be contained in $\triangle_1$ by applying a transformation in $T$ alone. Thus, $\triangle_1$ is the
smallest-area convex $T$-covering\xspace of $\ensuremath{S_{\textsf{sym}}}$.
\end{proof}
We can consider two special cases shown below.
Corollary~\ref{cor:equi.tcover.sym} implies that $\triangle_1$ is a $T$-covering\xspace of
all rectangles of perimeter $2$, and also of all parallelograms of perimeter $2$.
Since a unit line segment is a degenerate rectangle and a degenerate parallelogram of perimeter $2$, we have the following corollary.
\begin{corollary}
The equilateral triangle $\triangle_1$ is the smallest-area convex $T$-covering\xspace of the set of all rectangles of perimeter $2$, and also of the set of all parallelograms of perimeter $2$.
\end{corollary}
We can also obtain the following well-known result about the covering\xspace of
the set $\ensuremath{S_{\textsf{worm}}}$ of all curves of length $1$ (often called {\it worms}) mentioned in the introduction as a corollary.
\begin{corollary} \label{cor:t.container.worm}
The equilateral triangle $\triangle_1$ is the smallest convex $T$-covering\xspace of $\ensuremath{S_{\textsf{worm}}}$.
\end{corollary}
\begin{proof}
Given a curve $\zeta$ in $\ensuremath{S_{\textsf{worm}}}$, we consider the rotated copy $-\zeta$ by angle $\pi$. Suitably translated, we can form a closed curve $\gamma(\zeta)$ by connecting them at their
endpoints.
Then $\gamma(\zeta)\in \ensuremath{S_{\textsf{sym}}}$, and can be contained in $\triangle_1$
under translation. Therefore, $\zeta$ can also be contained there. The corollary follows from Theorem~\ref{thm:pal.kakeya} (P\'{a}l's theorem).
\end{proof}
We can also prove Corollary~\ref{cor:t.container.worm} directly by applying a reflection argument similar to the proof of Theorem~\ref{thm:G2.main}, which is given in the Appendix.
\begin{figure}[bh]
\centering
\includegraphics[scale=.8]{tiling.pdf}
\caption{Tiling argument. (a) $H_\gamma=L_0\cap L_{\pi/3}\cap L_{2\pi/3}$. (b) A tiling of six copies $H_\gamma(1),\ldots,H_\gamma(6)$ of $H_\gamma$. Any closed curve that touches every side of $H_\gamma$ is not shorter than the dashed (blue) segment of length at least $d_0+d_{\pi/3}+d_{2\pi/3}$.}
\label{fig:tiling}
\end{figure}
\subsection{Proof of Theorem~\ref{thm:G2.main}}
A slab is the region bounded by two parallel lines in the plane, and
its width is the distance between the two bounding lines.
For a closed curve $\gamma$ of $\ensuremath{S_{\textsf{c}}}$, let $L_\theta$
be the minimum-width slab of orientation $\theta$ with
$0\leq\theta<\pi$ containing $\gamma$. We denote the width of $L_\theta$ by $d_\theta$.
\begin{lemma}
\label{lem:tiling}
For a closed curve $\gamma$ of $\ensuremath{S_{\textsf{c}}}$, $d_0+d_{\pi/3}+d_{2\pi/3}\leq 2$
for slabs $L_0$, $L_{\pi/3}$, and $L_{2\pi/3}$
of $\gamma$.
\end{lemma}
\begin{proof}
Let $H_\gamma$ be the hexagon which is the intersection of $L_0$, $L_{\pi/3}$ and $L_{2\pi/3}$ of $\gamma$. See Figure~\ref{fig:tiling}(a).
Let $e_1,\ldots, e_6$ be
the edges of $H_\gamma$ in counterclockwise order. We can obtain
a tiling of six copies $H_\gamma(1),\ldots, H_\gamma(6)$
of $H_\gamma$ such that $H_\gamma(1)=H_\gamma$ and $H_\gamma(k+1)$ is the copy of
$H_\gamma(k)$ reflected about $e_k$ of $H_\gamma(k)$
for $k$ from $1$ to $5$
as shown in Figure~\ref{fig:tiling}(b).
We observe that the length of a closed curve
that touches every side of $H_\gamma$ is at least $d_0+d_{\pi/3}+d_{2\pi/3}$. See Figure~\ref{fig:tiling}(b).
Since $\gamma$ touches every side of $H_\gamma$
and the length of $\gamma$ is $2$ from the definition of $\ensuremath{S_{\textsf{c}}}$,
the lemma follows.
\end{proof}
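As a quick numerical sanity check of Lemma~\ref{lem:tiling} (an illustrative sketch only, assuming NumPy is available; it is not part of the formal argument), the following Python snippet samples random convex closed curves rescaled to perimeter $2$ and verifies that $d_0+d_{\pi/3}+d_{2\pi/3}\le 2$ holds in each case.
\begin{verbatim}
# Illustrative check of the tiling lemma: for random convex closed
# curves of perimeter 2, verify d_0 + d_{pi/3} + d_{2pi/3} <= 2.
import numpy as np

def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def convex_hull(pts):
    # Andrew's monotone chain; returns hull vertices as an array.
    pts = sorted(map(tuple, pts))
    hull = []
    for seq in (pts, pts[::-1]):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        hull += h[:-1]
    return np.array(hull)

def width(poly, theta):
    # Width of the minimum slab of orientation theta containing poly:
    # extent of the projections onto the slab's unit normal.
    n = np.array([-np.sin(theta), np.cos(theta)])
    proj = poly @ n
    return proj.max() - proj.min()

rng = np.random.default_rng(0)
for _ in range(1000):
    poly = convex_hull(rng.normal(size=(12, 2)))
    per = sum(np.linalg.norm(poly[i] - poly[i-1]) for i in range(len(poly)))
    poly *= 2.0 / per                       # rescale to perimeter 2
    s = sum(width(poly, t) for t in (0.0, np.pi/3, 2*np.pi/3))
    assert s <= 2.0 + 1e-9
print("d_0 + d_{pi/3} + d_{2pi/3} <= 2 held for all samples")
\end{verbatim}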
\begin{figure}[h]
\centering
\includegraphics[scale=.8]{H_containment.pdf}
\caption{
(a) $P_\gamma=L_{\pi/3}\cap L_{2\pi/3}$, and $h=d_{\pi/3}+d_{2\pi/3}$.
(b) $h+d=2$.
}
\label{fig:H_containment}
\end{figure}
\begin{lemma}\label{lem:G2.Sc}
Any closed curve $\gamma$ of $\ensuremath{S_{\textsf{c}}}$ can be contained in $\triangle_1$ or $-\triangle_1$ under translation.
\end{lemma}
\begin{proof}
For a closed curve $\gamma$ of $\ensuremath{S_{\textsf{c}}}$, let $P_\gamma$ be the parallelogram which is the intersection of $L_{\pi/3}$ and $L_{2\pi/3}$ of $\gamma$.
Then, the height $h$ of $P_\gamma$ is $d_{\pi/3}+d_{2\pi/3}$ (Figure~\ref{fig:H_containment}(a)).
If $h\leq1$, then $P_\gamma$ can be contained in both $\triangle_1$ and $-\triangle_1$ under translation.
If $h>1$, consider two horizontal lines $\ell_t$ and $\ell_b$ such that
$\ell_t$ lies above the bottom corner of $P_\gamma$ at distance $1$
and $\ell_b$ lies below the top corner of $P_\gamma$ at distance $1$.
For the distance $d$ between $\ell_t$ and $\ell_b$, $d+h$ equals $2$ (Figure~\ref{fig:H_containment}(b)).
With Lemma~\ref{lem:tiling}, we have
$d_0+d_{\pi/3}+d_{2\pi/3}=d_0+h\leq 2$, so $d\ge d_0$.
This implies that $L_0$ does not contain both of the lines $\ell_b$ and $\ell_t$,
so $L_0$ lies above $\ell_b$ or below $\ell_t$.
If $L_0$ lies above $\ell_b$, then $\gamma$ can be contained in $\triangle_1$ under translation,
and if $L_0$ lies below $\ell_t$, then $\gamma$ can be contained in $-\triangle_1$ under translation.
\end{proof}
For a closed curve $\gamma$, either $\gamma$ or $-\gamma$ can be
contained in $\triangle_1$ under translation by Lemma~\ref{lem:G2.Sc},
so we get Theorem~\ref{thm:G2.main}.
\section{Covering under rotation by 90 degrees}
In this section, we consider the convex $G_4$-coverings\xspace of the sets $\ensuremath{S_{\textsf{seg}}}$ and
$\ensuremath{S_{\textsf{c}}}$, where $G_4 = T \rtimes Z_4$. We show that the smallest-area
convex $G_4$-covering\xspace of $\ensuremath{S_{\textsf{seg}}}$ is the isosceles right triangle with legs
(the two equal-length sides) of length $1$.
For $\ensuremath{S_{\textsf{c}}}$, we give an equilateral triangle of height slightly smaller than $1$ as
a minimal convex $G_4$-covering\xspace.
\subsection{Covering of unit line segments}
First, we consider the set $\ensuremath{S_{\textsf{seg}}}$ of all unit segments.
\begin{theorem}\label{thm:G4.seg}
The smallest-area convex $G_4$-covering\xspace of all unit segments
has area $1/2$, and it is uniquely attained, up to taking closure, by the isosceles right triangle with legs of length $1$.
\end{theorem}
\begin{proof}
Let \irtriangle{}\ \ be the isosceles right triangle with legs of length $1$ and
base of slope $\pi/4$.
Any unit segment of slope $\theta$ can be placed in \irtriangle{}\ \
for $0\leq \theta\leq \pi/2$. Thus, \irtriangle{}\ \
is a $G_4$-covering\xspace of $\ensuremath{S_{\textsf{seg}}}$, and its area is $1/2$.
Now we show that any convex $G_4$-covering\xspace of $\ensuremath{S_{\textsf{seg}}}$
has area at least $1/2$.
Let $X$ be a smallest-area convex $G_4$-covering\xspace of $\ensuremath{S_{\textsf{seg}}}$,
and let $s(\theta)$ be a unit segment of slope $\theta$.
Let $A$ be the set of angles $\theta$ such that $s(\theta)$ can be placed in $X$ under translation with $0 \le \theta <\pi$.
Let $A^{-}= A \cap [0, \pi/2)$ and
$A^{+}=\{\theta-\pi/2\mid \theta\in A\cap [\pi/2, \pi)\}$.
If $A^{-} \cap A^{+} \ne \varnothing$,
then $X$ contains two unit segments which are orthogonal to each other and also contains their convex hull.
Thus, the area of $X$ is at least $1/2$.
So we assume that $A^{-} \cap A^{+} = \varnothing$.
Since $X$ is a $G_4$-covering\xspace of $\ensuremath{S_{\textsf{seg}}}$, $A^{-} \cup A^{+} = [0, \pi/2)$.
If $A^{-} = \varnothing$, then $A^{+}=[0,\pi/2)$ and
there is a sequence $\{\theta_k\}_{k=1}^{\infty}$ in $A^{+}$
such that $\lim_{k\to\infty}\theta_k=\pi/2$.
Since $X$ contains a unit segment $s(\pi/2)$ and a unit segment $s(\theta_k+\pi/2)$
for any $k$, $X$ contains their convex hull.
Since $\lim_{k\to\infty}\theta_k=\pi/2$,
the area of $X$ is at least $1/2$.
Similarly, we can prove this for the case $A^{+} = \varnothing$.
So we assume that $A^{-} \ne \varnothing$ and $A^{+} \ne \varnothing$.
Then there are two cases: either $A^{-}$ contains no interval or
$A^{-}$ contains an interval.
Consider the case that $A^{-}$ contains no interval.
Let $I(x, \epsilon)$ be the open interval in $[0,\pi/2)$
centered at an angle $x$ with radius
$\epsilon$, and let $\theta'$ be an angle in $A^{-}$.
Since $A^{-}$ contains no interval,
there is a sequence $\{\theta'_k\}_{k=1}^{\infty}$ such that $\theta'_k \in I(\theta', 1/k) \cap A^+$ for each $k$, and $\lim_{k\to\infty}\theta'_k=\theta'$.
Since $X$ contains a unit segment $s(\theta')$ and a unit segment
$s(\theta'_k+\pi/2)$ for any $k$,
$X$ contains their convex hull.
Thus, the area of $X$ is at least $1/2$.
Consider now the case that $A^{-}$ contains an interval.
Since $A^{+} \ne \varnothing$, there is an interval in $A^-$
with an endpoint $\bar{\theta}$, other than $0$ and $\pi/2$.
Then there is a sequence $\{\bar{\theta}_k\}_{k=1}^{\infty}$ in $A^-$
such that $\lim_{k\to\infty}\bar{\theta}_k=\bar{\theta}$.
Since $\bar{\theta}$ is an endpoint of the interval in $A^-$,
$I(\bar{\theta}, 1/n) \cap A^+\ne\varnothing$ for any $n$.
So there is a sequence $\{\bar{\theta}'_n\}_{n=1}^\infty$
such that $\bar{\theta}'_n \in I(\bar{\theta}, 1/n) \cap A^+$ for each $n$,
and $\lim_{n\to\infty}\bar{\theta}'_n=\bar{\theta}$.
Since $X$ contains unit segments $s(\bar{\theta}_k)$ and $s(\bar{\theta}'_n+\pi/2)$ for any $k$ and any $n$,
$X$ contains their convex hull.
Thus, the area of $X$ is at least $1/2$.
We now show that the smallest-area convex $G_4$-covering\xspace of $\ensuremath{S_{\textsf{seg}}}$ is unique up to taking closure; accordingly, we assume that $X$ is compact.
First, we prove that if there is a sequence of angles in $A$
converging to $\theta\in[0,\pi)$, then $X$ contains a unit segment $s(\theta)$.
Consider a sequence $\{\theta_k\}_{k=1}^\infty$ in $A$ such that
$\lim_{k\to\infty}\theta_k=\theta$.
Then there is a sequence $\{s(\theta_k)\}_{k=1}^\infty$ of unit segments
$s(\theta_k)$ contained in $X$.
Since $X$ is compact,
there is a subsequence $\{s(\theta_{k_p})\}_{p=1}^\infty$
that converges to a unit segment $s(\theta)$.
Since $X$ is closed, $s(\theta)$ is contained in $X$.
By the argument in the previous paragraphs,
there is an angle $\tilde{\theta} \in [0, \pi/2)$
such that $X$ contains unit segments $s(\tilde{\theta})$ and
$s(\tilde{\theta}+\pi/2)$, orthogonal to each other.
Moreover, if $X$ is a covering\xspace of area $1/2$,
$X$ is the convex hull $\ensuremath{\Phi}$ of $s(\tilde{\theta})$ and $s(\tilde{\theta}+\pi/2)$.
There are two cases for $\ensuremath{\Phi}$: it is either a convex quadrilateral or a triangle of height $1$ and base length $1$.
Consider the case that $\ensuremath{\Phi}$ is a convex quadrilateral.
Then both $\tilde{\theta}$ and $\tilde{\theta}+\pi/2$ are
isolated points in $A$.
Thus, $\ensuremath{\Phi}$ is not a $G_4$-covering\xspace of $\ensuremath{S_{\textsf{seg}}}$.
Consider the case that $\ensuremath{\Phi}$ is a triangle of height $1$ and base length $1$.
Without loss of generality, assume that $s(\tilde{\theta})$
is the base of $\ensuremath{\Phi}$ and $s(\tilde{\theta}+\pi/2)$
corresponds to the height of $\ensuremath{\Phi}$.
Suppose that $\ensuremath{\Phi}$ is an acute triangle.
Since $s(\tilde{\theta})$ is the base of $\ensuremath{\Phi}$,
$\tilde{\theta}$ is an isolated point in $A$.
Observe that $s(\tilde{\theta}+\pi/2)$ can be rotated infinitesimally
around the corner opposite to the base in both directions
while it is still contained in $\ensuremath{\Phi}$.
Thus, $\tilde{\theta}+\pi/2$ is an interior point of an interval $I$ in $A$.
So for an endpoint $\hat{\theta}$ of $I$ other than $\tilde{\theta}$,
$\ensuremath{\Phi}$ contains unit segments $s(\hat{\theta})$ and $s(\hat{\theta}+ \pi/2)$.
The convex hull $\ensuremath{\Phi}'$ of the two segments is also a convex quadrilateral or
a non-obtuse triangle of area at least $1/2$.
Since $\ensuremath{\Phi}'$ is contained in $\ensuremath{\Phi}$ of area $1/2$, $\ensuremath{\Phi}'=\ensuremath{\Phi}$,
that is, $\ensuremath{\Phi}'$ must be an acute triangle with base $b$ of length 1.
Since $\hat{\theta}\neq\tilde{\theta}$, $b$ is not the base of $\ensuremath{\Phi}$,
and $b$ must be one of the other two sides of $\ensuremath{\Phi}$.
But both of these sides are clearly longer than $1$, the height of $\ensuremath{\Phi}$,
a contradiction. Hence $\ensuremath{\Phi}$ is not an acute triangle.
Thus, $\ensuremath{\Phi}$ is the isosceles right triangle, and it is the unique
$G_4$-covering\xspace of $\ensuremath{S_{\textsf{seg}}}$ under closure.
\end{proof}
\subsection{An equilateral triangle covering\xspace of closed curves}
Unfortunately, the isosceles right triangle with legs of length $1$
is not a $G_4$-covering\xspace of $\ensuremath{S_{\textsf{c}}}$, since it does not contain a circle
of perimeter $2$.
Using the convexity of the area function on the convex hull of two translated convex objects~\cite{Ahn2012},
it can be shown that
the convex hull of the union of the isosceles right triangle and any translated copy of the disk has area at least $0.543$.
Naturally, the equilateral triangle of height $1$ is a $G_4$-covering\xspace of $\ensuremath{S_{\textsf{c}}}$, since it is a $G_2$-covering\xspace of $\ensuremath{S_{\textsf{c}}}$.
However, a smaller equilateral triangle can be a $G_4$-covering\xspace of $\ensuremath{S_{\textsf{c}}}$, and
we seek the smallest one, which is conjectured to be the smallest-area
$G_4$-covering\xspace of $\ensuremath{S_{\textsf{c}}}$.
Consider an equilateral triangle that is a $G_4$-covering\xspace of $\ensuremath{S_{\textsf{c}}}$. Since it is a $G_4$-covering\xspace of $\ensuremath{S_{\textsf{seg}}}$, it must contain a pair of orthogonally crossing unit segments as shown in the proof of Theorem~\ref{thm:G4.seg}.
Figure~\ref{fig:G4-tri-beta-a} shows a smallest equilateral triangle containing such
a pair.
The side length of the triangle is $ \frac{1 + \sqrt{3}}{\sqrt{6}} \approx 1.115$
and the height is $\beta = \cos (\pi /12) = \frac{1+ \sqrt{3}}{2 \sqrt{2}} \approx 0.966$.
Its area is $\frac{2\sqrt{3} +3 }{12} \approx 0.538675$.
We denote this triangle by $\triangle_{\beta}$.
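The stated dimensions of $\triangle_{\beta}$ can be verified symbolically; the following SymPy sketch (an illustration assuming SymPy is available, not part of the argument) checks the height, side length, and area formulas.
\begin{verbatim}
# Illustrative check of the dimensions of Delta_beta.
import sympy as sp

beta = sp.cos(sp.pi / 12)            # height of Delta_beta
side = 2 * beta / sp.sqrt(3)         # side of equilateral triangle of height beta
area = sp.sqrt(3) * side**2 / 4      # standard area formula

assert beta.equals((1 + sp.sqrt(3)) / (2 * sp.sqrt(2)))
assert side.equals((1 + sp.sqrt(3)) / sp.sqrt(6))
assert area.equals((2 * sp.sqrt(3) + 3) / 12)
print(float(side), float(beta), float(area))  # 1.115..., 0.9659..., 0.538675...
\end{verbatim}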
\begin{figure}[ht]
\centering
\includegraphics[scale=.9]{G4-tri-beta-a.pdf}
\caption{The smallest equilateral triangle $\triangle_\beta$ containing a pair of orthogonal unit segments.
}
\label{fig:G4-tri-beta-a}
\end{figure}
\begin{theorem}\label{thm:G4.main}
The equilateral triangle $\triangle_{\beta}$ is a convex $G_4$-covering\xspace of
all closed curves of length $2$.
\end{theorem}
\begin{definition}
\label{def:brimful}
A closed curve $\gamma$ is called $G$-brimful\xspace for a set $\Lambda$
if it can be placed in $\Lambda$
but cannot be placed in any smaller scaled copy of $\Lambda$
under a $G$ transformation.
\end{definition}
To prove Theorem~\ref{thm:G4.main}, we consider the following claim.
\begin{claim}
The length of any $G_4$-brimful\xspace curve for $\triangle_{\beta}$ is at least $2$.
\end{claim}
The claim implies that any closed curve $\gamma$ of length smaller than $2$
can be placed in $\triangle_{\beta}$ under a $G_4$ transformation:
$\gamma$ is $G_4$-brimful\xspace for some scaled copy $a\triangle_{\beta}$, and applying the claim
to the rescaled curve $\gamma/a$, which is $G_4$-brimful\xspace for $\triangle_{\beta}$, gives
$\peri{\gamma}\ge 2a$, so $a<1$.
By a limiting argument, every closed curve of length $2$ can then be placed in
$\triangle_{\beta}$ under a $G_4$ transformation, so Theorem~\ref{thm:G4.main} holds.
Thus, we will prove the claim.
Let $\gamma$ be a $G_4$-brimful\xspace curve for $\triangle_{\beta}$.
Let
$g^i =e^{2i\pi \sqrt{-1} /4}\in Z_4$ for $i=0,\ldots,3$.
We consider the rotated copy
$g^i \triangle_\beta$ of $\triangle_\beta$,
and find the smallest scaled copy $\Upsilon_i$ of $g^i \triangle_\beta$
which circumscribes $\gamma$ with a proper translation.
Let $a_i$ be the scaling factor; the brimful\xspace property implies that $a_i \ge 1$ for $i=0,1,2,3$ and that their minimum equals $1$.
Also, $\gamma$ touches all three sides of $\Upsilon_i$ for each $i$, where we allow it
to touch two sides simultaneously at the vertex shared by them.
As illustrated in Figure~\ref{fig:G4-peri-same}(a), the intersection $H=H(\gamma) = \bigcap_{i=0}^3 \Upsilon_i$ contains $\gamma$, and
$H$ is a (possibly degenerate)
convex $12$-gon such that $\gamma$ touches all 12 edges of $H$.
The $12$-gon can be degenerate as shown in Figure~\ref{fig:G4-peri-same}(b), where $\gamma$ touches multiple edges simultaneously at vertices
corresponding to degenerate edges.
Note that $H$ consists of edges of slopes $2k\pi/12\; (\bmod\; \pi)$
for $k=0,\ldots,11$.
The following lemma states that the perimeter of $H$ depends only on the $a_i$ $(i=0,1,2,3)$.
\begin{figure}[hb]
\centering
\includegraphics[scale=.8]{G4-peri-same.pdf}
\caption{
(a) $H=\bigcap_{i=0}^3 \Upsilon_i$ is a (possibly degenerate) convex $12$-gon.
$\gamma$ touches every edge of $H$.
(b) An example of a degenerate convex $12$-gon. The degenerate edges
are denoted by black disks.
(c) For $X_{0,2}=\Upsilon_0 \cap \Upsilon_2$ (red) and $X_{1,3}=\Upsilon_1 \cap \Upsilon_3$ (blue), $(X_{0,2} \cup X_{1,3}) \setminus H$ consists of 12 triangles (gray), each of which
is an isosceles triangle with apex angle $2\pi/3$.
}
\label{fig:G4-peri-same}
\end{figure}
\begin{lemma}\label{lem:hexagon.perimeter.bound}
The perimeter $\peri{H}$ equals $\frac{a_0+a_1 + a_ 2 + a_3}{2\cos(\pi/12)}$.
\end{lemma}
\begin{proof}
Let $\delta$ be the side length of $\triangle_{\beta}$.
The intersection $X_{0,2}= \Upsilon_0 \cap \Upsilon_2$ is a hexagon where $\gamma$ touches all of its six (possibly degenerate) edges.
Then, $(\Upsilon_0 \cup \Upsilon_2) \setminus X_{0,2}$ consists of six equilateral triangles, and the total sum of their perimeters equals
$\peri{\Upsilon_0} + \peri{\Upsilon_2} = 3(a_0 + a_2) \delta$.
On the other hand, one edge of each equilateral triangle contributes to
the boundary polygon of $X_{0,2}$. Hence, $\peri{X_{0,2}} = (a_0 + a_2)\delta$. Similarly, the perimeter of $X_{1,3} = \Upsilon_1 \cap \Upsilon_3$ equals $(a_1 + a_3)\delta$.
Now, consider $H = \bigcap_{i=0}^3 \Upsilon_i = X_{0,2} \cap X_{1,3}$. See Figure~\ref{fig:G4-peri-same}(c).
The set $(X_{0,2} \cup X_{1,3}) \setminus H$ consists of 12 triangles, each of which
is an isosceles triangle with apex angle $2\pi/3$. The total sum of the perimeters of these triangles equals $\peri{X_{0,2}} + \peri{X_{1,3}}$,
while the bottom side of each isosceles triangle contributes to the 12-gon $H$.
Since the ratio of the bottom side length to the perimeter of the isosceles triangle is $\frac{\sqrt{3}}{2+ \sqrt{3}}$,
\begin{eqnarray*}
\peri{H} &=& \frac{\sqrt{3}}{2 + \sqrt{3}} (\peri{X_{0,2}} + \peri{X_{1,3}})\\
&=& \frac{\sqrt{3}}{2+ \sqrt{3}}(a_0+a_1+a_2+a_3) \delta\\
&=& \frac{2}{2+\sqrt{3}} \cos(\pi/12)(a_0+a_1+a_2 + a_3)\\
&=& \frac{a_0+a_1+a_2+ a_3} {2\cos(\pi/12)}
\end{eqnarray*}
The third equality comes from $\delta = \frac{2}{\sqrt{3}} \beta = \frac{2}{\sqrt{3}}\cos(\pi /12)$,
and the last equality comes from
$\cos(\pi/12) = \frac{1+ \sqrt{3}}{2 \sqrt{2}} $ and hence $\cos^2(\pi/12) = \frac{2+\sqrt{3}}{4}$.
\end{proof}
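As an illustrative symbolic check of Lemma~\ref{lem:hexagon.perimeter.bound} (assuming SymPy; not part of the proof), the following sketch verifies the final chain of equalities.
\begin{verbatim}
# Illustrative check: with delta = (2/sqrt(3)) cos(pi/12),
# sqrt(3)/(2+sqrt(3)) * (a0+a1+a2+a3) * delta
#   equals (a0+a1+a2+a3) / (2 cos(pi/12)).
import sympy as sp

a0, a1, a2, a3 = sp.symbols('a0 a1 a2 a3', positive=True)
delta = 2 * sp.cos(sp.pi / 12) / sp.sqrt(3)   # side length of Delta_beta
periH = sp.sqrt(3) / (2 + sp.sqrt(3)) * (a0 + a1 + a2 + a3) * delta
target = (a0 + a1 + a2 + a3) / (2 * sp.cos(sp.pi / 12))
assert (periH - target).equals(0)
print("peri(H) = (a0+a1+a2+a3) / (2 cos(pi/12)) verified")
\end{verbatim}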
The following lemma states the relation between the lengths of $\gamma$ and $H$.
\begin{figure}[t]
\centering
\includegraphics[scale=.8]{G4-tri-beta-b.pdf}
\caption{Illustration of the proof of Lemma~\ref{lem:G4.12gon}.
}
\label{fig:G4-tri-beta-b}
\end{figure}
\begin{lemma}\label{lem:G4.12gon}
Let $P$ be a (possibly degenerate) convex $12$-gon with edges
$e_k$ of slopes $2k\pi/12\; (\bmod\; \pi)$
for $k=0,\ldots,11$.
Let $C$ be a circuit that connects $12$ points, one point on each edge of $P$,
in order along the boundary of $P$.
Then $\peri{C} \ge \peri{P} \cos (\pi/12)$.
\end{lemma}
\begin{proof}
Let $p_k$ be the point on $e_k$ for $k=0,\ldots,11$.
The circuit $C$ connects the points from $p_0$ to $p_{12}$ in increasing order of their
indices, where
$p_{12}$ is a duplicate of $p_0$. We also assume that $p_{12}$
is on $e_{12}$ which is a duplicate of $e_0$.
While incrementing $k$ by $1$ from $1$ to $11$,
we reflect the edges $e_{k+1},\ldots, e_{12}$
about the edge $e_k$.
Then, the edges $e_0$,\ldots,$e_{12}$ are transformed to
a zigzag path alternating a horizontal edge and an edge of slope $2\pi/12$,
and $C$ becomes the path
connecting $p_0, p_1, p'_2,\ldots,p'_{12}$, where $p'_k$ is the location
of $p_k$ after the series of reflections for $k=2,\ldots,12$.
Thus, $\peri{C}$ is at least the distance between $p_0$ to $p'_{12}$.
See Figure~\ref{fig:G4-tri-beta-b} for the illustration.
The vector $x = p_0 p'_{12}$ is the sum of a horizontal vector $a$ and
another vector $b$ of slope $\pi/6$ such that $|a| + |b| = \peri{P}$.
The length $|x|$ of $x$ is minimized when $|a|=|b|$,
attaining the value $(|a|+|b|) \cos(\pi/12)$. Thus, $|x| \ge \peri{P} \cos(\pi/12)$
and $\peri{C} \ge \peri{P} \cos(\pi/12)$.
\end{proof}
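The minimization step at the end of the proof of Lemma~\ref{lem:G4.12gon} can be checked numerically; the following sketch (assuming NumPy; illustrative only) verifies that, with $|a|+|b|=L$ fixed and an angle of $\pi/6$ between $a$ and $b$, the length $|a+b|$ is minimized at $|a|=|b|$, where it equals $L\cos(\pi/12)$.
\begin{verbatim}
# Illustrative check of the minimization in Lemma G4.12gon.
import numpy as np

L = 1.0
t = np.linspace(0.0, L, 100001)                  # t = |a|, so |b| = L - t
x = np.hypot(t + (L - t) * np.cos(np.pi / 6),    # horizontal part of a + b
             (L - t) * np.sin(np.pi / 6))        # vertical part of a + b
i = x.argmin()
assert abs(t[i] - L / 2) < 1e-4                  # minimizer at |a| = |b|
assert abs(x[i] - L * np.cos(np.pi / 12)) < 1e-9 # minimum is L cos(pi/12)
print(x[i], L * np.cos(np.pi / 12))
\end{verbatim}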
From Lemma~\ref{lem:G4.12gon}, the following corollary is immediate.
\begin{corollary}\label{cor:perimeter}
For any $G_4$-brimful\xspace curve $\gamma$ for $\triangle_{\beta}$,
$\peri{\gamma} \ge \peri{H} \cos (\pi/12)$, $H = H(\gamma)$.
\end{corollary}
Combining Lemma~\ref{lem:hexagon.perimeter.bound} and Corollary~\ref{cor:perimeter},
we have
$\peri{\gamma} \ge \peri{H} \cos (\pi/12) = (a_0+ a_1 + a_2+ a_3)/2 \ge 2$. The claim is proved.
\subsection{Minimality of the covering\xspace}
\begin{figure}[b]
\centering
\includegraphics[scale=.8]{G4-brimful.pdf}
\caption{Examples of $G_4$-brimful\xspace curves of length 2.}
\label{fig:G4-brimful}
\end{figure}
There are many $G_4$-brimful\xspace curves of length $2$, and some examples are illustrated as red curves (the line segment is considered as a degenerate closed curve) in Figure~\ref{fig:G4-brimful} together with the (possibly degenerate) $12$-gons in which they are inscribed.
They suggest that $\triangle_{\beta}$ is the smallest-area convex $G_4$-covering\xspace of $\ensuremath{S_{\textsf{c}}}$.
\begin{conjecture}
$\triangle_{\beta}$ is the smallest-area convex $G_4$-covering\xspace of $\ensuremath{S_{\textsf{c}}}$.
\end{conjecture}
Although we have not yet proven the conjecture rigorously, we can prove that $\triangle_{\beta}$ is minimal in the
set-theoretic sense. That is, no proper closed subset of $\triangle_{\beta}$ can be a
$G_4$-covering\xspace of $\ensuremath{S_{\textsf{c}}}$.
Without loss of generality, assume that $\triangle_{\beta}$ is located with
its bottom side being horizontal.
Let $S$ be the set of unit line segments that can be contained in $\triangle_{\beta}$
under translation (without considering the $Z_4$-action).
Then, the set of slope angles of the line segments of $S$ is $A=[0, \pi/12] \cup [3\pi/12, 5\pi/12] \cup [7\pi/12,9\pi/12] \cup [11\pi/12, \pi]$. Let $A' = (0, \pi/12) \cup (3\pi/12, 5\pi/12) \cup (7\pi/12,9\pi/12) \cup (11\pi/12, \pi)$ and let $S_{A'}$ be the set of all unit line segments
of slopes in $A'$.
\begin{lemma}\label{lem:covering_Sa'}
$\triangle_{\beta}$ is the smallest-area closed convex $T$-covering\xspace of $S_{A'}$.
\end{lemma}
\begin{proof}
Consider six unit segments of slopes $\pi/12+i\cdot\pi/6$ for $i=0,1,\ldots,5$.
Then any compact convex $T$-covering\xspace of $S_{A'}$ contains
these six unit segments under translation.
Ahn {\em et al.}~\cite{Ahn2014} proved that there exists a triangle that is the smallest-area
convex $T$-covering\xspace of any given set of segments,
and gave an algorithm to construct the triangle.
For the six unit segments,
their algorithm computes the smallest regular hexagon that contains
them under translation. Since the hexagon is the Minkowski symmetrization of $\triangle_{\beta}$, which is $\{\frac{1}{2}(x-y)\mid x,y\in\triangle_{\beta} \}$,
the algorithm returns $\triangle_{\beta} $ as
the smallest-area closed convex $T$-covering\xspace of the six unit segments.
Thus, the lemma follows.
\end{proof}
\begin{proposition}
$\triangle_{\beta}$ is a minimal closed convex $G_4$-covering\xspace of $\ensuremath{S_{\textsf{c}}}$.
\end{proposition}
\begin{proof}
Suppose $P \subseteq \triangle_{\beta}$ is a closed subset and a $G_4$-covering\xspace of $\ensuremath{S_{\textsf{c}}}$.
Observe that each angle $\theta$ in $A'$ has no other angle in
$A$ that is equivalent to $\theta$ under the $Z_4$ action.
Thus, $P$ must contain all line segments of $S_{A'}$ under translation.
Lemma~\ref{lem:covering_Sa'} implies that the area of $P$ is the same as that of $\triangle_{\beta}$.
Therefore, $P$ must be $\triangle_{\beta}$ itself.
\end{proof}
\section{\texorpdfstring{$G_k$}{Gk}-covering\xspace of unit line segments}
Consider the smallest-area convex $G_k$-covering\xspace of
the set $\ensuremath{S_{\textsf{seg}}}$ of all unit line segments.
In general, a smallest-area convex $T$-covering\xspace of
any given set of segments is attained by a triangle~\cite{Ahn2014}.
This implies that there is a triangle that is a smallest-area convex $G_k$-covering\xspace
of $\ensuremath{S_{\textsf{seg}}}$.
The following theorem determines the set of all smallest-area convex $G_k$-coverings\xspace.
\begin{theorem}
If $k \ge 3$ is odd, the smallest area of convex $G_k$-covering\xspace of $\ensuremath{S_{\textsf{seg}}}$ is $\frac{1}{2} \sin (\pi /k) $, and it is attained
by any triangle $\triangle XYZ$ with bottom side $XY$ of length $1$ and
height $\sin( \pi/ k)$ such that $\pi/2 \le \angle X \le (k-1)\pi/k$.
If $k \ge 4 $ is even, the smallest area of convex $G_k$-covering\xspace of $\ensuremath{S_{\textsf{seg}}}$ is $\frac{1}{2} \sin (2\pi/k) $, and it is attained
by any triangle $\triangle XYZ$ with bottom side $XY$ of length $1$ and height
$\sin (2\pi/ k)$ such that $ \pi/2 \le \angle X \le (k-2)\pi / k$.
\end{theorem}
\begin{proof}
We have already seen a proof for $k=4$, and it can be generalized as follows.
Let $\Lambda$ be a smallest-area convex $G_k$-covering\xspace of $\ensuremath{S_{\textsf{seg}}}$.
First, let us consider the case that $k$ is odd.
Since $Z_k$ consists of the rotations by $2i\pi/k$ for $i=0,1,2,\ldots,k-1$,
one of the unit segments of slopes $\theta+ 2i\pi/k\; (\bmod\; \pi )$
must be contained in $\Lambda$
under translation for each angle $\theta$ with $0 \le \theta < 2 \pi/k$.
We define $f(\theta)$ to be the smallest such angle.
As before, we can assume there exists an angle $\bar{\theta}$
such that $f(\bar{\theta}) = \bar{\theta}$.
Let $A = \{ f(\theta) \mid 0 \le \theta < 2\pi/k \} $ be the set of angles, let
$\bar{A}$ be the complement of $A$, and
let $s(\theta)$ be a unit segment of slope $\theta$.
There exists an angle $\tilde{\theta}$ with $0 \le \tilde{\theta} < 2\pi/k $ such that
$\tilde{\theta}$ is contained in both $A$ and the closure of $\bar{A}$.
There is a sequence $\{\tilde{\theta}_n\}_{n=1}^\infty \subseteq A$
such that $\lim_{n \to \infty} \tilde{\theta}_n = \tilde{\theta}+2i\pi/k$ for some $i$.
Then, $\Lambda$ contains two segments $s(\tilde{\theta})$ and $s(\tilde{\theta}_n)$ for any $n$,
and their convex hull $\ensuremath{\Phi}_n$.
Since $\lim_{n \to \infty} \tilde{\theta}_n = \tilde{\theta}+2i\pi/k$,
$\lim_{n \to \infty}\lVert \ensuremath{\Phi}_n \rVert = \frac{1}{2} |\sin (2 i \pi / k )|$.
Since $\frac{1}{2} |\sin (2 i \pi / k )|$ is minimized at $i=(k-1)/2$, the area of $\Lambda$ is at least $\frac{1}{2} \sin (\pi /k)$.
On the other hand,
let $\triangle XYZ$ be a triangle with bottom side $XY$ of length $1$ and
height $\sin( \pi/ k)$ such that $\pi/2 \le \angle X \le (k-1)\pi/k$. Then $\angle Y +\angle Z\ge \pi/k$. We show that any segment of slope $\theta$ with $0\le\theta\le \pi/k$ can be placed
within $\triangle XYZ$.
Any unit line segment of slope $\theta_1$ for $0\le\theta_1\le\angle Y$ can be placed within $\triangle XYZ$
with one endpoint at $Y$, and any unit line segment of slope $\theta_2$ for $\angle Y \le\theta_2\le \max\{\angle Y, \pi/k\}$ can be placed within $\triangle XYZ$ with one endpoint at $Z$.
Thus, any segment of slope $\theta$ with $0\le\theta\le \pi/k$ can be placed
within $\triangle XYZ$.
The case of even $k$ can be proven analogously. The only difference is that $i \ne k/2$,
and the minimum area is attained at $i = k/2 -1$.
Analogously to the odd $k$ case, the area can be minimum only if the convex hull of $s(\theta) \cup s(\theta + 2i \pi/k)$ is a triangle, and
it is routine to derive the
conditions that the triangle is a $G_k$-covering\xspace.
\end{proof}
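For concreteness, the following sketch (plain Python, illustrative only) tabulates the minimal areas given by the theorem for small $k$; note that $k=4$ yields $1/2$, matching Theorem~\ref{thm:G4.seg}.
\begin{verbatim}
# Minimal areas of convex G_k-coverings of S_seg from the theorem:
# (1/2) sin(pi/k) for odd k >= 3, (1/2) sin(2*pi/k) for even k >= 4.
import math

for k in range(3, 13):
    angle = math.pi / k if k % 2 else 2 * math.pi / k
    print(f"k = {k:2d}: smallest area = {0.5 * math.sin(angle):.6f}")
# k = 4 yields 0.500000, the isosceles right triangle with unit legs.
\end{verbatim}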
\section{Covering under rotation by 120 degrees}
\label{sec:contain.g3}
We construct a convex $G_3$-covering\xspace of $\ensuremath{S_{\textsf{c}}}$, denoted by $\Gamma_3$,
as follows. Let $\Gamma$ be the convex region bounded by
$y^2=1+2x$ and $y^2=1-2x$, and containing the origin $O$.
Then $\Gamma_3$ is the convex subregion of $\Gamma$ bounded by
the $x$-axis and the line $y=2/3$.
The area of $\Gamma_3$ is $|\Gamma_3| = 2 \left(\frac{5}{27}+\int_{\frac{5}{18}}^{\frac{1}{2}} \sqrt{1-2x}\, dx \right)=\frac{46}{81}$, which is smaller than $0.5680$.
See Figure~\ref{fig:G3-curve-properties}(a).
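The value $|\Gamma_3|=46/81$ can be verified symbolically; the following SymPy sketch (illustrative only, assuming SymPy) evaluates both the formula above and the direct integral of the horizontal width of $\Gamma_3$ over $y$.
\begin{verbatim}
# Illustrative check that |Gamma_3| = 46/81 in two ways.
import sympy as sp

x, y = sp.symbols('x y')
area_formula = 2 * (sp.Rational(5, 27)
    + sp.integrate(sp.sqrt(1 - 2*x), (x, sp.Rational(5, 18), sp.Rational(1, 2))))
# For 0 <= y <= 2/3 the horizontal width of Gamma_3 is (1 - y^2).
area_direct = sp.integrate(1 - y**2, (y, 0, sp.Rational(2, 3)))
assert area_formula == sp.Rational(46, 81) == area_direct
print(area_formula, float(area_formula))   # 46/81 ~ 0.56790
\end{verbatim}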
\begin{figure}[t]
\centering
\includegraphics[scale=0.8]{G3-curve-properties}
\caption{(a) Illustration of $\Gamma_3$. (b) $H=L_0\cap L_{\pi/3}\cap L_{2\pi/3}$.
}
\label{fig:G3-curve-properties}
\end{figure}
We show that $\Gamma_3$ is a $G_3$-covering\xspace of $\ensuremath{S_{\textsf{c}}}$.
We first show a few properties that we use in the course of the proof.
Let $\Gamma^+$ be the region of $\Gamma$
above the $x$-axis.
We call the boundary segment on the $x$-axis the \emph{bottom side},
the boundary curve on $y=\sqrt{1+2x}$ the \emph{left side}, and the boundary curve
on $y=\sqrt{1-2x}$ the \emph{right side} of $\Gamma^+$.
$\Gamma^+$ is called a {\it church window}, which is a $T$-covering\xspace of
$\ensuremath{S_{\textsf{c}}}$~\cite{BC89}.
The following lemma gives a lower bound on the length of a
$T$-brimful\xspace curve for $\Gamma^+$.
\begin{lemma} \label{lem:G3.curve.length}
Any closed $T$-brimful\xspace curve for $\Gamma^+$ has length at least 2.
\end{lemma}
\begin{proof}
Recall the definition of a $G$-brimful\xspace curve in Definition~\ref{def:brimful}.
Consider a closed $T$-brimful\xspace curve $\gamma$ of minimum length for
$\Gamma^+$. Observe that $\gamma$ touches every side of $\Gamma^+$;
otherwise $\gamma$ could be translated to lie in the interior of $\Gamma^+$
and then placed in a smaller scaled copy of $\Gamma^+$, contradicting brimfulness.
Let $X, Y, Z$ be touching points of the curve with the three sides of $\Gamma^+$,
one point on each side, and consider the triangle $\triangle{XYZ}$.
Since $\triangle{XYZ}$ also touches every side of $\Gamma^+$ and $\peri{\gamma}\ge\peri{\triangle{XYZ}}$, we may assume that $\gamma$ is $\triangle{XYZ}$.
\begin{figure}[b]
\centering
\includegraphics[scale=0.85]{G3-curve-length}
\caption{
Illustration of the proof of Lemma~\ref{lem:G3.curve.length}.
}
\label{fig:G3-curve-length}
\end{figure}
Without loss of generality,
assume that $X$ is on the left side, $Z$ is on the right side,
and $Y$ is on the bottom side of $\Gamma^+$.
If $X$ and $Y$ are both at $(-1/2, 0)$, or $X$ and $Z$ are both at $(0, 1)$,
or $Y$ and $Z$ are both at $(1/2,0)$,
then $\gamma$ degenerates to a line segment traversed twice,
and the length of this segment is at least $1$.
Thus, $\peri{\gamma}\geq 2$.
Now we assume that none of $X,Y$, and $Z$ is on a corner of $\Gamma^+$.
Let $\ell_X$ be the line tangent to the left side of $\Gamma^+$ at $X$.
Let $Z'$ be the point symmetric to $Z$ with respect to the $x$-axis,
and let $\bar{Z}$ be the point symmetric to $Z$ with respect to $\ell_X$.
See Figure~\ref{fig:G3-curve-length}(a).
We claim that $\bar{Z}, X, Y$, and $Z'$ are collinear.
If $X, Y$, and $Z'$ are not collinear, consider the intersection point $Y'$
of $XZ'$ and the bottom side of $\Gamma^+$.
Then, $\peri{\triangle{XY'Z}}<\peri{\triangle{XYZ}}$, since
$|XY'| + |Y'Z| = |XZ'| < |XY|+|YZ'|=|XY|+|YZ|$.
This contradicts the assumption on $\gamma$.
Thus, $X, Y$, and $Z'$ are collinear.
If $\bar{Z}$ is not on the line through $X$ and $Y$,
consider the point $X'$ where $Y \bar{Z}$ intersects $\ell_X$.
Then we have $\peri{\triangle{X'YZ}}<\peri{\triangle{XYZ}}$ because
$|YX'| + |X'Z| = |Y\bar{Z}| < |YX|+|X\bar{Z}|=|YX|+|XZ|$.
For the point $X''$ where $X'Y$ intersects the left side of $\Gamma^+$,
we have $\triangle{X''YZ}\subset \triangle{X'YZ}$.
Therefore, $\peri{\triangle{X''YZ}}<\peri{\triangle{X'YZ}}<\peri{\gamma}$, and
this contradicts the assumption on $\gamma$.
Suppose that $XZ$ is not horizontal.
From the collinearity of $X, \bar{Z},$ and $Z'$,
the reflection of $\ell_{XZ}$ in the tangent line $\ell_X$
is $\ell_{XZ'}$.
Let $X_r$ be the point symmetric to $X$ with respect to the $y$-axis, and
let $X'$ and $X'_r$ be the points symmetric to $X$ and $X_r$ with respect to the $x$-axis.
By the symmetry, $\angle ZXX_r =\angle Z'X'X'_r$.
From the geometry of the parabola,
the reflection of $\ell_{XX_r}$ in the tangent line $\ell_X$
is $\ell_{XX'_r}$.
Therefore, $\angle Z'X'X'_r=\angle Z'XX'_r$, implying that
$X, X', X'_r$ and $Z'$ are on a circle $C$.
See Figure~\ref{fig:G3-curve-length}(b).
Since $C$ passes through $X, X'$ and $X'_r$, the center of $C$
is at the origin. This implies that $X'Z'$ is horizontal, and thus
$XZ$ is also horizontal. This contradicts that $XZ$ is not horizontal.
Since $XZ$ is horizontal, $Y$ is at the origin.
Thus, $\triangle{XYZ}$ is an isosceles triangle with base $XZ$,
and $\peri{\triangle{XYZ}}=2$ by the construction of $\Gamma^+$.
\end{proof}
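The last step of the proof can be verified directly: for $Y$ at the origin and $X$, $Z$ at a common height $t\in(0,1)$ on the two parabolas, the perimeter of $\triangle{XYZ}$ is identically $2$. The following SymPy sketch (illustrative only) checks this at sampled heights.
\begin{verbatim}
# Illustrative check: the inscribed isosceles triangle has perimeter 2.
import sympy as sp

t = sp.symbols('t')
X = sp.Matrix([(t**2 - 1) / 2, t])   # on the left parabola  y^2 = 1 + 2x
Z = sp.Matrix([(1 - t**2) / 2, t])   # on the right parabola y^2 = 1 - 2x
Y = sp.Matrix([0, 0])
peri = (X - Y).norm() + (Z - Y).norm() + (X - Z).norm()
# Symbolically |XY| = |YZ| = (t^2+1)/2 and |XZ| = 1 - t^2 for 0 < t < 1,
# so the perimeter is (t^2+1) + (1-t^2) = 2.
for v in (sp.Rational(1, 10), sp.Rational(1, 3), sp.Rational(9, 10)):
    assert sp.simplify(peri.subs(t, v)) == 2
print("perimeter = 2 at sampled heights")
\end{verbatim}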
The following lemma shows the convexity of the perimeter function
on the convex hull of planar figures under translation.
\begin{lemma}[Theorem 2 of~\cite{Ahn2012}]
\label{lem:peri.convex}
For $k$ compact convex figures $C_i$, $i=1,\ldots,k$, in the plane,
the perimeter of the convex hull of the translates $C_i(r)$, viewed as a function of $r$, is convex,
where $C_i(r):=C_i+r_i$ for a vector $r=(r_1,\ldots,r_k)\in\mathbb{R}^{2k}$
with $r_i\in\mathbb{R}^2$.
\end{lemma}
For compact convex figures with point symmetry in the plane,
we can determine an optimal translation
using the convexity of the perimeter function in Lemma~\ref{lem:peri.convex}.
\begin{lemma} \label{lem:k.segments.convexhull}
For $k$ compact convex figures $C_i$, $i=1,\ldots,k$, each having
point symmetry in the plane, the perimeter of
the convex hull of the translates $C_i(r)$ is minimized
when their centers of symmetry coincide,
where $C_i(r):=C_i+r_i$ for a vector $r=(r_1,\ldots,r_k)\in\mathbb{R}^{2k}$
with $r_i\in\mathbb{R}^2$.
\end{lemma}
\begin{proof}
Without loss of generality, assume that the $k$ compact convex figures
are given with centers all lying at the origin.
Let $r=(r_1,\ldots,r_k)\in\mathbb{R}^{2k}$ be a vector such that
the perimeter of their convex hull of $C_i(r)$ is minimized
among all translation vectors in $\mathbb{R}^{2k}$.
Then $-r=(-r_1,\ldots,-r_k)$ also attains the minimum perimeter,
because the convex hull of the $C_i(-r)$ is the reflection of
the convex hull of the $C_i(r)$ about the origin.
Since the perimeter function is convex by Lemma~\ref{lem:peri.convex}
and $\mathbf{0}=\frac{1}{2}\bigl(r+(-r)\bigr)$,
the convex hull of the $C_i(\mathbf{0})$ also attains the minimum perimeter, where $\mathbf{0}$ denotes the zero vector.
\end{proof}
We are now ready to prove the main result of this section.
\begin{theorem} \label{theorem:curve.2.contain}
$\ensuremath{\Gamma_3}$ is a convex $G_3$-covering\xspace of all closed curves of length $2$.
\end{theorem}
\begin{proof}
By Lemma~\ref{lem:G3.curve.length}, any closed curve of length 2
can be contained in $\Gamma^+$ under translation.
Without loss of generality, assume that $\ensuremath{\Gamma_3}$ is given
as a part of $\Gamma^+$.
Let $C$ be a closed curve of length 2 that is contained in $\Gamma^+$
and touches its bottom side, and let $\bar{C}$ be the convex hull of $C$.
Suppose that $C$ crosses the top side of $\ensuremath{\Gamma_3}$.
Let $s$ be a segment contained in $\bar{C}$ and
connecting the top side and the bottom side of $\ensuremath{\Gamma_3}$
such that the upper endpoint of $s$ lies in the interior of $\bar{C}$.
For each $i=1,2$, let $C_i$ be a rotated and translated copy of $C$ by $2i\pi/3$
such that they are contained in $\Gamma^+$ (by Lemma~\ref{lem:G3.curve.length})
and touch the bottom side of
$\ensuremath{\Gamma_3}$. If $C_1$ or $C_2$ is contained in $\ensuremath{\Gamma_3}$,
$\ensuremath{\Gamma_3}$ is a convex $G_3$-covering\xspace of $C$ and
we are done.
Assume to the contrary that neither $C_1$ nor $C_2$ is contained
in $\ensuremath{\Gamma_3}$. Then both curves cross the top side of $\ensuremath{\Gamma_3}$.
For $i=1,2$, let $s_i$ be a line segment contained in the
convex hull of $C_i$ and
connecting the top side and the bottom side of $\ensuremath{\Gamma_3}$
such that the upper endpoint of $s_i$ lies in the interior of
the convex hull of $C_i$. See Figure~\ref{fig:G3-curve-properties}(a).
Then there is a rotated and translated copy $s'_i$ of $s_i$ by $-2i\pi/3$
such that $s'_i$ is contained in $\bar{C}$.
Let $\ensuremath{\Phi}$ be the convex hull of $s, s'_1$, and $s'_2$.
Since $s, s'_1, s'_2\subset \bar{C}$
and the upper endpoint of $s$ lies in the interior of $\bar{C}$,
$\peri{\ensuremath{\Phi}}<\peri{\bar{C}}\leq \peri{C}=2$.
Now consider a translation of these three segments such that
their midpoints meet at a point, and let $\ensuremath{\Phi}_m$ be the convex hull
of the three translated segments.
By Lemma~\ref{lem:k.segments.convexhull}, $\peri{\ensuremath{\Phi}_m}\le \peri{\ensuremath{\Phi}}$.
Let $L_\theta$ denote the slab of minimum width at orientation $\theta$ for
$0 \leq \theta < \pi$ that contains $\ensuremath{\Phi}_m$.
Let $d_\theta$ be the width of $L_\theta$.
Consider the three slabs $L_0, L_{\pi/3}$, and $L_{2\pi/3}$ of $\ensuremath{\Phi}_m$.
Observe that $d_\theta$ for $\theta=0,\pi/3,2\pi/3$
is at least the height of $\ensuremath{\Gamma_3}$, which is $2/3$.
Let $H=L_0\cap L_{\pi/3}\cap L_{2\pi/3}$ as shown
in Figure~\ref{fig:G3-curve-properties}(b).
Then $\ensuremath{\Phi}_m$ is contained in $H$
and it touches every side of $H$.
Since $H$ is a (possibly degenerate) hexagon,
$\peri{\ensuremath{\Phi}_m}\ge d_0+d_{\pi/3}+d_{2\pi/3} \geq 2$, where the first inequality can be shown by a proper tiling of copies of $H$ as in Lemma~\ref{lem:tiling}. This contradicts $\peri{\ensuremath{\Phi}_m}\le \peri{\ensuremath{\Phi}}<2$.
\end{proof}
Recall that the smallest-area convex $G_2$-covering\xspace $\triangle_1$ and
$G_4$-covering\xspace $\triangle_\beta$ of $\ensuremath{S_{\textsf{c}}}$ are equilateral triangles.
Our $G_3$-covering\xspace, $\ensuremath{\Gamma_3}$, has area smaller than the area of $\triangle_1$,
but a bit larger than the area of $\triangle_\beta$,
as one might expect.
However, it may look odd that $\ensuremath{\Gamma_3}$ is not \emph{regular} under any discrete
rotation while $\triangle_1$ and $\triangle_\beta$ are regular
under rotation by $2\pi/3$.
We show that any convex $G_3$-covering\xspace regular under rotation by
$2\pi/3$ or $\pi/2$ has area strictly larger than the area of $\ensuremath{\Gamma_3}$.
Let $\Lambda$ be a convex $G_3$-covering\xspace which is regular under rotation by $2\pi/3$.
Then $\Lambda$ is a $T$-covering\xspace of all unit segments of
any slope $\theta$ for $0\le\theta<\pi$.
Since $\triangle_1$ is the smallest-area convex $T$-covering\xspace of
the set of all unit segments by Theorem~\ref{thm:pal.kakeya},
the area of $\Lambda$ is at least the area of $\triangle_1$, which is strictly
larger than the area of $\ensuremath{\Gamma_3}$.
\begin{figure}[t]
\centering
\includegraphics[scale=.8]{G3-regular}
\caption{(a) $s_{i\pi/2}$ is contained in $\Lambda'$ for every $i=1, 2, 3$. (b) The convex hull of $s^1$ and $s^2$ is contained in the convex hull $\Phi_1$
of $s, s_{\pi/2}, s_{\pi},$ and $s_{3\pi/2}$.}
\label{fig:G3-regular}
\end{figure}
Let $\bar{\Lambda}$ be a convex $G_3$-covering\xspace which is regular under rotation by $\pi/2$.
Assume that a unit segment $s$ of slope $\pi/4$ is contained in $\bar{\Lambda}$.
Since $\bar{\Lambda}$ is a $G_3$-covering\xspace,
there is a unit segment $s_1$ of one of slopes
$\{0, \pi/3, 2\pi/3\}$ contained in $\bar{\Lambda}$.
Assume that $s_1$ of slope $\pi/3$ is contained in $\bar{\Lambda}$.
Since $\bar{\Lambda}$ is regular under rotation by $\pi/2$, there are
unit segments $s'$ of slope $3\pi/4=\pi/4+\pi/2$ and $s'_1$ of slope
$5\pi/6=\pi/3+\pi/2$ contained in $\bar{\Lambda}$. Thus, the four segments
$s,s',s_1, s'_1$ are contained in $\bar{\Lambda}$.
Let $c$ be the center of the rotation by $\pi/2$ under which $\bar{\Lambda}$ is regular.
Let $\ensuremath{\Phi}$ be the convex hull
of the translated copies of $s, s', s_1, s'_1$ whose midpoints are all at $c$.
We will prove that the area $\|\bar{\Lambda}\|$ of $\bar{\Lambda}$ is at least the area $\|\ensuremath{\Phi}\|$ of $\ensuremath{\Phi}$.
Then $\|\bar{\Lambda}\|\ge\|\ensuremath{\Phi}\|=\sqrt6/4>\|\Gamma_3\|$,
where $\|\Gamma_3\|$ is the area of $\Gamma_3$.
Suppose that the midpoint of $s$ is not at $c$.
Let $s_{i\pi/2}$ be the copy obtained by rotating $s$ around $c$ by $i\pi/2$
for $i=1, 2, 3$.
Since $\bar{\Lambda}$ is regular under the rotation by $\pi/2$,
$s_{i\pi/2}$ is contained in $\bar{\Lambda}$ for every $i =1,2,3$.
Since $\bar{\Lambda}$ is convex, the convex hull $\ensuremath{\Phi}_1$ of $s$ and the segments
$s_{i\pi/2}$ for all $i=1, 2, 3$ is contained in $\bar{\Lambda}$,
and thus $\|\bar{\Lambda}\|\ge\|\ensuremath{\Phi}_1\|$.
See Figure~\ref{fig:G3-regular}(a).
Let $s^1$ be the translated copy
of $s$ such that the midpoint of $s^1$ is at $c$,
and $s^2$ be the copy of $s^1$ rotated by $\pi/2$ around $c$.
Let $\ensuremath{\Phi}_2$ be the convex hull of $s^1$ and $s^2$.
Since $s^1$ is contained in the convex hull of $s$
and $s_\pi$, and $s^2$ is contained in the convex hull of $s_{\pi/2}$ and
$s_{3\pi/2}$, $\ensuremath{\Phi}_2\subset \ensuremath{\Phi}_1$. See Figure~\ref{fig:G3-regular}(b).
Similarly, the convex hull of the translated copy $\bar{s}$
of $s_1$ with midpoint lying at $c$ and the copy of $\bar{s}$ rotated
by $\pi/2$ around $c$ is contained in $\bar{\Lambda}$.
Since $\bar{\Lambda}$ is convex and contains both convex hulls, it contains $\ensuremath{\Phi}$.
Thus, we conclude that
$\|\bar{\Lambda}\|\ge\|\ensuremath{\Phi}\|=\sqrt6/4> 0.6>\|\Gamma_3\|$.
We can show this for $s_1$ of slopes $0$ and $2\pi/3$ contained in $\bar{\Lambda}$
in a similar way.
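The value $\|\ensuremath{\Phi}\|=\sqrt{6}/4$ can be checked numerically: the following sketch (assuming NumPy; illustrative only) computes, with $c$ at the origin, the area of the convex hull of the four centered unit segments of slopes $\pi/4$, $3\pi/4$, $\pi/3$, $5\pi/6$ by the shoelace formula.
\begin{verbatim}
# Illustrative check that ||Phi|| = sqrt(6)/4 > 0.6.
import numpy as np

slopes = [np.pi/4, 3*np.pi/4, np.pi/3, 5*np.pi/6]
pts = np.array([(s * np.cos(a) / 2, s * np.sin(a) / 2)
                for a in slopes for s in (1.0, -1.0)])
order = np.argsort(np.arctan2(pts[:, 1], pts[:, 0]))  # points lie on a circle
x, y = pts[order, 0], pts[order, 1]
area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
assert abs(area - np.sqrt(6) / 4) < 1e-12 and area > 0.6
print(area, np.sqrt(6) / 4)   # 0.61237...
\end{verbatim}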
\section{Covering of triangles under rotation by 120 degrees}
\subsection{Construction}
\label{sec:g3.gt.construction}
Let $\ensuremath{S_{\textsf{t}}}$ be the set of all triangles of perimeter $2$.
We construct a convex $G_3$-covering\xspace of $\ensuremath{S_{\textsf{t}}}$,
denoted by $\ensuremath{\Gamma_{\textsf{t}}}$, from $\ensuremath{\Gamma_3}$ by shaving off some regions
around the top corners.
Consider an equilateral triangle $\triangle=\triangle XYZ$
of perimeter 2 such that side $YZ$ is vertical, $Y$ lies on the bottom side of
$\ensuremath{\Gamma_3}$ and $X$ lies on the left side of $\ensuremath{\Gamma_3}$. See Figure~\ref{fig:G3-tri-construction}(a-b)
for an illustration.
Imagine that $\triangle$ rotates clockwise so that $X$ moves along
the left side and $Y$ moves along the bottom side of $\ensuremath{\Gamma_3}$.
Let $t$ denote the $x$-coordinate of $X$ and $\theta=\angle{XYO}$.
Then, $\tan\theta=\sqrt{\frac{2t+1}{-2t-\frac{5}{9}}}$ and
$Z=\left(\frac{\sqrt{6t+3}+\sqrt{-2t-\frac{5}{9}}+2t}{2},\frac{\sqrt{-6t-\frac{5}{3}}+\sqrt{2t+1}}{2}\right)$ for $t$ varying from $-4/9$ to $-1/3$.
The trajectory of $Z$ forms the top-right boundary of $\ensuremath{\Gamma_{\textsf{t}}}$
that connects the top side and the right side of $\ensuremath{\Gamma_3}$.
Thus, the region of $\ensuremath{\Gamma_3}$ lying above the trajectory is shaved off.
The top-left boundary of $\ensuremath{\Gamma_{\textsf{t}}}$ can be obtained similarly.
Figure~\ref{fig:G3-tri-construction}(c) shows $\ensuremath{\Gamma_{\textsf{t}}}$.
We show that $\ensuremath{\Gamma_{\textsf{t}}}$ is convex by showing that
$\frac{d}{dx}(\frac{dy}{dx})=\frac{d}{dt}(\frac{dy}{dx})/\frac{dx}{dt}\le0$
for $Z = (x(t), y(t))$, and the boundary of $\ensuremath{\Gamma_{\textsf{t}}}$ has a unique tangent
at $t=-4/9$ and $-1/3$.
Since the $x$-coordinate of $Z$ increases as $t$ increases, $\frac{dx}{dt}>0$.
Thus, it suffices to show that $\frac{d}{dt}(\frac{dy}{dx})\le0$ for $t$ with $-4/9\le t \le -1/3$.
Observe that
\begin{equation*}
\frac{dy}{dx} = \frac{-3(-6t-\frac{5}{3})^{-\frac{1}{2}}+(2t+1)^{-\frac{1}{2}}}{3(6t+3)^{-\frac{1}{2}}-(-2t-\frac{5}{9})^{-\frac{1}{2}}+2}.
\end{equation*}
We obtain
\begin{equation*}
f(t):= \frac{d}{dt}\left(\frac{dy}{dx}\right)=\frac{(36t+10)f_1(t)-\sqrt{3}(54+108t)f_2(t)-24}{f_1(t)f_2(t)\{2f_1(t)f_2(t)+\sqrt{3}f_1(t)-3f_2(t)\}^2},
\end{equation*}
where $f_1(t)=\sqrt{-18t-5}$ and $f_2(t)=\sqrt{2t+1}$.
Since $f_1(t)>0$ and $f_2(t)>0$, the denominator of $f(t)$ is positive.
The numerator of $f(t)$ is negative, since each of its three terms is negative for
$-4/9\le t\le -1/3$ (on this interval $36t+10<0$ and $54+108t>0$).
Therefore $\frac{d}{dt}(\frac{dy}{dx})/\frac{dx}{dt}\le0$.
At $t=-4/9$, $\frac{dy}{dx}=0$,
which is the slope of the top side of $\ensuremath{\Gamma_3}$. At $t=-1/3$,
$\frac{dy}{dx}=-\sqrt{3}$, which is
the slope of the tangent to the right side of $\ensuremath{\Gamma_3}$ at the same point.
Thus, $\ensuremath{\Gamma_{\textsf{t}}}$ is convex.
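The convexity argument can be cross-checked numerically; the following sketch (assuming NumPy; illustrative only) samples $dy/dx$ from the formula above along the trajectory of $Z$ and verifies that it decreases on $[-4/9,-1/3]$, with boundary slopes $0$ and $-\sqrt{3}$.
\begin{verbatim}
# Illustrative check that dy/dx is decreasing along the trajectory of Z.
import numpy as np

t = np.linspace(-4/9, -1/3, 10001)
num = -3 * (-6*t - 5/3) ** -0.5 + (2*t + 1) ** -0.5
den = 3 * (6*t + 3) ** -0.5 - (-2*t - 5/9) ** -0.5 + 2
dydx = num / den
assert np.all(np.diff(dydx) < 0)           # dy/dx strictly decreasing
assert abs(dydx[0]) < 1e-9                 # slope 0 at t = -4/9
assert abs(dydx[-1] + np.sqrt(3)) < 1e-9   # slope -sqrt(3) at t = -1/3
print(dydx[0], dydx[-1])
\end{verbatim}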
Now we compute the area of $\ensuremath{\Gamma_{\textsf{t}}}$.
The area that is shaved off from $\ensuremath{\Gamma_3}$ is
\begin{equation*}
2 \left( \frac{2}{3}\left(\frac{13}{18} -\frac{1}{\sqrt{3}} \right) +\int_{\frac{5}{18}}^{\frac{1}{3}} \sqrt{1-2x}\, dx -\int_{\frac{1}{\sqrt{3}}-\frac{4}{9}}^{\frac{1}{3}} g(x)\, dx \right)
= \frac{1}{81}(86-44\sqrt{3}-3\pi),
\end{equation*}
where $g(x)$ is the function whose graph is the curve $\gamma_{DE}$, so that
\begin{equation*}
\int_{\frac{1}{\sqrt{3}}-\frac{4}{9}}^{\frac{1}{3}} g(x)\, dx
= \frac{1}{4} \int_{-\frac{4}{9}}^{-\frac{1}{3}} \left(\sqrt{-6x-\frac{5}{3}}+\sqrt{2x+1}\right)\left(\frac{3}{\sqrt{6x+3}}-\frac{1}{\sqrt{-2x-\frac{5}{9}}}+2\right)\, dx.
\end{equation*}
Thus, $\ensuremath{\Gamma_{\textsf{t}}}$ has area smaller than $0.5634$.
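The closed-form value of the shaved-off area can be cross-checked numerically; the following sketch (assuming NumPy and SciPy; illustrative only) evaluates the integral of $g$ via the parametrization in $t$ and compares with $\frac{1}{81}(86-44\sqrt{3}-3\pi)$, confirming $|\ensuremath{\Gamma_{\textsf{t}}}|\approx 0.563395 < 0.5634$.
\begin{verbatim}
# Illustrative check of the shaved-off area and of |Gamma_t|.
import numpy as np
from scipy.integrate import quad

def yxp(t):
    # y(t) * x'(t) along the trajectory of Z (the substituted integrand).
    return 0.25 * (np.sqrt(-6*t - 5/3) + np.sqrt(2*t + 1)) * \
        (3 / np.sqrt(6*t + 3) - 1 / np.sqrt(-2*t - 5/9) + 2)

under_curve = quad(yxp, -4/9, -1/3)[0]           # area below gamma_DE
parab = quad(lambda x: np.sqrt(1 - 2*x), 5/18, 1/3)[0]
shaved = 2 * ((2/3) * (13/18 - 1/np.sqrt(3)) + parab - under_curve)
closed_form = (86 - 44*np.sqrt(3) - 3*np.pi) / 81
assert abs(shaved - closed_form) < 1e-7
print(46/81 - shaved)    # area of Gamma_t, ~0.563395
\end{verbatim}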
\begin{figure}[t]
\centering
\includegraphics[width=.9\textwidth]{G3-tri-construction}
\caption{Construction. (a) $\ensuremath{\Gamma_3}$. (b) The trajectories of $X$, $Y$ and $Z$. (c) A $G_3$-covering\xspace $\ensuremath{\Gamma_{\textsf{t}}}$ of $\ensuremath{S_{\textsf{t}}}$.}
\label{fig:G3-tri-construction}
\end{figure}
\subsection{Covering of triangles of perimeter 2}
We show that $\ensuremath{\Gamma_{\textsf{t}}}$ is a $G_3$-covering\xspace of all triangles of perimeter 2.
To do this, we first show a few properties
that we use in the course of the proof. Let $A,B,C,D,E$, and $F$ be the boundary points of $\ensuremath{\Gamma_{\textsf{t}}}$
as shown in Figure~\ref{fig:G3-tri-construction}. We denote the boundary curve
of $\ensuremath{\Gamma_{\textsf{t}}}$ from a point $a$ to a point $b$ in clockwise direction along the boundary
by $\gamma_{ab}$.
\begin{figure}[tb]
\centering
\includegraphics[width=\textwidth]{G3-tri-cases-X}
\caption{(a) $\triangle{XYZ}$ lying to the left of $\ell_{OE}$.
There is a copy $\triangle$
that is contained in $\ensuremath{\Gamma_{\textsf{t}}}$. (b) $\triangle{XYZ}$ contained in $\ensuremath{\Gamma_3}$
such that $X$ is at $A$, $Y$ lies to the right of $\ell_{OE}$,
and $Z\in R_2$. (c) $\triangle{XYZ}$ contained in $\ensuremath{\Gamma_3}$
such that $X$ is on $\gamma_{OA}$, $Y$ lies to the right of $\ell_{OE}$,
and $Z\in R_1$.
}
\label{fig:G3-tri-cases-X}
\end{figure}
\begin{lemma} \label{lem:triangle.left.line}
Let $\triangle{XYZ}$ be a triangle of perimeter 2 contained in $\Gamma^+$. If it lies to the left of $\ell_{OE}$
or to the right of $\ell_{OB}$ (including the lines),
$\ensuremath{\Gamma_{\textsf{t}}}$ is a $G_3$-covering\xspace of $\triangle{XYZ}$.
\end{lemma}
\begin{proof}
If $\triangle{XYZ}$ lies to the left of $\ell_{OE}$ (including the line),
then $\triangle{XYZ}$ lies in between $\ell_{OE}$ and the line $\ell$ tangent to
$\Gamma_t$ at $B$.
The two lines have slope $\pi/3$ and they are
at distance $\sqrt{3}/3$.
Thus there are copies of $\triangle{XYZ}$ rotated by $2\pi/3$ and lying in between
$\ell_{BE}$ and $\ell_{AF}$.
Among such copies, let $\triangle$ be the one that touches
$\gamma_{FA}$ from above and $\gamma_{AB}$ from the right.
Since $\peri{\triangle}=2$, $\triangle$ is contained in $\ensuremath{\Gamma_{\textsf{t}}}$ by Lemma~\ref{lem:G3.curve.length}.
See Figure~\ref{fig:G3-tri-cases-X}(a). The case of $\triangle{XYZ}$ lying
to the right of $\ell_{OB}$ can be handled by a copy of the triangle
rotated by $-2\pi/3$.
\end{proof}
In the following, we assume that $\triangle{XYZ}$ is contained in
$\Gamma_3$ but it is not contained in $\ensuremath{\Gamma_{\textsf{t}}}$.
If no corner of $\triangle{XYZ}$ lies to the left of $\ell_{OB}$ or
to the right of $\ell_{OE}$ (including the lines), then the whole triangle lies between the two lines, and $\ensuremath{\Gamma_{\textsf{t}}}$ is a $G_3$-covering\xspace of $\triangle{XYZ}$ by Lemma~\ref{lem:triangle.left.line}.
Thus, we assume that $X$ lies to the left of $\ell_{OB}$
and $Y$ lies to the right of $\ell_{OE}$.
Since $\ensuremath{\Gamma_{\textsf{t}}}$ is convex,
the remaining corner $Z$ of $\triangle{XYZ}$ lies in
$\ensuremath{\Gamma_3}\setminus\ensuremath{\Gamma_{\textsf{t}}}$.
Let $R_{1}$ and $R_{2}$ denote
the left and right regions of $\ensuremath{\Gamma_3}\setminus\ensuremath{\Gamma_{\textsf{t}}}$, respectively,
as shown in Figure~\ref{fig:G3-tri-cases-X}(a).
Translate $\triangle{XYZ}$ leftwards horizontally until $X$ or $Z$ hits the left side of $\Gamma_3$.
If the triangle lies in the left of $\ell_{OE}$ (including the line), we are done by Lemma~\ref{lem:triangle.left.line}.
Thus, we assume that $Z$ is in $R_1\cup R_2$ and $Y$ lies in the right of $\ell_{OE}$.
There are two cases, either $Z\in R_1$ or $Z\in R_2$. See Figure~\ref{fig:G3-tri-cases-X}(b) and (c).
If $Z$ is in $R_2$, then $X$ is at $A$.
\begin{lemma}\label{lem:triangle.R2}
Let $\triangle{XYZ}$ be a triangle contained in $\ensuremath{\Gamma_3}$
such that $X$ is at $A$, $Y$ lies to the right of $\ell_{OE}$,
and $Z\in R_2$. Then $\ensuremath{\Gamma_{\textsf{t}}}$ is a convex $G_3$-covering\xspace of $\triangle{XYZ}$.
\end{lemma}
\begin{proof}
Let $\triangle=\triangle{X'Y'Z'}$
be the copy of $\triangle{XYZ}$ rotated by $2\pi/3$ such that $X'$
lies at $F$. We show that $\triangle$ is contained in $\ensuremath{\Gamma_{\textsf{t}}}$.
Assume to the contrary that $\triangle$ is not contained in $\ensuremath{\Gamma_{\textsf{t}}}$.
Since $\angle ZAF>\pi/6$ and
$F$ is at distance at least 1 from any point on $\gamma_{AB}$,
$Z'$ must be contained in $\ensuremath{\Gamma_{\textsf{t}}}$. See Figure~\ref{fig:G3-tri-lem25}(a) and (b).
\begin{figure}[b]
\centering
\includegraphics[width=\textwidth]{G3-tri-lem25}
\caption{(a) $\triangle{XYZ}$ with $Z\in R_2$.
(b) A rotated copy $\triangle{X'Y'Z'}$ of $\triangle{XYZ}$ by $2\pi/3$ with $X'$ lying at $F$.
(c) If $Y$ lies to the right of $\ell$, then $\peri{\triangle{XYZ}}>\peri{\triangle{XVW}}\ge 2$.
}
\label{fig:G3-tri-lem25}
\end{figure}
If $Y'$ lies on or below $\ell_{BE}$, $\triangle$ is contained in $\ensuremath{\Gamma_{\textsf{t}}}$
by Lemma~\ref{lem:G3.curve.length}.
So assume that $Y'$ lies above $\ell_{BE}$.
Then $Y$ must lie to the right of the line $\ell$ of slope $\pi/3$
and passing through the point of $\gamma_{FO}$ at distance $1/3$ from $F$.
See Figure~\ref{fig:G3-tri-lem25}(c).
Let $H$ be the point at $(0,2/3)$.
Then there is a triangle $\triangle{XVW}$ with $V\in\ell$ and $W\in HE$
such that $\peri{\triangle{XVW}}<\peri{\triangle{XYZ}}$.
Thus, to show a contradiction,
it suffices to show $\peri{\triangle{XVW}}\ge 2$.
For a point $p$, let $\bar{p}$ denote the point symmetric to $p$ with respect to $\ell$.
Then $ |VW|=|V\bar{W}|$.
Since $X$ is at distance at least $7/6$ from any point of $\bar{H}\bar{E}$
and at distance at least $5/6$ from any point of $HE$,
we have $|XV|+|V\bar{W}|\ge|X\bar{W}|\ge 7/6$ and $|WX|\ge 5/6$, so
$\peri{\triangle{XVW}}=|XV|+|VW|+|WX|=|XV|+|V\bar{W}|+|WX| \geq 7/6+5/6=2$.
\end{proof}
We can also show that $\ensuremath{\Gamma_{\textsf{t}}}$ is a $G_3$-covering\xspace of $\triangle{XYZ}$ for
the remaining case of $Z\in R_1$.
\begin{lemma}\label{lem:triangle.R1}
Let $\triangle{XYZ}$ be a triangle contained in $\ensuremath{\Gamma_3}$
such that $X$ is on $\gamma_{OA}$, $Y$ is in the right of $\ell_{OE}$,
and $Z\in R_1$.
Then $\ensuremath{\Gamma_{\textsf{t}}}$ is a convex $G_3$-covering\xspace of $\triangle{XYZ}$.
\end{lemma}
Before proving the lemma, we need a few technical lemmas.
\begin{lemma} \label{lem:iso.rotate1}
Let $\triangle{XYZ}$ be an isosceles triangle of perimeter $2$
such that its base $YZ$ is of length $\geq 2/3$ and
parallel to the bottom side of $\ensuremath{\Gamma_{\textsf{t}}}$, and $X$ lies at $O$.
Then $\triangle{XYZ}$ can be rotated in a clockwise direction
within $\ensuremath{\Gamma_{\textsf{t}}}$ such that $X$ moves along $\gamma_{OA}$ and
$Y$ moves along $\gamma_{EF}$ until $Y$ meets $F$.
\end{lemma}
\begin{figure}[b]
\centering
\includegraphics[width=\textwidth]{G3-tri-properties}
\caption{(a) The convex hull $Q$ of an equilateral triangle $\triangle{XYZ}$ and $p$. (b) Rotated copy $Q(\theta)$ of $Q$ by $\theta$ around $O$. (c) Translated copy $Q_T(\theta)$ of $Q(\theta)$ such that $Y^*\in\gamma_{EF}$ and $X^*\in\gamma_{OA}$.}
\label{fig:G3-tri-properties}
\end{figure}
\begin{proof}
Let $\triangle{XYZ}$ be an equilateral triangle satisfying the conditions
in the lemma statement. Then $|YZ|=2/3$ and $\angle YOF= \pi/3$.
Let $p$ be a point in $\ensuremath{\Gamma_{\textsf{t}}}$ lying below $\ell_{YO}$,
and let $Q$ be the convex hull of $p$ and $\triangle{XYZ}$.
Let $\theta_0$ be the angle $\angle pOF$. See Figure~\ref{fig:G3-tri-properties}(a).
We claim that $Q$ can be rotated within $\ensuremath{\Gamma_{\textsf{t}}}$ in a clockwise direction
such that $Y$ moves along $\gamma_{EF}$
and $X$ moves along $\gamma_{OA}$ until $p$ reaches $\ell_{AF}$.
Let $Q(\theta)=X'Y'Z'p'$ be the rotated copy of $Q$ by $\theta$ with $0\le\theta\le\theta_0$ in clockwise
direction around $O$ such that each corner $\kappa'$ of $Q(\theta)$
corresponds to the rotated point of $\kappa$ for $\kappa\in\{X,Y,Z,p\}$.
Let $Q_T(\theta)=X^*Y^*Z^*p^*$ be the translated copy of $Q(\theta)$ such that $Y^*\in\gamma_{EF}$, $X^*\in\gamma_{OA}$, and each corner
$\kappa^*$ of $Q_T(\theta)$
corresponds to the translated point of $\kappa'$ for $\kappa'\in\{X',Y',Z',p'\}$.
See Figure~\ref{fig:G3-tri-properties}(b) and (c) for an illustration.
Since $\triangle{XYZ}$ is an equilateral triangle,
$Z^*$ is contained in $\ensuremath{\Gamma_{\textsf{t}}}$
as shown in Section~\ref{sec:g3.gt.construction}.
Thus, we show that $p^*\in\ensuremath{\Gamma_{\textsf{t}}}$.
We may assume that $p \in \gamma_{EF}$.
Let $\bar{p}$ be the intersection between
the horizontal line through $p^*$ and $\gamma_{EF}$.
We show that $f(\theta)=x(\bar{p})-x(p^*)\ge 0$ for $0\le\theta\le\theta_0$, implying $p^*\in\ensuremath{\Gamma_{\textsf{t}}}$.
Note that $f(\theta)=x(\bar{p})-x(p^*)= x(\bar{p})-x(p')+x(p')-x(p^*)=x(\bar{p})-x(p')+x(Y')-x(Y^*)$.
Let $g(\theta_0, \theta)=x(p')-x(\bar{p})$
for $0\le\theta\le\theta_0 \le \pi/3$.
Observe that $g(\theta_0, \theta)=x(p')-x(\bar{p})=
h(\theta_0)\cos(\theta_0-\theta)-(\frac{1}{2}-\frac{1}{2}(h(\theta_0)\sin(\theta_0-\theta))^2)$,
where $h(\theta_0)=\frac{1}{1+\cos(\theta_0)}$.
For $\theta_0=\pi/3$, $p'$ is at $Y'$ and $\bar{p}$ is at $Y^*$,
and thus $x(Y')-x(Y^*)=g(\pi/3,\theta)$.
From this, $f(\theta)=g(\pi/3, \theta)-g(\theta_0, \theta)$.
Since $p^*$ is on $\gamma_{EF}$ at $\theta=0$, $f(0)=0$.
Thus, it suffices to show that $g(\theta_0, \theta)$ is
nondecreasing as $\theta_0$ increases from $\theta$ to $\pi/3$,
for any fixed $\theta$ with $0<\theta\le \pi/3$.
\begin{eqnarray*}
\frac{\partial g}{\partial \theta_0} &=& \frac{1}{(1+\cos\theta_0)^3}(g_1(\theta_0)+g_2(\theta_0)) >0,\; \text{where}\\
g_1(\theta_0) &=& \sin\theta_0\sin^2(\theta-\theta_0)+(\sin(\theta-\theta_0)+\sin\theta_0\cos(\theta-\theta_0))(1+\cos\theta_0)\\
&=& \sin\theta_0\sin^2(\theta-\theta_0)+(\cos\theta_0\sin(\theta-\theta_0)+\sin\theta_0\cos(\theta-\theta_0))(1+\cos\theta_0)\\
& & +\sin(\theta-\theta_0)(1-\cos\theta_0)(1+\cos\theta_0) \\
&=& \sin\theta_0\sin^2(\theta-\theta_0)+\sin\theta(1+\cos\theta_0) +\sin(\theta-\theta_0)\sin^2\theta_0\\
&=& (\sin(\theta_0-\theta)-\sin\theta_0)\sin\theta_0\sin(\theta_0-\theta)+\sin\theta+ \sin\theta\cos\theta_0,\;\text{and}\\
g_2(\theta_0) &=& \sin(\theta_0-\theta)(1+\cos\theta_0)(\cos(\theta_0-\theta)-\cos\theta_0).
\end{eqnarray*}
Observe that $\sin\theta\cos\theta_0 \ge 0$. Since the factor $\sin(\theta_0-\theta)-\sin\theta_0$ is negative and $0<\sin\theta_0\sin(\theta_0-\theta)<1$,
it suffices to show that
$(\sin(\theta_0-\theta)-\sin\theta_0)+\sin\theta > 0$ in order to conclude $g_1(\theta_0) > 0$.
Let $\bar{g}(\theta_0)=\sin(\theta_0-\theta)-\sin\theta_0+\sin\theta$.
Since $\frac{\partial \bar{g}}{\partial \theta_0} = \cos(\theta_0-\theta)-\cos\theta_0>0$
and $\bar{g}(\theta) = 0$,
we have $\bar{g}(\theta_0) > 0$, implying $g_1(\theta_0) > 0$.
Observe that $1+\cos\theta_0>0$ and $(\cos(\theta_0-\theta)-\cos\theta_0) > 0$,
so we have $g_2(\theta_0) > 0$.
Thus, $\frac{\partial g}{\partial \theta_0} > 0$.
Hence $g(\pi/3,\theta)\ge g(\theta_0,\theta)$, so $f(\theta)\ge 0$ for $0\le\theta\le\theta_0$, and $p^*$ is contained in $\ensuremath{\Gamma_{\textsf{t}}}$ throughout the rotation.
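The monotonicity just established can be spot-checked numerically. The following sketch (an editorial illustration, not part of the proof) evaluates $g$ on a grid using the explicit formula above.

```python
# Grid check (illustrative only) that g(theta0, theta) is nondecreasing in
# theta0 on [theta, pi/3], using the explicit formula for g given in the text.
import numpy as np

def g(theta0, theta):
    h = 1.0 / (1.0 + np.cos(theta0))
    s = h * np.sin(theta0 - theta)
    return h * np.cos(theta0 - theta) - (0.5 - 0.5 * s * s)

ok = True
for theta in np.linspace(1e-3, np.pi / 3, 200):
    t0 = np.linspace(theta, np.pi / 3, 400)
    vals = g(t0, theta)
    ok &= bool(np.all(np.diff(vals) >= -1e-12))
print("g nondecreasing in theta0 on the grid:", ok)
```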
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{G3-tri-properties_second}
\caption{(a) An isosceles triangle $\triangle \bar{X}\bar{Y}\bar{Z}$.
(b) Convex hull $\ensuremath{\Phi}$ of $\triangle \bar{X}\bar{Y}\bar{Z}$ and $\triangle XYZ$.
(c) The rotated copy $\ensuremath{\Phi}(\theta')$ of $\ensuremath{\Phi}$ by $\theta'-\pi/3$.}
\label{fig:G3-tri-properties_second}
\end{figure}
By Lemma~\ref{lem:G3.curve.length}, any triangle of perimeter $2$
lying between $\ell_{BE}$ and $\ell_{AF}$ can be translated
so that it is contained in $\ensuremath{\Gamma_{\textsf{t}}}$.
Hence it remains to consider isosceles triangles with one corner lying on
$\gamma_{FA}$ and one corner lying above $\ell_{BE}$.
Let $\triangle{\bar{X}\bar{Y}\bar{Z}}$ be the isosceles triangle of perimeter $2$ contained in $\ensuremath{\Gamma_{\textsf{t}}}$
such that its base $\bar{Y}\bar{Z}$ has length $\ge 2/3$, $\bar{Z}$ is at $B$, and $\bar{X}\in\gamma_{OA}$.
Observe that $|X\bar{X}| \le 1/3$;
if $|X\bar{X}|>1/3$, then $\bar{Y}$ would not be contained in $\ensuremath{\Gamma_{\textsf{t}}}$, since
$\triangle{\bar{X}\bar{Y}\bar{Z}}$ is an isosceles triangle of perimeter $2$
with base length $|\bar{Y}\bar{Z}|\ge 2/3$.
See Figure~\ref{fig:G3-tri-properties_second}(a).
Let $\ensuremath{\Phi}$ be the convex hull of $\triangle{XYZ}$ and $\triangle{\bar{X}\bar{Y}\bar{Z}}$. See Figure~\ref{fig:G3-tri-properties_second}(b).
Imagine that $\ensuremath{\Phi}$ rotates in a clockwise
direction such that $X$ moves along $\gamma_{OA}$ and $Y$ moves
along $\gamma_{EF}$ until $\bar{Y}$ reaches $\gamma_{FA}$.
Then $Z$ moves along $\gamma_{BC}$, and then moves
into the interior of $\ensuremath{\Gamma_{\textsf{t}}}$ during the rotation because $\triangle{XYZ}$ is an equilateral
triangle with $|XZ|=2/3$.
Let $\ensuremath{\Phi}(\theta')$ denote the rotated copy of $\ensuremath{\Phi}$ by $\theta'-\pi/3$,
where $\theta'=\angle{ZXA}$.
Then $\theta'$ increases monotonically from $\pi/3$ during the rotation.
See Figure~\ref{fig:G3-tri-properties_second}(c).
Consider the rotation for $\theta'$ from $\pi/3$ to $\pi/2$.
Observe that $\bar{Y}$ is contained in $\ensuremath{\Gamma_{\textsf{t}}}$ during the rotation
by the argument in the first paragraph of the proof.
Since $Z=\bar{Z}$ and $Z\in\ensuremath{\Gamma_{\textsf{t}}}$ during the rotation (shown in Section~\ref{sec:g3.gt.construction}),
$\bar{Z}$ is also in $\ensuremath{\Gamma_{\textsf{t}}}$.
Now we claim that $\bar{X}$ is contained in $\ensuremath{\Gamma_{\textsf{t}}}$ during the rotation.
For any $\theta'$ with $\pi/3<\theta' \le \pi/2$, $\bar{X}$ lies above
$\ell_{OA}$, but it lies below $\ell_{BE}$ because $\angle \bar{X}FA\le\pi/6$
for $\theta'\le \pi/2$.
Since $|\bar{X}F|\leq |\bar{X}X|+|XF| \leq |\bar{X}X|+|XY| \le 1$
as $|\bar{X}X|\le 1/3$, $\bar{X}$ is contained in $\ensuremath{\Gamma_{\textsf{t}}}$.
Thus, $\triangle{\bar{X}\bar{Y}\bar{Z}}$ is contained in $\ensuremath{\Gamma_{\textsf{t}}}$ throughout the rotation, and the lemma follows.
\end{proof}
\begin{figure}[b]
\centering
\includegraphics[width=.6\textwidth]{lem2425}
\caption{(a) $\ell_X$ lies below $\ell_B$ or $\ell_X=\ell_B$. (b) $X$ lies below or on $\ell_{\bar{X}}$.}
\label{fig:lem2425}
\end{figure}
\begin{lemma}
\label{lem:LX.BF.intersect}
Let $\triangle{XYZ}$ be an isosceles triangle of perimeter $2$
such that its base $YZ$ has length $\ge 2/3$, $Y\in\gamma_{OA}$,
and $Z\in\gamma_{EF}$. Then the line through $X$
parallel to $YZ$ intersects $\gamma_{OB}$.
\end{lemma}
\begin{proof}
Let $\ell_X$ and $\ell_{B}$ be the lines parallel to $YZ$ such that
$\ell_X$ passes through $X$ and $\ell_{B}$ passes through $B$.
See Figure~\ref{fig:lem2425}(a).
Let $\theta=\angle{ZYF}$ and let $y$ denote the $y$-coordinate of $Z$.
Then $0 \leq y \leq \sqrt{3}/3$ and
$\sin^{-1}y \leq \theta \leq \sin^{-1}(\frac{3y}{2})$.
Let $f(y, \theta)$ be the distance between $\ell_{B}$ and $X$,
which is $f(y, \theta) =(\frac{\sqrt{3}}{3}-y)\cos\theta+(\frac{5-3y^2}{6})\sin\theta-\sqrt{1-\frac{y}{\sin\theta}}$.
We will show that
for $y$ with $0 \leq y \leq \sqrt{3}/3$
and for $\theta$ with $\sin^{-1}y \leq \theta \leq \sin^{-1}(\frac{3y}{2})$ (or equivalently $\frac{2\sin\theta}{3} \le y \le \min\{\sqrt{3}/3,\sin\theta\}$ and $0\le\theta\le\pi/3$),
\begin{equation*}
\frac{\partial f}{\partial y} = -y\sin\theta-\cos\theta+\frac{1}{2\sqrt{\sin^2\theta-y\sin\theta}}\geq 0.
\end{equation*}
We have
$\frac{\partial^2 f}{\partial y^2} = \sin\theta\left(\frac{1}{4(\sin^2\theta-y\sin\theta)^{3/2}}-1\right).$
Since $0\le\sin\theta-y \le \frac{\sin\theta}{3}$, we have $0\le \sin\theta(\sin\theta-y) \le \frac{\sin^2\theta}{3} \le \frac{1}{4}$.
Thus, $4(\sin^2\theta-y\sin\theta)^{3/2} \le 4\cdot(1/4)^{3/2}=\frac{1}{2}\le 1$, implying $\frac{\partial^2 f}{\partial y^2} \ge 0$.
So it suffices to show $\frac{\partial f}{\partial y} \ge 0$ for $y=\frac{2\sin\theta}{3}$.
\begin{equation*}
\frac{\partial f}{\partial y}\mid_{y=\frac{2\sin\theta}{3}}=-\frac{2\sin^2\theta}{3}-\cos\theta+\frac{\sqrt{3}}{2\sin\theta}
= \frac{1}{\sin\theta}\left(-\frac{2\sin^3\theta}{3}-\frac{\sin2\theta}{2}+\frac{\sqrt3}{2}\right)
\end{equation*}
Let $g(\theta)=-\frac{2\sin^3\theta}{3}-\frac{\sin2\theta}{2}+\frac{\sqrt3}{2}$. Then
\begin{eqnarray}
\frac{\partial g}{\partial \theta}&=&-\cos2\theta-2\sin^2\theta\cos\theta\\
&=&-\cos2\theta+2\cos^3\theta-2\cos\theta.
\end{eqnarray}
For $\theta \le\pi/4$, we have $\frac{\partial g}{\partial \theta} \le 0$ by equation (1), since both terms are nonpositive.
For $\theta$ with $\pi/4 < \theta \le\pi/3$, we have $2\cos^3\theta-2\cos\theta=-2\sin^2\theta\cos\theta \le -\frac{1}{2}$ and $-\cos2\theta \le \frac{1}{2}$, so $\frac{\partial g}{\partial \theta} \le 0$ by equation (2).
Therefore, $g(\theta) \ge g(\pi/3)=0$, implying $\frac{\partial f}{\partial y} \ge 0$.
Now we show that $f(\frac{2\sin\theta}{3}, \theta) \ge 0$ for $\theta$ with $0 \le \theta \le \pi/3$.
Let $h(\theta):= f(\frac{2\sin\theta}{3}, \theta)$. Then
\begin{equation*}
h(\theta)=\frac{5\sin\theta}{6}-\frac{2\sin^3\theta}{9}+\frac{\cos\theta}{\sqrt3}-\frac{2\sin\theta\cos\theta}{3}-\frac{1}{\sqrt3}.
\end{equation*}
Let $t=\sin\theta$. Then $\sqrt{1-t^2}=\cos\theta$ and $0\le t \le \frac{\sqrt3}{2}$.
\begin{eqnarray*}
h(\theta)=0
&\Rightarrow& -\frac{2}{9}t^3+\frac{5}{6}t-\frac{1}{\sqrt3}=\left(\frac{2}{3}t-\frac{1}{\sqrt3}\right)\sqrt{1-t^2} \\
&\Rightarrow & \left(\frac{2}{3}t-\frac{1}{\sqrt3}\right)\left(-\frac{1}{3}t^2-\frac{1}{2\sqrt3}t+1\right)=\left(\frac{2}{3}t-\frac{1}{\sqrt3}\right)\sqrt{1-t^2} \\
&\Rightarrow & \left(-\frac{1}{3}t^2-\frac{1}{2\sqrt3}t+1\right)^2=1-t^2 \;\text{or}\; t=\frac{\sqrt3}{2} \\
&\Rightarrow & \frac{1}{9}t^4+\frac{1}{3\sqrt3}t^3+\frac{5}{12}t^2-\frac{1}{\sqrt3}t=0\; \text{or}\; t=\frac{\sqrt3}{2} \\
&\Rightarrow & t\left(t-\frac{\sqrt3}{2}\right)\left(\frac{1}{9}t^2+\frac{1}{2\sqrt3}t+\frac{2}{3}\right)=0\; \text{or}\; t=\frac{\sqrt3}{2} \\
&\Rightarrow & t= 0\; \text{or}\; t=\frac{\sqrt3}{2} \; \Leftrightarrow \; \theta= 0\; \text{or}\; \theta=\frac{\pi}{3}
\end{eqnarray*}
Since $h(\pi/6)=8/9-\sqrt3/2>0$ and $h$ is continuous with roots only at $0$ and $\pi/3$, we have $h(\theta) \ge 0$ for $\theta$ with $0 \le \theta \le \pi/3$.
Observe that $f(y, \theta)$ is minimized at $y = \frac{2\sin\theta}{3}$ for fixed $\theta$;
in other words, $f(y, \theta)$ is minimized when $\triangle{XYZ}$ is an equilateral triangle.
Since $f(y, \theta) \ge f(\frac{2\sin\theta}{3},\theta)\ge 0$, the corner $X$ never lies above $\ell_B$;
equivalently, an isosceles triangle $\triangle XYZ$ for which
$\ell_X$ misses $\gamma_{OB}$ would have
height larger than $\sqrt{3}/3$, which is impossible.
Thus, $\ell_X$ intersects $\gamma_{OB}$.
\end{proof}
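The computations in this proof are easy to spot-check numerically. The sketch below (editorial, not part of the proof) evaluates $f$ from its explicit formula and confirms $f(y,\theta)\ge 0$ on a grid of admissible pairs.

```python
# Numerical spot-check (illustrative only) of the facts used in the proof:
# h(theta) = f(2 sin(theta)/3, theta) >= 0, and f(y, theta) >= 0 on the
# admissible range 2 sin(theta)/3 <= y <= min(sqrt(3)/3, sin(theta)).
import numpy as np

def f(y, th):
    return ((np.sqrt(3) / 3 - y) * np.cos(th)
            + (5 - 3 * y * y) / 6 * np.sin(th)
            - np.sqrt(1 - y / np.sin(th)))

ths = np.linspace(0.01, np.pi / 3, 300)

# h(theta) >= 0, with roots only at theta = 0 and theta = pi/3.
h_vals = f(2 * np.sin(ths) / 3, ths)
print("min of h on grid:", h_vals.min())  # ~0 only near the root theta = pi/3

ok = True
for th in ths:
    ys = np.linspace(2 * np.sin(th) / 3,
                     min(np.sqrt(3) / 3, np.sin(th)) - 1e-9, 100)
    ok &= bool(np.all(f(ys, th) >= -1e-9))
print("f(y, theta) >= 0 on the grid:", ok)
```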
\begin{lemma} \label{lem:iso.rotate2}
Let $\triangle{XYZ}$ be an isosceles triangle of perimeter $2$
such that its base $YZ$ is of length $\geq 2/3$,
$X$ lies above or on $\ell_{YZ}$, $Y\in\gamma_{OA}$,
and $Z\in\gamma_{EF}$. Then $\triangle{XYZ}$ is contained in $\ensuremath{\Gamma_{\textsf{t}}}$.
\end{lemma}
\begin{proof}
Let $\triangle{XYZ}$ be an isosceles triangle satisfying the conditions in the lemma
statement with $\varphi=\angle{XYZ}$ and $\theta=\angle{ZYF}$.
Observe that $0 \le \varphi, \theta \le \pi/3$.
Let $f(\varphi)$ be the $y$-coordinate of $X$.
Then $f(\varphi)=\frac{\sin(\varphi+\theta)}{1+\cos\varphi}$.
Since $f'(\varphi) \ge 0$ (the numerator of $f'(\varphi)$ simplifies to $\cos(\varphi+\theta)+\cos\theta\ge 0$), $f(\varphi)$ is maximized at $\varphi = \pi/3$,
implying that $f(\varphi)$ is maximized when $\triangle{XYZ}$ is the equilateral triangle.
Let $\triangle{\bar{X}\bar{Y}\bar{Z}}$ be the equilateral triangle such that
$\bar{Y}\in\gamma_{OA}$, $\bar{Z}\in\gamma_{EF}$, and $\bar{Y}\bar{Z}$
is parallel to $YZ$.
Let $\ell_{\bar{X}}$ be the line parallel to $\bar{Y}\bar{Z}$ and passing through $\bar{X}$.
See Figure~\ref{fig:lem2425}(b).
Then $X$ lies below or on $\ell_{\bar{X}}$ by the proof of Lemma~\ref{lem:LX.BF.intersect},
and its $y$-coordinate $f(\varphi)$ is smaller than or equal to the
$y$-coordinate of $\bar{X}$ by the argument in the previous paragraph.
Since $\bar{X}$ is on the boundary of $\ensuremath{\Gamma_{\textsf{t}}}$, $X$ is contained in $\ensuremath{\Gamma_{\textsf{t}}}$.
\end{proof}
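As with the previous lemma, the monotonicity of $f(\varphi)$ is easy to confirm numerically; the following sketch (editorial, not part of the proof) checks it on a grid.

```python
# Quick numerical check (illustrative) that f(phi) = sin(phi+theta)/(1+cos(phi))
# is nondecreasing in phi on [0, pi/3] for each fixed theta in [0, pi/3].
import numpy as np

phis = np.linspace(0.0, np.pi / 3, 400)
ok = True
for theta in np.linspace(0.0, np.pi / 3, 200):
    vals = np.sin(phis + theta) / (1.0 + np.cos(phis))
    ok &= bool(np.all(np.diff(vals) >= -1e-12))
print("f nondecreasing in phi on the grid:", ok)
```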
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{G3-tri-cases-Z}
\caption{(a) $Z\in R_1$ and $ZX$ is the longest side.
(b) A copy $\triangle{\bar{X}\bar{Y}\bar{Z}}$ of $\triangle{XYZ}$ rotated by $2\pi/3$. (c) A copy $\triangle{\bar{X}\bar{Y}\bar{Z}}$ of $\triangle{XYZ}$ rotated by $-2\pi/3$.}
\label{fig:G3-tri-cases-Z}
\end{figure}
Now we are ready to prove Lemma~\ref{lem:triangle.R1}.
\begin{proof}
Translate $\triangle{XYZ}$
to the right until $Y$ meets $\gamma_{EF}$.
See Figure~\ref{fig:G3-tri-cases-Z}(a).
If $Z\in\ensuremath{\Gamma_{\textsf{t}}}$, then $\triangle{XYZ}\subset\ensuremath{\Gamma_{\textsf{t}}}$ and we are done.
Suppose that $Z\in R_1$.
If $X\in\gamma_{FO}$, then $\triangle{XYZ}$ lies to the right of $\ell_{OB}$ (including the line), and by Lemma~\ref{lem:triangle.left.line}, $\ensuremath{\Gamma_{\textsf{t}}}$ is a $G_3$-covering\xspace of
$\triangle{XYZ}$.
So assume that $X\in\gamma_{OA}$, and thus $|XY| \geq 1/2$.
There are three cases, according to whether
the longest side of $\triangle{XYZ}$ is (1) $XZ$, (2) $YZ$, or (3) $XY$.
For each case, we show that there is a rotated copy of $\triangle{XYZ}$
that is contained in $\ensuremath{\Gamma_{\textsf{t}}}$.
Consider case (1), in which $XZ$ is the longest side. There are two subcases:
$|XY| \geq |YZ|$ or $|XY| < |YZ|$.
Suppose first that $|XY| \geq |YZ|$.
Let $\triangle{\bar{X}\bar{Y}\bar{Z}}$ be the copy of $\triangle{XYZ}$ rotated by $2\pi/3$
such that $\bar{Z}\in\gamma_{FA}$ and $\bar{X}$ lies on the right side of $\Gamma^+$.
Since $\angle{ZXF} > \pi/3$, $\bar{Z}$ is the lowest corner.
Since $\angle{\bar{Y}\bar{Z}F} \le 2\pi/3$,
$\bar{Y}$ lies to the right of $\ell_{OB}$ (including the line).
If $\bar{Z}\in\gamma_{FO}$, then
$\triangle{\bar{X}\bar{Y}\bar{Z}}$ lies to the right of $\ell_{OB}$ (including the line),
and by Lemma~\ref{lem:triangle.left.line}, $\ensuremath{\Gamma_{\textsf{t}}}$ is a $G_3$-covering\xspace of $\triangle{XYZ}$.
Thus, assume that $\bar{Z}\in\gamma_{OA}$.
If $\bar{X}$ lies to the left of $\ell_{OE}$ (including the line), then
$\triangle{\bar{X}\bar{Y}\bar{Z}}$ lies to the left of $\ell_{OE}$ (including the line),
and by Lemma~\ref{lem:triangle.left.line}, $\ensuremath{\Gamma_{\textsf{t}}}$ is a $G_3$-covering\xspace of
$\triangle{XYZ}$.
So assume $\bar{X}\in\gamma_{EF}$.
See Figure~\ref{fig:G3-tri-cases-Z}(b).
Let $Y^{*}$ be a point such that $\triangle{\bar{X}Y^{*}\bar{Z}}$ is an isosceles triangle of perimeter 2
with base $\bar{Z}\bar{X}$ that lies to the left of $\ell_{\bar{Z}\bar{X}}$
(including the line).
Then both $\bar{Y}$ and $Y^{*}$ lie on the same ellipse with foci $\bar{X}$ and $\bar{Z}$.
By Lemma~\ref{lem:iso.rotate2}, $Y^{*}\in\ensuremath{\Gamma_{\textsf{t}}}$.
Since $|\bar{X}\bar{Y}| \geq |\bar{Y}\bar{Z}|$, $\bar{Y}$ lies on or below the axis of the ellipse through $Y^*$.
Since $\bar{Y}$ lies on or below the line tangent to the ellipse at $Y^*$, by Lemma~\ref{lem:LX.BF.intersect}, $\bar{Y}\in\ensuremath{\Gamma_{\textsf{t}}}$ and thus $\triangle{\bar{X}\bar{Y}\bar{Z}}\subset\ensuremath{\Gamma_{\textsf{t}}}$.
Suppose now $|XY| < |YZ|$.
Let $\triangle{\bar{X}\bar{Y}\bar{Z}}$ be the copy of $\triangle{XYZ}$ rotated by $-2\pi/3$
such that $\bar{X}\in\gamma_{BD}$ and $\bar{Z}\in\gamma_{EF}$.
Since $\angle{ZXF} < 2\pi/3$, $\bar{X}$ is the highest corner.
Let $Y^{*}$ be a point such that $\triangle{\bar{X}Y^{*}\bar{Z}}$ is an isosceles triangle with base $\bar{Z}\bar{X}$ that lies below $\ell_{\bar{Z}\bar{X}}$.
Then both $\bar{Y}$ and $Y^{*}$ lie on the same ellipse with foci $\bar{X}$
and $\bar{Z}$.
By Lemma~\ref{lem:iso.rotate1}, $Y^{*}\in\ensuremath{\Gamma_{\textsf{t}}}$.
Since $|\bar{X}\bar{Y}| < |\bar{Y}\bar{Z}|$, $\bar{Y}$ lies on or above the axis of
the ellipse through $Y^*$.
Since $\bar{Y}$ lies on or above the line tangent to the ellipse at $Y^{*}$,
$\bar{Y}$ lies above $\ell_{FA}$.
Since $XZ$ is the longest side, $\bar{Y}\in\ensuremath{\Gamma_{\textsf{t}}}$ and thus $\triangle{\bar{X}\bar{Y}\bar{Z}}\subset\ensuremath{\Gamma_{\textsf{t}}}$.
Now consider case (2), in which $YZ$ is the longest side. Translate $\triangle{XYZ}$
such that $Y\in\gamma_{EF}$ and $Z\in\gamma_{BC}$.
If $|XY| \ge |ZX|$, then $X\in\ensuremath{\Gamma_{\textsf{t}}}$ by the argument for case (1).
So assume that $|XY|< |ZX|$.
Let $\triangle{\bar{X}\bar{Y}\bar{Z}}$ be the copy of $\triangle{XYZ}$ rotated by $-2\pi/3$
such that $\bar{Y}\in\ell_{FA}$ and $\bar{Z}$ lies on the right side of
$\Gamma^+$.
Then $\angle{\bar{Z}\bar{Y}F} \leq \pi/3$.
If $\triangle{\bar{X}\bar{Y}\bar{Z}}$ lies to the right of $\ell_{OB}$ or to the left of $\ell_{OE}$ (including the lines),
$\ensuremath{\Gamma_{\textsf{t}}}$ is a $G_3$-covering\xspace of the triangle by Lemma~\ref{lem:triangle.left.line}.
So assume $\bar{Z}\in\gamma_{EF}$ and $\bar{Y}\in\gamma_{OA}$.
Since $|XY|<|ZX|$, $\bar{X}\in\ensuremath{\Gamma_{\textsf{t}}}$ by the argument for case (1),
and thus $\triangle{\bar{X}\bar{Y}\bar{Z}}\subset\ensuremath{\Gamma_{\textsf{t}}}$.
Finally, consider case (3), in which $XY$ is the longest side. Translate $\triangle{XYZ}$
such that $X\in\gamma_{OA}$ and $Y\in\gamma_{EF}$.
If $|XZ| \leq |YZ|$, then $Z\in\ensuremath{\Gamma_{\textsf{t}}}$ by the argument for case (1).
So assume that $|XZ|>|YZ|$.
Let $\triangle{\bar{X}\bar{Y}\bar{Z}}$ be the copy of $\triangle{XYZ}$ rotated by $2\pi/3$
such that $\bar{X}$ lies on the right side of $\Gamma^+$ and $\bar{Z}\in\gamma_{FA}$.
Since $\angle{ZXF} > \pi/3$, $\bar{Z}$ is the lowest corner and $\angle{\bar{X}\bar{Z}F} \leq \pi/3$.
If $\triangle{\bar{X}\bar{Y}\bar{Z}}$ lies to the right of $\ell_{OB}$ or to the left of $\ell_{OE}$
(including the lines),
$\ensuremath{\Gamma_{\textsf{t}}}$ is a $G_3$-covering\xspace of the triangle by Lemma~\ref{lem:triangle.left.line}.
So assume $\bar{X}\in\gamma_{EF}$ and $\bar{Z}\in\gamma_{OA}$.
Since $\bar{Y}\bar{Z}$ is the shortest side, $\bar{Y}$ lies below $\ell_{CD}$. Thus, this reduces to
case (2).
\end{proof}
Combining Lemmas~\ref{lem:triangle.left.line},~\ref{lem:triangle.R2}, and~\ref{lem:triangle.R1},
we have the following result.
\begin{theorem} \label{thm:triangle.G3}
$\ensuremath{\Gamma_{\textsf{t}}}$ is a convex $G_3$-covering\xspace of all triangles of perimeter $2$.
\end{theorem}
\section{Conclusion}
We considered the smallest-area covering of planar objects of perimeter $2$
allowing translation and discrete rotations by multiples of $\pi$, $\pi/2$, and $2\pi/3$.
We gave a geometric and elementary proof of the smallest-area convex coverings\xspace
for translation and rotation by $\pi$, and constructed convex coverings
for the other discrete rotations.
We also gave the smallest-area convex coverings\xspace of all unit segments
under translation and discrete rotations $2\pi/k$ for all integers $k\ge 3$.
Open problems include proving the optimality of the equilateral triangle
covering for rotations by multiples of $\pi/2$, and finding the smallest-area coverings
allowing other discrete rotations
that admit clean mathematical solutions.
| {
"timestamp": "2022-11-29T02:13:12",
"yymm": "2211",
"arxiv_id": "2211.14807",
"language": "en",
"url": "https://arxiv.org/abs/2211.14807",
"abstract": "We consider the smallest-area universal covering of planar objects of perimeter 2 (or equivalently closed curves of length 2) allowing translation and discrete rotations. In particular, we show that the solution is an equilateral triangle of height 1 when translation and discrete rotation of $\\pi$ are allowed. Our proof is purely geometric and elementary. We also give convex coverings of closed curves of length 2 under translation and discrete rotations of multiples of $\\pi/2$ and $2\\pi/3$. We show a minimality of the covering for discrete rotation of multiples of $\\pi/2$, which is an equilateral triangle of height smaller than 1, and conjecture that the covering is the smallest-area convex covering. Finally, we give the smallest-area convex coverings of all unit segments under translation and discrete rotations $2\\pi/k$ for all integers $k\\ge 3$.",
"subjects": "Computational Geometry (cs.CG)",
"title": "Universal convex covering problems under translation and discrete rotations",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9879462204837635,
"lm_q2_score": 0.8006920116079209,
"lm_q1q2_score": 0.7910406466395872
} |
https://arxiv.org/abs/1311.5051 | Separating path systems | We study separating systems of the edges of a graph where each member of the separating system is a path. We conjecture that every $n$-vertex graph admits a separating path system of size $O(n)$ and prove this in certain interesting special cases. In particular, we establish this conjecture for random graphs and graphs with linear minimum degree. We also obtain tight bounds on the size of a minimal separating path system in the case of trees. | \section{Introduction}
Given a set $S$, we say that a family $\mathcal{F}$ of subsets of $S$ \emph{separates} a pair of distinct elements $x,y \in S$ if there exists a set $A\in \mathcal{F}$ which contains exactly one of $x$ and $y$. If $\mathcal{F}$ separates all pairs of distinct elements of $S$, we say that $\mathcal{F}$ is a \emph{separating system} of $S$.
The study of separating systems was initiated by R\'enyi \citep{renyi} in 1961. It is essentially trivial that the minimal size of a separating system of an $n$-element set is $\lceil \log_2{n} \rceil$. However, the question of finding the minimal size of a separating system becomes much more interesting when one imposes restrictions on the elements of the separating system. For example, separating systems with restrictions on the cardinalities of their members have been studied by Katona \citep{katona}, Wegener \citep{wegener}, Ramsay and Roberts \citep{ramrob} and K\"undgen, Mubayi and Tetali \citep{mubtet}, amongst others. Stronger notions of separation as well as other extremal questions about separating systems have also been studied; see, for example, the papers of Spencer \citep{spencer}, Hansel \citep{hansel} and Bollob\'as and Scott \cite{belascott}.
Another interesting direction involves imposing a graph structure on the underlying ground set and imposing graph theoretic restrictions on the separating family (see, for instance, \citep{cheng} and \citep{belascott2}). In this paper, we investigate the question of separating the edges of a graph using paths. Given a graph $G=(V,E)$, we say that a family $\mathcal{P}$ of subsets of $E(G)$ is a \emph{separating path system of $G$} if $\mathcal{P}$ separates $E(G)$ and every element of $\mathcal{P}$ is a path of $G$.
Separating path systems arise naturally in the context of network design. We are presented with a communication network with a defective link and our goal is to identify this link. Of course, one could test every link, but this might not be very efficient; can we do better? A natural test to perform is to send a message between a pair of nodes along a predetermined path; if the message does not reach its intended destination, we conclude that the defective link lies on this path. If we model the communication network as a graph, a fixed set of such tests succeeds in locating any defective link if and only if the corresponding family of paths is a separating path system of the underlying graph. We are naturally led to the following question: what is the size of a minimal separating path system of a given graph?
For a graph $G$, let $f(G)$ be the size of a minimal separating path system of $G$. As a separating path system of $G$ is also a separating system of $E(G)$, it follows that $f(G) \geq \left\lceil \log_2{|E(G)|} \right\rceil$. In particular, for any connected $n$-vertex graph $G$, we have $f(G) = \Omega(\log{n})$. With a little work, we can construct graphs that come close to matching this bound. Let $L_n$ be the \emph{ladder} of order $2n$, that is, the Cartesian product of a path of length $n-1$ with a single edge. Given any subset $A$ of $[n-1]$, there is a natural way of mapping $A$ to a path $P_A$ in $L_n$ (see Figure \ref{ladder}). With this, it is an easy exercise to establish that $f(L_n) = O(\log{n})$.
\begin{figure}\label{ladder}
\begin{center}
\begin{tikzpicture}
\foreach \x in {1,2,3,4,5,6,7,8,9,10,11}
\node (\x) at (\x, 2.5) [inner sep=0.5mm, circle, fill=black!100] {};
\foreach \x in {1,2,3,4,5,6,7,8,9,10,11}
\node (\x) at (\x, 3.5) [inner sep=0.5mm, circle, fill=black!100] {};
\foreach \x in {1,2,3,4,5,6,7,8,9,10,11}
\draw (\x, 2.5) -- (\x, 3.5);
\draw (1,2.5) -- (11,2.5);
\draw (1,3.5) -- (11,3.5);
\foreach \x in {1,2,3,4,5,6,7,8,9,10,11}
\node (\x) at (\x, 0) [inner sep=0.5mm, circle, fill=black!100] {};
\foreach \x in {1,2,3,4,5,6,7,8,9,10,11}
\node (\x) at (\x, 1) [inner sep=0.5mm, circle, fill=black!100] {};
\draw (1,0) -- (4,0);
\draw (6,0) -- (9,0);
\draw (10,0) -- (11,0);
\draw (4,1) -- (6,1);
\draw (9,1) -- (10,1);
\draw (4,0) -- (4,1);
\draw (6,0) -- (6,1);
\draw (9,0) -- (9,1);
\draw (10,0) -- (10,1);
\end{tikzpicture}
\end{center}
\caption{The graph $L_{11}$ and the path $P_A$ corresponding to the subset $A = \{ 4,5,9\}.$}
\end{figure}
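To make the construction concrete, the following Python sketch (an editorial illustration; the vertex encoding and the two extra single-rung paths are our own choices, not taken from the text) builds $P_A$ for the ladder $L_n$, with horizontal edge $i$ running along the top rail exactly when $i\in A$, and searches for a small separating family of such paths. Since no $P_A$ uses the rungs at the first and last columns, those two rungs are added as single-edge paths.

```python
# Illustrative sketch: paths P_A in the ladder L_n and a small separating
# family of such paths, found by random search for a concrete small n.
import random

def path_edges(n, A):
    """Edge set of P_A in L_n; edges are frozensets of vertices (col, rail)."""
    rail = [1 if i in A else 0 for i in range(1, n)]  # rail of horizontal edge i
    edges = set()
    for i in range(n - 1):
        edges.add(frozenset({(i + 1, rail[i]), (i + 2, rail[i])}))
        if i + 1 < n - 1 and rail[i + 1] != rail[i]:  # rung at the rail switch
            edges.add(frozenset({(i + 2, 0), (i + 2, 1)}))
    return edges

def ladder_edges(n):
    E = {frozenset({(c, 0), (c, 1)}) for c in range(1, n + 1)}
    for c in range(1, n):
        E |= {frozenset({(c, r), (c + 1, r)}) for r in (0, 1)}
    return E

def separates(paths, E):
    sigs = {tuple(e in P for P in paths) for e in E}
    return len(sigs) == len(E)

n = 11
E = ladder_edges(n)  # 3n - 2 = 31 edges
random.seed(0)
end_rungs = [{frozenset({(1, 0), (1, 1)})}, {frozenset({(n, 0), (n, 1)})}]
while True:  # a dozen random P_A's plus the end rungs typically suffice
    fam = [path_edges(n, {i for i in range(1, n) if random.random() < 0.5})
           for _ in range(12)] + end_rungs
    if separates(fam, E):
        print(f"found a separating family of {len(fam)} paths for L_{n}")
        break
```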
A more interesting problem is to determine $f(n)$, the \emph{maximum} of $f(G)$ taken over all $n$-vertex graphs; this question was raised by G.O.H. Katona at the Eml\'ekt\'abla Workshop in August 2013.
Clearly, at most one edge of a graph can be left uncovered by the paths in a separating path system of the graph; it is thus unsurprising that the question of building small separating path systems is closely related to the well-studied question of covering a graph with paths. It would be remiss not to remind the reader of a beautiful conjecture of Gallai which asserts that every connected graph on $n$ vertices can be decomposed into $\lfloor \frac{n+1}{2} \rfloor$ paths. The following fundamental result of Lov\'asz \citep{lovasz}, which provides support for Gallai's conjecture, will prove useful; here and elsewhere, by a decomposition of a graph we mean a covering of its edges with edge-disjoint subgraphs.
\begin{thm}[Lov\'asz]
\label{path_decomposition}
Every graph on $n$ vertices can be decomposed into at most $n/2$ paths and cycles. Consequently, every graph on $n$ vertices can be decomposed into at most $n$ paths.
\end{thm}
Let $G$ be any graph on $n$ vertices and let $E_1, E_2, \dots, E_k$ be a separating system of the edge set $E(G)$ where $k = \left\lceil \log_2{|E(G)|} \right\rceil \leq 2\log_2{n}$. Let $G_i$ be the subgraph of $G$ induced by the edges of $E_i$. By Theorem \ref{path_decomposition}, each $G_i$ may be decomposed into at most $n$ paths. Putting these together, we get a separating path system of $G$ of cardinality at most $kn$. Consequently, we note that $f(n) \leq 2n\log_2{n}$.
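The separating system used in this argument is plain binary indexing of the edges. A minimal sketch follows (editorial; the decomposition step is done trivially edge-by-edge rather than via Theorem \ref{path_decomposition}, which only affects the constant).

```python
# Sketch of the upper-bound argument: separate E(G) by binary indexing of the
# edges; each part would then be decomposed into paths.
from math import ceil, log2

def binary_separating_system(edges):
    """Family E_1,...,E_k with k = ceil(log2 m), separating the edge list."""
    m = len(edges)
    k = max(1, ceil(log2(m)))
    return [{e for idx, e in enumerate(edges) if (idx >> j) & 1}
            for j in range(k)]

edges = [(u, v) for u in range(6) for v in range(u + 1, 6)]  # E(K_6), m = 15
parts = binary_separating_system(edges)
sigs = {tuple(e in p for p in parts) for e in edges}
assert len(sigs) == len(edges)  # all 15 edges receive distinct signatures
print(f"{len(parts)} parts separate {len(edges)} edges")
```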
To bound $f(n)$ from below, let us consider $K_n$, the complete graph on $n$ vertices. Suppose that we have a separating path system $\mathcal{P}$ of $K_n$ with $k$ paths. Note that at most one edge of $K_n$ goes uncovered by the paths of $\mathcal{P}$ and further, at most $k$ edges of $K_n$ belong to exactly one path of $\mathcal{P}$. Since any path of $K_n$ has length at most $n-1$, we deduce that
\[ k(n-1) \geq 1 + k + 2\left(\binom{n}{2} - k - 1\right)\]
or equivalently, $k \geq n-1-1/n$. Thus, we note that $f(n)\geq n-1$. We believe that the lower bound, rather than the upper bound, is closer to the truth; we make the following conjecture.
\begin{conjecture}
\label{main_conj}
There exists an absolute constant $C$ such that for every graph $G$ on $n$ vertices, $f(G) \leq Cn$.
\end{conjecture}
Let us remark that it is not inconceivable that $f(n) = (1+o(1))n$ and Conjecture \ref{main_conj} is true for every $C>1$. In this paper, we shall prove Conjecture \ref{main_conj} in certain special cases. Our first result establishes the conjecture for graphs of linear minimum degree.
\begin{thm}\label{thm_linear_min_deg}
Let $c>0$ be fixed. Every graph $G$ on $n$ vertices with minimum degree at least $cn$ has a separating path system of cardinality at most $\frac{122n}{c^2}$ for all sufficiently large $n$.
\end{thm}
Building upon the ideas used to prove Theorem \ref{thm_linear_min_deg}, we shall prove Conjecture \ref{main_conj} for the Erd\H{o}s-R\'enyi random graphs using the fact that these graphs have good connectivity properties.
\begin{thm}\label{thm_random_graphs}
For any probability $p = p(n)$, w.h.p., the random graph $G(n,p)$ has a separating path system of size at most $48n$.
\end{thm}
Note in particular that Theorem \ref{thm_random_graphs} implies that Conjecture \ref{main_conj} is true for almost all $n$-vertex graphs with $C=48$. Using Theorem \ref{thm_linear_min_deg}, we shall also establish the conjecture for a class of dense graphs, which includes quasi-random graphs (in the sense of Chung, Graham and Wilson \citep{quasi_graph1} and Thomason \citep{quasi_graph2}) as a subclass.
\begin{thm}\label{thm_dense}
Let $c>0$ be fixed and let $G$ be a graph on $n$ vertices such that every subset $U \subseteq V(G)$ of size at least $\sqrt{n}$ spans at least $c|U|^2$ edges. Then $f(G) \leq \frac{638n}{c^3}$ for all sufficiently large $n$.
\end{thm}
The above results are far from best possible but we make no attempt to optimise our bounds since it seems unlikely that our methods will yield the best possible constants. In the case of trees however, we are able to obtain tight bounds.
\begin{thm}\label{thm_trees}
Let $T$ be a tree on $n\geq 4$ vertices. Then
\[\left\lceil\frac{n+1}{3}\right\rceil \leq f(T) \leq \left\lfloor\frac{2(n-1)}{3}\right\rfloor.\]
Furthermore, these bounds are best possible.
\end{thm}
We use standard graph theoretic notions and notation and refer the reader to \citep{belabook1} for terms and notation not defined here. We shall also make use of some well known results about random graphs without proof; see \citep{belabook2} for details.
In the next section, we describe a general strategy that we adopt to prove Theorems \ref{thm_linear_min_deg} and \ref{thm_random_graphs}. We then prove Theorems \ref{thm_linear_min_deg} and \ref{thm_random_graphs} in Sections \ref{min_deg} and \ref{random} respectively. Section \ref{dense} is devoted to the proof of Theorem \ref{thm_dense}. For the sake of clarity, we shall systematically omit ceilings and floors in Sections \ref{min_deg}, \ref{random} and \ref{dense}.
We then prove Theorem \ref{thm_trees} in Section \ref{trees}. We conclude the paper in Section \ref{conclusion} with a discussion of related questions and problems.
\section{A General Strategy}\label{strategy}
Theorems \ref{thm_linear_min_deg} and \ref{thm_random_graphs} are proved similarly, using the following strategy. Let $G_1,G_2$ be subgraphs of $G$ which partition the edge set of $G$. By Theorem \ref{path_decomposition}, there exists a path decomposition of $G_1$ with at most $n$ paths $P_1,\ldots,P_n$. We decompose the edges of $G_1$ into at most $3n$ matchings $M_1,\ldots,M_{3n}$ as follows. We start with $M_1=\ldots=M_{3n}=\emptyset$ and add the edges of $G_1$ one by one to a suitably chosen matching $M_i$. Given an edge $e=xy\in E(G_1)$, let $j$ be such that $e\in P_j$. We add $e$ to a matching $M_i$ which contains no edge of $P_j$ and no edge incident to $x$ or $y$. As the length of $P_j$ is at most $n-1$ and there are at most $2n$ edges incident to either $x$ or $y$, this process is well defined; indeed, we can always find a matching $M_i$ satisfying the required conditions.
For each $1\le i \le 3n$, we find a covering of $E(M_i)$ with paths using edges from $E(G_2)\cup E(M_i)$. These covering paths together with the paths $P_1,\ldots,P_n$ separate the edges of $G_1$ from each other and from the edges of $G_2$. Repeating this process with the roles of $G_1$ and $G_2$ reversed, we obtain a separating path system of $G$.
In order to prove the existence of a small separating path system, we shall partition the graph $G$ into $G_1$ and $G_2$ in a way that will enable us to keep the cardinalities of the above coverings small.
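The greedy decomposition into at most $3n$ matchings translates directly into code. The following sketch (editorial; it assumes a path decomposition is supplied as input) mirrors the argument above, including the pigeonhole guarantee.

```python
# Sketch of the greedy decomposition of E(G_1) into <= 3n matchings, given a
# path decomposition: edge e in path P_j goes to the first matching containing
# no edge of P_j and no edge meeting e. The counting in the text guarantees
# such a matching exists among the first 3n.
def decompose_into_matchings(n, paths):
    """paths: list of edge lists (each edge a frozenset of two vertices)."""
    matchings = [[] for _ in range(3 * n)]
    used_path = [set() for _ in range(3 * n)]  # path indices hit by matching i
    used_vtx = [set() for _ in range(3 * n)]   # vertices hit by matching i
    for j, path in enumerate(paths):
        for e in path:
            for i in range(3 * n):
                if j not in used_path[i] and not (e & used_vtx[i]):
                    matchings[i].append(e)
                    used_path[i].add(j)
                    used_vtx[i] |= e
                    break
            else:
                raise AssertionError("pigeonhole bound violated")
    return [m for m in matchings if m]

# Toy usage: a triangle decomposed into two paths.
paths = [[frozenset({0, 1}), frozenset({1, 2})], [frozenset({0, 2})]]
print(decompose_into_matchings(3, paths))
```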
\section{Graphs of linear minimum degree}\label{min_deg}
\begin{proof}[Proof of Theorem \ref{thm_linear_min_deg}]
Let $G$ be a graph on $n$ vertices with minimum degree at least $cn$, for some fixed $0<c<1$. It is easy to decompose $G$ into two edge-disjoint subgraphs $G_1$ and $G_2$ in such a way that both subgraphs have minimum degree at least $cn/3$. Indeed, one way to do this is to define $G_1$ by randomly selecting each edge of $G$ with probability $1/2$ and to take $G_2$ to be the complement of $G_1$ in $G$, i.e., we have $V(G_2) = V(G)$ and $E(G_2) = E(G) \backslash E(G_1)$; the minimum degree conditions follow from the standard estimates for the tail of the binomial distribution.
Following the strategy described in Section \ref{strategy}, let $P_1,\ldots,P_{n}$ be a path decomposition of $G_1$ and let $M_1,\ldots,M_{3n}$ be a decomposition of $G_1$ into matchings such that the intersection $M_i\cap P_j$ contains at most one edge for each pair $i,j$.
Define a graph $H$ on $V(G)$ as follows: two distinct vertices $x,y\in V(G)$ are adjacent in $H$ if they have at least $c^2n/24$ common neighbours in $G_2$. Note that $H$ has no independent set of size at least $4/c$. Indeed, if $A\subseteq V(G)$ is an independent set in $H$ of size $k=4/c$, then
\begin{equation*}
n=|V(G)|\ge
\sum\limits_{x\in A}\mbox{deg}(x,G_2)-\sum\limits_{x\neq y\in A}|\Gamma(x,G_2)\cap \Gamma(y,G_2)|>
\frac{kcn}{3}-\frac{k^2c^2n}{2\cdot 24}=(4/3-1/3)n=n,
\end{equation*}
which is a contradiction.
For each $1\le i\le 3n$, define a sequence of paths in $E(M_i)\cup E(H)$ as follows. Colour the edges of $M_i$ blue and the edges of $H$ red; note that there may be edges coloured both red and blue. Let $Q_{i,1}$ be a longest path alternating between blue and red edges and starting with a blue edge. Having defined $Q_{i,1},\ldots, Q_{i,j-1}$, we set $E_{i,j}=E(M_i)\backslash(E(Q_{i,1})\cup\ldots\cup E(Q_{i,j-1}))$. If $E_{i,j} = \emptyset$, we stop. If not, let $Q_{i,j}$ be a longest path alternating between blue edges from $E_{i,j}$ and red edges, starting with a blue edge. Note that we might reuse red edges in this process, but not blue edges.
Since the $Q_{i,j}$'s are defined to be longest paths, the starting vertices of the paths $Q_{i,j}$ form an independent set in $H$. Thus for each $1\le i\le 3n$, we have at most $4/c$ such paths $Q_{i,j}$ and consequently at most $12n/c$ paths in total. Note that every edge of $G_1$ appears exactly once in one of these $12n/c$ paths as a blue edge. Thus the sum of the lengths of these paths $Q_{i,j}$ is at most $2|E(G_1)|\le n^2$. We split each of the paths $Q_{i,j}$ into paths of length $c^2n/48$, where we allow one of the subpaths to have length less than $c^2n/48$. We thus obtain at most $n^2/(c^2n/48)+12n/c\le 60n/c^2$ red-blue paths. Note that for every red edge $xy$, the vertices $x,y$ have at least $c^2n/24$ common neighbours in $G_2$. Consequently, we can transform all the red-blue paths into simple paths in $G$: we replace every red edge with a path of length two in $G_2$ with the same end points. We can do this because the number of common neighbours in $G_2$ of the ends of a red edge is at least twice the length of the original red-blue path. The family consisting of these paths and the paths $P_1,\ldots,P_n$ separates the edges of $G_1$ and has size at most $60n/c^2+n\le 61n/c^2$.
By repeating the above process with the roles of $G_1$ and $G_2$ reversed, we obtain a separating path system of $G$ of size at most $122n/c^2$.
\end{proof}
\section{Random graphs}\label{random}
\begin{proof}[Proof of Theorem \ref{thm_random_graphs}]
We use different arguments for different ranges of the edge probability.
\subsection{\textbf{Case 1 : $p \geq 10\log{n}/n$}} Let $G$ be a copy of $G(n,2p)$, where $p\geq 5\log n/n$. We define graphs $G_1,G_2$ on the vertex set of $G$ as follows. We construct $G_1$ by randomly selecting each edge of $G$ with probability $1/2$ and we take $G_2$ to be the complement of $G_1$ in $G$; clearly, $G_1$ and $G_2$ are edge-disjoint copies of $G(n,p)$.
The following lemma is easily proved using the standard estimates for the tail of the binomial distribution.
\begin{lem}\label{lem_random_graph_properties}
Let $p\ge 5\log n/n$. Then w.h.p., the following assertions hold.
\begin{itemize}
\item $n^2p/4 \leq |E(G(n,p))|\leq n^2p$.
\item
$G(n,p)$ has minimum degree at least $np/5$.
\item
$G(n,p)$ is $np/10$-connected.
\end{itemize}
\end{lem}
We shall also need the notion of a \emph{$k$-linked graph}. A graph is said to be \emph{$k$-linked} if it has at least $2k$ vertices and for every sequence of $2k$ distinct vertices $v_1,\ldots,v_k,u_1,\ldots,u_k$, there exist vertex disjoint paths $P_1,\ldots, P_k$ such that the endpoints of $P_i$ are $v_i,u_i$. Bollob\'as and Thomason \cite{linked_graphs} showed that every $22k$-connected graph is $k$-linked. This was later improved by Thomas and Wollan \cite{linked_graphs_new}, who proved that every $2k$-connected graph on $n$ vertices with at least $5kn$ edges is $k$-linked. From the latter result and Lemma \ref{lem_random_graph_properties}, we conclude that w.h.p., both $G_1$ and $G_2$ are $np/20$-linked.
Following the strategy described in Section \ref{strategy}, we find a decomposition of $G_1$ into paths $P_1,\ldots,P_n$ and a decomposition of $G_1$ into matchings $M_1,\ldots,M_{3n}$ such that the intersection $M_i\cap P_j$ contains at most one edge for every $i,j$.
We decompose each matching $M_i$ into submatchings of size at most $np/20$. Since $G_1$ has at most $n^2p$ edges, we thus obtain at most $23n$ matchings $M_1',\ldots, M_{23n}'$. Since $G_2$ is $np/20$-linked, we can complete each such matching $M_i'$ into a path using the edges of $G_2$. These paths along with $P_1,\ldots,P_n$ constitute a separating path system of $G_1$ of size at most $24n$. Reversing the roles of $G_1$ and $G_2$, we obtain a set of $24n$ paths separating the edges of $G_2$. The union of these two families of paths is a separating path system of $G$ of cardinality at most $48n$.
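The bookkeeping in this case is easily rechecked symbolically; the following sketch (editorial, not part of the proof) reproduces the $23n$ count.

```python
# Bookkeeping check (illustrative) for the counts in Case 1: 3n matchings from
# the general strategy, plus at most |E(G_1)|/(np/20) <= 20n extra pieces from
# splitting, giving 23n submatchings (and 24n paths once P_1,...,P_n are added).
from sympy import symbols, simplify

n, p = symbols('n p', positive=True)
num_matchings = 3 * n                      # from the general strategy
extra_splits = (n**2 * p) / (n * p / 20)   # upper bound on extra pieces
print(simplify(num_matchings + extra_splits))  # 23*n
```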
\subsection{\textbf{Case 2 : $p \leq 10/n$}}
In this case, w.h.p., $G(n,p)$ has at most $20n$ edges and so the edges of $G$ constitute a separating path system of size at most $20n$.
\subsection{\textbf{Case 3 : $10/n \leq p \leq 10\log{n}/n$}}
We begin by collecting together some useful properties of sparse random graphs. We will need some notation: Given a graph $G$, write $B_i(v)=B_i(v,G)$ for the set of vertices at (graph-)distance at most $i$ from $v$ and let $\Gamma_i(v) = \Gamma_i(v,G) = B_i(v) \backslash B_{i-1}(v)$. The following lemma is somewhat technical; we defer its proof to the end of the section.
\begin{lem}\label{lem_sparse_random_graphs_properties}
Let $10\le d\le 10 \log n$. Then w.h.p., the following assertions hold for $G=G(n,d/n)$.
\begin{enumerate}[(i)]
\item \label{item_0}
$G$ has at most $dn$ edges.
\item \label{item_1}
$|\Gamma_i(x)|\le (2d)^i\log n$ for every $x\in V(G)$ and $i\le n$.
\item \label{item_2}
Every set of $i\le \sqrt{n}$ vertices spans at most $2i$ edges. Furthermore, every set of $i\le 10\log\log n$ vertices spans at most $i$ edges.
\item \label{item_3}
Let $G'$ be a subgraph of $G$ with minimum degree at least $10$. Then $|\Gamma_{i}(x,G')|\ge 2^i$ for every $x\in V(G')$ and every $1\le i\le 10\log\log n$.
\item \label{item_4}
Let $G'$ be a subgraph of $G$ obtained by deleting at most $20d\log n$ vertices and edges and let $l=3 \log\log n$. If $x,y\in V(G')$ are such that $|B_l(x,G')|,|B_l(y,G')|\ge (\log n)^{3}$, then there is a path of length at most $2\log n$ between $x$ and $y$ in $G'$.
\end{enumerate}
\end{lem}
The \emph{$k$-core} of a graph is its largest induced subgraph with minimum degree at least $k$. Let $H$ be the $15$-core of $G=G(n,p)$ and let $d = np$. By Theorem \ref{path_decomposition}, we can decompose $H$ into $n$ paths. Since by Lemma \ref{lem_sparse_random_graphs_properties}(\ref{item_0}) there are at most $dn$ edges in $G$, we can decompose these $n$ paths into at most $2n$ subpaths $Q_1, \ldots, Q_{2n}$, each of which has length at most $d$.
Let $l=3\log\log n$. We shall define a collection of at most $2n$ matchings in $H$ of size $d$ each using the paths $Q_1,\ldots,Q_{2n}$. Each of these matchings will consist of $d$ edges $e_1, e_2, \ldots, e_d$ chosen from some $d$ distinct paths $Q_{i_1}, Q_{i_2}, \ldots, Q_{i_d}$ which have the additional property that for every $j\neq j'$ and every $x\in V(Q_{i_j})$, $x'\in V(Q_{i_{j'}})$ we have $B_l(x)\cap B_l(x')=\emptyset$.
We begin with a collection of paths $R_1, \dots, R_{2n}$ which we modify as we go along. Initially we set $R_i=Q_i$ for every $i$. We define our collection of matchings in $H$ in a sequence of rounds.
At the beginning of a round, if we have fewer than $2\sqrt{n}$ non-empty paths $R_i$, we stop. Otherwise, we select $d$ of the $R_i$'s (in a way we specify below), remove the initial edge from each of these paths and use these $d$ removed edges to form a matching of size $d$. To choose our $d$ paths $R_{i_1}, \ldots, R_{i_d}$ we proceed as follows. Let $R_{i_1}$ be any non-empty path. Now, assume that we have chosen $R_{i_1},\ldots,R_{i_{t-1}}$, where $t\le d$. Let $N_t=\bigcup_x B_{2l+1}(x)$ where the union is taken over all $x\in V(Q_{i_1})\cup\ldots\cup V(Q_{i_{t-1}})$. From Lemma \ref{lem_sparse_random_graphs_properties}(\ref{item_1}), we see that
\[\vert N_t \vert < (t-1) d (2d)^{2l+2}\log n < (2d)^{2l+4} \log n <\sqrt{n}.\]
Thus by Lemma \ref{lem_sparse_random_graphs_properties}(\ref{item_2}), $N_t$ spans at most $2\sqrt{n}$ edges. Since we started the round with more than $2\sqrt{n}$ non-empty paths, there is a path which contains no edge induced by the vertex set $N_t$; let $R_{i_t}$ be any such path, and repeat the procedure until the $d$ paths $R_{i_1}, R_{i_2}, \ldots, R_{i_d}$ have been obtained. Clearly the matchings defined by this process are disjoint and of size $d$, so there are at most $n$ of them; denote them by $M_1,\ldots,M_{n}$.
In Lemma \ref{lem_main_sparse_random_graphs} (stated below), we show that for each matching $M_i$, there is a path containing $E(M_i)$ and avoiding, for every $e\in E(M_i)$, the other edges of the path $Q\in \{Q_1, Q_2\ldots, Q_{2n}\}$ containing $e$.
We then obtain a separating system of size at most $19n$ by taking the union of the following families of paths.
\begin{itemize}
\item
The edges $E(G)\backslash E(H)$ of which there are at most $15n$.
\item
The paths $Q_1,\ldots,Q_{2n}$.
\item
The edges of $H$ which are not covered by the matchings $M_1,\ldots,M_{n}$ of which there are at most $2d\sqrt{n}\le n$.
\item
The set of $n$ paths promised by Lemma \ref{lem_main_sparse_random_graphs}.
\end{itemize}
We now state and prove Lemma \ref{lem_main_sparse_random_graphs}.
\begin{lem}\label{lem_main_sparse_random_graphs}
Let $G=G(n,p)$ be a graph satisfying the assertions of Lemma \ref{lem_sparse_random_graphs_properties}. Let $S_1,\ldots,S_d$ be vertex-disjoint paths of length at most $d$ in the $15$-core $H$ of $G$. Set $l=3\log\log n$ and assume that $B_l(x)\cap B_l(y)=\emptyset$ for every $x\in V(S_i), y\in V(S_j)$ with $i\neq j$. For each $i$, select an edge $e_i=x_iy_i$ from $S_i$, and set $M=\{e_1, e_2, \ldots , e_d\}$. Then there exists a path in $G$ containing all the edges of $M$ and no other edge from $\bigcup_{1\leq i\leq d} E(S_i)$.
\end{lem}
\begin{proof}
Write $E' = (\bigcup_{1\le i\le d}E(S_i))\backslash E(M)$. Let $G_0$ be the graph on $V(G)$ with edge set $E(G)\backslash E'$ and let $G_1$ be the graph obtained from $G_0$ by deleting $x_1$. Consider the graph $H_1$ on the vertex set $V(H) \backslash \{ x_1 \}$ with edge set $E(H) \cap E(G_1)$. Note that $H_1$ has minimum degree at least $12$, since by removing the vertex-disjoint paths $S_1,\ldots,S_d$ and the vertex $x_1$ we decrease vertex degrees in $H$ by at most three. Thus by Lemma \ref{lem_sparse_random_graphs_properties}(\ref{item_3}), $|B_l(v,G_1)|\ge (\log n)^{3}$ for every $v\in V(M)$.
We define vertex-disjoint paths $P_1,\ldots,P_{d-1}$ of length at most $2\log n$ as follows. Suppose we have already defined the paths $P_1, P_2, \ldots, P_{i-1}$ for some $i<d$. Set $G_i = G_1\setminus \bigcup_{1\leq j < i} V(P_j)$ and let $P_i$ be a shortest path in $G_i$ connecting $y_i$ to a vertex from $\bigcup_{i+1 \leq j \leq d}\{x_j, y_j\}$. Relabelling the remaining vertices and edges if necessary, assume that this path connects $y_i$ to $x_{i+1}$.
We shall show by induction that $P_i$ has length at most $2\log n$. Assume that we have defined $P_1, \dots, P_{i-1}$. By the inductive hypothesis, note that we may assume that $G_i$ is obtained by removing at most $2d\log n$ vertices and at most $d^2\le 10d\log n$ edges from $G$. Consequently, the bound on the length of $P_i$ would follow from Lemma \ref{lem_sparse_random_graphs_properties}(\ref{item_4}) by showing that $|B_l(y_i, G_i)|, |B_l(x_{i+1}, G_i)| \geq (\log n)^{3}$.
First, we claim that $B_l(x_{i+1},G_i) = B_l(x_{i+1},G_1)$. To see this, first note that for every $j \leq i-1$, the sets $B_l(y_{j},G)$ and $B_l(x_{i+1},G)$ are disjoint and consequently, so are $B_l(y_{j},G_j)$ and $B_l(x_{i+1},G_j)$. Since $P_j$ is a shortest path from $y_j$ to $\bigcup_{j+1 \leq k \leq d}\{x_k, y_k\}$, it follows that $V(P_j) \cap B_l(x_{i+1},G_j) = \emptyset$. Hence, $B_l(x_{i+1},G_{j+1}) = B_l(x_{i+1},G_j)$ for every $j \leq i-1$. Therefore, we have $|B_l(x_{i+1},G_i)| \geq (\log n)^{3}$.
Notice that by the same argument, we have $ B_l(y_{i},G_{i-1}) = B_l(y_{i},G_1)$. Next, by the minimality of $P_{i-1}$, it is clear that the set $V(P_{i-1}) \cap B_l(y_{i},G_{i-1})$ is contained in the set $V_{i-1}'$ of the last $l+1$ vertices of $P_{i-1}$. Let $H_i$ be the subgraph of $H_1$ induced by the vertex subset $V(H_1)\backslash V_{i-1}'$. We deduce from Lemma \ref{lem_sparse_random_graphs_properties}(\ref{item_2}) that no vertex of $G_1$ has more than two neighbours in $V_{i-1}'$; so $H_i$ has minimum degree at least 10. By Lemma \ref{lem_sparse_random_graphs_properties}(\ref{item_3}) we then have $|B_{l}(y_i, H_i)|\ge(\log n)^{3}$. Since $V(H_i)\cap B_l(y_i,G_1)\subseteq V(G_i)\cap B_l(y_i,G_1)$, it follows that $B_{l}(y_i, H_i) \subseteq B_{l}(y_i, G_{i})$. Hence, $|B_{l}(y_i, G_{i})| \geq (\log n)^{3}$ and Lemma \ref{lem_main_sparse_random_graphs} follows by Lemma \ref{lem_sparse_random_graphs_properties}(\ref{item_4}).
\end{proof}
\subsection{Proof of Lemma \ref{lem_sparse_random_graphs_properties}}
We now complete the proof of Theorem \ref{thm_random_graphs} by proving Lemma \ref{lem_sparse_random_graphs_properties}.
\begin{proof}[Proof of Lemma \ref{lem_sparse_random_graphs_properties}]
Parts (\ref{item_0})--(\ref{item_1}) of Lemma \ref{lem_sparse_random_graphs_properties} follow easily from the standard Chernoff-type bounds for the tails of binomial random variables. Part (\ref{item_2}) is established using a straightforward first moment estimate.
To prove Part (\ref{item_3}), we assume that $G$ satisfies Parts (\ref{item_1}) and (\ref{item_2}). Let $G'$ be a subgraph of $G$ with minimum degree at least $10$. Let $x\in V(G')$ and write $\Gamma_i=\Gamma_{i}(x,G')$ and $B_i=B_{i}(x,G')$.
\begin{claim}\label{doubling}
$|\Gamma_i|\ge 2|B_{i-1}|$ for $1\le i\le 10\log\log n$.
\end{claim}
\begin{proof}
By Part (\ref{item_1}), $|\Gamma_i|\le (2d)^i\log n$ for $1\le i\le 10\log \log n$ and so, $|B_i|\le 2(2d)^{i+1}\log n\le \sqrt{n}$.
So by Part (\ref{item_2}), $B_i$ spans at most $2|B_i|$ edges for every $1\le i\le 10\log \log n$.
Since every vertex in $B_{i-1}$ has degree at least $10$ in $G'$ and $B_{i-1}$ spans at most $2|B_{i-1}|$ edges, there are at least $6|B_{i-1}|$ edges from $B_{i-1}$ to $\Gamma_i$. As $B_{i-1}$ is connected, $B_i$ must span at least $7|B_{i-1}|-1$ edges.
Since $B_i$ spans at most $2|B_i|$ edges, this implies that $|B_i|\ge (7|B_{i-1}|-1)/2\ge 3|B_{i-1}|$, i.e.,~$|\Gamma_i|\ge 2|B_{i-1}|$.
\end{proof}
Claim \ref{doubling} implies in particular that $|\Gamma_i|\ge 2^i$ for $i\le 10\log\log n$, proving Part (\ref{item_3}). In order to prove Part (\ref{item_4}), we shall need the following.
\begin{claim} \label{claim_big_neighbourhood}
Let $l=3\log \log n$. Let $G'$ be a graph obtained from $G$ by removing at most $20d\log n$ edges and let $x$ be a vertex in $G'$ satisfying $\vert B_l(x,G')\vert\geq (\log n)^3$. Then w.h.p.,~for every such $G'$ and $x$, there exists an $i < \log n$ such that $|\Gamma_{i}(x,G')|\ge n/2d$.
\end{claim}
\begin{proof}
Write $\Gamma_i=\Gamma_i(x,G')$, $B_i=B_i(x,G')$ and let $E'$ be the set of edges removed from $G$ to obtain $G'$. Note that the assumption on $x$ implies in particular that there exists $k\le l$ such that $|\Gamma_{k}|\ge (\log n)^{2.5}$. By Part (\ref{item_1}), w.h.p.~we also have $|B_k|=o(n/d)$.
We show that w.h.p.,~for every $G'$ and $x$ as above and $i\ge k$, either $|\Gamma_{i+1}|\ge (d/2)|\Gamma_i|$ or $|\Gamma_i|\ge (n/2d)$. Note that this would prove Claim \ref{claim_big_neighbourhood}.
Conditional on $|\Gamma_i|\le \frac{n}{2d}$ and on $|\Gamma_{j+1}|\ge \frac{d}{2}|\Gamma_j|$ for $k\le j<i$, we shall bound from above the probability that $|\Gamma_{i+1}|\le \frac{d}{2}|\Gamma_i|$. Write $A_i=V(G')\backslash (\Gamma_i\cup V(E'))$, and let $A_i'$ be the set of those vertices in $A_i$ which are adjacent to some vertex in $\Gamma_i$. It is not hard to check that since $|E'|\le 20d\log n$, we have $|A_i|\ge 9n/10$. We shall estimate the probability that $|A_i'|\le (d/2)|\Gamma_i|$, conditional on $|\Gamma_i|\ge (\log n)^{2.5}$ and $|A_i|\ge 9n/10$.
The probability that a particular vertex in $A_i$ is adjacent to some vertex in $\Gamma_i$ is
\begin{equation*}
(1-(1-d/n)^{|\Gamma_i|})\ge \frac{d|\Gamma_i|}{n}-\frac{1}{2}\left(\frac{d|\Gamma_i|}{n}\right)^2\ge\frac{3d|\Gamma_i|}{4n}.
\end{equation*}
Thus the expected size of $A_i'$ is at least $\frac{27d|\Gamma_i|}{40}$. By appealing to the standard bounds for the tail of a binomial random variable, we have
\[ \mathbb{P}\big[\,|A_i'|\le \frac{d}{2}|\Gamma_i|\,\big]\le \mathbb{P}\Big[\,|A_i'|\le \frac{3}{4}\mathbb{E}|A_i'|\,\Big]\le \exp{(-\mathbb{E}|A_i'|/32)}\le \exp{(-d(\log n)^{2.5}/100)}.\]
Since we have $2^{O(d(\log n)^2)}$ choices for $G'$, $x$ and $i$, this implies that w.h.p., $\vert A_i'\vert \geq (d/2) \vert \Gamma_i \vert$ as required.
\end{proof}
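The elementary estimate in the display above can be spot-checked numerically; the following sketch (editorial, not part of the proof) samples the regime $mq\le 1/2$, which corresponds to $d|\Gamma_i|/n\le 1/2$ here.

```python
# Numerical check (illustrative) of the estimate used above:
# 1 - (1-q)^m >= m*q - (m*q)^2/2 >= (3/4)*m*q whenever m*q <= 1/2.
import numpy as np

rng = np.random.default_rng(1)
for _ in range(10000):
    m = rng.integers(1, 1000)
    q = rng.uniform(0, 0.5 / m)  # ensures m*q <= 1/2
    lhs = 1 - (1 - q) ** m
    assert lhs >= m * q - (m * q) ** 2 / 2 - 1e-12
    assert m * q - (m * q) ** 2 / 2 >= 0.75 * m * q - 1e-12
print("both inequalities hold on all samples")
```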
We now complete the proof of Part (\ref{item_4}) of Lemma \ref{lem_sparse_random_graphs_properties}. Using Claim \ref{claim_big_neighbourhood}, we can find $s,t < \log n$ such that $|\Gamma_s(x,G')|, |\Gamma_t(y,G')|\ge n/2d$.
If $B_s(x,G')\cap B_t(y,G')\neq\emptyset$, the assertion of Part (\ref{item_4}) follows. Otherwise, note that the probability that there are no edges between $\Gamma_s(x,G')$ and $\Gamma_t(y,G')$ is at most $(1-\frac{d}{n})^{(\frac{n}{2d})^2}\le e^{-\frac{n}{4d}}$. Since we have $2^{O(d(\log n)^2)}$ choices for $x,y$ and $G'$, this implies that w.h.p., the assertion of Part (\ref{item_4}) holds. This completes the proof of Part (\ref{item_4}).
\end{proof}
We have established Lemma \ref{lem_sparse_random_graphs_properties}, thus completing the proof of Theorem \ref{thm_random_graphs}.
\end{proof}
\section{Dense graphs}\label{dense}
\begin{proof}[Proof of Theorem \ref{thm_dense}]
Let $c>0$ and let $G$ be a graph on $n$ vertices such that for every $k\ge \sqrt{n}$, every set of $k$ vertices spans at least $ck^2$ edges.
We define a sequence of subgraphs $G=G_0\supseteq G_1\supseteq \ldots \supseteq G_{l-1}$ and a related sequence of graphs $H_1,H_2,\ldots,H_l$ as follows. Start by setting $G_0=G$. If $|V(G_{i-1})|\le\sqrt{n}$, we stop and take $H_i = G_{i-1}$. Otherwise, we take $H_i$ to be the $(c|G_{i-1}|/2)$-core of $G_{i-1}$ and define $G_i$ to be the graph induced by $V(G_{i-1})\backslash V(H_i)$. Note that the sets $V(H_i)$ form a partition of $V(G)$.
Let us write $g_i$ and $h_i$ respectively for the number of vertices of $G_i$ and $H_i$. It is well known that the $k$-core of a graph can be found by removing vertices of degree at most $k-1$, in arbitrary order, until no such vertices exist. So the number of edges removed from $G_{i-1}$ to obtain its $(cg_{i-1}/2)$-core is at most $cg_{i-1}^2/2$. Since $G_{i-1}$ spans at least $cg_{i-1}^2$ edges, at least $cg_{i-1}^2/2$ edges remain; and since a graph on $h_i$ vertices spans at most $h_i^2/2$ edges, it follows that $h_i\ge \sqrt{c}g_{i-1} \ge cg_{i-1}\ge cg_i$.
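This core decomposition is entirely effective. The following is a minimal sketch of the peeling procedure (our own illustration, assuming the graph is given as a \texttt{networkx} graph; \texttt{networkx}'s built-in \texttt{k\_core} expects an integer $k$, so the fractional threshold $cg_{i-1}/2$ is peeled by hand):
\begin{verbatim}
import networkx as nx

def core_decomposition(G, c):
    """Partition V(G) into V(H_1), V(H_2), ... as in the proof:
    repeatedly take the (c*g/2)-core of the current graph (g = its
    order), stopping once at most sqrt(n) vertices remain."""
    n = G.number_of_nodes()
    parts, current = [], G.copy()
    while current.number_of_nodes() > n ** 0.5:
        k = c * current.number_of_nodes() / 2
        core = current.copy()
        low = [v for v in core if core.degree(v) < k]
        while low:                       # peel low-degree vertices
            core.remove_nodes_from(low)
            low = [v for v in core if core.degree(v) < k]
        if core.number_of_nodes() == 0:  # cannot happen when every set
            break                        # of k vertices spans >= c*k^2 edges
        parts.append(set(core))
        current.remove_nodes_from(list(core))
    parts.append(set(current))           # the final part V(H_l)
    return parts
\end{verbatim}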
We first separate the internal edges of the graphs $H_i$. Note that $H_i$ has minimum degree at least $cg_{i-1}/2\ge ch_i/2$. So we conclude from Theorem \ref{thm_linear_min_deg} that $H_i$ has a separating path system of size at most $488h_i/c^2$ for every $1\le i<l$. Also, since $|V(H_l)|\leq \sqrt{n}$, we may separate the edges of $H_l$ trivially (by adding each edge individually to our separating path system); this contributes at most $n$ paths. Since the graphs $H_i$ are pairwise vertex disjoint, we may separate the internal edges of the $H_i$'s using at most $488n/c^2+n$ paths in total.
It remains to separate the crossing edges between the $H_i$'s. For $1\leq i<l$, let $E_i$ be the set of edges of the form $xy$ where $x\in V(H_i),y\in V(G_i)$ and let $E_i'$ be the set of such edges $xy$ where $y$ has at least three neighbours in $H_i$. Note that every edge of $G$ not contained in any of the $H_i$'s is contained in one of the $E_i$'s.
We define a $g_i$-edge-coloured multigraph $F_i$ on the vertex set of $H_i$ as follows. If $v\in V(G_i)$ has at least three neighbours in $H_i$, say $x_{1},\ldots,x_k$, we add the edges $x_1x_2,x_2x_3,\ldots,x_kx_1$ to $F_i$ and colour these edges with the colour $v$; in other words, we add a $v$-coloured cycle through the neighbours of $v$. Note that the degree of every vertex in $F_i$ (as a multigraph) is at most $2g_i$ and every colour class contains at most $h_i$ edges. Since each edge of $F_i$ has at most $4g_i + h_i\le 5h_i/c$ edges which are either incident to it or from the same colour class (recall that $h_i\ge cg_i$), we can, as in Section \ref{strategy}, decompose $F_i$ into at most $5h_i/c$ rainbow matchings $M_1,\ldots,M_{5h_i/c}$, where a rainbow matching is a matching containing at most one edge from each colour class.
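The decomposition into rainbow matchings can be carried out greedily, first-fit style: process the coloured edges one by one, placing each into the first class containing no edge that shares an endpoint or a colour with it. An edge conflicting with at most $D$ others always fits among the first $D+1$ classes, which is, up to the additive one, the bound used above. A minimal sketch, with $F_i$ represented as a list of \texttt{(u, v, colour)} triples (a representation of our choosing):
\begin{verbatim}
def rainbow_matchings(edges):
    """Greedy first-fit decomposition of an edge-coloured multigraph
    into rainbow matchings; `edges` is a list of (u, v, colour)
    triples.  Each class returned is a matching that uses each
    colour at most once."""
    classes = []   # per class: (used vertices, used colours, edges)
    for (u, v, col) in edges:
        for verts, cols, match in classes:
            if u not in verts and v not in verts and col not in cols:
                verts.update((u, v))
                cols.add(col)
                match.append((u, v, col))
                break
        else:      # conflicts with every existing class: open a new one
            classes.append(({u, v}, {col}, [(u, v, col)]))
    return [match for _, _, match in classes]
\end{verbatim}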
We now construct another sequence of rainbow matchings decomposing $F_i$ with the following property. Denote by $e_1,\ldots,e_m$ the edges in $M_j$ and let $v_1,\ldots,v_m$ be their respective colours. Let $\alpha_k, \beta_k$ be the two edges adjacent to $e_k$ in the cycle whose edges have colour $v_k$. In our second sequence of rainbow matchings, we would like the matching containing $e_k$ to avoid $e_t, \alpha_t, \beta_t$ for every $t\neq k$. Since each edge has to avoid at most $4g_i + h_i + 3h_i\le 8h_i/c$ other edges, we can find such a decomposition into at most $8h_i/c$ matchings, say, $M_{5h_i/c+1}, \dots, M_{13h_i/c}$.
We now mimic the proof of Theorem \ref{thm_linear_min_deg}. Let us define a graph $H_i'$ on $V(H_i)$ where we join two vertices if they have more than $c^2h_i/24$ common neighbours in $H_i$. For each $1 \leq j \leq 13h_i/c$, we can find a collection of at most $4/c$ paths whose edges alternate between those of $M_j$ and $H_i'$ which cover each edge of $M_j$ exactly once; we obtain $52h_i/c^2$ such paths in total. We divide these paths into subpaths of length at most $c^2h_i/48$ each, resulting in a collection of at most $96h_i/c^3 + 52h_i/c^2$ paths. Each such path can be transformed into a path in $G$ by replacing every edge from $H_i'$ with a suitably chosen path of length two in $H_i$ and every coloured edge $e=xy$ from $M_j$ with the path $\{xv,vy\}$ where $v$ is the colour of $e$. Since the matchings are rainbow matchings, these paths are guaranteed to be simple. It is easy to see that the collection of $96h_i/c^3 + 52h_i/c^2$ paths defined above separates $E_i'$.
It remains to separate edges in $E_i\backslash E_i'$ for $1\le i< l$. Since every vertex $y\in V(G_i)$ with fewer than three neighbours in $H_i$ contributes at most two such edges, there are at most $2(g_1+\ldots+g_{l-1})\le 2(h_1+\ldots+h_l)/c \le 2n/c$ such edges; we add each such edge to our separating path system.
It is easy to check that we have constructed a separating path system of $G$ of cardinality at most $488n/c^2 + n + 96n/c^3 + 52n/c^2 + 2n/c \leq 638n/c^3$, where the final inequality uses $c\le 1/2$ (no set of $k$ vertices can span more than $k^2/2$ edges). The result follows.
\end{proof}
\section{Trees}\label{trees}
We begin by collecting together a few simple observations into the following lemma.
\begin{lem}\label{lem_leaves_deg2}
Let $T$ be a tree on $n\geq 3$ vertices, and let $\mathcal{P}$ be a separating path system of $T$. Then the following assertions hold.
\begin{enumerate}[(i)]
\item With the exception of at most one leaf, every leaf of $T$ is an endpoint of a path in $\mathcal{P}$.
\item If a path in $\mathcal{P}$ has two leaves $u,v$ as its endpoints, then there must be at least one path in $\mathcal{P}$ which has exactly one of $u,v$ as an endpoint.
\item Every vertex of degree two in $T$ is an endpoint of a path in $\mathcal{P}$.
\end{enumerate}
\end{lem}
\begin{proof}
Clearly, a leaf must be an endpoint of any path through it. Since $\mathcal{P}$ separates $E(T)$, there is at most one edge of $T$ which is not covered by any path in $\mathcal{P}$. As $n\geq 3$, $T$ does not consist of a single edge and thus at most one leaf of $T$ is visited by no path in $\mathcal{P}$. This establishes (i).
Suppose that we have a path $P\in \mathcal{P}$ having two leaves $u,v$ of $T$ as its endpoints. Let $e_u, e_v$ be the edges incident to $u,v$ respectively. Since $\mathcal{P}$ separates $E(T)$, there must be some path $P'\in \mathcal{P}$ containing exactly one of $e_u, e_v$. This establishes (ii).
Suppose that $v$ is a vertex of degree two in $T$; let $e_1, e_2$ be the two edges of $T$ incident to $v$. Since $\mathcal{P}$ separates $E(T)$, there must be some path $P\in \mathcal{P}$ containing exactly one of $e_1, e_2$. Since $v$ has degree two, it must be an endpoint of this path $P$. This establishes (iii).
\end{proof}
We split the proof of Theorem \ref{thm_trees} into two parts.
\begin{proof}[Proof of the lower bound in Theorem \ref{thm_trees}]
Let $T$ be a tree on $n\geq 4$ vertices and let $\mathcal{P}$ be a separating path system of $T$. We shall show that $|\mathcal{P}| \geq \left\lceil\frac{n+1}{3}\right\rceil$.
First, suppose that there is a leaf $v$ which is not the endpoint of any path in $\mathcal{P}$. Let $e_v$ be the edge of $T$ incident to $v$. Since $\mathcal{P}$ separates $E(T)$, $e_v$ is the unique edge of $T$ not covered by any path in $\mathcal{P}$. Delete $v$ from $T$ to obtain a tree $T'$ on $n-1\geq 3$ vertices. Let $d_1$ and $d_2$ denote the number of leaves and degree two vertices in $T'$.
The family $\mathcal{P}$ both covers and separates $E(T')$. From Lemma~\ref{lem_leaves_deg2}, we note that every leaf and every vertex of degree two of $T'$ is the endpoint of at least one path from $\mathcal{P}$. Furthermore, we know that if a path from $\mathcal{P}$ has a pair of leaves for its endpoints, then at least one of those leaves is the endpoint of at least one other path from $\mathcal{P}$.
We claim that $\mathcal{P}$ contains at least $(2d_1+d_2)/3$ paths. To see this, start by placing a red token on every leaf and a blue token on every vertex of degree two in $T'$. We then iterate through the paths of $\mathcal{P}$ in some order. In each iteration, we remove whatever tokens there are at the endpoints of the current path. If both the tokens removed are red, then we know that both the endpoints $u,v$ of the current path are leaves and that at least one of them, say $u$, is the endpoint of a different path; we then place a blue token on $u$. Writing $R,B$ respectively for the number of red and blue tokens remaining on the tree, we see that the quantity $2R + B$ does not decrease by more than three in any iteration. Since $\mathcal{P}$ is a separating path system, all the tokens must have been removed by the end of the procedure. It follows that
\[ \vert\mathcal{P} \vert \geq \frac{2d_1+d_2}{3}. \]
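The token procedure is easy to run mechanically, which is a convenient way to check the accounting on small examples. A minimal sketch (our own illustration; \texttt{degrees} maps each vertex of $T'$ to its degree, each path is a list of vertices, and the system is assumed to cover and separate $E(T')$ as in the proof):
\begin{verbatim}
def token_lower_bound(degrees, paths):
    """Run the token procedure: red tokens (worth 2) on leaves, blue
    tokens (worth 1) on degree-two vertices.  Returns 2*d1 + d2; since
    each path lowers the total by at most 3 and every token is
    eventually removed, len(paths) >= (2*d1 + d2) / 3."""
    token = {v: 2 for v, d in degrees.items() if d == 1}
    token.update({v: 1 for v, d in degrees.items() if d == 2})
    start = sum(token.values())
    remaining = list(paths)
    while remaining:
        p = remaining.pop()
        u, w = p[0], p[-1]
        if token.pop(u, 0) + token.pop(w, 0) == 4:
            # two red tokens went at once: by Lemma (ii) some other,
            # still-unprocessed path ends at exactly one of u, w, and
            # that endpoint gets a blue token back
            for q in remaining:
                hits = {q[0], q[-1]} & {u, w}
                if len(hits) == 1:
                    token[hits.pop()] = 1
                    break
    assert not token  # a covering, separating system removes all tokens
    return start
\end{verbatim}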
Now, counting degrees in $T'$ (which has $n-1$ vertices and $n-2$ edges, and in which every vertex counted by neither $d_1$ nor $d_2$ has degree at least three), we have
\[ 2e(T')=2(n-2)\geq d_1 + 2d_2+ 3(n-1-d_1-d_2), \]
which we can rearrange to get
\[ \frac{2d_1+d_2}{3}\geq \frac{n+1}{3}. \]
Taken together, these inequalities show that $\vert \mathcal{P}\vert \geq (n+1)/3$.
If on the other hand every leaf of $T$ is the endpoint of some path from $\mathcal{P}$, then, by repeating the argument above with $T$ instead of $T'$, we find that $\vert \mathcal{P}\vert \geq (n+2)/3$. We know from Lemma \ref{lem_leaves_deg2} that these are the only two possibilities; consequently, we are done.
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=2]
\foreach \x in {1,2,3,4,5,6}
\node (\x) at (\x, 0) [inner sep=0.5mm, circle, fill=black!100] {};
\foreach \x in {1,2,3,4,5,6}
\node (1\x) at (\x, 1) [inner sep=0.5mm, circle, fill=black!100] {};
\foreach \x in {1,2,3,4,5,6}
\node (2\x) at (\x, 2) [inner sep=0.5mm, circle, fill=black!100] {};
\foreach \x in {1,2,3,4,5,6}
\draw (\x,0) -- (\x,2);
\draw (1,0) -- (6,0);
\draw (0.5, 1) -- (0.5, -0.5) -- (6.5, -0.5) -- (6.5, 1);
\draw (1, -0.25) -- (6, -0.25);
\foreach \x in {1,2,3,4,5}
\draw (\x+0.25, 2) -- (\x+0.25, 0.25) -- (\x+0.75, 0.25) -- (\x+0.75, 1);
\end{tikzpicture}
\end{center}
\caption{A hair comb of order $18$ and a separating path system of $7$ paths.}\label{comb}
\end{figure}
To see that this lower bound is best possible, consider the family of \emph{hair combs}, where the hair comb of order $3n$ is obtained by starting with a \emph{spine} consisting of a path of length $n-1$ and then attaching a path of length two to each vertex of the spine. It is an easy exercise to show that this lower bound is tight for hair combs (see Figure~\ref{comb} for an example of an optimal separating path system, or Proposition~\ref{proposition: comb} in the Appendix for a general construction).
\end{proof}
We now turn our attention to the second part of the proof of Theorem~\ref{thm_trees}.
\begin{proof}[Proof of the upper bound in Theorem~\ref{thm_trees}]
We shall show that $f(T) \leq \left\lfloor \frac{2(\vert V(T)\vert-1)}{3}\right\rfloor$ by induction on $n=\vert V(T)\vert$.
There is, up to isomorphism, only one tree of order $n$ for each of $n=1,2,3$, namely the path of length $n-1$. It is trivial to check that the claim holds for these trees.
Let $T$ be a tree of order $n>3$. If $T$ is a path, then it is easy to show using Lemma \ref{lem_leaves_deg2} that $f(T)=\lfloor\frac{n}{2}\rfloor$ (see Proposition~\ref{proposition: f(Pn)} in the Appendix); furthermore, $\lfloor\frac{n}{2}\rfloor=\lceil\frac{n-1}{2}\rceil \leq \lfloor \frac{2(n-1)}{3}\rfloor$ for all $n\geq 4$, so the claimed bound holds.
Suppose therefore that $T$ is not a path. It must contain at least one vertex $u$ with three distinct neighbours $v_1, v_2, v_3$. Contract the edges $uv_1$, $uv_2$, $uv_3$ to obtain a new tree $T'$ on $n-3$ vertices.
We find a separating path system $\mathcal{P}'$ of $T'$ of size at most $2(n-4)/3$. We may think of $\mathcal{P}'$ as a family of paths of $T$ since paths in $T'$ map to paths in $T$ in a natural way: a path in $T'$ is lifted up to a path in $T$ with the same endpoints (where we identify the vertex resulting from the contraction of $u,v_1,v_2, v_3$ with $u$). Consider the family
\[\mathcal{P}=\mathcal{P}'\cup \{v_1uv_2, v_2uv_3\}.\]
Since $\mathcal{P}'$ separates $E(T')$, it readily follows that $\mathcal{P}'$, when viewed as a family of paths of $T$, separates $E(T)\setminus \{uv_1, uv_2, uv_3\}$. The two paths $v_1uv_2, v_2uv_3$ then separate $uv_1, uv_2, uv_3$ from each other and from the rest of $E(T)$. Thus,
\[ \vert \mathcal{P} \vert \leq \frac{2(n-4)}{3}+2=\frac{2(n-1)}{3}. \]
We are done by induction.
To see that this upper bound is best possible, consider the family of \emph{stars}, where the star of order $n$ consists of a single internal vertex joined to $n-1$ leaves. By mimicking the proof of the lower bound using Lemma \ref{lem_leaves_deg2}, it is an easy exercise to verify that the upper bound is tight for stars.
\end{proof}
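For very small trees, the bounds of Theorem \ref{thm_trees} can be confirmed exhaustively. A minimal brute-force sketch (our own illustration; exponential in the number of paths, so only suitable for toy examples, and \texttt{networkx} is assumed):
\begin{verbatim}
import itertools
import networkx as nx

def tree_paths(T):
    """All paths of T with at least one edge, each as a set of edges."""
    paths = []
    for u, v in itertools.combinations(T.nodes(), 2):
        p = nx.shortest_path(T, u, v)        # the unique u-v path
        paths.append(frozenset(frozenset(e) for e in zip(p, p[1:])))
    return paths

def is_separating(T, system):
    sigs = [tuple(frozenset(e) in p for p in system) for e in T.edges()]
    return len(set(sigs)) == T.number_of_edges()

def f(T):
    """Size of a minimum separating path system of T, by brute force."""
    paths = tree_paths(T)
    for k in range(1, len(paths) + 1):
        for system in itertools.combinations(paths, k):
            if is_separating(T, system):
                return k

# e.g. f(nx.path_graph(6)) == 3 and f(nx.star_graph(4)) == 2,
# matching floor(6/2) and floor(2*(5-1)/3) respectively
\end{verbatim}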
\section{Concluding Remarks}\label{conclusion}
There remain a number of interesting questions which merit investigation. While the main open problem of course is to establish that $f(n) = O(n)$, there are many other attractive related extremal questions. For instance, it would be interesting to determine the value of $f(K_n)$ exactly; one can also ask the same question for the $d$-dimensional hypercube $Q_d$. It is easy to cover $Q_d$ with $d-1$ ladders, so $f(Q_d)=O(d^2)$. On the other hand, we know from the information theoretic lower bound that $f(Q_d)=\Omega(d)$. Nailing down the correct order of $f(Q_d)$ would be interesting.
A different question, though of a similar flavour, raised by Bondy \citep{Bondy} and answered by Li \citep{HaoLi}, is that of finding \emph{perfect path double covers}, i.e., a set of paths of a graph such that each edge of the graph belongs to exactly two of the paths and each vertex of the graph is an endpoint of exactly two of the paths. We suspect that the tools developed to tackle this problem and its variants might prove useful in attempting to establish that $f(n) = O(n)$.
\textbf{Acknowledgements.} We would like to thank the organisers of the $5^{th}$ Eml\'ekt\'abla Workshop in Erd\H{o}tarcsa, Hungary where some of the research in this paper was carried out.
| {
"timestamp": "2013-11-21T02:08:46",
"yymm": "1311",
"arxiv_id": "1311.5051",
"language": "en",
"url": "https://arxiv.org/abs/1311.5051",
"abstract": "We study separating systems of the edges of a graph where each member of the separating system is a path. We conjecture that every $n$-vertex graph admits a separating path system of size $O(n)$ and prove this in certain interesting special cases. In particular, we establish this conjecture for random graphs and graphs with linear minimum degree. We also obtain tight bounds on the size of a minimal separating path system in the case of trees.",
"subjects": "Combinatorics (math.CO)",
"title": "Separating path systems",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9879462222582661,
"lm_q2_score": 0.8006920044739461,
"lm_q1q2_score": 0.7910406410124338
} |
https://arxiv.org/abs/1106.3622 | On the connectivity of visibility graphs | The visibility graph of a finite set of points in the plane has the points as vertices and an edge between two vertices if the line segment between them contains no other points. This paper establishes bounds on the edge- and vertex-connectivity of visibility graphs.Unless all its vertices are collinear, a visibility graph has diameter at most 2, and so it follows by a result of Plesník (1975) that its edge-connectivity equals its minimum degree. We strengthen the result of Plesník by showing that for any two vertices v and w in a graph of diameter 2, if deg(v) <= deg(w) then there exist deg(v) edge-disjoint vw-paths of length at most 4. Furthermore, we find that in visibility graphs every minimum edge cut is the set of edges incident to a vertex of minimum degree.For vertex-connectivity, we prove that every visibility graph with n vertices and at most l collinear vertices has connectivity at least (n-1)/(l-1), which is tight. We also prove the qualitatively stronger result that the vertex-connectivity is at least half the minimum degree. Finally, in the case that l=4 we improve this bound to two thirds of the minimum degree. | \section{Introduction}
\seclabel{intro}
Let $P$ be a finite set of points in the plane. Two distinct points
$v$ and $w$ in the plane are \emph{visible} with respect to $P$ if no
point in $P$ is in the open line segment $vw$. The
\emph{visibility graph} of $P$ has vertex set $P$, where two vertices are adjacent if and only if they are visible with respect to $P$. So the visibility graph is obtained by drawing lines
through each pair of points in $P$, where two points are adjacent if they
are consecutive on such a line. Visibility graphs have
interesting graph-theoretic properties.
For example, K\'ara, P\'or and Wood \cite{KPW} showed that $K_4$-free visibility graphs are $3$-colourable.
See \cite{pfender,KPW,porwood,matoudcg} for results and conjectures about the clique and chromatic number of visibility graphs. Further related results can be found in \cite{EmptyPentagon-GC,DPT09}.
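For points with integer (or rational) coordinates, the visibility graph can be computed exactly: two points are adjacent unless a third point lies on the open segment between them, which is an exact test via a cross product and a bounding-box check. A minimal sketch of this definition (our own illustration, using \texttt{networkx}):
\begin{verbatim}
import itertools
import networkx as nx

def blocked(p, q, r):
    """True if r lies on the open segment pq (exact for integer points)."""
    cross = (q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0])
    return (cross == 0 and r != p and r != q and
            min(p[0], q[0]) <= r[0] <= max(p[0], q[0]) and
            min(p[1], q[1]) <= r[1] <= max(p[1], q[1]))

def visibility_graph(points):
    G = nx.Graph()
    G.add_nodes_from(points)
    for p, q in itertools.combinations(points, 2):
        if not any(blocked(p, q, r) for r in points):
            G.add_edge(p, q)
    return G

# the 3x3 grid has at most 3 collinear points; its minimum degree is 5
# (attained at the corners), and indeed its edge-connectivity equals 5
G = visibility_graph([(x, y) for x in range(3) for y in range(3)])
assert nx.edge_connectivity(G) == 5
\end{verbatim}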
The purpose of this paper is to study the edge- and vertex-connectivity of visibility graphs.
A graph $G$ on at least $k+1$ vertices is $k$-vertex-connected ($k$-edge-connected) if $G$ remains connected whenever fewer than $k$ vertices (edges) are deleted. Menger's theorem says that this is equivalent to the existence of $k$ vertex-disjoint (edge-disjoint) paths between each pair of vertices.
Let $\kappa(G)$ and $\lambda(G)$ denote the vertex- and
edge-connectivity of a graph $G$.
Let $\delta(G)$ denote the minimum
degree of $G$. We have $\kappa(G)\leq \lambda(G)\leq \delta(G)$. For these and other basic results in graph theory see a standard text book such as \cite{diestel}.
If a visibility graph $G$ has $n$ vertices, at most $\ell$ of which are collinear, then $\delta(G) \geq \frac{n-1}{\ell-1}$: the $n-1$ other vertices are distributed among the lines through a given vertex $v$, each such line contains at most $\ell-1$ of them, and $v$ sees the nearest vertex on each of these lines. We will show that both edge- and vertex-connectivity are at least $\frac{n-1}{\ell-1}$ (Theorem \ref{nledge} and Corollary \ref{nlvert}). Since there are visibility graphs with $\delta = \frac{n-1}{\ell-1}$, these lower bounds are best possible.
We will refer to visibility graphs whose vertices are not all collinear as \emph{non-collinear visibility graphs}.
Non-collinear visibility graphs have diameter $2$ \cite{KPW}, and it is known that graphs of diameter $2$ have edge-connectivity equal to their minimum degree \cite{plesnik}.
We strengthen this result to show that if a graph has diameter $2$ then between any two vertices $v$ and $w$ with $\deg(v) \leq \deg(w)$, there are $\deg(v)$ edge-disjoint paths of length at most $4$ (Theorem \ref{4paths}).
We also characterise minimum edge cuts in visibility graphs as the sets of edges incident to a vertex of minimum degree (Theorem \ref{edgecut}).
With regard to vertex-connectivity, our main result is that $\kappa\geq \frac{\delta}{2} +1$ for all non-collinear visibility graphs (Theorem \ref{halfdelta}).
This bound is qualitatively stronger than the bound $\kappa \geq \frac{n-1}{\ell-1}$ since it is always within a factor of $2$ of being optimal.
In the special case of at most four collinear points, we improve this bound to $\kappa \geq \frac{2\delta+1}{3}$ (Theorem \ref{4line}).
We conjecture that $\kappa \geq \frac{2\delta+1}{3}$ for all visibility graphs. This bound would be best possible since, for each integer $k$, there is a visibility graph with a vertex cut of size $2k+1$, but minimum degree $\delta = 3k+1$. Therefore the vertex-connectivity is at most $2k+1 =\frac{2\delta+1}{3}$. Figure~\ref{twothirds} shows the case $k=4$.
\begin{figure}[!h]
\includegraphics{Construction}
\caption{\label{twothirds}A visibility graph with vertex-connectivity
$\frac{2\delta+1}{3}$. The black vertices are a cut set. The minimum
degree $\delta = 3k+1$ is achieved, for example, at the top left vertex. Not all edges are drawn.}
\end{figure}
A central tool in this paper, which is of independent interest, is a kind of bipartite visibility graph.
Let $A$ and $B$ be disjoint sets of points in the plane.
The \emph{bivisibility graph} $\mathcal{B}(A,B)$ of $A$ and $B$ has vertex set $A\cup B$,
where points $v\in A$ and $w\in B$ are adjacent if and only if they are
visible with respect to $A\cup B$.
The following simple observation is used several times in this paper.
\begin{observation}\label{obs}
Let $G$ be a visibility graph. Let $\{A,B,C\}$ be a partition of $V(G)$ such that $C$ separates $A$ and $B$.
If $\mathcal{B}(A,B)$ contains $t$ pairwise non-crossing edges, then $|C| \geq t$, since there must be a distinct vertex of $C$ in the interior of each such edge (the endpoints of such an edge are non-adjacent in $G$, and no point of $A\cup B$ blocks them). In particular, if this holds for every such partition, then every vertex cut of $G$ has at least $t$ vertices.
\end{observation}
Finally, one lemma in particular stands out as being of independent interest. Lemma \ref{atlem} says that for any two properly coloured non-crossing geometric graphs that are separated by a line, there exists an edge joining them such that the union is a properly coloured non-crossing geometric graph.
\section{Edge Connectivity}
Non-collinear visibility graphs have diameter at most $2$ \cite{KPW}.
This is because even if two points cannot see each other, they can both see the point closest to the line containing them.
Plesn\'ik \cite{plesnik} proved that the edge-connectivity of a graph with diameter at most $2$ equals its minimum degree.
Thus the edge-connectivity of a non-collinear visibility graph equals its minimum degree.
There are several other known conditions that imply that the edge-connectivity of a graph is equal to the minimum degree; see for example \cite{volkmann, dankelvolk, pleznam}.
Here we prove the following
strengthening of the result of Plesn\'ik.
\begin{theorem}\label{4paths}
Let $G$ be a graph with diameter $2$.
Let $v$ and $w$ be distinct vertices in $G$.
Let $d:=\min\{\deg(v),\deg(w)\}$.
Then there are $d$ edge-disjoint paths of length at most $4$ between $v$ and $w$ in $G$.
\end{theorem}
\begin{proof}
First suppose that $v$ and $w$ are not adjacent.
Let $C$ be the set of common neighbours of $v$ and $w$.
For each vertex $c\in C$, take the path $(v,c,w)$.
Let $A$ be a set of $d-|C|$ neighbours of $v$ not in $C$.
Let $B$ be a set of $d-|C|$ neighbours of $w$ not in $C$.
Let $M_1$ be a maximal matching in the bipartite subgraph of $G$ induced by $A$ and $B$.
Call these matched vertices $A_1$ and $B_1$.
For each edge $ab\in M_1$, take the path $(v,a,b,w)$.
Let $A_2$ and $B_2$ respectively be the subsets of $A$ and $B$ consisting of the unmatched vertices.
Let $D:=V(G) \setminus (A_2 \cup B_2 \cup \{v,w\})$.
Let $M_2$ be an arbitrary pairing of vertices in $A_2$ and $B_2$.
For each pair $ab \in M_2$, take the path $(v,a,x,b,w)$, where $x$ is a common neighbour of $a$ and $b$ (which exists since $G$ has diameter $2$).
Since $x$ is adjacent to $a$, $x\neq w$, and by the maximality of $M_1$, $x\not\in B_2$.
Similarly, $x\neq v$ and $x\not\in A_2$, and so $x\in D$.
Thus there are three types of paths, namely $(v,C,w)$, $(v,A_1,B_1,w)$, and $(v,A_2,D, B_2,w)$.
Paths within each type are edge-disjoint.
Moreover, although the sets in $\{A_1,B_1,A_2,B_2,C,D,\{v\},\{w\}\}$ are not pairwise disjoint ($D$ contains $A_1$, $B_1$ and $C$), every edge used by one of the paths joins a specific pair of these sets, and each such pair of sets occurs in at most one of the three types.
Hence no edge is used twice, so all the paths are edge-disjoint.
The total number of paths is $|C|+|A_1|+|A_2|=d$.
This finishes the proof if $v$ and $w$ are not adjacent. If $G$ does contain the edge $vw$ then take this as the first path, then remove it and find $d-1$ paths in the same way as above.
\end{proof}
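The proof above is entirely constructive and translates directly into an algorithm. A minimal sketch (our own illustration, with \texttt{networkx} assumed and no attention paid to efficiency; $G$ must have diameter at most $2$):
\begin{verbatim}
def short_disjoint_paths(G, v, w):
    """min(deg v, deg w) pairwise edge-disjoint v-w paths of length
    at most 4, following the proof of Theorem 4paths."""
    d = min(G.degree(v), G.degree(w))
    paths, H = [], G.copy()
    if H.has_edge(v, w):                 # the edge vw is the first path
        paths.append([v, w])
        H.remove_edge(v, w)
    need = d - len(paths)
    Nv, Nw = set(H[v]), set(H[w])
    common = Nv & Nw
    used_common = list(common)[:need]    # paths through common neighbours
    paths += [[v, c, w] for c in used_common]
    t = need - len(used_common)
    A = list(Nv - common)[:t]            # t further neighbours of v ...
    B = list(Nw - common)[:t]            # ... and of w (disjoint from A)
    matched, used = [], set()
    for a in A:                          # greedy maximal matching A -- B
        for b in B:
            if b not in used and H.has_edge(a, b):
                matched.append((a, b)); used.add(b); break
    paths += [[v, a, b, w] for a, b in matched]
    A2 = [a for a in A if a not in {m[0] for m in matched}]
    B2 = [b for b in B if b not in used]
    for a, b in zip(A2, B2):             # leftover pairs: route via a
        x = (set(H[a]) & set(H[b])).pop()   # common neighbour (diam. 2)
        paths.append([v, a, x, b, w])
    return paths
\end{verbatim}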
\begin{corollary}
Let $G$ be a non-collinear visibility graph.
Then the edge-connectivity of $G$ equals its minimum
degree. Moreover, for distinct vertices $v$ and $w$, there
are $\min\{\deg(v),\deg(w)\}$ edge-disjoint paths of length at most $4$ between $v$ and $w$ in $G$.
\end{corollary}
We now show that not only is the edge connectivity as high as possible, but it is realised by paths with at most one bend in them.
\begin{theorem}\label{nledge}
Let $G$ be a visibility graph with $n$ vertices, at most $\ell$ of which are collinear.
Then $G$ is $\ceil{\frac{n-1}{\ell-1}}$-edge-connected, which is best possible.
Moreover, between each pair of vertices, there are
$\ceil{\frac{n-1}{\ell-1}}$ edge-disjoint 1-bend paths.
\end{theorem}
\begin{proof}
Let $v$ and $w$ be distinct vertices of $G$.
Let $V^*$ be the set of vertices of $G$ not on the line $vw$.
Let $m := |V^*|$.
Thus $m \geq n - \ell$.
Let $\mathcal{L}$ be the pencil of lines through $v$ and the vertices in $V^*$.
Let $\mathcal{M}$ be the pencil of lines through $w$ and the vertices in $V^*$.
Let $H$ be the bipartite graph with vertex set
$\mathcal{L}\cup\mathcal{M}$, where $L\in\mathcal{L}$ is adjacent to
$M\in\mathcal{M}$ if and only if $L\cap M$ is a vertex in $V^*$.
Thus $H$ has $m$ edges, and maximum degree at most $\ell-1$.
Hence, by K\"onig's theorem \cite{Konig1916}, $H$ is $(\ell-1)$-edge-colourable.
Thus $H$ contains a matching of at least $\frac{m}{\ell-1}$ edges.
This matching corresponds to a set $S$ of at least $\frac{m}{\ell-1}$ vertices in $V^*$,
no two of which are collinear with $v$ or $w$.
For each vertex $x\in S$, take the path in the visibility graph from
$v$ straight to $x$ and then straight to $w$.
These paths are edge-disjoint.
Adding the path straight from $v$ to $w$, we get at least $\frac{m}{\ell-1} +1$ paths,
which is at least $\frac{n-1}{\ell-1}$.
Figure~\ref{lbound} shows that this bound is best possible.
\end{proof}
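The matching step is equally concrete: identify each point of $V^*$ with the pair (line through $v$, line through $w$) on which it lies, and take a maximum matching in the resulting bipartite graph $H$. A minimal sketch (our own illustration; \texttt{networkx}'s Hopcroft--Karp routine plays the role of the K\"onig argument):
\begin{verbatim}
from fractions import Fraction
import networkx as nx
from networkx.algorithms import bipartite

def line_key(p, q):
    """The line through p and q, keyed by its (sign-invariant) slope;
    this identifies lines through a fixed base point p."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    return ('vert',) if dx == 0 else ('slope', Fraction(dy, dx))

def one_bend_midpoints(points, v, w):
    """A set S of points, no two collinear with v or with w; the
    1-bend paths v -- x -- w (x in S) are pairwise edge-disjoint."""
    H, rep = nx.Graph(), {}
    for x in points:
        if x in (v, w) or line_key(v, x) == line_key(v, w):
            continue                     # skip points on the line vw
        L = ('v',) + line_key(v, x)      # the line vx ...
        M = ('w',) + line_key(w, x)      # ... and the line wx
        H.add_edge(L, M)
        rep.setdefault((L, M), x)        # one point per pair of lines
    left = {u for u in H if u[0] == 'v'}
    match = bipartite.hopcroft_karp_matching(H, top_nodes=left)
    return [rep[(L, match[L])] for L in left if L in match]
\end{verbatim}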
\begin{figure}
\includegraphics{lbound}
\caption{\label{lbound}If each ray from $v$ through $V(G)$ contains $\ell$ vertices, the degree of $v$ is $\frac{n-1}{\ell -1}$.}
\end{figure}
We now prove that minimum sized edge cuts in non-collinear visibility graphs are only found around a vertex. To do this, we first characterise the diameter $2$ graphs for which it does not hold.
\begin{proposition}\label{deltacut}
Let $G$ be a graph with diameter at most $2$ and minimum degree $\delta \geq 2$.
Then $G$ has an edge cut of size $\delta$ that is not the set of edges incident to a single vertex if and only if $V(G)$ can be partitioned into $A\cup B\cup C$ such that:
\begin{itemize}
\item $G[A]\cong K_\delta$ and $|B\cup C|\geq \delta$,
\item each vertex in $A$ has exactly one neighbour in $B$ and no neighbours in $C$,
\item each vertex in $B$ has at least one neighbour in $A$, and
\item each vertex in $B$ is adjacent to each vertex in $C$.
\end{itemize}
\end{proposition}
\begin{proof}
If $G$ has the listed properties then the edges between $A$ and $B$ form a cut of size $\delta$ that is not the set of edges incident to a single vertex.
Conversely, suppose an edge cut of size $\delta$ separates the vertices of $G$ into two sets $X$ and $Y$ with $|X|>1$ and $|Y| >1$.
Each vertex of $X$ is incident to at least $\delta-(|X|-1)$ edges of the cut.
It follows that $\delta \geq |X|(\delta-(|X|-1))$.
Consequently, $|X|(|X|-1)\geq \delta(|X|-1)$ and thus $|X|\geq \delta$.
Analogously, $|Y|\geq \delta$.
Since $G$ has diameter $2$, there are no vertices $x \in X$ and $y \in Y$, such that
all the neighbours of $x$ are in $X$, and all the neighbours of $y$ are in $Y$.
Thus we may assume without loss of generality that all vertices in $X$ have a neighbour in $Y$.
Since there are only $\delta$ edges between $X$ and $Y$, $|X|=\delta$ and each vertex in $X$ has exactly one neighbour in $Y$.
The minimum degree condition implies that all edges among $X$ are present.
Let $A:=X$, $B:= \bigcup_{x\in X}\left(N(x) \setminus X\right)$ and $C:= V(G) \setminus (A \cup B)$.
If there is a vertex $c \in C$ then $c$ must be joined to all vertices in $B$, otherwise there would be a vertex in $A$ at distance greater than $2$ from $c$.
\end{proof}
We now prove that diameter $2$ graphs such as those described in Proposition~\ref{deltacut} cannot be visibility graphs.
\begin{theorem}\label{edgecut} Every minimum edge-cut in a non-collinear visibility graph is the set of edges incident to some vertex.
\end{theorem}
\begin{proof}
Let $G$ be a non-collinear visibility graph. Suppose for the sake of contradiction that $G$ has an edge cut of $\delta(G)$ edges that are not all incident to a single vertex.
Since $G$ is non-collinear, $\delta \geq 2$.
By Proposition~\ref{deltacut}, $V(G)$ can be partitioned into $A \cup B \cup C$ with $|A|=\delta$, $|B \cup C| \geq \delta$, and $\delta$ edges between $A$ and $B$.
Furthermore, the vertices in $A$ can pairwise see each other and each vertex in $A$ has precisely one neighbour in $B$.
Choose any $a \in A$ and draw the pencil of $\delta$ rays from $a$ to all other vertices of the graph.
All rays except one contain a point in $A\setminus\{a\}$.
Say two rays are \emph{neighbours} if they bound a sector of angle less than $\pi$ with no other ray inside it. Observe that every ray has at least one neighbour.
First suppose $a$ is in the interior of the convex hull of $V(G)$, as in Figure~\ref{minedgecut}(a).
Then every ray has two neighbours, so each point in $B \cup C$ can see at least one point of $A \setminus \{a\}$ on a neighbouring ray.
Hence $C$ is empty and $|B|\geq \delta$.
Along with the edge from $a$ to its neighbour in $B$ we have at least $\delta +1$ edges between $A$ and $B$, a contradiction.
If we cannot choose $a$ in the interior of $\conv(V(G))$, then $A$ is in strictly convex position because no three points of $A$ are collinear.
Let the rays from $a$ containing another point from $A$ be called \emph{$A$-rays}.
The $A$-rays are all extensions of diagonals or edges of $\conv(A)$.
There is one more ray $r$ that contains only points of $B\cup C$.
In fact, $r$ has only one point $b$ on it, since all of $r$ is visible from the point in $A$ on a neighbouring ray. Furthermore, the rays which extend diagonals of $\conv(A)$ contain no points of $B\cup C$ since $A$ lies in the boundary of $\conv(V(G))$.
Hence the rest of $B \cup C$ must lie in the two rays which extend the sides of $\conv(A)$.
If these rays both have a neighbouring $A$-ray, then we can argue as before and find $\delta +1$ edges between $A$ and $B$.
We are left with the case where some $A$-ray has $r$ as its only neighbour.
If $b$ lies outside $\conv(A)$ and $\delta >2$ (Figure~\ref{minedgecut}(b)), then we can change our choice of $a$ to a point $a'$ on a ray neighbouring $r$, and then we are back to the previous case.
(If $\delta =2$ then the other point of $A$ will see $b$ so there can be no more points of $B\cup C$).
Otherwise $b$ is the only point in the interior of $\conv(A)$ (Figure~\ref{minedgecut}(c)), and is therefore the only point in $B$ since it sees all of $A$.
In this case $C$ must be empty since $b$ blocks $c$ from at most one point in $A$.
Thus $|B\cup C| =1$, a contradiction.
\end{proof}
\begin{figure}
\includegraphics{minedgecut}
\caption{\label{minedgecut}In each case the remaining points of $B \cup C$ must lie on the solid segments of the rays.}
\end{figure}
\section{A Key Lemma}
We call a plane graph drawn with straight edges a \emph{non-crossing geometric graph}.
The following interesting fact about non-crossing geometric graphs will prove useful. It says that two properly coloured non-crossing geometric graphs that are separated by a line can be joined by an edge such that the union is a properly coloured non-crossing geometric graph.
Note that this is false if the two graphs are not separated by a line, as demonstrated by the example in Figure~\ref{counterex}.
\begin{figure}[!h]
\includegraphics{counterex}
\caption{\label{counterex}Two properly coloured non-crossing geometric graphs with no black-white edge between them.}
\end{figure}
\begin{lemma}\label{atlem}
Let $G_1$ and $G_2$ be two properly coloured non-crossing geometric graphs with at least one edge each. Suppose their convex hulls are disjoint and that $V(G_1) \cup V(G_2)$ is not collinear.
Then there exists an edge $e \in V(G_1) \times V(G_2)$ such that $G_1 \cup G_2 \cup \{e\}$ is a properly coloured non-crossing geometric graph.
\end{lemma}
\begin{proof}
Let $h$ be a line separating $G_1$ and $G_2$. Assume that $h$ is vertical with $G_1$ to the left. Let $G:=G_1 \cup G_2$.
Call a pair of vertices $v_1 \in V(G_1)$ and $v_2 \in V(G_2)$ a \emph{visible pair} if the line segment between them does not intersect any vertices or edges of $G$.
We aim to find a visible pair with different colours, so assume for the sake of contradiction that every visible pair is monochromatic.
We may assume that $G_1$ and $G_2$ are edge maximal with respect to the colouring, since the removal of an edge only makes it easier to find a bichromatic visible pair.
Suppose the result holds when there are no isolated vertices in $G$. Then, if there are isolated vertices,
we can ignore them and find a bichromatic visible pair $(v_1,v_2)$ in the remaining graph.
If the edge $v_1v_2$ contains some of the isolated vertices, then it has a sub-segment joining two vertices of different colours. If these vertices lie on the same side of $h$ then the graphs were not edge maximal after all. If they are on different sides, then they are a bichromatic visible pair. Thus we may assume that there are no isolated vertices in $G$.
Let $l$ be the line containing a visible pair $(v_1,v_2)$; the \emph{height} of the pair is the point at which $l$ intersects $h$.
Call the pair \emph{type-1} if $v_1$ and $v_2$ both have a neighbour strictly under the line $l$ (Figure \ref{atlemma}(a)).
Call the pair \emph{type-2} if there are edges $v_1w_1$ in $G_1$ and $v_2w_2$ in $G_2$ such that
the line $g$ containing $v_1w_1$ intersects $v_2w_2$ (call this point $x$),
$w_2$ lies strictly under $g$,
and the closed triangle $v_1v_2x$ contains no other vertex (Figure \ref{atlemma}(b)).
Here $x$ may equal $v_2$, in which case $g = l$. A visible pair is also type-2 in the equivalent case with the subscripts interchanged.
A particular visible pair may be neither type-1 nor type-2, but we may assume there exists a type-1 or type-2 pair. To see this, consider the highest visible pair $(v_1,v_2)$ and assume it is neither type-1 nor type-2 (see Figure \ref{atlemma}(c)). Note that $v_1v_2$ is an edge of the convex hull of $G$.
Since all of $G$ lies on or below the line $l$ containing $v_1v_2$, both vertices must have degree $1$ and their neighbours $w_1$ and $w_2$ must lie on $l$.
For $i=1,2$, let $x_i$ be a vertex of $G_i$ not on $l$ that minimizes the angle $\angle v_iw_ix_i$. Since $V(G)$ is not collinear, at least one of $x_1,x_2$ exists.
By symmetry, we may assume that either only $x_1$ exists, or both $x_1$ and $x_2$ exist and
${\rm dist}(x_1,l)\leq{\rm dist}(x_2,l)$.
In either case, $(x_1,v_2)$ and $(x_1,w_2)$ are visible pairs and at least one of them is bichromatic.
So now assuming there exists a type-1 or type-2 visible pair, let $(u_1,u_2)$ be the lowest such pair:
\emph{Case (i)}
The pair $(u_1,u_2)$ is type-1 (see Figure \ref{atlemma}(d)).
Let $u_1w_1$ be the first edge of $G_1$ incident to $u_1$ in a clockwise direction, starting at $u_1u_2$.
Let $u_2w_2$ be the first edge of $G_2$ incident to $u_2$ in a counterclockwise direction, starting at $u_2u_1$.
Let $x$ be the point on the segment $u_1w_1$ closest to $w_1$ such that the open triangle $u_1u_2x$ is disjoint from $G$.
Similarly, let $y$ be the point on the segment $u_2w_2$ closest to $w_2$ such that the open triangle $u_1u_2y$ is disjoint from $G$.
Without loss of generality, the intersection of $u_1y$ and $u_2x$ is to the left of $h$, or on $h$.
Therefore the segment $xu_2$ is disjoint from $G_2$.
Let $v \in V(G_1)$ be the vertex on $xu_2$ closest to $u_2$.
Thus $(v,u_2)$ is a visible pair of height less than $(u_1,u_2)$.
We may assume that $v \neq w_1$, otherwise $(v,u_2)$ would be bichromatic.
The point $w_2$ is under the line $vu_2$ and $v$ has no neighbour above the line $vu_2$.
Hence $v$ either has a neighbour under the line $vu_2$ and $(v,u_2)$ is type-1, or
$v$ has a neighbour on the line $vu_2$ and $(v,u_2)$ is type-2.
This contradicts the assumption that $(u_1,u_2)$ was the lowest pair of either type.
\begin{figure}[t]
\includegraphics{atlemma}
\caption{\label{atlemma}Proof of Lemma \ref{atlem}. The shaded areas are empty. (a) A type-1 visible pair. (b) A type-2 visible pair. (c) The highest visible pair. (d) The lowest pair is type-1. (e) The lowest pair is type-2. }
\end{figure}
\emph{Case (ii)}
The pair $(u_1,u_2)$ is type-2 with neighbours $w_1$ and $w_2$, such that the line $u_2w_2$ intersects the edge $u_1w_1$ at some point $x$ (see Figure \ref{atlemma}(e)).
Let $y$ be the point on the segment $u_1w_1$ closest to $w_1$ such that the open triangle $u_1u_2y$ is disjoint from $G$. Note $y$ is below $x$.
First assume that $G_2$ intersects $yu_2$.
Let $v_2$ be the closest vertex to $u_2$ on $yu_2$.
Thus $(u_1,v_2)$ is a visible pair of height less than $(u_1,u_2)$.
Let $z$ be a neighbour of $v_2$.
If $z$ is under the line $u_1v_2$ then $(u_1,v_2)$ is type-1 since $w_1$ is also under this line.
Note $z$ cannot lie above the line $yu_2$ since $u_1$ and $u_2$ see each other and the open triangle $u_1u_2y$ is empty.
Furthermore, if $z$ lies on $yu_2$ then $z=u_2$ and $(u_1,v_2)$ is bichromatic.
Thus if $z$ is not under the line $u_1v_2$, the line $v_2z$ must intersect the edge $u_1w_1$ at a point above $y$, so $(u_1,v_2)$ is a type-2 pair.
Hence the pair $(u_1,v_2)$ is type-1 or type-2, a contradiction.
Now assume that $yu_2$ does not intersect $G_2$, and therefore does intersect $G_1$, and let $v_1 \in V(G_1)$ be the vertex on $yu_2$ closest to $u_2$.
Thus $(v_1,u_2)$ is a visible pair of height less than $(u_1,u_2)$.
We may assume that $v_1 \neq w_1$, otherwise $(v_1,u_2)$ would be bichromatic.
Since $w_2$ is under the line $v_1u_2$,
if $v_1$ has a neighbour under the line $v_1u_2$ then $(v_1,u_2)$ is a type-1 pair.
Otherwise the only neighbour of $v_1$ is on the line $v_1u_2$ which makes $(v_1,u_2)$ a type-2 pair.
Hence the pair $(v_1,u_2)$ is type-1 or type-2, a contradiction.
\end{proof}
\section{Vertex Connectivity}
As is common practice, we will often refer to vertex-connectivity simply as connectivity. Connectivity of visibility graphs is not as straightforward as edge-connectivity since there are visibility graphs with connectivity strictly less than the minimum degree (see Figure \ref{twothirds}). Our aim in this section is to show that the connectivity of a visibility graph is at least half the minimum degree (Theorem \ref{halfdelta}). This follows from Theorem~\ref{Pavel} below, which says that bivisibility graphs contain large non-crossing subgraphs. In the proof of Theorem~\ref{Pavel} we will need a version of the Ham Sandwich Theorem for point sets in the plane, and also Lemma~\ref{pavlem}.
\begin{theorem}\label{hamsand} (Ham Sandwich. See \cite{matou}.) Let $A$ and $B$ be finite sets of points in the plane. Then there exists a line $h$ such that each closed half-plane determined by $h$ contains at least half of the points in $A$ and at least half of the points in $B$.
\end{theorem}
\begin{lemma}\label{pavlem} Let $A$ be a set of points lying on a line $l$.
Let $B$ be a set of points, none of them lying on $l$.
Suppose that $|A|\geq|B|\geq 1$.
Then there is a non-crossing spanning tree in the bivisibility graph of $A$ and $B$.
\end{lemma}
\begin{proof}
We proceed by induction on $|B|$.
If $|B|=1$ then the point in $B$ sees every point in $A$, and we are done.
Now assume $1 < |B| \leq |A| $.
First suppose that all of $B$ lies to one side of $l$, and consider the convex hull $C$ of $A \cup B$. An extreme point $a$ of $A$ on the line $l$ is a corner of $C$, and there is a point $b$ of $B$ visible to $a$ on the boundary of $C$. There exists a line $h$ that separates $\{a,b\}$ from the rest of $A \cup B$. Applying induction to $A\cup B \setminus \{a,b\}$ and then Lemma~\ref{atlem}, we find a non-crossing spanning tree on $A\cup B \setminus \{a,b\}$ together with an edge across $h$ joining it to the edge $ab$, giving a non-crossing spanning tree of $\mathcal{B}(A,B)$.
Now suppose that there are points of $B$ on either side of $l$. Then we may apply the inductive hypothesis on each side to obtain two spanning trees. Their union is connected, and thus contains a spanning tree.
\end{proof}
\begin{theorem}\label{Pavel} Let $A$ and $B$ be disjoint sets of points in the plane with $|A|=|B|=n$ such that $A\cup B$ is not collinear. Then the bivisibility graph $\mathcal{B}(A,B)$ contains a non-crossing subgraph with at least $n+1$ edges.
\end{theorem}
\begin{proof}
We proceed by induction on $n$. The statement holds vacuously for $n=1$, since any two points are collinear.
For $n=2$, any triangulation of $A\cup B$ contains at least five edges.
At most one edge has both endpoints in $A$, and similarly for $B$.
Removing these edges, we obtain a non-crossing subgraph of $\mathcal{B}(A,B)$ with at least three edges.
Now assume $n>2$.
\emph{Case (i)} First suppose that there exists a line $l$ that contains at least $n$ points of $A \cup B$.
Let $A_0 := A \cap l$, $B_0 := B \cap l$, $A_1 := A \setminus l$ and $B_1 := B \setminus l$.
Without loss of generality, $|A_0| \geq |B_0|$.
If $|A_0|>|B_0|$ then $|A_0|+|B_1|> |B_0| + |B_1|=n$.
Since $|A_0| + |B_0| \geq n = |B_1| + |B_0|$ we have $|A_0| \geq |B_1|$,
so we may apply Lemma~\ref{pavlem} to $A_0$ and $B_1$.
We obtain a non-crossing subgraph of $\mathcal{B}(A,B)$ with $|A_0|+|B_1|-1 \geq n$ edges, and
by adding an edge along $l$ if needed, we are done.
Now assume $|A_0|=|B_0|$.
We apply Lemma~\ref{pavlem} to $A_0$ and $B_1$, obtaining a non-crossing subgraph with $n-1$ edges, to which we may add one edge along $l$.
We still need one more edge.
Suppose first that one open half-plane determined by $l$ contains points of both $A_1$ and $B_1$.
Let $a$ and $b$ be the furthest points of $A_1$ and $B_1$ from $l$ in this half-plane.
Since $|A_0|=|B_0|$ we may assume that $a$ is at least as far from $l$ as $b$.
Then we may add an edge along the segment $ab$, because none of the edges from $A_0$ to $B_1$ cross it.
It remains to consider the case where $l$ separates $A_1$ from $B_1$. Then applying Lemma~\ref{pavlem} on each side of $l$ we find a non-crossing subgraph with $2n-1$ edges: $|A_0|+|B_1|-1$ on one side, $|B_0|+|A_1|-1$ on the other side, and one more along $l$.
\emph{Case (ii)} Now assume that no line contains $n$ points in $A \cup B$.
By Theorem~\ref{hamsand} there exists a line $h$ such that each of the closed half-planes determined by $h$ contains at least $\frac{n}{2}$ points from each of $A$ and $B$.
Assume that $h$ is horizontal.
Let $A^+$ be the points of $A$ that lie above $h$ along with any that lie on $h$ that we choose to assign to $A^+$. Define $A^-$, $B^+$ and $B^-$ in a similar fashion.
Now assign the points on $h$ to these sets so that each has exactly $\ceil{\frac{n}{2}}$ points.
In particular, assign the required number of \emph{leftmost} points of $h\cap A$ to $A^+$ and \emph{rightmost} points of $h\cap A$ to $A^-$. Do the same for $h \cap B$ with left and right interchanged.
If $n$ is even then $A^+\cup A^-$ and $B^+\cup B^-$ are partitions of $A$ and $B$.
If $n$ is odd then $|A^+ \cap A^-| = |B^+ \cap B^-| =1$.
Since there is no line containing $n$ points of $A \cup B$, the inductive hypothesis may be applied on either side of $h$. Thus there is a non-crossing subgraph with $\ceil{\frac{n}{2}}+1$ edges on each side.
The union of these subgraphs has at least $n+2$ edges, but some edges along $h$ may overlap. Due to the way the points on $h$ were assigned, one of the subgraphs has at most one edge along $h$.
(If $n$ is odd, this is the edge between the two points that get assigned to both sides.)
Deleting this edge from the union yields a non-crossing subgraph of $\mathcal{B}(A,B)$ with at least $n+1$ edges.
\end{proof}
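On small instances Theorem~\ref{Pavel} is easy to confirm by exhaustive search: compute the bivisibility edges and look for a largest pairwise non-crossing subset. A minimal brute-force sketch (our own illustration, integer coordinates; note that two bivisibility edges sharing an endpoint can never overlap, since edge interiors contain no points of $A\cup B$, so only edges with no common endpoint need a crossing test):
\begin{verbatim}
import itertools

def orient(a, b, c):
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def in_box(p, q, r):
    return (min(p[0], q[0]) <= r[0] <= max(p[0], q[0]) and
            min(p[1], q[1]) <= r[1] <= max(p[1], q[1]))

def closed_intersect(a, b, c, d):
    """Do the closed segments ab and cd meet?  (Exact, integer points.)"""
    d1, d2 = orient(c, d, a), orient(c, d, b)
    d3, d4 = orient(a, b, c), orient(a, b, d)
    if ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0)) \
            and 0 not in (d1, d2, d3, d4):
        return True
    return ((d1 == 0 and in_box(c, d, a)) or (d2 == 0 and in_box(c, d, b))
            or (d3 == 0 and in_box(a, b, c)) or (d4 == 0 and in_box(a, b, d)))

def bivisibility_edges(A, B):
    pts = A + B
    return [(a, b) for a in A for b in B
            if not any(orient(a, b, r) == 0 and in_box(a, b, r)
                       and r not in (a, b) for r in pts)]

def max_noncrossing(A, B):
    E = bivisibility_edges(A, B)
    def ok(S):
        return all(set(e) & set(f) or not closed_intersect(*e, *f)
                   for e, f in itertools.combinations(S, 2))
    for k in range(len(E), 0, -1):
        if any(ok(S) for S in itertools.combinations(E, k)):
            return k
    return 0

# n = 2, a unit square: the theorem promises at least n + 1 = 3 edges
assert max_noncrossing([(0, 0), (1, 0)], [(0, 1), (1, 1)]) == 3
\end{verbatim}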
\begin{theorem}\label{halfdelta}
Every non-collinear visibility graph with minimum degree $\delta$ has connectivity at least $\frac{\delta}{2} +1$.
\end{theorem}
\begin{proof}
Suppose $\{A,B,C\}$ is a partition of the vertex set of a non-collinear visibility graph such that $C$ separates $A$ and $B$, and $|A|\leq |B|$. By considering a point in $A$ we see that $\delta \leq |A| + |C| -1$. By removing points from $B$ until $|A|=|B|$ whilst ensuring that $A \cup B$ is not collinear, we may apply Theorem~\ref{Pavel} and Observation \ref{obs} to get $|C|\geq |A| +1$.
Combining these inequalities yields $|C| \geq \frac{\delta}{2}+1$.
\end{proof}
The following observations are corollaries of Theorem \ref{halfdelta}, though they can also be proven directly by elementary arguments.
\begin{proposition}
The following are equivalent for a visibility graph $G$:
(1) $G$ is non-collinear,
(2) $\kappa(G)\geq2$,
(3) $\lambda(G)\geq2$ and
(4) $\delta(G)\geq2$.
\end{proposition}
\begin{proposition}
The following are equivalent for a visibility graph $G$:
(1) $\kappa(G)\geq3$,
(2) $\lambda(G)\geq3$ and
(3) $\delta(G)\geq3$.
\end{proposition}
\section{Vertex Connectivity with Bounded Collinearities}
For the visibility graphs of point sets with $n$ points and at most $\ell$ collinear, connectivity is at least $\frac{n-1}{\ell-1}$, just as for edge-connectivity. Bivisibility graphs will play a central role in the proof of this result.
For point sets $A$ and $B$ an \emph{$AB$-line} is a line containing points from both sets.
\begin{theorem} \label{thm:bivis1}
Let $A \cup B$ be a non-trivial partition of a set of $n$ points with at most $\ell$ on any $AB$-line. Then the bivisibility graph $\mathcal{B}(A,B)$ contains a non-crossing forest with at least $\frac{n-1}{\ell-1}$ edges. In particular, if $\ell =2$ then the forest is a spanning tree.
\end{theorem}
\begin{proof}
The idea of the proof is to cover the points of $A \cup B$ with a large set of disjoint line segments each containing an edge of $G:= \mathcal{B}(A,B)$.
Start with a point $v \in A$.
Consider all open ended rays starting at $v$ and containing a point of $B$.
Each such ray contains at least one edge of $G$ and at most $\ell-1$ points of $(A \cup B) \setminus v$.
For each ray $r$, choose a point $w \in B \cap r$.
Draw all maximal line segments with an open end at $w$ and a closed end at a point of $A$ in the interior of the sector clockwise from $r$. Figure~\ref{bivisvertl} shows an example.
If one sector $S$ has central angle larger than $\pi$ then some points of $A$ may not be covered.
In this case we bisect $S$, and draw segments from each of its bounding rays into the corresponding half of $S$ (assign points on the bisecting line to one sector arbitrarily).
Like the rays, these line segments all contain at least one edge of $G$ and at most $\ell-1$ points of $(A \cup B) \setminus \{v,w\}$.
Together with the rays, they are pairwise disjoint and cover all of $(A \cup B) \setminus v$.
Hence the edges of $G$ contained in them form a non-crossing forest with at least $\frac{n-1}{\ell-1}$ edges.
Note that if $\ell = 2$ we have a forest with $n-1$ edges, hence a spanning tree.
\end{proof}
Note that the $\ell=2$ case of Theorem~\ref{thm:bivis1} is well known \cite{kanekokano}.
\begin{figure}
\includegraphics{bivisvertl}
\caption{\label{bivisvertl}Covering $A\cup B$ with rays and segments (a), each of which contains an edge of the bivisibility graph (b).}
\end{figure}
\begin{corollary}\label{nlvert}
Let $G$ be the visibility graph of a set of $n$ points with at most $\ell$ collinear. Then $G$ has connectivity at least $\frac{n-1}{\ell-1}$, which is best possible.
\end{corollary}
\begin{proof}
Let $\{A,B,C\}$ be a partition of $V(G)$ such that $C$ separates $A$ and $B$.
Consider the bivisibility graph of $A \cup B$.
Applying Observation \ref{obs} and Theorem~\ref{thm:bivis1} (with $n'=n - |C|$ and $\ell' = \ell -1$: every $AB$-line contains two consecutive points of $A\cup B$ lying in different parts, and some vertex of $C$ must lie between them since $C$ separates $A$ and $B$, so at most $\ell-1$ points of $A\cup B$ lie on any $AB$-line) yields $|C| \geq \frac{n-|C|-1}{\ell-2}$, which rearranges to $|C| \geq \frac{n-1}{\ell-1}$.
As in the case of edge-connectivity, the example in Figure~\ref{lbound} shows that this bound is best possible.
\end{proof}
In the case of visibility graphs with at most three collinear vertices, it is straightforward to improve the bound in Theorem~\ref{halfdelta}.
\begin{proposition}\label{3line}
Let $G$ be a visibility graph with minimum degree $\delta$ and at most three collinear vertices. Then $G$ has connectivity at least $\frac{2\delta+1}{3}$.
\end{proposition}
\begin{proof}
Let $\{A,B,C\}$ be a partition of $V(G)$ such that $C$ separates $A$ and $B$.
Thus each $AB$-line contains only two vertices in $A\cup B$: such a line contains two consecutive vertices from different parts, some vertex of $C$ must lie between them (as $C$ separates $A$ and $B$), and $G$ has at most three collinear vertices.
Applying Theorem~\ref{thm:bivis1} (with $\ell=2$) and Observation \ref{obs} to $\mathcal{B}(A,B)$ gives $|C| \geq |A| +|B|-1$. For $v \in A$ and $w \in B$ note that $\delta \leq \deg(v) \leq |A| + |C| -1$ and $\delta \leq \deg(w) \leq |B|+|C|-1$. Combining these inequalities gives $|C| \geq \frac{2\delta+1}{3}$.
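Explicitly, adding the two degree bounds and substituting $|A|+|B|\le |C|+1$ gives
\[ 2\delta \le |A|+|B|+2|C|-2 \le 3|C|-1, \]
that is, $|C|\ge\frac{2\delta+1}{3}$.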
\end{proof}
In the case of visibility graphs with at most four collinear vertices, the same improvement is found as a corollary of the following theorem about bivisibility graphs. Lemma~\ref{atlem} is an important tool in the proof.
\begin{theorem}\label{tree}
Let $A$ and $B$ be disjoint point sets in the plane with $|A| = |B| = n$ such that $A\cup B$ has at most three points on any $AB$-line. Then the bivisibility graph $\mathcal{B}(A,B)$ contains a non-crossing spanning tree.
\end{theorem}
\begin{proof}
We proceed by induction on $n$. The statement is true for $n=1$.
Apply Theorem~\ref{hamsand} to find a line $h$ such that each closed half-plane defined by $h$ has at least $\frac{n}{2}$ points from each of $A$ and $B$.
Assume that $h$ is horizontal.
The idea of the proof is to apply induction on
each side of $h$
to get two spanning trees, and then find an edge joining them together.
In most cases the joining edge will be found by applying Lemma~\ref{atlem}.
We will construct a set $A^+$ containing the points of $A$ that lie above $h$ along with any
that lie on $h$ that we choose to assign to $A^+$. We will also construct $A^-$, $B^+$ and $B^-$ in a similar fashion.
By the properties of $h$, there exists an
assignment\footnote{
We need only consider one of the sets, say $A$. Say there are $x$ points above $h$, $y$ points on $h$ and $z$ points below $h$. Then $x+y \geq \ceil{n/2} \geq \floor{n/2} \geq x$ so we can ensure $|A^+| = \ceil{n/2}$. $A^-$ is the complement and therefore has $\floor{n/2}$ points.
}
of each point in $h \cap (A\cup B)$ to one of these sets such that $|A^+|=|B^+| = \ceil{\frac{n}{2}}$ and $|A^-|=|B^-| = \floor{\frac{n}{2}}$.
Consider the sequence $s_h$ of signs ($+$ or $-$) given by the chosen assignment of points on $h$ from left to right.
If $s_h$ is all the same sign, or alternates only once from one sign to the other, then it is possible to perturb $h$ to $h'$ so that $A^+ \cup B^+$ lies strictly above $h'$ and $A^-\cup B^-$ lies strictly below $h'$.
Thus we may apply induction on
each side to obtain non-crossing spanning trees in $\mathcal{B}(A^+,B^+)$ and $\mathcal{B}(A^-,B^-)$. Then apply Lemma~\ref{atlem} to find an edge between these two spanning trees, creating a non-crossing spanning tree of $\mathcal{B}(A,B)$.
Otherwise, $s_h$ alternates at least twice (so there are at least three points on $h$).
This need never happen if there are only points from one set on $h$, since the points required above $h$ can be taken from the left and those required below $h$ from the right.
Without loss of generality, the only remaining case to consider is that $h$ contains one point from $A$ and two from $B$.
If the two points from $B$ are consecutive on $h$, then without loss of generality $s_h = (+,-,+)$ and the points of $B$ are on the left. In this case the signs of the points from $B$ may be swapped so $s_h$ becomes $(-,+,+)$.
If the point from $A$ lies between the other two points, it is possible that $s_h$ must alternate twice.
In this case, use induction to find spanning trees in $\mathcal{B}(A^+,B^+)$ and $\mathcal{B}(A^-,B^-)$. These spanning trees have no edges along $h$, so we may add an edge along $h$ to connect them, as shown in Figure~\ref{4linelem}.
\end{proof}
\begin{figure}
\includegraphics{4linelem}
\caption{\label{4linelem}The only case in which $h$ may not be perturbed to separate the points assigned above $h$ from those assigned below.}
\end{figure}
\begin{theorem}\label{4line}
Let $G$ be a visibility graph with minimum degree $\delta$ and at most four collinear vertices.
Then $G$ has connectivity at least $\frac{2\delta+1}{3}$.
\end{theorem}
\begin{proof}
Let $\{A,B,C\}$ be a partition of $V(G)$ such that $C$ separates $A$ and $B$ and $|A|\leq |B|$.
By considering a point in $A$ we can see that $\delta \leq |A| + |C| -1$.
If necessary remove points from $B$ so that $|A|=|B|$. Note that every $AB$-line contains a vertex of $C$ (as $C$ separates $A$ and $B$), so at most three points of $A\cup B$ lie on any $AB$-line.
Applying Theorem~\ref{tree} and Observation \ref{obs} yields $|C|\geq 2|A| -1$.
Combining these inequalities yields $|C| \geq \frac{2\delta+1}{3}$.
\end{proof}
It turns out that Proposition~\ref{3line} and Theorem~\ref{4line} are best possible. There are
visibility graphs with at most three collinear vertices and connectivity $\frac{2\delta+1}{3}$. The construction was discovered by Roger Alperin, Joe Buhler, Adam Chalcraft and Joel Rosenberg in response to a problem posed by Noam Elkies. Elkies communicated their solution to Todd Trimble who published it on his blog \cite{tvblog}. Here we provide a brief description of the construction, but skip over most background details. Note that the original problem and construction were not described in terms of visibility graphs, so we have translated them into our terminology.
The construction uses real points on an elliptic curve.
For our purposes a \emph{real elliptic curve} $\mathcal{C}$ is a curve in the real projective plane (which we model as the Euclidean plane with an extra `line at infinity') defined by an equation of the form $y^2 = x^3 +\alpha x +\beta$. The constants $\alpha$ and $\beta$ are chosen so that the discriminant $\Delta = -16(4\alpha^3+27\beta^2)$ is non-zero, which ensures that the curve is non-singular. We define a group operation `$+$' on the points of $\mathcal{C}$ by declaring that $a+b+c=0$ if the line through $a$ and $b$ also intersects $\mathcal{C}$ at $c$, that is, if $a$, $b$ and $c$ are collinear. The identity element $0$ corresponds to the point at infinity in the $\pm y$-direction, so that for instance $a+b+0=0$ if the line through $a$ and $b$ is parallel to the $y$-axis. Furthermore, $a+a+b=0$ if the tangent line at $a$ also intersects $\mathcal{C}$ at $b$. It can be shown that this operation defines an abelian group structure on the points of $\mathcal{C}$.
We will use two facts about real elliptic curves and the group structure on them. Firstly, no line intersects an elliptic curve in more than three points.
Secondly, the group acts continuously: adding a point $e$ which is close to $0$ (i.e.~very far out towards infinity) to another point $a$ results in a point close to $a$ (in terms of distance along $\mathcal{C}$).
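The chord-and-tangent law is also easy to implement numerically, which makes the construction below convenient to experiment with. A minimal floating-point sketch for the curve $y^2=x^3-x$ of Figure~\ref{elliptic} (our own illustration; affine points on the upper branch only, ignoring the point at infinity and vertical chords):
\begin{verbatim}
import math

ALPHA = -1.0          # y^2 = x^3 + ALPHA*x, i.e. the curve y^2 = x^3 - x

def add(P, Q):
    """Chord-and-tangent sum P + Q of affine points on the curve."""
    (x1, y1), (x2, y2) = P, Q
    lam = ((3*x1*x1 + ALPHA) / (2*y1) if P == Q    # tangent slope
           else (y2 - y1) / (x2 - x1))             # chord slope
    x3 = lam*lam - x1 - x2
    return (x3, lam*(x1 - x3) - y1)

def neg(P):
    return (P[0], -P[1])

def lift(x):          # the point on the upper branch above x
    return (x, math.sqrt(x**3 + ALPHA*x))

def collinear(P, Q, R, tol=1e-6):
    return abs((Q[0]-P[0])*(R[1]-P[1]) - (Q[1]-P[1])*(R[0]-P[0])) < tol

a, b, e = lift(2.0), lift(-0.5), lift(100.0)   # e plays "close to 0"
assert collinear(a, b, neg(add(a, b)))         # a + b + (-(a+b)) = 0
assert collinear(add(a, e), add(b, e), neg(add(add(a, b), add(e, e))))
\end{verbatim}
The second assertion checks the collinearity underlying the construction: $(a+e)+(b+e)+\big({-}(a+b+2e)\big)=0$.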
\begin{proposition}\label{ellprop} \emph{(Alperin, Buhler, Chalcraft and Rosenberg)}
For infinitely many integers $\delta$, there is a visibility graph with at most three vertices collinear, minimum degree $\delta$, and connectivity $\frac{2\delta+1}{3}$.
\end{proposition}
\begin{proof}
Begin by choosing three non-zero collinear points $a$, $b$ and $c$ on a real elliptic curve $\mathcal{C}$, such that $c$ lies between $a$ and $b$.
Then choose a point $e$ very close to $0$. Now define
\begin{align*}
A:=& \{ a + ie : 0\leq i \leq m-1 \} \\
B:=& \{ b + je : 0\leq j \leq m-1 \} \\
C:=& \{ -(a+b + ke) : 0 \leq k \leq 2m-2\}.
\end{align*}
Let $G$ be the visibility graph of $A \cup B \cup C$. Since the points are all on $\mathcal{C}$, $G$ has at most three vertices collinear.
Observe that the points $a+ie$ and $b+je$ are collinear with the point $-(a+b+ (i+j)e)$.
Since $e$ was chosen to be very close to $0$, by continuity the set $A$ is contained in a small neighbourhood of $a$, and similarly for $B$ and $C$. Therefore, the point from $C$ is the middle point in each collinear triple, and so $C$ is a vertex cut in $G$, separating $A$ and $B$.
By choosing $a$, $b$ and $c$ away from any points of inflection, we can guarantee that there are no further collinear triples among the sets $A$, $B$ or $C$.
Thus a point in $A$ sees all other points in $A\cup C$,
a point in $B$ sees all other points in $B\cup C$,
and a point in $C$ sees all other points.
Therefore the minimum degree of $G$ is $\delta = 3m-2$, attained by the vertices in $A\cup B$.
Hence (also using Proposition~\ref{3line}) the connectivity of $G$ is $|C|=2m-1 = \frac{2\delta+1}{3}$.
\end{proof}
\begin{figure}[!h]
\includegraphics{elliptic}
\caption{\label{elliptic}(a) The elliptic curve $y^2 = x^3 -x$. (b) The black points separate the white points from the grey points.}
\end{figure}
In Figure~\ref{elliptic} we have chosen $\mathcal{C}$ to be the curve $y^2 = x^3 - x$ and the points $a$, $b$ and $c$ on the $x$-axis. We have taken advantage of the symmetry about the $x$-axis to choose $A = \{ a \pm ie \}$ (and similarly for $B$ and $C$), which is slightly different to the construction outlined in Proposition~\ref{ellprop}.
We close our discussion of the connectivity of visibility graphs with the following conjecture.
\begin{conjecture}
Every visibility graph with minimum degree $\delta$ has connectivity at least $\frac{2\delta+1}{3}$.
\end{conjecture}
\section{Connectedness of Bivisibility Graphs}
Visibility graphs are always connected, but bivisibility graphs may have isolated vertices. However, we now prove that non-collinear bivisibility graphs have at most one component that is not an isolated vertex.
\begin{lemma}\label{trilem}
Let $A$ and $B$ be disjoint point sets such that $A\cup B$ is not collinear.
Let $T$ be a triangle with vertices $a \in A$, $b \in B$ and $c \in A \cup B$. Then $a$ or $b$ has a neighbour in $\mathcal{B}(A,B)$ lying in $T \setminus ab$.
\end{lemma}
\begin{proof}
There is at least one point in $T$ not lying on the line $ab$. The one closest to $ab$ sees both $a$ and $b$, and is therefore adjacent to one of them.
\end{proof}
\begin{theorem}
Let $A$ and $B$ be disjoint point sets such that $A\cup B$ is not collinear.
Then $\mathcal{B}(A,B)$ has at most one component
that is not an isolated vertex.
\end{theorem}
\begin{proof}
Assume for the sake of contradiction that $\mathcal{B}(A,B)$ has two components with one or more edges.
Choose a pair of edges $ab$ and $a'b'$, one from each component, such that the area of $C :=\conv(a,b,a',b')$ is minimal.
If $ab$ and $a'b'$ lie on one line, then since $A\cup B$ is not collinear there is a point off that line; a closest such point sees all four endpoints, and is therefore adjacent to an endpoint of each edge, joining the two components, a contradiction.
If they do not lie on a line then both ends of at least one of the edges are vertices of $C$. Assume this
edge is $ab$ and let $v$ be another vertex of $C$ ($v$ is either $a'$ or $b'$).
Then by Lemma~\ref{trilem}, $a$ or $b$ has a neighbour $w$ in $\triangle abv \setminus ab$. Without loss of generality, $w$ is a neighbour of $a$. If $w = v$, then $ab$ and $a'b'$ are in the same component, a contradiction. If $w\neq v$, then there is a pair of edges with a smaller convex hull, namely $a'b'$ and $aw$, because $w \in C$, but $w$ is not a vertex of $C$. This contradicts the assumption that $C$ was minimal.
\end{proof}
\begin{corollary}
A non-collinear bivisibility graph is connected if and only if it has no isolated vertices.
\end{corollary}
\bibliographystyle{siam}
https://arxiv.org/abs/1210.3799 | Some remarks on the joint distribution of descents and inverse descents | We study the joint distribution of descents and inverse descents over the set of permutations of n letters. Gessel conjectured that the two-variable generating function of this distribution can be expanded in a given basis with nonnegative integer coefficients. We investigate the action of the Eulerian operators that give the recurrence for these generating functions. As a result we devise a recurrence for the coefficients but are unable to settle the conjecture. We examine generalizations of the conjecture and obtain a type B analog of the recurrence satisfied by the two-variable generating function. We also exhibit some connections to cyclic descents and cyclic inverse descents. Finally, we propose a combinatorial model in terms of statistics on inversion sequences. | \section{Introduction}
Let $\mathfrak{S}_n$ denote the set of permutations of $\{1, \dotsc, n\}$. The
number of \emph{descents} in a permutation $\pi = \pi_1\dotso\pi_n$ is defined as
$\des(\pi) = |\{ i : \pi_i > \pi_{i+1}\}|.$
Our object of study is the two-variable generating function of descents and \emph{inverse descents}:
\[ A_n(s,t) = \sum_{\pi \in \mathfrak{S}_n} s^{\des(\pi^{-1})+1} t^{\des(\pi)+1}\,. \]
The specialization of this polynomial to a single variable reduces to the \emph{classical
Eulerian polynomial}:
\[A_n(t) = A_n(1,t) = \sum_{\pi \in \mathfrak{S}_n} t^{\des(\pi)+1} =
\sum_{k=1}^n \eulerian{n}{k} t^k\,.\]
Eulerian polynomials and their coefficients
play an important role (not only) in enumerative combinatorics. The classical, univariate polynomials
are quite well-studied---see, for example, \cite{Car59,FS70}.
This cannot be said for the bivariate generating function for the
pair of statistics $(\des, \ides)$.
Here and throughout this note
we will use the shorthand
$\ides(\pi) = \des(\pi^{-1}).$
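Small cases of this joint distribution are easy to tabulate by brute force; the following Python sketch (a helper of ours, not taken from any reference) does exactly that.
\begin{verbatim}
# Coefficients of A_n(s,t), i.e. the number of permutations with
# ides+1 = i and des+1 = j, as a dictionary {(i, j): count}.
from itertools import permutations
from collections import Counter

def des(p):
    return sum(p[k] > p[k + 1] for k in range(len(p) - 1))

def inverse(p):
    q = [0] * len(p)
    for pos, val in enumerate(p):
        q[val - 1] = pos + 1
    return tuple(q)

def A(n):
    return Counter((des(inverse(p)) + 1, des(p) + 1)
                   for p in permutations(range(1, n + 1)))

print(A(3))   # {(1,1): 1, (2,2): 4, (3,3): 1}, i.e.
              # A_3(s,t) = st + 4(st)^2 + (st)^3
\end{verbatim}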
Our main motivation to study these polynomials is the following conjecture of Gessel
which appeared in a recent article by \cite{Bra08}; see also a nice exposition by \cite{Pet12}.
\begin{conj}[Gessel]
\label{conj:gessel}
For all $n \ge 1$,
\[A_n(s,t) = \sum_{i,j} \gamma_{n,i,j} (st)^i (s+t)^j (1+st)^{n+1-j-2i}\,,\]
where $\gamma_{n,i,j}$ are nonnegative integers for all $i,j \in \mathbb{N}.$
\end{conj}
If true, this decomposition would refine the following classical
result, the {$\gamma$-\emph{nonnegativity}} for the Eulerian polynomials $A_n(t)$.
\begin{thm}[Th\'eor\`eme 5.6 of \citet{FS70}]
\label{thm:FS}
\[A_n(t) = \sum_{i=1}^{\lceil n/2 \rceil} \gamma_{n,i}
t^i (1+t)^{n+1-2i}\,,\]
where $\gamma_{n,i}$ are nonnegative integers
for all $i \in \mathbb{N}.$
\end{thm}
Before giving their proof, let us
recall the recurrence satisfied by the Eulerian polynomials:
\begin{equation}
A_n (t) = n t A_{n-1}(t) + t(1-t) \frac{\partial}{\partial t} A_{n-1}(t)\,,
\mbox{ for } n\ge 2\, ,
\label{eq:eulerian}
\end{equation}
with initial value $A_1(t) = t$.
\citet[Chapitre V]{FS70} give a purely algebraic proof of Theorem~\ref{thm:FS}
by considering the \emph{homogenized} Eulerian polynomial, of degree $n+1$,
\begin{equation}\begin{split}
A_n(t;y) &= y^{n+1}A_n(t/y) \\
&= \sum_{\pi \in \mathfrak{S}_n} t^{\des(\pi)+1}y^{\asc(\pi)+1}\,,
\label{eq:hompolydefn}
\end{split}
\end{equation}
where $\asc(\pi)$ denotes the number of ascents ($\pi_i<\pi_{i+1}$)
in the permutation $\pi = \pi_1 \dotso \pi_n$. Note that this polynomial is different
from and therefore should not be confused with
$A_n(s,t)$. To avoid confusion we use a semicolon and different variables.
We include their proof next, as we will be applying the same idea to the
joint generating polynomial of descents and inverse descents in
Section~\ref{sec:recurrencegamma}.
\begin{proof}[Proof of Theorem~\ref{thm:FS}]
The homogenized Eulerian polynomials defined in (\ref{eq:hompolydefn})
satisfy the recurrence
\begin{equation}
\label{eq:homogenizedEulerianRec}
A_{n}(t;y) = ty\left(\frac{\partial}{\partial t} A_{n-1}(t;y)+
\frac{\partial}{\partial y}A_{n-1}(t;y)\right), \mbox{ for } n\ge 2\,,
\end{equation}
which follows from observing the effect on the number of descents and ascents
of inserting the letter $n$ into a permutation
of $\{1, \dotsc, n-1\}$. Compare this with the recurrence in (\ref{eq:eulerian}).
It is clear from symmetry observations that $A_n(t;y)$ can be written (uniquely)
in the basis
\[\left\{(ty)^i(t+y)^{n+1-2i}\right\}_{i=1,\dotsc,\lceil n/2 \rceil}\]
with some coefficients $\gamma_{n,i}$. To show that $\gamma_{n,i}$ are in fact
nonnegative integers consider the action of the operator
$T = ty\left({\partial}/{\partial t} + {\partial}/{\partial y}\right)$ on
a basis element.
Applying $T$ to the $i$th basis element, we get that
\[ T[(ty)^i(t+y)^{n+1-2i}] = i(ty)^i(t+y)^{n+2-2i} +
2(n+1-2i)(ty)^{i+1}(t+y)^{n-2i}, \]
which in turn implies the following recurrence on the coefficients:
\begin{equation}
\gamma_{n+1,i} = i \gamma_{n,i} + 2(n+3-2i)\gamma_{n,i-1}.
\label{rec:gamma_ni}
\end{equation}
The statement of Theorem~\ref{thm:FS} now follows, since the initial values are
nonnegative integers, in particular, $\gamma_{1,1}= 1$ and $\gamma_{1,i} = 0$
for $i\ne 1$. Furthermore, the constraint $1\le i \le \lceil\frac{n}{2}\rceil$ ensures that
both nonnegativity and integrality are preserved by recurrence~(\ref{rec:gamma_ni}).
\end{proof}
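The recurrence (\ref{rec:gamma_ni}) can also be run mechanically. The sketch below (ours, using the sympy library) computes the coefficients $\gamma_{n,i}$ and confirms the expansion of Theorem~\ref{thm:FS} against brute-force Eulerian polynomials for small $n$.
\begin{verbatim}
import sympy as sp
from itertools import permutations

t = sp.symbols('t')

def eulerian(n):   # A_n(t) by brute force
    return sum(t**(sum(p[k] > p[k + 1] for k in range(n - 1)) + 1)
               for p in permutations(range(n)))

gamma = {(1, 1): 1}
for n in range(1, 7):
    for i in range(1, (n + 1) // 2 + 2):
        gamma[(n + 1, i)] = (i * gamma.get((n, i), 0)
                             + 2 * (n + 3 - 2 * i) * gamma.get((n, i - 1), 0))

for n in range(1, 8):
    expansion = sum(gamma.get((n, i), 0) * t**i * (1 + t)**(n + 1 - 2 * i)
                    for i in range(1, (n + 1) // 2 + 1))
    assert sp.expand(expansion - eulerian(n)) == 0
print("gamma-expansion of A_n(t) verified for n <= 7")
\end{verbatim}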
\begin{rem}
The study of these so-called Eulerian operators goes back to Carlitz as was
pointed out to the author by I.~Gessel. See, for example, \cite{Car73} for a slightly different
variant of $T$. Also, the operator $t\left(n+ (1-t) (\partial/\partial t)\right)$ is closely related to
a special case of a generalized derivative operator already studied by Laguerre, called
\textit{\'emanant} or polar derivative; see, for example, 6.~in \cite{Mar35}.
\end{rem}
Finally, we must also mention the ``valley-hopping'' proof of Theorem~\ref{thm:FS}
by \citet*[Proposition 4]{SWG83}
which is a beautiful construction that proves that the coefficients $\gamma_{n,i}$ are not only
nonnegative integers but that they are, in fact, cardinalities of certain equivalence
classes of permutations. Their proof is part of a more general phenomenon, an action of
transformation groups on the symmetric group $\mathfrak{S}_n$ studied by
\citet{FS74}.
\section{Symmetries of $A_n(s,t)$ and a homogeneous recurrence}
The polynomials $A_n(s,t)$ were first studied by \cite*{CRS66}.
They proved a recurrence for the coefficients of $A_n(s,t)$ (see equation (7.8) in their article---note
there is an obvious typo in the last row of the equation, cf.~equation (7.7) in the same article). The
recurrence they provide for the coefficients is equivalent to the following one for
the generating functions.
\begin{thm}[Equation (9) of \citet*{Pet12}] For $n\ge 2$,
\[
\begin{split}
nA_n(s,t) = &\left(n^2st+(n-1)(1-s)(1-t)\right)A_{n-1}(s,t)\\
&+ nst(1-s)\frac{\partial}{\partial s}A_{n-1}(s,t) +
nst(1-t)\frac{\partial}{\partial t}A_{n-1}(s,t)\\
&+st(1-s)(1-t)\frac{\partial^2}{\partial s\partial t}A_{n-1}(s,t)\,,
\end{split}
\]
with initial value $A_1(s,t) = st.$
\label{thm:Carlitz}
\end{thm}
At first glance, this recurrence might not seem very useful at all. However, if we introduce additional
variables---to count ascents ($\asc$) and inverse ascents ($\iasc$)---we obtain a more transparent
recurrence.
So, let us first define
\begin{align}
A_n(s,t;x,y) &= \sum_{\pi \in \mathfrak{S}_n}
s^{\ides(\pi)+1} t^{\des(\pi)+1}x^{\iasc(\pi)+1} y^{\asc(\pi)+1}\\
&=\sum_{\pi \in \mathfrak{S}_n}
s^{\ides(\pi)+1} t^{\des(\pi)+1}x^{n-\ides(\pi)} y^{n-\des(\pi)}\\
&= (xy)^{n+1} A_n(s/x, t/y)\,.\label{eq:homogenized4var}
\end{align}
\begin{prop}
$A_n(s,t;x,y)$ is homogeneous of degree $2n+2$ and is invariant under the
action of the Klein 4-group $V \cong \langle \mathrm{id}, (12)(34), (13)(24), (14)(23)\rangle$,
where the action of $\sigma \in V$ on $A_n(s,t;x,y)$ is permutation of the variables
accordingly (e.g., $\sigma=(13)(24)$ swaps $x$ with $s$ and $y$ with $t$, simultaneously).
\end{prop}
\begin{proof}
The homogeneity is immediate from the second line of the equation above. The invariance
is a consequence of
the symmetry properties of
$A_n(s,t)$, such as $A_n(s,t) = A_n(t,s)$; see, for example, equations (12--14)
in \citep{Pet12}.
Note that, due to the introduction of the new variables, for $n \ge 4$, the
polynomial $A_n(s,t;x,y)$ is \emph{not} symmetric.
\end{proof}
Now we are in a position to give our homogeneous recurrence.
\begin{thm} For $n\ge 2$,
\begin{equation}
\begin{split}
nA_n(s,t;x,y) = &(n-1)(s-x)(t-y)A_{n-1}(s,t;x,y)\\
&+ stxy\left(\frac{\partial}{\partial s} +
\frac{\partial}{\partial x}\right)\left(\frac{\partial}{\partial t}+
\frac{\partial}{\partial y}\right)A_{n-1}(s,t;x,y)
\end{split}
\label{eq:fourvariable}
\end{equation}
with initial value $A_1(s,t;x,y) = stxy.$
\label{thm:fourvariable}
\end{thm}
\begin{proof}
Consider the bivariate recurrence given in Theorem \ref{thm:Carlitz} and
observe that it can be rewritten as
\[
nA_n(s,t) = \left((n-1)(1-s)(1-t)
+ st\left(n+(1-s)\frac{\partial}{\partial s}\right)
\left(n + (1-t)\frac{\partial}{\partial t}\right)\right)
A_{n-1}(s,t).
\]
Now we can make both sides of the equation homogeneous using (\ref{eq:homogenized4var}).
Since the two Eulerian operators act on different variables, each of them can be
replaced by its symmetric two-variable homogenized counterpart, and the theorem follows.
\end{proof}
\begin{rem}
The invariance of $A_n(s,t;x,y)$ under the Klein-group action also follows easily
from recurrence (\ref{eq:fourvariable}) directly. Clearly, $A_1(s,t;x,y) = stxy$
is invariant under the action of the group (in fact, it is symmetric) and the
operator acting on $A_{n}(s,t;x,y)$ denoted by
\begin{equation}
T_{n} = n(s-x)(t-y) + stxy\left(\frac{\partial}{\partial s} +
\frac{\partial}{\partial x}\right)\left(\frac{\partial}{\partial t}+
\frac{\partial}{\partial y}\right).
\label{eq:Toperator}
\end{equation} itself is invariant under the action of the Klein-group.
\end{rem}
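Recurrence (\ref{eq:fourvariable}) is likewise easy to confirm experimentally; the following sympy sketch (ours) checks it for small $n$ against the defining sum over permutations.
\begin{verbatim}
import sympy as sp
from itertools import permutations

s, t, x, y = sp.symbols('s t x y')

def A4(n):   # A_n(s,t;x,y) by brute force
    poly = 0
    for p in permutations(range(1, n + 1)):
        q = [0] * n
        for pos, val in enumerate(p):
            q[val - 1] = pos + 1
        d = sum(p[k] > p[k + 1] for k in range(n - 1))
        di = sum(q[k] > q[k + 1] for k in range(n - 1))
        poly += s**(di + 1) * t**(d + 1) * x**(n - di) * y**(n - d)
    return poly

for n in range(2, 6):
    prev = A4(n - 1)
    inner = sp.diff(prev, t) + sp.diff(prev, y)
    outer = sp.diff(inner, s) + sp.diff(inner, x)
    rhs = (n - 1) * (s - x) * (t - y) * prev + s * t * x * y * outer
    assert sp.expand(n * A4(n) - rhs) == 0
print("four-variable recurrence verified for n <= 5")
\end{verbatim}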
Finally, Theorem~\ref{thm:fourvariable} allows us to give a (homogenized) restatement of
Gessel's conjecture:
\begin{conj}
\[A_n(s,t;x,y) = \sum_{i,j} \gamma_{n,i,j} (stxy)^i(st+xy)^j
(tx+sy)^{n+1-2i-j} ,\]
where $\gamma_{n,i,j} \in \mathbb{N}$ for all $i,j \in \mathbb{N}.$
\end{conj}
For example, we have (cf. page 18 of \cite{Pet12}):
\begin{align*}
A_1(s,t;x,y) &= stxy \\
A_2(s,t;x,y) &= stxy(st+xy) \\
A_3(s,t;x,y) &= stxy(st+xy)^2 +2(stxy)^2\\
A_4(s,t;x,y) &= stxy(st+xy)^3 + 7(stxy)^2(st+xy) + (stxy)^2(tx+sy)\\
A_5(s,t;x,y) &= stxy(st+xy)^4+ 16(stxy)^2(st+xy)^2 +6(stxy)^2(st+xy)(tx+sy) +16(stxy)^3.
\end{align*}
\begin{rem}
It is not too hard to see that Theorem~\ref{thm:fourvariable} is, in fact, equivalent
to Theorem~\ref{thm:Carlitz}. At the same time, the symmetric nature of the
homogeneous operator is more suggestive of a combinatorial interpretation.
It would be nice to find such an interpretation (perhaps in terms of non-attacking
rook placements on a rectangular board).
\end{rem}
\medskip
\section{A recurrence for the coefficients $\gamma_{n,i,j}$}
\label{sec:recurrencegamma}
Following the ideas of \citet[Chapitre V]{FS70} that were used to devise a recurrence
for $\gamma_{n,i}$, we apply the operator $T_n$ to the basis elements and obtain the
following recurrence for the coefficients $\gamma_{n,i,j}$.
\begin{thm} Let $n\ge 1$. For all $i\ge1$ and $j \ge 0$, we have
\begin{equation}
\begin{split}
(n+1) \gamma_{n+1,i,j} =&\quad (n+i(n+2-i-j))\gamma_{n,i,j-1}
+ (i(i+j)-n) \gamma_{n,i,j} \\&+
(n+4-2i-j)(n+3-2i-j) \gamma_{n, i-1,j-1}\\&+
(n+2i+j)(n+3-2i-j)\gamma_{n, i-1,j} \\&+
(j+1)(2n+2-j)\gamma_{n,i-1,j+1} + (j+1)(j+2)\gamma_{n,i-1,j+2}\, ,
\end{split}
\label{rec:gamma_nij}
\end{equation}
with $\gamma_{1,1,0} = 1$, $\gamma_{1,i,j} = 0$ (unless $i=1$ and $j=0$) and
$\gamma_{n,i,j} = 0$ if $i < 1$ or $j < 0$.
\end{thm}
\begin{proof}
Denote the basis elements by $B^{(n)}_{i,j} = (stxy)^i(st+xy)^j(tx+sy)^{n+1-2i-j}$ for convenience, and recall the definition of $T_n$ given in (\ref{eq:Toperator}).
A quick calculation shows that
\begin{equation}
n(s-x)(t-y)B^{(n)}_{i,j} = n \left(B^{(n+1)}_{i,j+1}- B^{(n+1)}_{i,j}\right) \,.
\label{eq:M}
\end{equation}
To calculate the action of the differential operators on the basis elements, we use the product rule,
which for second-order partial derivatives is given by the following formula:
\[
\begin{split}
\partial_{zw}(fgh) =& \quad \partial_{zw}(f)gh+\partial_{z}(f)\partial_{w}(g)h+\partial_{z}(f)g\partial_{w}(h) \\
&+\partial_{w}(f)\partial_z(g)h+f\partial_{zw}(g)h+f\partial_z(g)\partial_{w}(h) \\
&+\partial_w(f)g\partial_{z}(h)+f\partial_{w}(g)\partial_z(h)+fg\partial_{zw}(h)\, ,
\end{split}
\] where $f,g,h$ are functions, $\partial_z = \partial/\partial z$ and $\partial_w = \partial/\partial w$
denote the partial differential operators with respect to $z$ and $w$, and $\partial_{zw} = \partial_z\partial_w$
is the second-order differential operator.
After some calculations, this gives the following:
\begin{equation}
\begin{split}
stxy\left(\frac{\partial^2}{\partial s \partial t} + \frac{\partial^2}{\partial x \partial y}\right)
B^{(n)}_{i,j} &= i(n+1-i-j)B^{(n+1)}_{i,j+1}+ j(2n+3-j) B^{(n+1)}_{i+1,j-1}\\
&\quad+ (n+1-2i-j)(n-2i-j)B^{(n+1)}_{i+1,j+1}\,.
\end{split}
\label{eq:D1}
\end{equation}
\begin{equation}
\begin{split}
stxy\left(\frac{\partial^2}{\partial s \partial y} + \frac{\partial^2}{\partial t \partial x}\right)B^{(n)}_{i,j} &= i(i+j)B^{(n+1)}_{i,j} + j(j-1) B^{(n+1)}_{i+1,j-2}+\\&\quad
(n+1-2i-j)(n+2+2i+j)B^{(n+1)}_{i+1,j}\,.
\end{split}
\label{eq:D2}
\end{equation}
Summing (\ref{eq:M}), (\ref{eq:D1}) and
(\ref{eq:D2}) we arrive at the following expression.
\[
\begin{split}
T_n[B^{(n)}_{i,j}] = &(n+i(n+1-i-j)) B^{(n+1)}_{i,j+1} +
(i(i+j)-n) B^{(n+1)}_{i,j} \\
&+ (n+1-2i-j)(n-2i-j)B^{(n+1)}_{i+1,j+1} +
(n+2+2i+j)(n+1-2i-j)B^{(n+1)}_{i+1,j} \\
&+ j(2n+3-j) B^{(n+1)}_{i+1,j-1}+
j(j-1) B^{(n+1)}_{i+1,j-2}.
\end{split}
\]
Finally, collecting together all terms $T_n[B^{(n)}_{k,\ell}]$ which
contribute to $B^{(n+1)}_{i,j}$ we obtain (\ref{rec:gamma_nij}).
\end{proof}
\begin{rem} If we sum up both sides of (\ref{rec:gamma_nij}) for all
possible $j$ then we get (\ref{rec:gamma_ni}) back.
\end{rem}
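Recurrence (\ref{rec:gamma_nij}) can be iterated by machine as well. The sketch below (ours, reusing \texttt{A4} from the previous listing) computes the coefficients $\gamma_{n,i,j}$, checking divisibility by $n+1$ along the way, and confirms that the resulting expansion reproduces $A_n(s,t;x,y)$ for $n \le 5$, in agreement with the list above.
\begin{verbatim}
import sympy as sp

s, t, x, y = sp.symbols('s t x y')

gamma, N = {(1, 1, 0): 1}, 5
for n in range(1, N):
    for i in range(1, n // 2 + 2):
        for j in range(0, n + 3 - 2 * i):
            g = gamma.get
            total = ((n + i * (n + 2 - i - j)) * g((n, i, j - 1), 0)
                     + (i * (i + j) - n) * g((n, i, j), 0)
                     + (n + 4 - 2*i - j) * (n + 3 - 2*i - j) * g((n, i - 1, j - 1), 0)
                     + (n + 2*i + j) * (n + 3 - 2*i - j) * g((n, i - 1, j), 0)
                     + (j + 1) * (2*n + 2 - j) * g((n, i - 1, j + 1), 0)
                     + (j + 1) * (j + 2) * g((n, i - 1, j + 2), 0))
            assert total % (n + 1) == 0
            gamma[(n + 1, i, j)] = total // (n + 1)

for n in range(1, N + 1):
    expansion = sum(c * (s*t*x*y)**i * (s*t + x*y)**j * (t*x + s*y)**(n + 1 - 2*i - j)
                    for (m, i, j), c in gamma.items() if m == n)
    assert sp.expand(expansion - A4(n)) == 0
print("gamma_{n,i,j} expansion verified for n <= 5")
\end{verbatim}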
One could study the generating function
\[G(u,v,w) = \sum_{i,j}\gamma_{n,i,j} u^n v^{i} w^{j}\] with coefficients satisfying
the above recurrence. Gessel's conjecture is equivalent to saying that its coefficients are
nonnegative integers. Unfortunately, these properties are not immediate from the
recurrence (\ref{rec:gamma_nij}).
\section{Generalizations of the conjecture}
\cite{Ges12} noted that the following equality of \citet{CRS66}
\[ \sum_{i,j=0}^\infty \binom{ij+n-1}{n} s^it^j =
\frac{A_n(s,t)}{(1-s)^{n+1}(1-t)^{n+1}}\]
can be generalized as follows.
Let $\tau \in \mathfrak{S}_n$ with $\des(\tau) = k-1.$
Define $A_n^{(k)}(s,t)$ by
\[ \sum_{i,j=0}^\infty \binom{ij+n-k}{n} s^it^j =
\frac{A^{(k)}_n(s,t)}{(1-s)^{n+1}(1-t)^{n+1}}.\]
Then the coefficient of $s^it^j$ in $A_n^{(k)}$ is the number of pairs of
permutations $(\pi,\sigma)$ such that $\pi\sigma = \tau$, $\des(\pi) = i$
and $\des(\sigma) = j$. \cite{Ges12} also pointed out that these polynomials
arise implicitly in \cite{MP70}; compare (11.10) there with the above equation.
This suggests that Conjecture~\ref{conj:gessel} holds in a more general form
(this version of the conjecture appeared as Conjecture 10.2 in \cite{Bra08}).
\begin{conj}[Gessel]
\label{conj:gesseltau}
Let $\tau \in \mathfrak{S}_n$. Then
\[\sum_{\pi \in \mathfrak{S}_n} s^{\des(\pi)+1}t^{\des(\pi^{-1}\tau)+1} =
\sum_{i,j} \gamma^{\tau}_{n,i,j} (st)^i (s+t)^j (1+st)^{n+1-j-2i}\,,\]
where $\gamma^{\tau}_{n,i,j}$ are nonnegative integers for all $i,j \in \mathbb{N}.$
Furthermore, the coefficients $\gamma^{\tau}_{n,i,j}$ do not depend on the actual
permutation $\tau$, only on the number of descents in $\tau$.
\end{conj}
In the special case when $\tau = n (n-1) \dotso 2 1$ (and hence $\des(\tau) = n-1$) the roles
of descents and ascents interchange.
\begin{thm} For $n\ge 2$,
\begin{equation}
\begin{split}
nA^{(n)}_n(s,t;x,y) = &(n-1)(x-s)(t-y)A^{(n-1)}_{n-1}(s,t;x,y)\\
&+ stxy\left(\frac{\partial}{\partial s} +
\frac{\partial}{\partial x}\right)\left(\frac{\partial}{\partial t}+
\frac{\partial}{\partial y}\right)A^{(n-1)}_{n-1}(s,t;x,y)
\end{split}
\end{equation}
with initial value $A^{(1)}_1(s,t;x,y) = stxy$.
\end{thm}
In particular, we have the following identity.
\begin{cor}
\[A^{(n)}_n(s,t;x,y) = A_n(s,y;x,t).\]
\end{cor}
\subsection{A type B analog}
\citet{Ges12} also noted that there is an analogous definition for the hyperoctahedral
group $\mathfrak{B}_n$. The elements of $\mathfrak{B}_n$ can be thought of as
signed permutations of $\{1, \dotsc, n\}$, and the type $B$ descents are defined as
$\des_B(\sigma) = |\{ i \in \{0,1, \dotsc, n-1\} : \sigma_i > \sigma_{i+1}\}|$ with
$\sigma_0 := 0$ for
$\sigma = \sigma_1\dotso\sigma_n \in \mathfrak{B}_n$. The analogous identity reads
\[ \sum_{i,j=0}^\infty \binom{2ij+i+j +1+n-k}{n} s^it^j =
\frac{B^{(k)}_n(s,t)}{(1-s)^{n+1}(1-t)^{n+1}}\,,\]
where
\[B^{(k)}_n(s,t) = \sum_{\sigma\in \mathfrak{B}_n}
s^{\des_B(\sigma)}t^{\des_B(\sigma^{-1}\tau)},\]
with $\tau \in \mathfrak{B}_n$ such that $\des_B(\tau) = k-1$ (here
$\des_B$ denotes the descents of type $B$).
Therefore, mimicking the proof of Theorem~\ref{thm:Carlitz} given by
\citet{Pet12}, we get an analog of
Theorem~\ref{thm:Carlitz} for the type $B$
two-sided Eulerian polynomials, $B_n(s,t) = B_n^{(1)}(s,t)$.
\begin{thm} For $n\ge 2$,
\begin{equation}
\begin{split}
nB_n(s,t) = &(2n^2st -nst + n)B_{n-1}(s,t)\\ &+
(2nst(1-s) + s(1-s)(1-t))\frac{\partial}{\partial s}B_{n-1}(s,t)\\ &+
(2nst(1-t) + t(1-s)(1-t)) \frac{\partial}{\partial t}B_{n-1}(s,t)\\ &+
2st(1-s)(1-t)\frac{\partial^2}{\partial s\partial t}B_{n-1}(s,t)\,,
\end{split}
\end{equation}
with initial value $B_1(s,t) = 1+st$.
\end{thm}
\begin{proof}
Following the proof for the case of the symmetric group in \cite[eq.~(9)]{Pet12},
we use the corresponding identity of binomial coefficients:
\[n\binom{2ij+i+j +n}{n} = (2ij+i+j)\binom{2ij+i+j +n-1}{n-1} +
n\binom{2ij+i+j +n-1}{n-1}\,.\]
Multiplying both sides by the monomial $s^it^j$ and summing over all integers $i,j$ we get
\[\begin{split}
\sum_{i,j=0}^\infty &n\binom{2ij+i+j +n}{n} s^it^j = \\
&\sum_{i,j=0}^\infty (2ij+i+j)\binom{2ij+i+j +n-1}{n-1} s^it^j +
\sum_{i,j=0}^\infty n\binom{2ij+i+j +n-1}{n-1} s^it^j
\,,
\end{split}
\]
from which we obtain the following recurrence for
$F_n(s,t) = B_n(s,t)/(1-s)^{n+1}(1-t)^{n+1}$:
\[nF_n(s,t) = 2st \frac{\partial^2}{\partial s\partial t} F_{n-1}(s,t) +
s \frac{\partial}{\partial s} F_{n-1}(s,t) +
t \frac{\partial}{\partial t} F_{n-1}(s,t) +
nF_{n-1}(s,t)\,.
\]
Now substitute back the expression for $F_n(s,t)$, multiply both sides by
$(1-s)^{n+1}(1-t)^{n+1}$, and with a little work we get that
\[\begin{split}
nB_n(s,t) = &(2n^2st + nt(1-s) + ns(1-t) + n(1-s)(1-t))B_{n-1}(s,t)\\ &+
(2nst(1-s) + s(1-s)(1-t))\frac{\partial}{\partial s}B_{n-1}(s,t)\\ &+
(2nst(1-t) + t(1-s)(1-t)) \frac{\partial}{\partial t}B_{n-1}(s,t)\\ &+
2st(1-s)(1-t)\frac{\partial^2}{\partial s\partial t}B_{n-1}(s,t)\,.
\end{split}
\]
\end{proof}
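For small $n$ the theorem can be checked directly against the combinatorial definition of $B_n(s,t)$; the following sketch (ours, taking $\tau$ to be the identity, so $k=1$) does so.
\begin{verbatim}
import sympy as sp
from itertools import permutations, product

s, t = sp.symbols('s t')

def signed_perms(n):
    for p in permutations(range(1, n + 1)):
        for signs in product((1, -1), repeat=n):
            yield tuple(e * v for e, v in zip(signs, p))

def des_B(w):
    w = (0,) + w                           # the convention sigma_0 = 0
    return sum(w[i] > w[i + 1] for i in range(len(w) - 1))

def inverse(w):
    q = [0] * len(w)
    for pos, val in enumerate(w, start=1):
        q[abs(val) - 1] = pos if val > 0 else -pos
    return tuple(q)

def B(n):
    return sum(s**des_B(w) * t**des_B(inverse(w)) for w in signed_perms(n))

assert sp.expand(B(1) - (1 + s * t)) == 0
for n in range(2, 5):
    prev = B(n - 1)
    rhs = ((2*n**2*s*t - n*s*t + n) * prev
           + (2*n*s*t*(1 - s) + s*(1 - s)*(1 - t)) * sp.diff(prev, s)
           + (2*n*s*t*(1 - t) + t*(1 - s)*(1 - t)) * sp.diff(prev, t)
           + 2*s*t*(1 - s)*(1 - t) * sp.diff(prev, s, t))
    assert sp.expand(n * B(n) - rhs) == 0
print("type B recurrence verified for n <= 4")
\end{verbatim}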
It would be of interest to find a homogeneous version of this theorem
(an analogue of Theorem~\ref{thm:fourvariable}) and a recurrence
for the corresponding $\gamma_{n,i,j}$ coefficients in the case of type $B$.
\subsection{Cyclic descents}
One can also consider two-sided Eulerian-like polynomials using cyclic descents.
A \emph{cyclic descent} of a permutation $\pi$ in
$\mathfrak{S}_n$ is defined as
\[\cdes(\pi) = |\{ i : \pi_i > \pi_{(i+1)\bmod n}\}| =
\des(\pi) + \chi(\pi_n > \pi_1)\,,\]
where
\[
\chi(a>b) = \begin{cases}
1, & \text{if } a>b,\\
0, & \text{otherwise.}
\end{cases}\]
The following theorem refines a result of \citet[Corollary 1]{Ful00}.
\begin{thm}
\label{thm:cyclic}
For $n\ge 1$,
\[(n+1)A_n(s,t) =
\sum_{\pi \in \mathfrak{S}_{n+1}} s^{\cdes(\pi^{-1})} t^{\cdes(\pi)}\,.\]
\end{thm}
\begin{lem}
\label{lem:cyclic}
Let $\sigma = 23\dotso n 1$ denote the cyclic rotation in $\mathfrak{S}_n$ (for
$n \ge 2$). Then
\[ (\cdes(\pi), \cdes(\pi^{-1})) = (\cdes(\pi\sigma), \cdes((\pi\sigma)^{-1})).\]
In other words, the cyclic rotation simultaneously preserves the cyclic descent and the
cyclic inverse descent statistics.
\end{lem}
\begin{rem} Lemma~\ref{lem:cyclic} is essentially the same as Theorem 6.5 in \citep{LP12}.
We give an elementary proof of it, for the sake of completeness.
\end{rem}
\begin{proof}
That $\cdes(\pi) = \cdes(\pi\sigma)$ is obvious, since cyclic rotation does not
affect the cyclic descent set. For the other part, it is equivalent to show that
$\cdes(\pi) = \cdes(\sigma^{-1}\pi)$. In other words, the cyclic descent statistic is invariant under cyclically shifting the values of a permutation, i.e., adding $1$ to each entry modulo $n$. For $\pi = \pi_1\dotso\pi_n$ an arbitrary permutation in
$\mathfrak{S}_n$ denote the entry preceding $n$ and following $n$ by $a$ and $b$,
respectively. Then $\pi = \pi_1\dotso a n b \dotso \pi_n$ and
$\sigma^{-1}\pi = (\pi_1+1)\dotso (a+1)1(b+1) \dotso (\pi_n+1)$. Clearly, in all but one position the cyclic descents are preserved, and the same is true for the cyclic ascents. The $a\nearrow n$ cyclic ascent is replaced by the $(a+1) \searrow 1$ cyclic descent and similarly,
$n \searrow b$ gets replaced by $1\nearrow (b+1)$. Thus, the total number of cyclic descents remains the same.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:cyclic}]
Using Lemma~\ref{lem:cyclic} we can apply the cyclic rotation to any permutation in
$\mathfrak{S}_{n+1}$ until $\pi_{n+1} = n+1$. This will map exactly $n+1$ permutations
in $\mathfrak{S}_{n+1}$ to the same permutation $\pi_1\dots\pi_n(n+1)$. Clearly,
$\cdes(\pi_1\dots\pi_n(n+1)) = \des(\pi_1\dots\pi_n)+1$ and
$\cdes((\pi_1\dots\pi_n(n+1))^{-1}) = \des((\pi_1\dots\pi_n)^{-1})+1$ and the theorem
follows.
\end{proof}
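Theorem~\ref{thm:cyclic} also admits a quick brute-force confirmation; the sketch below (ours) compares the two sides coefficientwise for small $n$.
\begin{verbatim}
from itertools import permutations
from collections import Counter

def inverse(p):
    q = [0] * len(p)
    for pos, val in enumerate(p):
        q[val - 1] = pos + 1
    return tuple(q)

def des(p):
    return sum(p[k] > p[k + 1] for k in range(len(p) - 1))

def cdes(p):
    n = len(p)
    return sum(p[k] > p[(k + 1) % n] for k in range(n))

for n in range(1, 7):
    lhs = Counter()
    for p in permutations(range(1, n + 1)):
        lhs[(des(inverse(p)) + 1, des(p) + 1)] += n + 1   # (n+1) A_n(s,t)
    rhs = Counter((cdes(inverse(p)), cdes(p))
                  for p in permutations(range(1, n + 2)))
    assert lhs == rhs
print("cyclic refinement verified for n <= 6")
\end{verbatim}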
\section{Connection to inversion sequences}
\label{sec:invseq}
We conclude by proposing a combinatorial model for the joint distribution of descents
and inverse descents.
A permutation $\pi \in \mathfrak{S}_n$ can be encoded as its inversion sequence
$e = (e_1, \dotsc, e_n)$, where \[e_j = |\{i : i < j, \pi_i > \pi_j\}|.\] Let
$I_n = \{(e_1, \dotsc, e_n) \in \mathbb{Z}^n : 0\le e_i \le i-1\}$
denote the set of inversion sequences for $\mathfrak{S}_n$.
Recently, \cite{SS12} studied the \emph{ascent} statistic
$\asc_I(e) = \left|\left\{i : e_i < e_{i+1} \right\}\right|$
for inversion sequences (and
their generalizations)
and showed that this statistic is \emph{Eulerian}, i.e., it is equidistributed with the
descent statistic over permutations. We use the subscript $I$ to emphasize that this is a statistic
for inversion sequences which is different from the ascent statistic for permutations used
earlier in the paper.
\citet{MR01} also studied this representation of permutations under the name
``subexceedant functions''. They considered the statistic that counts the
distinct entries of $e \in I_n$, namely $\mathrm{dst}(e) = | \{e_i : 1\le i \le n\}|$. They gave
multiple proofs of the following observation (which they attributed to Dumont) that
this statistic is also Eulerian.
\begin{prop}[Dumont]
\[
A_n(x) = \sum_{e\in I_n} x^{\mathrm{dst}(e)}\,.
\]
\label{prop:dumont}
\end{prop}
In fact, the joint distribution ($\asc_I,\mathrm{dst}$) over inversion sequences
seems to agree
with the joint distribution $(\des, \ides)$ of descents and inverse descents over permutations.
\begin{conj}
\[A_n(s,t) = \sum_{e\in I_n} s^{\mathrm{dst}(e)}t^{\asc_I(e)+1} \,. \]
\end{conj}
This observation clearly deserves a bijective proof. Such a proof
might shed light on a combinatorial
proof of recurrence (\ref{eq:fourvariable}). Note that it is not even clear to begin with
why the right-hand side should be a symmetric polynomial in variables $s$ and $t$.
\section*{Acknowledgments}
I thank Ira Gessel for discussing his conjecture with me and for giving me feedback several
times during the preparation of this manuscript. His feedback, suggestions on notation and missing
references improved the presentation substantially.
I also thank Mireille Bousquet-M\'elou for sharing
her observations on the recurrence of the $\gamma$ coefficients, and Carla Savage for
numerous discussions on inversion sequences and Eulerian polynomials that inspired
Section~\ref{sec:invseq}. I thank T.~Kyle Petersen for enlightening discussions,
Petter Br\"and\'en for helpful comments and for pointing
out that Proposition~\ref{prop:dumont} was already known.
Finally, I thank my advisor, Jim Haglund
for his comments and guidance. A preliminary version of this article appeared in my dissertation written under his supervision.
\bibliographystyle{abbrvnat}
https://arxiv.org/abs/1410.3755 | 3-manifolds Modulo Surgery Triangles | Surgery triangles are an important computational tool in Floer homology. Given a connected oriented surface $\Sigma$, we consider the abelian group $K(\Sigma)$ generated by bordered 3-manifolds with boundary $\Sigma$, modulo the relation that the three manifolds involved in any surgery triangle sum to zero. We show that $K(\Sigma)$ is a finitely generated free abelian group and compute its rank. We also construct an explicit basis and show that it generates all bordered 3-manifolds in a certain stronger sense. Our basis is strictly contained in another finite generating set which was constructed previously by Baldwin and Bloom. As a byproduct we confirm a conjecture of Blokhuis and Brouwer on spanning sets for the binary symplectic dual polar space. | \section{Introduction}
Floer homology theories can be used to define a number of different invariants of closed, oriented $3$-manifolds. Many of these theories satisfy a ``surgery triangle'', which is a relationship between the invariants of different Dehn surgeries $Y_{\alpha}(K)$ on a knot $K$ in a fixed manifold $Y = Y_{\infty}(K)$. In general, if $\alpha$, $\beta$, $\gamma$ are three oriented surgery curves on the boundary torus of $Y \setminus \nu(K)$ such that
\[ \alpha \cdot \beta = \beta \cdot \gamma = \gamma \cdot \alpha = - 1 \]
in integral homology, then there is an exact triangle of Floer homology groups,
\[ \cdots \to HF(Y_{\alpha}(K)) \to HF(Y_{\beta}(K)) \to HF(Y_{\gamma}(K)) \to HF(Y_{\alpha}(K)) \to \cdots .\]
Surgery triangles are useful because one can always use them to express the invariants of a given manifold in terms of the invariants of simpler manifolds.
To make this observation precise, it is useful to borrow some language from finite geometry. We view diffeomorphism classes of oriented $3$-manifolds as points in a geometry $X$, whose lines are surgery triangles $(Y_{\alpha},Y_{\beta},Y_{\gamma})$. Note that some lines in this geometry may contain ``doubled'' points, because a manifold can be involved in a surgery triangle with itself (for example, consider Dehn surgeries on the unknot). We say that a set of $3$-manifolds $S$ is a subspace if each line meeting $S$ in two (not necessarily distinct) points is entirely contained in $S$. We define the span of a set of $3$-manifolds to be the smallest subspace that contains it. We say that a set of $3$-manifolds is a generating set if its span is all of $X$. Using this terminology, we can precisely formulate the sense in which surgery triangles allow one to understand an arbitrary $3$-manifold in terms of simpler manifolds.
\begin{proposition} \label{gen} The $3$-sphere generates all closed $3$-manifolds. \end{proposition}
Similar concepts apply to $3$-manifolds with boundary.
\begin{definition} Let $\Sigma$ be a connected, oriented surface. A bordered $3$-manifold with boundary $\Sigma$ is a pair $(Y,\phi)$, where $Y$ is a connected, oriented $3$-manifold and $\phi: \partial Y \to \Sigma$ is an orientation preserving diffeomorphism. \end{definition}
One can form a geometry whose points are isomorphism classes of bordered $3$-manifolds with boundary $\Sigma$, and whose lines are surgery triangles. Denote this geometry by $X(\Sigma)$. Then Proposition \ref{gen} has the following generalization, due to Baldwin and Bloom \cite{BaBl}.
\begin{theorem} \cite{BaBl} $X(\Sigma_g)$ is finitely generated. In fact, it has a generating set $S_g$ of cardinality
\[ N(g) = \sum_{k=0}^g {g \choose k} C_k \]
where $C_k = \frac{1}{k+1} {2k \choose k}$ is the $k$-th Catalan number. \end{theorem}
In this paper we prove that some elements of $S_g$ can be removed.
\begin{theorem} \label{minimalgen} $X(\Sigma_g)$ has a generating set $M_g \subset S_g$ of cardinality
\[ n(g) = \frac{(2^g +1)(2^{g-1}+1)}{3}. \]
\end{theorem}
Definitions of $M_g$ and $S_g$ can be found in Section $2$, and a proof of Theorem \ref{minimalgen} can be found in Section $3$. It turns out that the generating set $S_g$ has two special redundancies, one in genus $5$ and another in genus $6$. These redundancies imply many more redundancies in higher genera, so the difference between $n(g)$ and $N(g)$ grows rapidly with $g$. We include for convenience a table of the numbers $n(g)$ and $N(g)$:
\begin{center}
\begin{tabular}{ | l | c | c | c | c | c | c | c | c |}
\hline
g & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline
N(g) & 1 & 2 & 5& 15 & 51 & 188 & 731 & 2950 \\ \hline
n(g) & 1 & 2 &5 & 15 & 51 & 187 & 715 & 2795 \\
\hline
\end{tabular}
\end{center}
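The table is reproduced by the following short script (ours); $n(g)$ is rewritten as $(2^g+1)(2^g+2)/6$ so that the arithmetic stays in integers at $g=0$.
\begin{verbatim}
from math import comb

def N(g):   # sum of binomial(g,k) times the k-th Catalan number
    return sum(comb(g, k) * comb(2 * k, k) // (k + 1) for k in range(g + 1))

def n(g):   # the closed form (2^g + 1)(2^{g-1} + 1)/3
    return (2**g + 1) * (2**g + 2) // 6

for g in range(8):
    print(g, N(g), n(g))
\end{verbatim}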
It turns out that there are no further redundancies: the generating set $M_g$ is minimal. In Section $4$ we will prove an even stronger result:
\begin{theorem} \label{minimality} Any generating set for $X(\Sigma_g)$ has cardinality at least $n(g)$. \end{theorem}
To explain the concept behind our proof, it is useful to think about a sort of classifying space. Define a CW complex $B(\Sigma_g)$ by first taking an infinite wedge of circles, one for each bordered $3$-manifold with boundary $\Sigma_g$, and then attaching a triangle for every surgery triangle $Y_{\alpha},Y_{\beta},Y_{\gamma}$. Note that there is a crucial ambiguity here in orienting the attaching maps. In Section $4$ we prove the following result, from which Theorem \ref{minimality} follows as a corollary:
\begin{proposition} \label{homologyrank} There exists a choice of attaching maps so that $H_1(B(\Sigma_g);\Z)$ is a free abelian group of rank $n(g)$. \end{proposition}
Interestingly, our proof of Proposition \ref{homologyrank} does not rely on the existence of $M_g$. Instead, it exploits a relationship between $X(\Sigma_g)$ and another geometry: the binary symplectic dual polar space, or $DSp(2g,2)$. The points of this geometry are Lagrangian subspaces of $\F_2^{2g}$ and the lines are triples of distinct Lagrangians whose intersection has dimension $g-1$. In Section $4$ we construct an explicit surjective map $\mu: X(\Sigma_g) \to DSp(2g,2)$ which takes surgery triangles to lines (or possibly tripled points). Under this map, any generating set for $X(\Sigma_g)$ maps to a generating set for $DSp(2g,2)$.
It is a result of Brouwer that the ``universal embedding dimension'' of $DSp(2g,2)$ is at least $n(g)$. The proof uses spectral graph theory and is sketched in \cite{BlBr}. Theorem \ref{minimality} follows from Brouwer's result, and the fact that the universal embedding dimension is a lower bound for the cardinality of any generating set for $DSp(2g,2)$.
In fact, it was shown by Li \cite{Li}, and independently Blokhuis and Brouwer \cite{BlBr}, that the universal embedding dimension of $DSp(2g,2)$ is exactly equal to $n(g)$. Our Theorem \ref{minimalgen} implies a stronger result, that $DSp(2g,2)$ has a spanning set of cardinality $n(g)$. This confirms a conjecture of Blokhuis and Brouwer.
It should be noted that Li (and earlier McClurg \cite{McC}) proposed a spanning set for $DSp(2g,2)$. However, their set is different from the image of $M_g$, so we have not verified that it generates.
The number $n(g)$ was initially suggested to us by some lengthy computer calculations. The author would like to thank Gabriel Gaster for his help in carrying out those calculations.
\section{Special Handlebodies}
In this section we define a special set of bordered $3$-manifolds. We will construct these manifolds by doing surgery on links in a thickened punctured disk.
Explicitly, let $D$ be the unit disk in $\R^2$, and let $D_g^*$ be the punctured disk obtained by removing $g$ smaller disks $p_1, p_2, \dots,p_g$ from the interior of $D$. Thickening $D_g^*$ yields a handlebody $H_g = D_g^* \times [-1,1]$, whose boundary is a genus $g$ surface $\Sigma_g$. We choose a collection of disjoint arcs $\alpha_1,\dots,\alpha_g \subset D_g^*$, which connect the punctures to the boundary of the disk as shown:
\begin{center}\includegraphics[ scale = 0.5]{handlebodyarcs}\end{center}
Let $A_i$ denote the disk $\alpha_i \times [-1,1]$, so that $A_1,\dots,A_g$ are a set of compressing disks for the handlebody $H_g$.
\begin{definition} We say that a framed link $L \subset H_g$ is \emph{minimal} if it intersects each disk $A_i$ at most once geometrically, and each of its components intersects at least one disk. We say that $L$ is \emph{crossingless} if it is contained in the slice $D_g^* \times \{0\}$. \end{definition}
It can be shown that any manifold obtained by surgery on a minimal crossingless link is homeomorphic to a handlebody, and that if two minimal crossingless links are not isotopic then the corresponding handlebodies are not homeomorphic. Strictly speaking we do not need either of these facts, but we will make use of them implicitly.
\begin{definition} A bordered manifold $H$ is said to be an \emph{almost-special handlebody} if it is obtained by doing $0$-surgery on each component of a minimal crossingless link $L \subset H_g$. We denote the set of almost-special handlebodies by $S_g$. \end{definition}
The set of almost-special handlebodies was previously considered by Baldwin and Bloom \cite{BaBl}, although their description of these manifolds differs from ours. We now describe a smaller set $M_g \subset S_g$. The definition of $M_g$ requires several concepts, which we now explain.
Using the distinguished arcs $\alpha_i$, we can identify each puncture with a point on the boundary of $D$. This identification induces a cyclic ordering on the set of punctures. The definition of $M_g$ requires us to choose a total ordering that is compatible with this cyclic ordering. Equivalently, we can choose a ``left-most'' puncture $p_1$, then list the remaining punctures $p_2, p_3, \dots, p_g$ as they appear in clockwise order around the boundary.
Given a handlebody of genus $g$ and a puncture $p$, we can form a new handlebody of genus $g-1$ by filling $p$. When we fill a puncture, the total ordering on the punctures of $D^*_g$ induces a total ordering on the punctures of $D^*_{g-1}$. This allows us to induct on the genus of a handlebody.
\begin{definition} We say that an almost-special handlebody $H$ is \emph{reducible} if the corresponding crossingless link $L$ satisfies one of two conditions:
\begin{enumerate}
\item There is a puncture $p$ which is not circled by any component of $L$.
\item There is a component of $L$ which circles exactly one puncture.
\end{enumerate}
If $H$ is not reducible, we say that it is irreducible. The \emph{reduction} of a reducible handlebody is the irreducible handlebody obtained by first deleting each component of $L$ that circles exactly one puncture, then filling every puncture that is not circled by any component.
\end{definition}
Given the ordering on punctures, we can order the components of any minimal crossingless link $L$, by declaring that a component $L_1$ is to the left of another component $L_2$ if the leftmost puncture circled by $L_1$ is to the left of the leftmost puncture circled by $L_2$. It is therefore sensible to talk about the leftmost component of a minimal crossingless link.
We now give an inductive definition of $M_g$.
\begin{definition} We say that an irreducible, almost-special handlebody is \emph{special} if the corresponding crossingless link $L$ satisfies each of the following three conditions:
\begin{enumerate}
\item The handlebody obtained by deleting the left-most component of $L$ and filling every puncture it circles is special.
\item The punctures circled by the left-most component are consecutive with respect to the cyclic ordering (not necessarily with respect to the total ordering).
\item If the left-most component circles $p_1$, $p_g$, and $p_{g-1}$, then it circles every puncture.
\end{enumerate}
In general, we say that an almost-special handlebody is special if its reduction is special. We denote the set of special handlebodies by $M_g$.
\end{definition}
Note that the empty link in a genus zero handlebody satisfies each condition vacuously, so even though the definition is inductive it does not require a base case.
\begin{proposition} For any $g \geq 1$, the number of irreducible special handlebodies of genus $g$ is given by
\[ m(g) = \frac{2^{g-1} +(-1)^g}{3}. \]
\end{proposition}
\begin{proof} It suffices to show that $m(g)$ satisfies the recursion
\[ m(g) = m(g-1) + 2m(g-2), \]
together with the initial conditions $m(1) = 0$, $m(2) = 1$. The initial conditions are clear from the definitions. To prove the recursion, observe that any irreducible element of $M_g$ is obtained in a unique way by one of three constructions:
\begin{enumerate}
\item Starting with an irreducible element of $M_{g-1}$, insert a puncture inside the left-most component, just to the right of the left-most puncture it contains.
\item Starting with an irreducible element of $M_{g-2}$, insert two new left-most punctures, and create a new component circling both of them.
\item Starting with an irreducible element of $M_{g-2}$, insert a new left-most puncture and a new right-most puncture, and create a new component circling both of them.
\end{enumerate}
\end{proof}
Be warned that the special case $m(0) = 1$ is not consistent with the formula above.
\begin{proposition} The number of special handlebodies of genus $g$ is given by the formula
\[ n(g) = \frac{(2^g+1)(2^{g-1}+1)}{3} \]
\end{proposition}
\begin{proof} A special handlebody is determined uniquely by a set of uncircled punctures, a set of components that circle exactly one puncture, and an irreducible special handlebody. Therefore, the number of special handlebodies is equal to:
\[ \sum_{k=0}^g \sum_{l=0}^{g-k} {g \choose k} {g - k \choose l} m(g-k-l) \]
where $m(h)$ is the number of irreducible special handlebodies of genus $h$. This iterated summation can be carried out in an elementary (but tedious) manner, by repeatedly applying the binomial theorem. The result is $n(g)$.
\end{proof}
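The iterated summation is also easy to check by machine against the closed form; the sketch below (ours) uses the special case $m(0)=1$ noted earlier.
\begin{verbatim}
from math import comb

def m(h):   # number of irreducible special handlebodies, with m(0) = 1
    return 1 if h == 0 else (2**(h - 1) + (-1)**h) // 3

for g in range(11):
    total = sum(comb(g, k) * comb(g - k, l) * m(g - k - l)
                for k in range(g + 1) for l in range(g - k + 1))
    assert total == (2**g + 1) * (2**g + 2) // 6   # = (2^g+1)(2^{g-1}+1)/3
print("n(g) summation verified for g <= 10")
\end{verbatim}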
\section{Generation}
In this section, we prove that $M_g$ generates $X(\Sigma_g)$. First we introduce some algebraic notation for the manifolds we are considering.
Observe that $X(\Sigma_g)$ has a simple binary operation, given by stacking thickened punctured disks. Given bordered manifolds $Y_1$ and $Y_2$ we denote the result of stacking $Y_1$ on top of $Y_2$ by $Y_1Y_2$. We denote the result of $0$-surgery on a minimal crossingless knot by the symbol $(i_1 i_2 \dots i_k)$, where $p_{i_1},p_{i_2},\dots,p_{i_k}$ are the punctures circled by the knot, listed in increasing order.
A special handlebody can therefore be represented by a symbol like $(247)(56)(8)$. On the other hand, not every symbol like this corresponds to a special handlebody. For example, $(123)(13)$ is crossingless but not minimal, and $(13)(24)$ is minimal but not crossingless. Note that stacking is not commutative, so for example $(13)(24) \neq (24)(13)$.
Finally, we will also need to change the surgery coefficients on some links in our diagram. To represent $p$-framed surgery on a minimal crossingless knot, we will use a symbol like $(247)_p$.
\begin{proposition} To show that $M_g$ generates $X(\Sigma_g)$, it suffices to show that the following two handlebodies lie in the span of $M_g$:
\begin{itemize}
\item Handlebody $A$ = the genus $5$ handlebody $(145)(23)$.
\item Handlebody $B$ = the genus $6$ handlebody $(14)(23)(56)$.
\end{itemize}
\end{proposition}
\begin{proof} By the result of Baldwin and Bloom, it suffices to show that every irreducible element of $S_g$ lies in the span of $M_g$. Suppose that there is an irreducible $H$ in $S_g$ that is not generated by $M_g$. Since $H$ is not special it violates one of the three conditions in the definition of special handlebody. If it violates the third condition then we can simplify it using the reduction of handlebody $A$. If it does not violate the third condition, but does violate the second condition, then we can simplify it using the reduction of handlebody $B$. If it does not violate the second and third conditions, but does violate the first, then we can forget about the leftmost component and proceed by induction on $g$.
\end{proof}
We are therefore reduced to doing two explicit computations. There are two key results that make these computations possible.
\begin{proposition} \label{genus3prop} The genus $3$ handlebody $(123)$ lies in the span of $(12)(23)$ and the reducible elements of $M_3$. \end{proposition}
\begin{proof} For any set of punctures $X$ there is a surgery triangle relating $X_{p}$,$X_{p+1}$, and the empty diagram. Therefore, $(123)$ lies in the span of the empty diagram and $(123)_1$, which in turn lies in the span of $(123)_{1}(13)$ and $(123)_{1}(13)_{-1}$. The former is equivalent by a handleslide to $(1)_1(13)$, which lies in the span of the reducible diagrams $(1)(13)$ and $(13)$. To simplify $(123)_1(13)_{-1}$ we can apply the famous lantern relation, in the form
\[ (123)_{1}(13)_{-1} = (12)_{1}(23)_{1}(1)_{-1}(2)_{-1}(3)_{-1} \]
Modifying surgery coefficients on the right hand side of this equation shows that it lies in the span of $(12)(23)$ and reducibles, as desired.
\end{proof}
We can easily modify the argument above to show that $(123)$ lies in the span of $(23)(12)$ and reducibles. Thus Proposition \ref{genus3prop} remains valid no matter how we permute the punctures.
\begin{proposition} \label{genus4prop} The genus $4$ handlebody $(12)(34)$ lies in the span of $(13)(24)$, $(14)(23)$, $(1234)$, and the reducible elements of $M_4$. \end{proposition}
\begin{proof} By the result of Baldwin and Bloom, $(13)(24)$ lies in the span of the special handlebodies. It is therefore enough to observe that there is a diffeomorphism $\phi$ of the standard genus $4$ handlebody which takes the diagram of $(12)(34)$ to the diagram of $(13)(24)$, and whose inverse sends the diagram of any special handlebody to a diagram which lies in the span of $(13)(24)$, $(14)(23)$, $(1234)$, and the reducible elements of $M_4$.
Such a diffeomorphism can be constructed as follows. Arrange the punctures $1,2,3,4$ so that they lie on the vertices of a square, with $1$ in the upper right corner. Draw a vertical arc that separates punctures $1$ and $4$ from punctures $2$ and $3$. Flip over the part of $H_g$ which lies to the right of the arc, thereby switching punctures $2$ and $3$. The result of applying this diffeomorphism to $(12)(34)$ is a two-component diagram whose components link punctures $1$ and $3$ and $2$ and $4$, respectively. This diagram is not crossingless, however we can make it crossingless by flipping over punctures $2$ and $3$ individually, in a direction opposite to the original flip. The combination of these three flips is a diffeomorphism $\phi: H_g \to H_g$ that takes $(12)(34)$ to $(13)(24)$.
Finally, one applies the inverse diffeomorphism $\phi^{-1}$ to all special handlebodies of genus 4. Using the result of Baldwin and Bloom for genera 2 and 3, one checks (tediously) that the resulting handlebodies all lie in the combined span of $(13)(24)$, $(14)(23)$, $(1234)$, and the reducible special handlebodies.
\end{proof}
We are now ready to eliminate handlebodies $A$ and $B$.
\begin{proposition} \label{genus5prop} The handlebody $(145)(23)$ lies in the span of $M_5$. \end{proposition}
\begin{proof} We write $A \rightarrow X + Y + Z$ if the bordered manifold $A$ lies in the span of $X$,$Y$, $Z$, and manifolds already known to be in the span of $M_5$, like reducible handlebodies. The reduction is very complicated to write down in full, so we only show the important steps. The arrows below all follow from a combination of Propositions \ref{genus3prop} and \ref{genus4prop}:
\begin{eqnarray*}(145)(23) &\rightarrow& (15)(14)(23) \rightarrow (15)(12)(34) + (15)(13)(24) + (15)(1234) + (15)(234) \\
(15)(13)(24) &\rightarrow& (13)(35)(24) \rightarrow (13)(23)(45) + (13)(25)(34) + (13)(2345) + (13)(245) \\
(13)(2345) &\rightarrow& (13)(23)(345) + (13)(23)(45) \\
(13)(23)(345) &\rightarrow& (12)(23)(345) \rightarrow (12345) + (123)(45) + (12)(345) + (13)(245) \\
(12)(25)(34) &\rightarrow& (12)(35)(34) + (15)(23)(34) + (1235)(34) + (125)(34) \\
(1235)(34) &\rightarrow& (12345) + (125)(34) \end{eqnarray*} \end{proof}
\begin{proposition} \label{genus6prop} The handlebody $(14)(23)(56)$ lies in the span of $M_6$. \end{proposition}
\begin{proof} Again, we show the important parts of the reduction:
\begin{eqnarray*} (14)(23)(56) &\rightarrow& (13)(24)(56) + (12)(34)(56) + (1234)(56) \\
(13)(24)(56) &\rightarrow& (13)(25)(46) + (13)(26)(45) + (13)(2456) \\
(13)(25)(46) &\rightarrow& (12)(35)(46) + (15)(23)(46) + (1235)(46) \\
(13)(26)(45) &\rightarrow& (12)(36)(45) + (16)(23)(45) + (1236)(45) \\
(13)(2456)&\rightarrow& (12)(3456) + (1456)(23) + (123456) \\
(12)(35)(46) &\rightarrow& (12)(34)(56) + (12)(36)(45) + (12)(3456) \\
(15)(23)(46) &=& (23)(15)(46) \rightarrow (23)(16)(45) + (23)(14)(56) + (23)(1456) \\
(1235)(46) &\rightarrow& (1234)(56) + (1236)(45) + (123456)
\end{eqnarray*}
The only byproduct of the above reductions which does not lie in $M_6$ is $(23)(1456)$. Grouping together punctures $4$ and $5$ and applying Proposition \ref{genus5prop} shows that this handlebody lies in the span of $M_6$. \end{proof}
\section{Minimality}
In this section we establish a lower bound for the cardinality of any generating set of $X(\Sigma_g)$. Instead of working directly with $X(\Sigma_g)$ we use its ``universal embedding'' (see \cite{BlBr} or \cite{Li}, for example), or more precisely an integral lift of this embedding.
\begin{definition} Let $\Sigma$ be a connected oriented surface. We denote by $K(\Sigma)$ the free abelian group spanned by bordered $3$-manifolds with boundary $\Sigma$, modulo the relations
\[ Y_{\alpha} + Y_{\beta} + Y_{\gamma} = 0 \]
for every surgery triangle $(Y_{\alpha},Y_{\beta},Y_{\gamma})$. \end{definition}
There is a tautological map $X(\Sigma) \to K(\Sigma)$, and under this map any generating set for $X(\Sigma)$ maps to a spanning set for $K(\Sigma)$. The group $K(\Sigma)$ can be thought of as a sort of Grothendieck group of bordered $3$-manifolds.
A subspace $L \subset H_1(\Sigma;\F_2)$ is said to be Lagrangian if it is isotropic with respect to the intersection pairing and has dimension $g$. If $S \subset H_1(\Sigma;\F_2)$ is an isotropic subspace of dimension $g-1$, then the quotient $S^{\perp}/S$ is a $2$-dimensional $\F_2$ vector space, so $S$ is contained in exactly $3$ Lagrangian subspaces. The relationship between these $3$ Lagrangians is analogous to the relationship between the $3$ manifolds involved in a surgery triangle, so we call such triples ``isotropic triangles''.
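For $g=2$ this can be verified exhaustively. The following sketch (ours, encoding vectors of $\F_2^{2g}$ as bitmasks) enumerates the $15$ Lagrangian planes of $\F_2^{4}$ and checks that every $1$-dimensional (automatically isotropic) subspace lies in exactly three of them.
\begin{verbatim}
from itertools import combinations

g = 2
vectors = range(1, 1 << (2 * g))     # nonzero bitmasks of length 2g

def sp_form(u, v):
    # symplectic form for coordinates ordered (a_1, b_1, ..., a_g, b_g)
    return sum((u >> (2*i) & 1) * (v >> (2*i + 1) & 1)
               + (u >> (2*i + 1) & 1) * (v >> (2*i) & 1)
               for i in range(g)) % 2

def span(gens):
    vecs = {0}
    for h in gens:
        vecs |= {v ^ h for v in vecs}
    return frozenset(vecs)

lagrangians = set()
for c in combinations(vectors, g):
    V = span(c)
    if len(V) == 2**g and all(sp_form(u, v) == 0
                              for u, v in combinations(V, 2)):
        lagrangians.add(V)
print(len(lagrangians))              # 15 Lagrangian planes in F_2^4

for v in vectors:                    # in characteristic 2 every vector
    S = span([v])                    # spans an isotropic line
    assert sum(S <= L for L in lagrangians) == 3
print("each 1-dimensional subspace lies in exactly 3 Lagrangians")
\end{verbatim}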
\begin{definition} Let $\Sigma$ be a connected oriented surface. We denote by $L(\Sigma)$ the free abelian group spanned by Lagrangian subspaces $L \subset H_1(\Sigma;\F_2)$, modulo the relation
\[ L_{\alpha} + L_{\beta} + L_{\gamma} = 0 \]
for every isotropic triangle $(L_{\alpha}, L_{\beta}, L_{\gamma})$. When $\Sigma = S^2$ we consider there to be a single Lagrangian subspace, the zero space, so that $L(S^2) = \Z$. \end{definition}
From now on all homology groups will have coefficients in $\F_2$, so $H_1(\Sigma) = H_1(\Sigma;\F_2)$.
\begin{proposition} If $Y$ is a $3$-manifold with (possibly disconnected) boundary $\Sigma$, then the kernel $L(Y)$ of the map $H_1(\Sigma) \to H_1(Y)$ is Lagrangian. \end{proposition}
\begin{proof} Let $\alpha$ and $\beta$ be curves on $\Sigma$, bounding surfaces $A$ and $B$ in $Y$. If we perturb $A$ and $B$ to intersect transversely, then their intersection is a $1$-manifold whose arc components have their endpoints precisely at the points of $\alpha \cap \beta$; the arcs therefore pair up these points, so $\alpha \cap \beta$ has even cardinality. This shows that $L(Y)$ is isotropic.
Now consider the map
\[ p: H_1(\Sigma)/L(Y) \to L(Y)^* \]
arising from the intersection pairing on $H_1(\Sigma)$. To show that $L(Y)$ has dimension exactly $g$, and is therefore Lagrangian, it suffices to show that $p$ is injective.
Therefore, suppose that $\gamma \in H_1(\Sigma)$ is nonzero in $H_1(\Sigma)/L(Y)$. Pushing forward by the inclusion $j:\Sigma \to Y$, we obtain a class $j_*(\gamma)$ in $H_1(Y)$. This class is nonzero, because $\gamma$ does not lie in $L(Y) = \ker j_*$. Therefore, by Poincar\'e duality, there is a class $A$ in $H_2(Y,\Sigma)$ such that $j_*(\gamma) \cdot A$ is nonzero. Represent $A$ by a surface in $Y$ whose boundary lies on $\Sigma$, and let $\alpha = \partial A$. Then $\alpha \in L(Y)$ and $\gamma \cdot \alpha = j_*(\gamma) \cdot A$ is nonzero, so $p(\gamma)$ is nonzero as desired.
\end{proof}
\begin{proposition} \label{triangleequalities} If $Y_{\alpha}$, $Y_{\beta}$, $Y_{\gamma}$ form a surgery triangle, then one of two possibilities holds:
\begin{enumerate}
\item The corresponding Lagrangians $L_{\alpha}$, $L_{\beta}$, $L_{\gamma}$ form an isotropic triangle and we have
\[ n_{\alpha} = n_{\beta} = n_{\gamma} \]
where $n_{\alpha}$ (for example) denotes the rank of $H_2(Y_{\alpha})$ as an $\F_2$ vector space.
\item The corresponding Lagrangians are equal and we have
\[ (-2)^{n_{\alpha}} + (-2)^{n_{\beta}} + (-2)^{n_{\gamma}} = 0 .\]
\end{enumerate}
\end{proposition}
\begin{proof} By assumption, the manifolds $Y_{\alpha}$, $Y_{\beta}$, $Y_{\gamma}$ are obtained by doing surgery on a knot $K$ in some bordered manifold $Y$ with boundary $\Sigma_g$. Denote by $Z$ the manifold $Y \setminus \nu(K)$, whose boundary is $\Sigma_g \sqcup T^2$. Let $\alpha$, $\beta$, and $\gamma$ be the surgery curves on $T^2$ whose fillings give rise to $Y_{\alpha}$, $Y_{\beta}$, and $Y_{\gamma}$.
Let $\tilde{L}$ be the kernel of the inclusion map $H_1(T^2) \oplus H_1(\Sigma_g) \to H_1(Z)$. Then $\tilde{L}$ is Lagrangian, so it has dimension $g+1$. The kernel of the projection of $\tilde{L}$ onto $H_1(T^2)$ is an isotropic subspace $S \subset H_1(\Sigma_g)$, which is contained in all three Lagrangians $L_{\alpha}$,$L_{\beta}$,$L_{\gamma}$. Since $S$ is obtained by intersecting $\tilde{L}$ with a codimension $2$ subspace, it has dimension at least $g-1$. Since it is isotropic it has dimension at most $g$. We therefore consider two separate cases.
If $S$ has dimension $g-1$, then the projection $\tilde{L} \to H_1(T^2)$ is surjective. Hence there are classes $\alpha'$, $\beta'$, and $\gamma'$ in $H_1(\Sigma_g)$ that are homologous in $Z$ to $\alpha$, $\beta$, and $\gamma$. Let $A$, $B$, and $C$ be surfaces witnessing these homologies. Then we have (for example)
\[ \alpha' \cdot \beta' = A \cdot \beta' = A \cdot \beta = \alpha \cdot \beta = 1 ,\]
so the classes $\alpha'$, $\beta'$, and $\gamma'$ are all distinct. Evidently these classes lie outside of $S$, and they bound in $Y_{\alpha}$, $Y_{\beta}$, and $Y_{\gamma}$ respectively, so together with $S$ they span the Lagrangians $L_{\alpha}$, $L_{\beta}$, and $L_{\gamma}$. Since these Lagrangians are all distinct, they form an isotropic triangle. Finally, note that a closed surface in $Y_{\alpha}$ (for example) cannot intersect the surgery curve in a homologically nontrivial way, because $\alpha$ does not bound in $Z$. Therefore, the Mayer-Vietoris sequence
\[ H_2(T^2) \to H_2(Z) \oplus H_2( S^1 ) \to H_2(Y_{\alpha}) \to H_1(T^2) \]
shows that the map $H_2(Z)/H_2(T^2) \to H_2(Y_{\alpha})$ is an isomorphism, and similarly for $\beta$ and $\gamma$, so $n_{\alpha} = n_{\beta} = n_{\gamma}$.
If the dimension of $S$ is $g$, then the image of the projection $\tilde{L} \to H_1(T^2)$ is $1$-dimensional and $S = L_{\alpha} = L_{\beta} = L_{\gamma}$. Without loss of generality, the image of the projection is spanned by $\alpha$, so there is some $\alpha'$ in $H_1(\Sigma)$ such that $(\alpha, \alpha') \in \tilde{L}$. By definition this means that $\alpha$ and $\alpha'$ are homologous in $Z$, so $\alpha'$ bounds in $Y_{\alpha}$, and therefore $\alpha' \in S = L_{\alpha}$. We conclude that $\tilde{L} = \mathrm{span}(\alpha) \oplus S$.
At any rate, we see that the kernel of $H_1(T^2) \to H_1(Z)$ is spanned by $\alpha$, hence the Mayer-Vietoris sequence
\[ H_2(Z) \oplus H_2(S^1) \to H_2(Y_{\alpha/\beta/\gamma}) \to H_1(T^2) \to H_1(Z) \oplus H_1(S^1) \]
shows that the rank of $H_2(Y_{\alpha})$ is one greater than the common rank of $H_2(Y_{\beta})$ and $H_2(Y_{\gamma})$. The identity
\[ (-2)^{n_{\alpha}} + (-2)^{n_{\beta}} + (-2)^{n_{\gamma}} = 0 \]
then follows from a short computation with powers of $2$: writing $n_{\beta} = n_{\gamma} = n$ and $n_{\alpha} = n + 1$, we have $(-2)^{n+1} + 2 \cdot (-2)^{n} = (-2)^{n}(-2 + 2) = 0$.
\end{proof}
Motivated by Proposition \ref{triangleequalities}, we define a map $\tilde{\mu}: X(\Sigma_g) \to L(\Sigma_g)$ by
\[ \tilde{\mu}(Y) = (-2)^{n(Y)} [L(Y)] \]
where $n(Y)$ is the rank of $H_2(Y)$ as a vector space over $\F_2$.
\begin{proposition} The map $\tilde{\mu}$ induces a surjective homomorphism $\mu: K(\Sigma_g) \to L(\Sigma_g)$. \end{proposition}
\begin{proof} That $\tilde{\mu}$ induces a homomorphism follows directly from Proposition \ref{triangleequalities}. That it is surjective follows from the fact that the homomorphism $\mathrm{Mod}_g \to Sp_{2g}(\F_2)$ is surjective, and the fact that $Sp_{2g}(\F_2)$ acts transitively on Lagrangian subspaces.
\end{proof}
In fact, $\mu$ is an isomorphism. Before proving this we make a simple observation:
\begin{proposition} \label{keylemma} If $Y \in X(\Sigma_g)$ and $\alpha$ is any simple closed curve on $\Sigma_g$ whose homology class lies in $L(Y)$, then there exists $Y' \in X(\Sigma_g)$, equivalent to $Y$ in $K(\Sigma_g)$, such that $\alpha$ bounds an embedded disk in $Y'$. \end{proposition}
\begin{proof} Let $A \subset Y$ be a surface with boundary $\alpha$. Without loss of generality, $A$ is nonorientable, hence it is homeomorphic to a connected sum of real projective planes. Let $e_1,\dots,e_k$ denote the exceptional curves in these projective planes. Removing a tubular neighborhood of each of these curves from $Y$, we obtain a manifold with $k$ torus boundary components. With respect to the framings induced by $A$, the manifold $Y$ is obtained by filling curves of slope $(1,2)$ on each boundary torus. Let $Y'$ be the result of filling the curves of slope $(1,0)$ instead. Then the surgery triangles
\[Y_{(1,2)}(K) + Y_{(-1,-1)}(K) + Y_{(0,1)}(K) = 0 \]
\[Y_{(1,0)}(K) + Y_{(-1,-1)}(K) + Y_{(0,1)}(K) = 0, \]
which are valid for any framed knot $K$ in any $3$-manifold $Y$, together show that $Y' = Y$ in $K(\Sigma_g)$. The effect of these surgeries on the surface $A$ is to blow down all exceptional curves, the result being an embedded disk $A' \subset Y'$ with boundary $\alpha$.
\end{proof}
\begin{theorem} The map $\mu: K(\Sigma_g) \to L(\Sigma_g)$ is an isomorphism. \end{theorem}\begin{proof} First we treat the case $g=0$. Since every $3$-manifold can be reduced to $S^3$ by surgery triangles, we know that $K(S^2)$ is a cyclic group. Any generator of this cyclic group maps to a generator of $L(S^2) = \Z$, hence $K(S^2) = \Z$ and $\mu: K(S^2) \to L(S^2)$ is an isomorphism.
In general, any bordered manifold $Y_0$ with $L(Y_0) = L$ and $H_2(Y_0) = 0$ is equivalent in $K(\Sigma_g)$ to a ``standard'' example, namely the handlebody obtained by representing a standard basis of $L$ by disjoint simple closed curves on $\Sigma_g$ and attaching disks along these curves. To see why, suppose that $\alpha_1,\dots,\alpha_g$ are curves representing a basis for $L$. By Proposition \ref{keylemma}, $Y_0$ is equivalent in $K(\Sigma_g)$ to another bordered manifold $Y_1$ in which $\alpha_1$ bounds a disk. Examining the proof of Proposition \ref{keylemma}, we see that $H_2(Y_0)$ and $H_2(Y_1)$ are isomorphic, so $H_2(Y_1) = 0$.
Continuing this process inductively produces a manifold $Y_g$, equivalent to $Y_0$ in $K(\Sigma_g)$, such that all the curves $\alpha_i$ bound disks in $Y_g$. This manifold $Y_g$ is the connected sum of a standard handlebody $H_g$ and a closed $3$-manifold $Z$ such that $H_2(Z) = 0$. Because $\mu:K(S^2) \to L(S^2)$ is an isomorphism and $H_2(Z) = 0$, a connected sum with $Z$ is equivalent to a connected sum with $S^3$, hence $Y_g$ is equivalent to $H_g$ in $K(\Sigma_g)$. Thus the original bordered manifold $Y_0$ is equivalent to $H_g$, as claimed.
Any isotropic triangle can also be realized by a standard example, due to the transitive action of $\mathrm{Mod}_g$ on isotropic triangles. Hence there is an inverse map
\[ \mu^{-1}: L(\Sigma_g) \to K(\Sigma_g) \]
defined by sending a Lagrangian $L$ to any $Y$ with $L(Y) = L$ and $H_2(Y) = 0$. Evidently we have $\mu^{-1}(\mu(Y)) = Y$ in $K(\Sigma_g)$, from which we conclude that $\mu$ is injective.
\end{proof}
Any generating set for $X(\Sigma_g)$ maps to a spanning set for $K(\Sigma_g)$, or equivalently for $L(\Sigma_g)$. The group $L(\Sigma_g)$ has been computed by Blokhuis and Brouwer \cite{BlBr}, who found that it is a free abelian group of rank
\[n(g) = \frac{(2^g+1)(2^{g-1}+1)}{3}.\]
Since $L(\Sigma_g)$ is a free abelian group of rank $n(g)$, any spanning set for it contains at least $n(g)$ elements, and we obtain:
\begin{corollary} Any generating set for $X(\Sigma_g)$ has cardinality at least $n(g)$. \end{corollary}
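For concreteness (a short illustrative computation, not part of the original argument), the first few values are $n(1) = 2$, $n(2) = 5$, $n(3) = 15$, $n(4) = 51$, and $n(5) = 187$:
\begin{verbatim}
def n(g):
    # rank of L(Sigma_g), after Blokhuis and Brouwer; the product
    # (2^g + 1)(2^{g-1} + 1) is always divisible by 3, so the
    # integer division below is exact
    return (2 ** g + 1) * (2 ** (g - 1) + 1) // 3

print([n(g) for g in range(1, 6)])   # [2, 5, 15, 51, 187]
\end{verbatim}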
| {
"timestamp": "2014-10-31T01:06:55",
"yymm": "1410",
"arxiv_id": "1410.3755",
"language": "en",
"url": "https://arxiv.org/abs/1410.3755",
"abstract": "Surgery triangles are an important computational tool in Floer homology. Given a connected oriented surface $\\Sigma$, we consider the abelian group $K(\\Sigma)$ generated by bordered 3-manifolds with boundary $\\Sigma$, modulo the relation that the three manifolds involved in any surgery triangle sum to zero. We show that $K(\\Sigma)$ is a finitely generated free abelian group and compute its rank. We also construct an explicit basis and show that it generates all bordered 3-manifolds in a certain stronger sense. Our basis is strictly contained in another finite generating set which was constructed previously by Baldwin and Bloom. As a byproduct we confirm a conjecture of Blokhuis and Brouwer on spanning sets for the binary symplectic dual polar space.",
"subjects": "Geometric Topology (math.GT)",
"title": "3-manifolds Modulo Surgery Triangles",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9879462194190618,
"lm_q2_score": 0.8006919949619792,
"lm_q1q2_score": 0.7910406293417938
} |
https://arxiv.org/abs/2102.09924 | A proof of convergence for gradient descent in the training of artificial neural networks for constant target functions | Gradient descent optimization algorithms are the standard ingredients that are used to train artificial neural networks (ANNs). Even though a huge number of numerical simulations indicate that gradient descent optimization methods do indeed converge in the training of ANNs, until today there is no rigorous theoretical analysis which proves (or disproves) this conjecture. In particular, even in the case of the most basic variant of gradient descent optimization algorithms, the plain vanilla gradient descent method, it remains an open problem to prove or disprove the conjecture that gradient descent converges in the training of ANNs. In this article we solve this problem in the special situation where the target function under consideration is a constant function. More specifically, in the case of constant target functions we prove in the training of rectified fully-connected feedforward ANNs with one hidden layer that the risk function of the gradient descent method does indeed converge to zero. Our mathematical analysis strongly exploits the property that the rectifier function is the activation function used in the considered ANNs. A key contribution of this work is to explicitly specify a Lyapunov function for the gradient flow system of the ANN parameters. This Lyapunov function is the central tool in our convergence proof of the gradient descent method. |
\section{Introduction}
Gradient descent (GD) optimization schemes are the standard methods for the training of artificial neural networks (ANNs). Although a large number of numerical simulations indicate that GD optimization methods do converge in the training of ANNs, there is in general no mathematical analysis in the scientific literature which proves (or disproves) the conjecture that GD optimization methods converge in the training of ANNs.
Even though the convergence of GD optimization methods is still an open problem of research, there are several promising approaches in the scientific literature which attack this problem. In particular, we refer, e.g., to \cite{Bach2017, BachMoulines2013, BachMoulines2011} and the references mentioned therein for convergence results for GD optimization methods in the training of convex neural networks, we refer, e.g., to \cite{AllenzhuLiLiang2019, AllenzhuLiSong2019, ChizatBach2018, DuLeeLiWangZhai2019, DuZhaiPoczosSingh2018arXiv, EMaWu2020, JacotGabrielHongler2020, LiLiang2019, SankararamanDeXuHuangGoldstein2020, EChaoWu2018, ZouCaoZhouGu2019} and the references mentioned therein for convergence results for GD optimization methods for the training of ANNs in the so-called overparametrized regime, we refer, e.g., to \cite{AkyildizSabanis2021, FehrmanGessJentzen2020, LeiHuLiTang2020, LovasSabanis2020} and the references mentioned therein for abstract convergence results for GD optimization methods which do not assume convexity of the considered objective functions, we refer, e.g., to \cite{Hanin2018, HaninRolnick2018, LuShinSuKarniadakis2020, ShinKarniadakis2020} and the references mentioned therein for results on the effect of initialization in the training of ANNs, and we refer, e.g., to \cite{CheriditoJentzenRossmannek2020, JentzenvonWurstemberger2020, LuShinSuKarniadakis2020} and the references mentioned therein for lower bounds and divergence results for GD optimization methods. For more detailed overviews and further references on GD optimization schemes we also refer, e.g., to \cite{Ruder2017overview}, \cite[Section 1]{JentzenKuckuckNeufeldVonWurstemberger2021}, and \cite[Section 1.1]{FehrmanGessJentzen2020}.
A key idea of this work is to attack this challenging open problem of convergence of GD optimization methods in the training of ANNs in the situation of very special target functions: Our program is to first establish convergence in the case of constant target functions, thereafter, to prove convergence in the case of affine linear target functions, thereafter, to consider suitable continuous piecewise affine linear target functions, and, finally, to pass to the limit of general continuous target functions. In particular, the central contribution of this work is to solve this problem in the case of constant target functions. More formally, the main result of this article (see \cref{theo:gd:loss} in \cref{subsection:theorem:gd} below) proves that the risk function of the standard GD process converges to zero in the training of fully-connected rectified feedforward ANNs with one input, one output, and one hidden layer in the special situation where the target function under consideration is a constant function and where the input data is uniformly distributed on $[0,1]$. We present this result in detail in \cref{theo:intro} below, after which we add several explanatory comments regarding its statement and the mathematical objects appearing in it and highlight the key ideas of its proof.
\begin{theorem} \label{theo:intro}
Let $H \in \mathbb{N}$, $\alpha \in \mathbb{R}$,
$\gamma \in (0 , \infty ) $,
let $\norm{\cdot} \colon \mathbb{R}^{3 H + 1 } \to [0, \infty)$ satisfy for all $\phi = ( \phi_1 , \ldots, \phi_{ 3 H + 1 } ) \in \mathbb{R}^{3 H + 1}$ that $\norm{ \phi } = \br{ \sum_{i=1}^{3 H + 1 } \abs*{ \phi_i } ^2 } ^{ 1 / 2 }$,
let $\sigma_r \colon \mathbb{R} \to \mathbb{R}$, $r \in[1 , \infty]$, satisfy for all $r \in [1 , \infty)$, $x \in \mathbb{R}$ that $\sigma_r ( x ) = r^{-1} \ln \rbr{ 1 + r^{-1} e^{r x } }$ and $\sigma_\infty ( x ) = \max \{ x , 0 \}$,
let $\mathscr{N}_ r = (\realapprox{\phi}{r})_{\phi \in \mathbb{R}^{3 H + 1 } } \colon \mathbb{R}^{3 H + 1} \to C(\mathbb{R} , \mathbb{R})$, $r \in [1 , \infty]$, and $\mathcal{L} _ r \colon \mathbb{R}^{3 H + 1 } \to \mathbb{R}$, $r \in [1 , \infty]$,
satisfy for all $r \in [1 , \infty]$, $\phi = ( \phi_1 , \ldots, \phi_{ 3 H + 1 } ) \in \mathbb{R}^{3 H + 1}$, $x \in \mathbb{R}$ that $\realapprox{\phi}{r} (x) = \phi_{3 H + 1 } + \sum_{j=1}^H \phi_{2 H + j} \sigma_r (\phi_j x + \phi_{H + j} )$
and $\mathcal{L}_r(\phi) = \int_0^1 (\realapprox{\phi}{r} (y) - \alpha )^2 \d y$,
let $\mathcal{G} = ( \mathcal{G}_1, \ldots, \mathcal{G}_{3 H + 1} ) \colon \mathbb{R}^{3 H + 1} \to \mathbb{R}^{3 H + 1}$ satisfy for all
$\phi \in \{ \varphi \in \mathbb{R}^{3 H + 1} \colon ((\nabla \mathcal{L}_r ) ( \varphi ) )_{r \in \mathbb{N} } \text{ is convergent} \}$ that $\mathcal{G} ( \phi ) = \lim_{r \to \infty} (\nabla \mathcal{L}_r ) ( \phi )$,
and let $\Theta = (\Theta_n)_{n \in \mathbb{N}_0} \colon \mathbb{N}_0 \to \mathbb{R}^{3 H + 1}$ satisfy for all $n \in \mathbb{N}_0$ that $\Theta_{n+1} = \Theta_n - \gamma \mathcal{G} ( \Theta_n)$ and $\gamma \leq (4 \norm{\Theta_0} + 6 \abs*{ \alpha } + 2 )^{-2}$. Then
\begin{enumerate} [(i)]
\item \label{theo:intro:item1} it holds for all $\phi \in \{ \varphi \in \mathbb{R}^{3 H + 1 } \colon \mathcal{L}_\infty \text{ is differentiable at } \varphi \}$ that $(\nabla \mathcal{L}_\infty) ( \phi ) = \mathcal{G} ( \phi)$,
\item \label{theo:intro:item2} it holds that $\sup_{n \in \mathbb{N}_0} \norm{\Theta_n} < \infty$, and
\item \label{theo:intro:item3} it holds that $\limsup_{n \to \infty} \mathcal{L}_\infty (\Theta_n) = 0$.
\end{enumerate}
\end{theorem}
Item \eqref{theo:intro:item1} in \cref{theo:intro} is a direct consequence of \cref{cor:loss:differentiable} below and items
\eqref{theo:intro:item2} and \eqref{theo:intro:item3} in \cref{theo:intro} are direct consequences of \cref{cor:gd:main} below. \cref{cor:gd:main}, in turn, follows from \cref{theo:gd:loss}, which is the main result of this article.
Let us next add a few comments regarding the mathematical objects appearing in \cref{theo:intro}.
In \cref{theo:intro} we study the training of ANNs with one input, one output, and one hidden layer. The natural number $H \in \mathbb{N}$ in \cref{theo:intro} specifies the number of neurons on the hidden layer (the dimension of the hidden layer) in the ANN. \cref{theo:intro} proves that the risk function of GD converges to zero in the special situation where the input data is uniformly distributed on $[0,1]$ and where the target function under consideration is a constant function. The real number $\alpha \in \mathbb{R}$ is precisely the constant value with which the target function coincides. The real number $\gamma \in (0,\infty)$ in \cref{theo:intro} specifies the learning rate of the GD method.
In \cref{theo:intro} we consider fully-connected feedforward ANNs with $1$ neuron on the input layer, $H $ neurons on the hidden layer, and $1$ neuron on the output layer. Therefore, the considered ANNs have precisely $2 H$ weights, $H + 1$ biases, and
$2 H + H + 1 = 3 H + 1$ ANN parameters overall. The function $\norm{\cdot} \colon \mathbb{R}^{3 H + 1 } \to [0, \infty)$ in \cref{theo:intro} is nothing but the standard Euclidean norm on the space $\mathbb{R}^{ 3 H + 1 }$ of ANN parameters.
In \cref{theo:intro} we study the training of ANNs with the rectifier function
$\mathbb{R} \ni x \mapsto \sigma_{ \infty }( x ) = \max\{ x, 0 \} \in \mathbb{R}$ as the activation function. Since the rectifier function $\sigma_{ \infty } \colon \mathbb{R} \to \mathbb{R}$
in \cref{theo:intro} is not differentiable at 0, we have that the associated risk function also fails to be differentiable at some points in the ANN parameter space $\mathbb{R}^{ 3 H + 1 }$. In view of this, one needs to carefully choose the values for the driving gradient field in the GD optimization method at the points in the ANN parameter space $\mathbb{R}^{ 3 H + 1 }$ where the risk function is not differentiable. We accomplish this by approximating the rectifier function and the corresponding risk function through regularized versions of these functions. More formally, in \cref{prop:relu:approximation} in \cref{subsection:relu:approx} below we show that the functions $\sigma_r \colon \mathbb{R} \to \mathbb{R}$, $r \in [1,\infty]$, in \cref{theo:intro} satisfy that for all $x \in \mathbb{R}$, $y \in \mathbb{R} \backslash \{ 0 \}$ it holds that $\limsup_{ r \to \infty } | \sigma_r(x) - \sigma_{ \infty }(x) | = 0$
and $\limsup_{ r \to \infty } | ( \sigma_r )'(y) - ( \sigma_{ \infty } )'(y) | = 0$. \Nobs that for all $r \in [1,\infty)$ it holds that $\sigma_r \in C^{ \infty } ( \mathbb{R} , \mathbb{R})$.
The functions $\mathscr{N}_r \colon \mathbb{R}^{3 H + 1 } \to C ( \mathbb{R} , \mathbb{R})$, $r \in [1,\infty]$, in \cref{theo:intro} describe the realization functions of the considered ANNs. More formally, \nobs that for every $r \in [1,\infty]$ and every $\phi = ( \phi_1, \ldots , \phi_{ 3 H + 1 } ) \in \mathbb{R}^{ 3 H + 1 }$ we have that the
function $\mathbb{R} \ni x \mapsto \realapprox{ \phi }{r}( x) \in \mathbb{R}$ is the realization function
associated to the ANN with the activation function $\sigma_r \colon \mathbb{R} \to \mathbb{R}$ and the parameter vector $\phi = ( \phi_1, \ldots , \phi_{ 3 H + 1 } )$. In particular, \nobs that for every
ANN parameter vector $\phi \in \mathbb{R}^{ 3 H + 1 }$ we have that
$\mathbb{R} \ni x \mapsto \realapprox{ \phi }{\infty} ( x) \in \mathbb{R}$ is the realization function associated to the rectified ANN with the parameter vector $\phi$.
The process $\Theta = ( \Theta_n )_{ n \in \mathbb{N}_0 } \colon \mathbb{N}_0 \to \mathbb{R}^{3 H + 1 }$ in \cref{theo:intro} is the GD process with constant learning rate $\gamma$. \Nobs that the learning rate $\gamma$ in \cref{theo:intro} is assumed to be sufficiently small in the sense that $\gamma \leq (4 \norm{\Theta_0} + 6 \abs*{ \alpha } + 2 )^{-2}$. Under this assumption, \cref{theo:intro} reveals that the risk of the GD process $\mathcal{L}_{ \infty }( \Theta_n )$, $n \in \mathbb{N}_0$, does indeed converge to zero as the number of GD steps $n$ increases to infinity.
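For illustration purposes only, we complement \cref{theo:intro} with a short numerical sketch of the considered GD method (the sketch is not part of the mathematical analysis of this article; the concrete choices of $H$, $\alpha$, the initialization, the number of GD steps, and the uniform trapezoidal quadrature rule which replaces the exact integrals are assumptions made purely for demonstration purposes). The gradient routine implements the explicit formulas for the limiting gradient $\mathcal{G}$ which are derived in \cref{prop:limit:lr} in \cref{subsection:risk:differentiable} below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
H, alpha = 5, 1.0                     # illustrative choices only

# parameter vector phi = (w_1..w_H, b_1..b_H, v_1..v_H, c)
phi0 = 0.1 * rng.standard_normal(3 * H + 1)
gamma = (4 * np.linalg.norm(phi0) + 6 * abs(alpha) + 2) ** (-2)

# trapezoidal quadrature on [0, 1] replacing the exact integrals
xs = np.linspace(0.0, 1.0, 513)
wq = np.full(xs.size, xs[1] - xs[0]); wq[[0, -1]] *= 0.5

def realization(phi):                 # N_phi(x) on the quadrature grid
    w, b, v, c = phi[:H], phi[H:2*H], phi[2*H:3*H], phi[3*H]
    return c + np.maximum(np.outer(xs, w) + b, 0.0) @ v

def risk(phi):                        # L_infty(phi)
    return wq @ (realization(phi) - alpha) ** 2

def limiting_gradient(phi):           # G(phi) via eq:loss:gradient below
    w, b, v, c = phi[:H], phi[H:2*H], phi[2*H:3*H], phi[3*H]
    pre = np.outer(xs, w) + b         # pre-activations w_j x + b_j
    act = np.maximum(pre, 0.0)        # sigma_infty(w_j x + b_j)
    ind = (pre > 0.0).astype(float)   # indicator of I_j^phi
    res = act @ v + c - alpha         # N_phi(x) - alpha
    g = np.empty(3 * H + 1)
    g[:H]      = 2 * v * (wq @ (xs[:, None] * ind * res[:, None]))
    g[H:2*H]   = 2 * v * (wq @ (ind * res[:, None]))
    g[2*H:3*H] = 2 * (wq @ (act * res[:, None]))
    g[3*H]     = 2 * (wq @ res)
    return g

phi = phi0.copy()
for _ in range(20000):                # Theta_{n+1} = Theta_n - gamma G(Theta_n)
    phi -= gamma * limiting_gradient(phi)
print(risk(phi))                      # the risk decays towards zero
\end{verbatim}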
Let us also add a few comments on the proof of \cref{theo:intro}. A key new observation of this article is the fact that in the situation of \cref{theo:intro} we have that the function
\begin{equation} \label{eq:intro:lyapunov}
\mathbb{R}^{ 3 H + 1 } \ni ( \phi_1, \ldots, \phi_{ 3 H + 1 } ) \mapsto \rbr[\big]{\textstyle\sum_{i=1}^{3 H + 1 } \abs{\phi_i}^2 } + ( \phi_{3 H + 1 } - 2 \alpha ) ^2 \in \mathbb{R}
\end{equation}
is a Lyapunov function for the gradient flow system of the ANN parameters.
We refer to item \eqref{prop:lyapunov:gradient:item3} in \cref{prop:lyapunov:gradient} in \cref{subsection:lyapunov} and
\cref{lem:flow:lyapunov} in \cref{subsection:ito:lyapunov} for the proof of this statement.
In addition, in \cref{lem:vthetan:decreasing} in \cref{subsection:gd:lyapunov} we
show that the function in \eqref{eq:intro:lyapunov} is also a Lyapunov function for the time-discrete GD processes if the learning rate is sufficiently small. We would also like to emphasize that the term $( \phi_{3 H + 1 } - 2 \alpha ) ^2$ in \eqref{eq:intro:lyapunov} is essential for this function to serve as a Lyapunov function. In particular, we point out that the function $\mathbb{R}^{3 H + 1 } \ni \phi \mapsto \norm{\phi} ^2 \in \mathbb{R}$ fails to be a Lyapunov function for the gradient flow system of the ANN parameters.
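As a purely numerical illustration of this discrete Lyapunov property (again not part of the analysis; the snippet reuses the identifiers \texttt{H}, \texttt{alpha}, \texttt{gamma}, \texttt{phi0}, and \texttt{limiting\_gradient} from the sketch above), one may monitor the function in \eqref{eq:intro:lyapunov} along the GD iterates:
\begin{verbatim}
def lyapunov(phi):                    # the function in eq:intro:lyapunov
    return phi @ phi + (phi[3 * H] - 2 * alpha) ** 2

# continues the sketch above; V(Theta_n) is nonincreasing along GD
phi, vals = phi0.copy(), []
for _ in range(2000):
    vals.append(lyapunov(phi))
    phi -= gamma * limiting_gradient(phi)
assert all(v1 <= v0 + 1e-12 for v0, v1 in zip(vals, vals[1:]))
\end{verbatim}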
The remainder of this article is structured as follows. In \cref{section:risk:regularity} we
present the mathematical framework which we use to study the considered GD processes
and we also establish several regularity properties for the considered risk functions
and their gradients.
In \cref{section:gradientflow} we use the findings from \cref{section:risk:regularity} to establish that the risks of the considered time-continuous gradient flow processes converge to zero.
In \cref{section:gradientdescent} we prove that the risks of the considered time-discrete GD processes converge to zero.
The key ingredient in our convergence proofs for gradient flow and GD processes in Sections \ref{section:gradientflow} and \ref{section:gradientdescent} are suitable a priori estimates (which we achieve by means of the Lyapunov function in \eqref{eq:intro:lyapunov} above) for the gradient flow processes (see \cref{lem:flow:lyapunov} in \cref{subsection:ito:lyapunov}) and the GD processes (see \cref{lem:vthetan:decreasing} in \cref{subsection:gd:lyapunov}). In \cref{section:apriori:gen}
we derive -- to stimulate further research activities -- related a priori bounds in the case of general target functions.
\section{Regularity properties of the risk functions and their gradients}
\label{section:risk:regularity}
In \cref{section:risk:regularity} we present in \cref{setting:const} the mathematical framework which we use to study the considered GD processes and we also establish several regularity results for the considered risk functions and their gradients. Most notably, we establish in Propositions \ref{prop:lyapunov:norm} and \ref{prop:lyapunov:gradient} in \cref{subsection:lyapunov} below that the gradient flow system for the ANN parameters in \cref{setting:const} admits an appropriate Lyapunov function. In particular, in item \eqref{prop:lyapunov:gradient:item3} in \cref{prop:lyapunov:gradient} we prove that the function $V \colon \mathbb{R}^{3 H + 1 } \to \mathbb{R}$ in \cref{setting:const} serves as a Lyapunov function.
We also note that the results in \cref{prop:relu:approximation} in \cref{subsection:relu:approx}, in \cref{lem:interchange} in \cref{subsection:risk:differentiable}, and in \cref{cor:interchange} in \cref{subsection:risk:differentiable} are all well-known in the literature and we include in this section detailed proofs for \cref{prop:relu:approximation}, \cref{lem:interchange}, and \cref{cor:interchange} only for completeness.
\subsection{Mathematical description of rectified artificial neural networks}
\begin{setting} \label{setting:const}
Let $H \in \mathbb{N}$, $\alpha \in \mathbb{R}$,
let $\mathfrak{w} = (( \w{\phi} _ 1 , \ldots, \w{\phi} _ H ))_{ \phi \in \mathbb{R}^{3 H + 1}} \colon \mathbb{R}^{3 H + 1} \to \mathbb{R}^{H}$,
$\mathfrak{b} = (( \b{\phi} _ 1 , \ldots, \b{\phi} _ H ))_{ \phi \in \mathbb{R}^{3 H + 1}} \colon \mathbb{R}^{3 H + 1} \to \mathbb{R}^{H}$,
$\mathfrak{v} = (( \v{\phi} _ 1 , \ldots, \v{\phi} _ H ))_{ \phi \in \mathbb{R}^{3 H + 1}} \colon \mathbb{R}^{3 H + 1} \to \mathbb{R}^{H}$, and
$\mathfrak{c} = (\c{\phi})_{\phi \in \mathbb{R}^{3 H + 1 }} \colon \mathbb{R}^{3 H + 1} \to \mathbb{R}$
satisfy for all $\phi = ( \phi_1 , \ldots, \phi_{3 H + 1}) \in \mathbb{R}^{3 H + 1}$, $j \in \{1, 2, \ldots, H \}$ that $\w{\phi}_j = \phi_j$, $\b{\phi}_j = \phi_{H + j}$,
$\v{\phi}_j = \phi_{2H + j}$, and $\c{\phi} = \phi_{3 H + 1}$,
let $\sigma_r \colon \mathbb{R} \to \mathbb{R}$, $r \in[1 , \infty]$, satisfy for all $r \in [1 , \infty)$, $x \in \mathbb{R}$ that $\sigma_r ( x ) = r^{-1} \ln \rbr{ 1 + r^{-1} e^{r x } }$ and $\sigma_\infty ( x ) = \max \{ x , 0 \}$,
let $\mathscr{N}_ r = (\realapprox{\phi}{r})_{\phi \in \mathbb{R}^{3 H + 1 } } \colon \mathbb{R}^{3 H + 1} \to C(\mathbb{R} , \mathbb{R})$, $r \in [1 , \infty]$, and $\mathcal{L} _ r \colon \mathbb{R}^{3 H + 1 } \to \mathbb{R}$, $r \in [1 , \infty]$,
satisfy for all $r \in [1 , \infty]$, $\phi \in \mathbb{R}^{3 H + 1}$, $x \in \mathbb{R}$ that $\realapprox{\phi}{r} (x) = \c{\phi} + \sum_{j=1}^H \v{\phi}_j \sigma_r (\w{\phi}_j x + \b{\phi}_j )$
and $\mathcal{L}_r(\phi) = \int_0^1 (\realapprox{\phi}{r} (y) - \alpha )^2 \d y$,
let $\mathcal{G} = ( \mathcal{G}_1, \ldots, \mathcal{G}_{3 H + 1} ) \colon \mathbb{R}^{3 H + 1} \to \mathbb{R}^{3 H + 1}$ satisfy for all
$\phi \in \{ \varphi \in \mathbb{R}^{3 H + 1} \colon ((\nabla \mathcal{L}_r ) ( \varphi ) )_{r \in \mathbb{N} } \text{ is convergent} \}$ that $\mathcal{G} ( \phi ) = \lim_{r \to \infty} (\nabla \mathcal{L}_r ) ( \phi )$,
let $\norm{ \cdot } \colon \rbr*{ \bigcup_{n \in \mathbb{N}} \mathbb{R}^n } \to [0, \infty)$ and $\langle \cdot , \cdot \rangle \colon \rbr*{ \bigcup_{n \in \mathbb{N}} (\mathbb{R}^n \times \mathbb{R}^n ) } \to \mathbb{R}$ satisfy for all $n \in \mathbb{N}$, $x=(x_1, \ldots, x_n), y=(y_1, \ldots, y_n ) \in \mathbb{R}^n $ that $\norm{ x } = [ \sum_{i=1}^n \abs*{ x_i } ^2 ] ^{1/2}$ and $\langle x , y \rangle = \sum_{i=1}^n x_i y_i$,
and let $I_j^\phi \subseteq \mathbb{R}$, $\phi \in \mathbb{R}^{3 H + 1 }$, $j \in \{1, 2, \ldots, H \}$, and $V \colon \mathbb{R}^{3 H + 1} \to \mathbb{R}$ satisfy for all
$\phi \in \mathbb{R}^{3 H +1}$, $j \in \{1, 2, \ldots, H \}$ that $I_j^\phi = \{ x \in [0,1] \colon \w{\phi}_j x + \b{\phi}_j > 0 \}$ and
$V(\phi) = \norm{ \phi } ^2 + ( \c{\phi} - 2 \alpha) ^2$ .
\end{setting}
\subsection{Smooth approximations of the rectifier function}
\label{subsection:relu:approx}
\begin{prop} \label{prop:relu:approximation}
Let $\sigma_r \colon \mathbb{R} \to \mathbb{R}$, $r \in [1, \infty]$, satisfy for all $r \in [1 , \infty)$, $x \in \mathbb{R}$ that $\sigma_r ( x ) = r^{-1} \ln \rbr{ 1 + r^{-1} e^{r x } }$ and $\sigma_\infty ( x ) = \max \{ x , 0 \}$.
Then
\begin{enumerate} [(i)]
\item \label{prop:relu:approx:item1} it holds for all $r \in [1 , \infty)$ that $\sigma_r \in C^\infty ( \mathbb{R} , \mathbb{R} )$,
\item \label{prop:relu:approx:item2} it holds for all $r \in [1 , \infty)$, $x \in \mathbb{R}$ that $0 < \sigma_r ( x ) < \sigma_\infty (x) + 1$,
\item \label{prop:relu:approx:item3} it holds for all $x \in \mathbb{R}$ that $\limsup_{r \to \infty} \abs*{ \sigma_r(x) - \sigma_\infty (x) } = 0$,
\item \label{prop:relu:approx:item4} it holds for all $r \in [1 , \infty)$, $x \in \mathbb{R}$ that $0 < (\sigma_r)'(x) < 1$, and
\item \label{prop:relu:approx:item5} it holds for all $x \in \mathbb{R}$ that $\limsup_{r \to \infty} \abs*{ (\sigma_r)'(x) - \indicator{(0, \infty)} ( x ) } = 0$.
\end{enumerate}
\end{prop}
\begin{proof} [Proof of \cref{prop:relu:approximation}]
\Nobs that the fact that $(\mathbb{R} \ni x \mapsto e^x \in \mathbb{R}) \in C^\infty ( \mathbb{R} , \mathbb{R})$, the fact that $((0, \infty) \ni x \mapsto \ln (x) \in \mathbb{R}) \in C^\infty ( (0, \infty) , \mathbb{R})$, and the chain rule prove item \eqref{prop:relu:approx:item1}.
Next \nobs that for all $r \in [1 , \infty)$, $x \in (- \infty , 0 ]$ it holds that $1 < 1 + r^{-1} e^{rx} \leq 2$ and therefore
\begin{equation} \label{eq:relu:approximation:1}
0 < \sigma_r( x ) \leq r^{-1} \ln ( 2) < r^{-1} \leq 1 = \sigma_\infty (x) + 1.
\end{equation}
This establishes for all $x \in (- \infty , 0 ]$ that $\limsup_{r \to \infty} \abs*{ \sigma_r(x) - \sigma_\infty (x) } \leq \limsup_{r \to \infty} (r^{-1}) \allowbreak = 0 $.
Moreover, \nobs that for all $r \in [1 , \infty)$, $x \in (0, \infty)$ it holds that
\begin{equation}
\label{eq:reluapprox:2}
0 = r^{-1} \ln ( 1 ) < \sigma _ r ( x ) \leq r^{-1} \ln ( 2 e ^{ r x } ) = x + r ^{-1} \ln ( 2 ) < x + 1 = \sigma_\infty (x) + 1.
\end{equation}
This and \eqref{eq:relu:approximation:1} prove item \eqref{prop:relu:approx:item2}.
In addition, \nobs that for all $r \in [1 , \infty)$, $x \in (0, \infty)$ it holds that $\sigma _ r ( x ) \geq r^{-1} \ln ( r^{-1} e^{r x } ) = x - r^{-1} \ln (r)$.
Combining this with \eqref{eq:reluapprox:2} demonstrates for all $x \in (0, \infty)$ that
\begin{equation}
\begin{split}
\limsup_{r \to \infty} \abs*{ \sigma _ r ( x ) - \sigma _ \infty ( x ) }
&= \limsup_{r \to \infty} \abs*{ \sigma _ r ( x ) - x } \\
&\leq \limsup_{r \to \infty} \br*{ \max \cu*{ r^{-1} \ln(2) , r^{-1} \ln (r) } } = 0,
\end{split}
\end{equation}
which completes the proof of item \eqref{prop:relu:approx:item3}.
To prove item \eqref{prop:relu:approx:item4}, \nobs that the chain rule implies for all $r \in [1 , \infty)$, $x \in \mathbb{R}$ that
\begin{equation} \label{eq:reluapprox:3}
(\sigma_r ) ' ( x ) = \frac{1}{r} \br*{ \frac{e^{ r x }}{1 + r^{-1} e^{r x}} } = \frac{1}{1 + r e^{-r x }}.
\end{equation}
This demonstrates for all $r \in [1 , \infty)$, $x \in \mathbb{R}$ that $0 < (\sigma_r) ' ( x ) < 1$, which establishes item \eqref{prop:relu:approx:item4}. Next \nobs that \eqref{eq:reluapprox:3} and the fact that for all $r \in [1 , \infty)$, $x \in (- \infty , 0]$ it holds that $e^{-r x } \geq 1$ show that for all $r \in [1 , \infty)$, $x \in (- \infty , 0]$ it holds that $(\sigma_r) ' ( x ) \leq \frac{1}{1+r}$. On the other hand, \nobs that for all $x \in (0, \infty)$ we have that $\lim_{r \to \infty} (r e^{-r x }) = 0$ and thus $\lim_{r \to \infty} (\sigma_r) ' ( x ) = 1$. This establishes item \eqref{prop:relu:approx:item5}. The proof of \cref{prop:relu:approximation} is thus complete.
\end{proof}
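For completeness we add a short numerical sanity check of items \eqref{prop:relu:approx:item2}--\eqref{prop:relu:approx:item5} (illustrative only; the numerically stable rewritings of $\sigma_r$ and $(\sigma_r)'$ used below are implementation choices and not part of \cref{prop:relu:approximation}):
\begin{verbatim}
import numpy as np

def sigma(r, x):     # sigma_r(x) = r^{-1} ln(1 + r^{-1} e^{rx}), stable form
    m = np.maximum(x, 0.0)
    return m + np.log1p(np.expm1(-r * m) + np.exp(r * (x - m)) / r) / r

def dsigma(r, x):    # (sigma_r)'(x) = 1 / (1 + r e^{-rx}), stable form
    n = np.minimum(x, 0.0)
    return np.exp(r * n) / (np.exp(r * n) + r * np.exp(r * (n - x)))

xs = np.linspace(-3.0, 3.0, 601)
relu = np.maximum(xs, 0.0)
for r in (1.0, 10.0, 100.0):
    s, ds = sigma(r, xs), dsigma(r, xs)
    assert np.all((0 < s) & (s < relu + 1))   # item (ii)
    # item (iv); the strict upper bound saturates to 1.0 in floating
    # point once r e^{-rx} drops below machine precision
    assert np.all((0 < ds) & (ds <= 1.0))

# items (iii) and (v): pointwise limits as r -> infinity; the derivative
# limit is checked away from 0, where the convergence is only pointwise
away = np.abs(xs) > 1e-3
print(np.max(np.abs(sigma(1e6, xs) - relu)))                   # ~ ln(r)/r
print(np.max(np.abs(dsigma(1e6, xs[away]) - (xs[away] > 0))))  # ~ 0
\end{verbatim}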
\subsection{Differentiability properties of the risk functions}
\label{subsection:risk:differentiable}
\begin{prop} \label{prop:limit:lr}
Assume \cref{setting:const} and let $\phi = (w_1, \ldots, w_{H}, b_1, \ldots, b_{H}, v_1, \ldots, \allowbreak v_{H}, c) \in \mathbb{R}^{3 H + 1}$. Then
\begin{enumerate} [(i)]
\item \label{prop:limit:lr:1} it holds for all $r \in [1 , \infty)$ that $\mathcal{L} _ r \in C^1 ( \mathbb{R}^{3 H + 1}, \mathbb{R})$,
\item \label{prop:limit:lr:2} it holds for all $r \in [1 , \infty)$, $j \in \{1, 2, \ldots, H \}$ that
\begin{equation} \label{eq:approx:loss:gradient}
\begin{split}
\rbr[\big]{ \tfrac{\partial }{ \partial w_j} \mathcal{L}_r } ( \phi ) &= 2 v_j \int_0^1 x \br*{ (\sigma_r )' ( w_j x + b_j) } ( \realapprox{\phi}{r}(x) - \alpha) \d x, \\
\rbr[\big]{ \tfrac{\partial }{ \partial b_j} \mathcal{L}_r } ( \phi ) &= 2 v_j \int_0^1 \br*{ (\sigma_r) ' ( w_j x + b_j ) } ( \realapprox{\phi}{r}(x) - \alpha) \d x, \\
\rbr[\big]{ \tfrac{\partial }{ \partial v_j} \mathcal{L}_r } ( \phi ) &= 2 \int_0^1 \br*{ \sigma_r ( w_j x + b_j ) } ( \realapprox{\phi}{r}(x) - \alpha) \d x, \\
\rbr[\big]{ \tfrac{\partial }{ \partial c} \mathcal{L}_r } ( \phi ) &= 2 \int_0^1 ( \realapprox{\phi}{r}(x) - \alpha) \d x,
\end{split}
\end{equation}
\item \label{prop:limit:lr:3} it holds that $\limsup_{r \to \infty} \abs{ \mathcal{L}_r ( \phi ) - \mathcal{L}_\infty ( \phi) } = 0$,
\item \label{prop:limit:lr:4} it holds that $\limsup_{r \to \infty } \norm{ ( \nabla \mathcal{L} _ r ) ( \phi ) - \mathcal{G} ( \phi ) } = 0$, and
\item \label{prop:limit:lr:5} it holds for all $j \in \{1, 2, \ldots, H \}$ that
\begin{equation} \label{eq:loss:gradient}
\begin{split}
\mathcal{G}_j ( \phi) &= 2v_j \int_{I_j^\phi} x ( \realapprox{\phi}{\infty} (x) - \alpha ) \d x, \\
\mathcal{G}_{H + j} ( \phi) &= 2 v_j \int_{I_j^\phi} (\realapprox{\phi}{\infty} (x) - \alpha ) \d x, \\
\mathcal{G}_{2 H + j} ( \phi) &= 2 \int_0^1 [\sigma_\infty (w_j x + b_j) ] ( \realapprox{\phi}{\infty}(x) - \alpha ) \d x, \\
\mathcal{G}_{3 H + 1} ( \phi) &= 2 \int_0^1 (\realapprox{\phi}{\infty} (x) - \alpha ) \d x.
\end{split}
\end{equation}
\end{enumerate}
\end{prop}
\begin{proof} [Proof of \cref{prop:limit:lr}]
\Nobs that \cref{prop:relu:approximation}, the chain rule, and the dominated convergence theorem establish items \eqref{prop:limit:lr:1} and \eqref{prop:limit:lr:2}.
Next \nobs that \cref{prop:relu:approximation} demonstrates for all $x \in [0, 1]$ that $\lim_{r \to \infty} ( \realapprox{\phi}{r} ( x ) - \alpha ) = \realapprox{\phi}{\infty} ( x ) - \alpha$.
Furthermore, \nobs that \cref{prop:relu:approximation} shows that for all $x \in [0,1]$, $r \in [1 , \infty)$ it holds that
\begin{equation} \label{proof:limit:lr:eq1}
\begin{split}
\abs{ \realapprox{\phi}{r} ( x ) - \alpha }
&\leq \abs{ \alpha } + \abs{ c } + \textstyle\sum_{j=1}^H | v_j | ( \sigma_\infty ( w_j x + b_j ) + 1 ) \\
&\leq | \alpha | + | c | + \textstyle\sum_{j=1}^H | v_j | ( | w_j | + | b_j | + 1 ).
\end{split}
\end{equation}
The dominated convergence theorem hence proves that $\lim_{r \to \infty} \mathcal{L}_r ( \phi ) = \mathcal{L}_\infty ( \phi)$, which establishes item \eqref{prop:limit:lr:3}. Moreover, \nobs that the fact that $\forall \, x \in [0,1] \colon \lim_{r \to \infty} ( \realapprox{\phi}{r} ( x ) - \alpha ) = \realapprox{\phi}{\infty} ( x ) - \alpha$, \eqref{proof:limit:lr:eq1}, and the dominated convergence theorem prove that
\begin{equation} \label{limit:lr:eq2}
\lim_{r \to \infty} \br*{ \rbr[\big]{ \tfrac{\partial }{ \partial c} \mathcal{L}_r } ( \phi ) }
= 2 \int_0^1 (\realapprox{\phi}{\infty} (x) - \alpha ) \d x.
\end{equation}
Next \nobs that \cref{prop:relu:approximation} shows for all $x \in [0,1]$, $j \in \{1, 2, \dots, H \}$ that
\begin{equation}
\begin{split}
\lim_{r \to \infty} \br*{ x \br*{ (\sigma_r) ' ( w_j x + b_j ) } ( \realapprox{\phi}{r}(x) - \alpha) }
&= x ( \realapprox{\phi}{\infty} ( x ) - \alpha ) \indicator{(0 , \infty ) } ( w_j x + b_j) \\
&= x ( \realapprox{\phi}{\infty} ( x ) - \alpha )\indicator{I_j^\phi} ( x )
\end{split}
\end{equation}
and
\begin{equation}
\begin{split}
\lim_{r \to \infty} \br*{ [(\sigma_r) ' ( w_j x + b_j )] ( \realapprox{\phi}{r}(x) - \alpha) }
&= ( \realapprox{\phi}{\infty} ( x ) - \alpha ) \indicator{(0 , \infty ) } ( w_j x + b_j ) \\
&= ( \realapprox{\phi}{\infty} ( x ) - \alpha ) \indicator{I_j^\phi} ( x ).
\end{split}
\end{equation}
Furthermore, \nobs that \cref{prop:relu:approximation} and \eqref{proof:limit:lr:eq1} prove that for all $r \in [1 , \infty)$, $x \in [0,1]$, $j \in \{1, 2, \ldots, H \}$ it holds that
\begin{equation}
\begin{split}
&\abs[\big]{ x [(\sigma_r) ' ( w_j x + b_j )] ( \realapprox{\phi}{r}(x) - \alpha) } \\
&\leq \abs[\big]{ [(\sigma_r ) ' ( w_j x + b_j )] ( \realapprox{\phi}{r}(x) - \alpha) }\\
&\leq| \realapprox{\phi}{r} ( x ) - \alpha |
\leq | \alpha | + | c | + \textstyle\sum_{k=1}^H | v_k | ( | w_k | + | b_k | + 1 ).
\end{split}
\end{equation}
The dominated convergence theorem hence proves for all $j \in \{1, 2, \ldots, H \}$ that
\begin{equation} \label{limit:lr:eq3}
\lim_{r \to \infty} \br[\big]{ \rbr[\big]{ \tfrac{\partial }{ \partial w_j} \mathcal{L}_r } ( \phi ) }
= 2 v_j \int_0^1 x ( \realapprox{\phi}{\infty} ( x ) - \alpha ) \indicator{I_j^\phi} ( x ) \d x
= 2v_j \int_{I_j^\phi} x ( \realapprox{\phi}{\infty} (x) - \alpha ) \d x
\end{equation}
and
\begin{equation} \label{limit:lr:eq4}
\lim_{r \to \infty} \br[\big]{ \rbr[\big]{ \tfrac{\partial }{ \partial b_j} \mathcal{L}_r } ( \phi ) }
=2 v_j \int_0^1 ( \realapprox{\phi}{\infty} ( x ) - \alpha ) \indicator{I_j^\phi} ( x ) \d x
= 2v_j \int_{I_j^\phi } ( \realapprox{\phi}{\infty} (x) - \alpha ) \d x .
\end{equation}
Moreover, \nobs that \cref{prop:relu:approximation} and \eqref{proof:limit:lr:eq1} show that for all $r \in [1 , \infty)$, $x \in [0,1]$, $j \in \{1, 2, \ldots, H \}$ it holds that
\begin{equation}
\lim_{r \to \infty} \br*{ [\sigma_r ( w_j x + b_j )] ( \realapprox{\phi}{r}(x) - \alpha) } = [\sigma _\infty ( w_j x + b_j )] ( \realapprox{\phi}{\infty}(x) - \alpha)
\end{equation}
and
\begin{equation}
\begin{split}
&\abs[\big]{ [\sigma_r ( w_j x + b_j )] ( \realapprox{\phi}{r}(x) - \alpha) } \\
& \leq (\sigma_\infty ( w_j x + b_j ) + 1 ) | \realapprox{\phi}{r}(x) - \alpha |
\\
&\leq ( 1 + | w_j | + | b_j | ) | \realapprox{\phi}{r}(x) - \alpha | \\ &
\leq ( 1 + | w_j | + | b_j | ) \rbr*{ | \alpha | + | c | + \textstyle\sum_{k=1}^H | v_k | ( | w_k | + | b_k | + 1 ) }.
\end{split}
\end{equation}
This and the dominated convergence theorem demonstrate for all $j \in \{1, 2, \ldots, H \}$ that
\begin{equation}
\lim_{r \to \infty} \br[\big]{ \rbr[\big]{ \tfrac{\partial }{ \partial v_j} \mathcal{L}_r } ( \phi ) } = 2 \int_0^1 [\sigma_\infty (w_j x + b_j)] ( \realapprox{\phi}{\infty}(x) - \alpha ) \d x.
\end{equation}
Combining this, \eqref{limit:lr:eq2}, \eqref{limit:lr:eq3}, and \eqref{limit:lr:eq4} establishes items \eqref{prop:limit:lr:4} and \eqref{prop:limit:lr:5}. The proof of \cref{prop:limit:lr} is thus complete.
\end{proof}
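The formulas in \eqref{eq:loss:gradient} can also be tested numerically (an illustration only; the snippet reuses \texttt{rng}, \texttt{H}, \texttt{risk}, and \texttt{limiting\_gradient} from the sketch in the introduction and exploits that at a generic random parameter vector the discretized risk is differentiable):
\begin{verbatim}
# continues the sketch from the introduction; central finite differences
# of the (discretized) risk match the closed-form limiting gradient
phi = rng.standard_normal(3 * H + 1)
eps = 1e-6
fd = np.array([(risk(phi + eps * e) - risk(phi - eps * e)) / (2 * eps)
               for e in np.eye(3 * H + 1)])
print(np.max(np.abs(fd - limiting_gradient(phi))))  # small (FD error only)
\end{verbatim}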
\begin{lemma} \label{lem:interchange}
Let $\mathfrak{u} \in \mathbb{R}$, $\mathfrak{v} \in (\mathfrak{u} , \infty)$, let $f \colon \mathbb{R} \times [\mathfrak{u} , \mathfrak{v}] \to \mathbb{R}$ be locally Lipschitz continuous, let $F \colon \mathbb{R} \to \mathbb{R}$ satisfy for all $x \in \mathbb{R}$ that
\begin{equation}
F(x) = \int_\mathfrak{u} ^\mathfrak{v} f(x,s) \d s,
\end{equation}
let $x \in \mathbb{R}$, let $E \subseteq [\mathfrak{u} , \mathfrak{v}]$ be measurable, assume $\int_{[\mathfrak{u} , \mathfrak{v}] \backslash E } 1 \d s = 0 $, and assume for all $s \in E$ that $\mathbb{R} \ni v \mapsto f ( v , s ) \in \mathbb{R}$ is differentiable at $x$. Then
\begin{enumerate} [(i)]
\item it holds that $F$ is differentiable at $x$ and
\item it holds that
\begin{equation}
F'(x) = \int_E \rbr[\big]{\tfrac{\partial}{\partial x} f } ( x , s ) \d s.
\end{equation}
\end{enumerate}
\end{lemma}
\begin{proof} [Proof of \cref{lem:interchange}]
\Nobs that the assumption that $\int_{[\mathfrak{u} , \mathfrak{v}] \backslash E } 1 \d s = 0 $ ensures that for all $h \in \mathbb{R} \backslash \{ 0 \}$ we have that
\begin{equation} \label{lem:interchange:eq1}
h^{-1} [ F(x+h) - F(x)] = \int_\mathfrak{u}^\mathfrak{v} h^{-1} [ f(x+h , s ) - f(x,s)] \d s = \int_E h^{-1} [ f(x+h , s ) - f(x,s)] \d s.
\end{equation}
Next \nobs that the assumption that for all $s \in E$ it holds that $\mathbb{R} \ni v \mapsto f ( v , s) \in \mathbb{R}$ is differentiable at $x$ implies that for all $s \in E$ it holds that
\begin{equation} \label{lem:interchange:eq2}
\lim\nolimits_{\abs{h} \searrow 0} \rbr*{ h^{-1} \br{ f( x+h, s) - f(x, s)}} = \rbr[\big]{\tfrac{\partial}{\partial x} f }(x, s ).
\end{equation}
Furthermore, \nobs that the assumption that $f$ is locally Lipschitz continuous ensures that for all $\delta \in (0, \infty)$ there exists $C \in (0, \infty)$ such that for all $h \in [-\delta, \delta] \backslash \{0 \}$, $s \in [ \mathfrak{u} , \mathfrak{v}]$ we have that $| h^{-1} \br{ f( x+h, s) - f(x, s)}| \leq C$. Combining this, \eqref{lem:interchange:eq1}, \eqref{lem:interchange:eq2}, and the dominated convergence theorem establishes that
\begin{equation}
\begin{split}
\lim\nolimits_{\abs{h} \searrow 0} \rbr*{ h^{-1} \br{F(x+h) - F(x)}}
&= \int_E \br*{ \lim\nolimits_{\abs{h} \searrow 0} \rbr*{ h^{-1} \br{ f( x+h, s) - f(x, s)} }} \d s \\
&= \int_E \rbr[\big]{\tfrac{\partial}{\partial x} f }(x, s ) \d s.
\end{split}
\end{equation}
This completes the proof of \cref{lem:interchange}.
\end{proof}
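As a toy illustration of \cref{lem:interchange} (not part of the original article), consider $f(x,s) = \max\{ x - s, 0 \}$ on $\mathbb{R} \times [0,1]$: for fixed $x \in (0,1)$ the map $v \mapsto f(v,s)$ is differentiable at $x$ for every $s \in E = [0,1] \backslash \{ x \}$, and \cref{lem:interchange} yields $F'(x) = \int_E \indicator{(0,\infty)}(x-s) \d s = x$:
\begin{verbatim}
import numpy as np

# F(x) = int_0^1 max(x - s, 0) ds = x^2 / 2 for x in [0, 1], so F'(x) = x
ss = np.linspace(0.0, 1.0, 100001)
w = np.full(ss.size, ss[1] - ss[0]); w[[0, -1]] *= 0.5  # trapezoid weights
F = lambda x: w @ np.maximum(x - ss, 0.0)
x, eps = 0.37, 1e-5
print((F(x + eps) - F(x - eps)) / (2 * eps))  # ~ 0.37 = F'(x)
\end{verbatim}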
\begin{cor} \label{cor:interchange}
Let $n \in \mathbb{N}$, $j \in \{1, 2, \ldots, n \}$, $\mathfrak{u} \in \mathbb{R}$, $\mathfrak{v} \in (\mathfrak{u} , \infty)$, let $f \colon \mathbb{R}^n \times [\mathfrak{u} , \mathfrak{v}] \to \mathbb{R}$ be locally Lipschitz continuous, let $F \colon \mathbb{R}^n \to \mathbb{R}$ satisfy for all $x \in \mathbb{R}^n$ that
\begin{equation}
F(x) = \int_\mathfrak{u} ^\mathfrak{v} f (x , s ) \d s,
\end{equation}
let $x_1, x_2, \ldots, x_n \in \mathbb{R}$, let $E \subseteq [\mathfrak{u} , \mathfrak{v} ]$ be measurable, assume $\int_{[\mathfrak{u} , \mathfrak{v}] \backslash E } 1 \d s = 0 $, and assume for all $s \in E$ that $\mathbb{R} \ni v \mapsto f ( x_1, \ldots, x_{j-1}, v, x_{j+1}, \ldots, x_n, s) \in \mathbb{R}$ is differentiable at $x_j$. Then
\begin{enumerate} [(i)]
\item \label{cor:interchange:item1} it holds that $\mathbb{R} \ni v \mapsto F(x_1, \ldots, x_{j-1}, v , x_{j+1}, \ldots, x_n) \in \mathbb{R}$ is differentiable at $x_j$ and
\item \label{cor:interchange:item2} it holds that
\begin{equation}
\rbr[\big]{\tfrac{\partial}{\partial x_j} F} ( x_1, \ldots, x_n) = \int_E \rbr[\big]{\tfrac{\partial}{\partial x_j} f }(x_1, \ldots, x_n, s ) \d s.
\end{equation}
\end{enumerate}
\end{cor}
\begin{proof} [Proof of \cref{cor:interchange}]
\Nobs that \cref{lem:interchange} establishes items \eqref{cor:interchange:item1} and \eqref{cor:interchange:item2}.
The proof of \cref{cor:interchange} is thus complete.
\end{proof}
\begin{lemma} \label{prop:loss:diff:vc}
Assume \cref{setting:const} and let $\phi = ( \phi_1, \ldots, \phi_{3 H + 1 } ) \in \mathbb{R}^{3 H + 1 }$. Then
\begin{enumerate} [(i)]
\item \label{prop:loss:diff:vc:item1} it holds for all $j \in \mathbb{N} \cap (2 H , 3 H + 1 ]$ that $\mathbb{R} \ni v \mapsto \mathcal{L}_\infty ( \phi_1, \ldots, \phi_{ j - 1}, v, \phi_{j+1}, \ldots, \phi_{3 H + 1 }) \in \mathbb{R}$ is differentiable at $\phi_{ j}$ and
\item \label{prop:loss:diff:vc:item2} it holds for all $j \in \mathbb{N} \cap (2 H , 3 H + 1 ]$ that $( \frac{\partial}{\partial \phi_{ j} } \mathcal{L}_\infty )(\phi ) = \mathcal{G}_{j } ( \phi)$.
\end{enumerate}
\end{lemma}
\begin{proof} [Proof of \cref{prop:loss:diff:vc}]
\Nobs that the fact that $\sigma_\infty$ is Lipschitz continuous assures that
\begin{equation}
\mathbb{R}^{H + 1 } \times [0,1] \ni (u_1, \ldots, u_{H + 1 }, x ) \mapsto \rbr[\big]{\realapprox{(\phi_1, \ldots, \phi_{2 H}, u_1, \ldots, u_{H + 1 }) }{\infty} (x) - \alpha }^2 \in \mathbb{R}
\end{equation}
is locally Lipschitz continuous. In addition, \nobs that for all $u_1, u_2, \ldots, u_{H + 1 } \in \mathbb{R}$, $j \in \{1, 2, \ldots, H + 1 \}$, $x \in [0,1]$ it holds that
\begin{equation}
\mathbb{R} \ni v \mapsto \rbr[\big]{\realapprox{(\phi_1, \ldots, \phi_{2 H}, u_1, \ldots, u_{j-1}, v, u_{j+1}, \ldots, u_{H + 1 } ) }{\infty} (x) - \alpha }^2 \in \mathbb{R}
\end{equation}
is differentiable at $u_j$. Moreover, \nobs that the chain rule implies that for all $j \in \{1, 2, \ldots, H \}$, $x \in [0,1]$ it holds that
\begin{equation}
\tfrac{\partial}{\partial \phi_{2 H + j} } \br[\big]{ ( \realapprox{\phi}{\infty} ( x ) - \alpha ) ^2 } = 2 [\sigma_\infty (\phi_j x + \phi_{H + j}) ] ( \realapprox{\phi}{\infty}(x) - \alpha )
\end{equation}
and
\begin{equation}
\tfrac{\partial}{\partial \phi_{3 H + 1} } \br[\big]{ ( \realapprox{\phi}{\infty} ( x ) - \alpha ) ^2 } = 2 ( \realapprox{\phi}{\infty}(x) - \alpha ).
\end{equation}
Combining this, \cref{cor:interchange}, and \eqref{eq:loss:gradient} establishes items \eqref{prop:loss:diff:vc:item1} and \eqref{prop:loss:diff:vc:item2}. The proof of \cref{prop:loss:diff:vc} is thus complete.
\end{proof}
\begin{lemma} \label{prop:loss:diff:wb}
Assume \cref{setting:const}, let $\phi = ( \phi_1, \ldots, \phi_{3 H + 1 } ) \in \mathbb{R}^{3 H + 1 }$, and let $j \in \{1, 2, \ldots, H \}$, $i \in \{j , H + j \}$ satisfy $| \phi_j | + | \phi_{H + j } | > 0$. Then
\begin{enumerate} [(i)]
\item \label{prop:loss:diff:wb:item1} it holds that $\mathbb{R} \ni v \mapsto \mathcal{L}_\infty ( \phi_1, \ldots, \phi_{i-1}, v, \phi_{i+1}, \ldots, \phi_{3 H + 1 }) \in \mathbb{R}$ is differentiable at $\phi_i$ and
\item \label{prop:loss:diff:wb:item2} it holds that $( \frac{\partial}{\partial \phi_i } \mathcal{L}_\infty )(\phi ) = \mathcal{G}_{ i } ( \phi)$.
\end{enumerate}
\end{lemma}
\begin{proof} [Proof of \cref{prop:loss:diff:wb}]
Throughout this proof let $E \subseteq \mathbb{R}$ satisfy $E = \{ x \in [0,1] \colon \phi_j x + \phi_{H + j } \not= 0 \}$. \Nobs that the assumption that $| \phi_j | + | \phi_{H + j } | > 0$ implies that $\# ( [0,1] \backslash E ) \leq 1$. This shows that $\int_{[0,1] \backslash E } 1 \d s = 0$.
Next \nobs that the fact that $\sigma_\infty$ is Lipschitz continuous ensures that
\begin{equation} \label{prop:loss:diff:wb:eq1}
\mathbb{R}^{2 H} \times [0,1] \ni (u_1, \ldots, u_{2 H}, x) \mapsto \rbr[\big]{ \realapprox{(u_1, \ldots, u_{2H } , \phi_{2 H + 1 }, \ldots, \phi_{3 H + 1 } )}{\infty} ( x ) - \alpha } ^2 \in \mathbb{R}
\end{equation}
is locally Lipschitz continuous. In addition, \nobs that for all $x \in \mathbb{R} \backslash \{ 0 \}$ it holds that $\sigma_\infty$ is differentiable at $x$. Furthermore, \nobs that for all $x \in \mathbb{R} \backslash \{ 0 \}$ it holds that $(\sigma_\infty ) ' ( x ) = \indicator{(0, \infty)} ( x )$. This and the chain rule prove for all $x \in E$ that
\begin{equation} \label{prop:loss:diff:wb:eq2}
\mathbb{R} \ni v \mapsto \rbr[\big]{ \realapprox{(\phi_1, \ldots, \phi_{j-1}, v, \phi_{j+1}, \ldots, \phi_{3 H + 1 })}{\infty} ( x ) - \alpha}^2 \in \mathbb{R}
\end{equation}
is differentiable at $\phi_j$ and
\begin{equation} \label{prop:loss:diff:wb:eq3}
\tfrac{\partial }{\partial \phi_j} ( \realapprox{\phi}{\infty} ( x ) - \alpha ) ^2 = 2 \phi_{2 H + j} x ( \realapprox{\phi}{\infty} ( x ) - \alpha ) \indicator{(0, \infty)} ( \phi_j x + \phi_{H + j }) = 2 \phi_{2 H + j} x ( \realapprox{\phi}{\infty} ( x ) - \alpha ) \indicator{I_j^\phi} ( x ).
\end{equation}
Moreover, \nobs that the chain rule implies that for all $x \in E$ we have that
\begin{equation}
\mathbb{R} \ni u \mapsto \rbr[\big]{ \realapprox{( \phi_1, \ldots, \phi_{H + j -1}, u, \phi_{H + j+1}, \ldots, \phi_{3 H + 1 })}{\infty} ( x ) - \alpha}^2 \in \mathbb{R}
\end{equation}
is differentiable at $\phi_{H + j}$ and
\begin{equation}
\tfrac{\partial }{\partial \phi_{H + j}} ( \realapprox{\phi}{\infty} ( x ) - \alpha ) ^2 = 2 \phi_{2 H + j} ( \realapprox{\phi}{\infty} ( x ) - \alpha ) \indicator{(0, \infty)} ( \phi_j x + \phi_{H + j }) = 2 \phi_{2 H + j} ( \realapprox{\phi}{\infty} ( x ) - \alpha ) \indicator{I_j^\phi} ( x ).
\end{equation}
Combining \eqref{prop:loss:diff:wb:eq1}, \eqref{prop:loss:diff:wb:eq2}, \eqref{prop:loss:diff:wb:eq3}, \cref{cor:interchange}, and \eqref{eq:loss:gradient} hence establishes items \eqref{prop:loss:diff:wb:item1} and \eqref{prop:loss:diff:wb:item2}. The proof of \cref{prop:loss:diff:wb} is thus complete.
\end{proof}
\begin{lemma} \label{lem:loss:diff:degenerate}
Assume \cref{setting:const}, let $\phi = ( \phi_1, \ldots, \phi_{3 H + 1 } ) \in \mathbb{R}^{3 H + 1 }$, $j \in \{1, 2, \ldots, H \}$, assume $\phi_j = \phi_{H + j } = 0$, and assume that $\mathcal{L}_\infty$ is differentiable at $\phi$. Then $(\frac{\partial}{\partial \phi_{ j }} \mathcal{L}_\infty) ( \phi ) = \mathcal{G}_{j } ( \phi ) = (\frac{\partial}{\partial \phi_{ H + j }} \mathcal{L}_\infty) ( \phi ) = \mathcal{G}_{ H + j } ( \phi ) = 0$.
\end{lemma}
\begin{proof} [Proof of \cref{lem:loss:diff:degenerate}]
Throughout this proof let $\varphi^h = (\varphi_1^h, \ldots, \varphi_{3 H + 1 } ^h ) \in \mathbb{R}^{3 H + 1 }$, $h = (h_1, h_2) \in \mathbb{R}^2 $, satisfy for all $h = (h_1, h_2) \in \mathbb{R}^2 $, $k \in \{1, 2, \ldots, 3 H + 1 \} \backslash \{j , H + j \}$ that $\varphi_j^h = \phi_j + h_1$, $\varphi_{H+j}^h = \phi_{H + j } + h_2$, and $ \varphi_k^h = \phi_k $. \Nobs that the assumption that $\mathcal{L}_\infty$ is differentiable at $\phi$ ensures that for all $i \in \{j , H + j \}$ it holds that $\mathbb{R} \ni v \mapsto \mathcal{L}_\infty ( \phi_1, \ldots, \phi_{i-1}, v, \phi_{i+1}, \ldots, \phi_{3 H + 1 }) \in \mathbb{R}$ is differentiable at $\phi_i$. Furthermore, \nobs that for all $h \in (- \infty , 0 ] ^2 $, $x \in [0,1]$ it holds that $\realapprox{\varphi^h}{\infty} ( x ) = \realapprox{\phi}{\infty} ( x ) $. Hence, we have for all $h \in (- \infty , 0 ] ^2 $ that $\mathcal{L}_\infty ( \varphi^h ) = \mathcal{L}_\infty ( \phi ) $. This implies that $(\frac{\partial}{\partial \phi_{ j }} \mathcal{L}_\infty) ( \phi ) = (\frac{\partial}{\partial \phi_{ H + j }} \mathcal{L}_\infty) ( \phi ) = 0$. Moreover, \nobs that the assumption that $\phi_j = \phi_{H + j } = 0$ implies that $I_j^\phi = \emptyset$. This and \eqref{eq:loss:gradient} demonstrate that $\mathcal{G}_j ( \phi ) = \mathcal{G}_{H + j } ( \phi ) = 0$. Hence, we obtain that $(\frac{\partial}{\partial \phi_{ j }} \mathcal{L}_\infty) ( \phi ) = 0 = \mathcal{G}_{j } ( \phi )$ and $(\frac{\partial}{\partial \phi_{ H + j }} \mathcal{L}_\infty) ( \phi ) = 0 = \mathcal{G}_{ H + j } ( \phi )$. This completes the proof of \cref{lem:loss:diff:degenerate}.
\end{proof}
\begin{cor} \label{cor:loss:differentiable}
Assume \cref{setting:const}, let $\phi = ( \phi_1, \ldots, \phi_{3 H + 1 } ) \in \mathbb{R}^{3 H + 1 }$, and assume that $\mathcal{L}_\infty$ is differentiable at $\phi$. Then $(\nabla \mathcal{L}_\infty)(\phi) = \mathcal{G} ( \phi ) $.
\end{cor}
\begin{proof} [Proof of \cref{cor:loss:differentiable}]
\Nobs that the assumption that $\mathcal{L}_\infty$ is differentiable at $\phi$ ensures that for all $i \in \{1, 2, \ldots, 3 H + 1 \}$ it holds that $\mathbb{R} \ni v \mapsto \mathcal{L}_\infty ( \phi_1, \ldots, \phi_{i-1}, v, \phi_{i+1}, \ldots, \phi_{3 H + 1 }) \in \mathbb{R}$ is differentiable at $\phi_i$.
Moreover, \nobs that \cref{prop:loss:diff:vc} proves for all $j \in \mathbb{N} \cap (2H , 3 H + 1 ] $ that $(\frac{\partial}{\partial \phi_{ j }} \mathcal{L}_\infty) ( \phi ) = \mathcal{G}_{ j } ( \phi )$.
In addition, \nobs that \cref{prop:loss:diff:wb} shows that for all $j \in \{1, 2, \ldots, H \}$ with $| \phi_j | + | \phi_{H + j } | > 0$ it holds that $(\frac{\partial}{\partial \phi_{ j }} \mathcal{L}_\infty) ( \phi ) = \mathcal{G}_{j } ( \phi )$ and $(\frac{\partial}{\partial \phi_{ H + j }} \mathcal{L}_\infty) ( \phi ) = \mathcal{G}_{ H + j } ( \phi )$.
On the other hand, \nobs that \cref{lem:loss:diff:degenerate} ensures that for all $j \in \{1, 2, \ldots, H \}$ with $\phi_j = \phi_{H + j } = 0$ we have that $(\frac{\partial}{\partial \phi_{ j }} \mathcal{L}_\infty) ( \phi ) = 0 = \mathcal{G}_{j } ( \phi )$ and $(\frac{\partial}{\partial \phi_{ H + j }} \mathcal{L}_\infty) ( \phi ) = 0 = \mathcal{G}_{ H + j } ( \phi )$. This demonstrates that $(\nabla \mathcal{L}_\infty ) ( \phi ) = \mathcal{G} ( \phi )$.
The proof of \cref{cor:loss:differentiable} is thus complete.
\end{proof}
\subsection{Upper bounds for gradients of the risk functions}
\begin{lemma} \label{lem:gradient:est}
Assume \cref{setting:const} and let $\phi \in \mathbb{R}^{3 H + 1}$. Then
\begin{equation}
\norm{ \mathcal{G} ( \phi ) } ^2 \leq ( 8 \norm{ \phi } ^2 + 4 ) \mathcal{L}_\infty ( \phi ).
\end{equation}
\end{lemma}
\begin{proof} [Proof of \cref{lem:gradient:est}]
Throughout this proof let $w_1, \ldots, w_{H}, b_1, \ldots, b_{H}, v_1, \ldots, v_{H}, c \in \mathbb{R}$ satisfy $\phi = (w_1, \ldots, w_{H}, b_1, \ldots, b_{H}, v_1, \ldots, \allowbreak v_{H}, c)$.
\Nobs that Jensen's inequality implies that
\begin{equation} \label{lem:grad:est:eq1}
\rbr*{ \int_0^1 | \realapprox{\phi}{\infty} ( x ) - \alpha| \d x } ^{\! 2} \leq \int_0^1 \rbr{ \realapprox{\phi}{\infty} ( x ) - \alpha } ^2 \d x = \mathcal{L}_\infty ( \phi ).
\end{equation}
This and \eqref{eq:loss:gradient} ensure that for all $j \in \{1,2, \ldots, H \}$ we have that
\begin{equation} \label{eq:lem:gradient:est1}
\begin{split}
| \mathcal{G}_j( \phi) | ^2 &= 4 (v_j) ^2 \rbr*{ \int_{I_j^\phi} x ( \realapprox{\phi}{\infty} (x) - \alpha ) \d x } ^{\! 2}
\leq 4 (v_j) ^2 \rbr*{ \int_{I_j^\phi} |x| | \realapprox{\phi}{\infty} (x) - \alpha | \d x } ^{ \! 2} \\
&\leq 4 (v_j) ^2 \rbr*{ \int_0^1 | \realapprox{\phi}{\infty} (x) - \alpha | \d x } ^{\! 2} \leq 4 (v_j)^2 \mathcal{L}_\infty (\phi).
\end{split}
\end{equation}
In addition, \nobs that \eqref{eq:loss:gradient} and \eqref{lem:grad:est:eq1} assure that for all $j \in \{1,2, \ldots, H \}$ it holds that
\begin{equation} \label{eq:lem:gradient:est2}
\begin{split}
| \mathcal{G}_{H + j}( \phi) |^2 &= 4 (v_j) ^2 \rbr*{ \int_{I_j^\phi} (\realapprox{\phi}{\infty} (x) - \alpha ) \d x } ^{\! 2} \\
&\leq 4 (v_j) ^2 \rbr*{ \int_0^1 |\realapprox{\phi}{\infty} (x) - \alpha | \d x } ^{\! 2} \leq 4 (v_j)^2 \mathcal{L}_\infty (\phi).
\end{split}
\end{equation}
Furthermore, \nobs that for all $x \in [0,1]$, $j \in \{1,2, \ldots, H \}$ it holds that $| \sigma_\infty ( w_j x + b_j ) | ^2 \leq ( | w_j | + | b_j | ) ^2 \leq 2 ( (w_j) ^2 + (b_j) ^2)$. Combining this and \eqref{eq:loss:gradient} demonstrates for all $j \in \{1,2, \ldots, H \}$ that
\begin{equation} \label{eq:lem:gradient:est3}
\begin{split}
| \mathcal{G}_{2 H + j} ( \phi ) | ^2 &= 4 \rbr*{ \int_0^1 [\sigma_\infty(w_j x + b_j )] ( \realapprox{\phi}{\infty}(x) - \alpha ) \d x } ^{\! 2 } \\
&\leq 4 \int_0^1 | \sigma_\infty ( w_j x + b_j ) | ^2 ( \realapprox{\phi}{\infty}(x) - \alpha ) ^2 \d x
\leq 8 \br*{ (w_j) ^2 + (b_j) ^2 } \mathcal{L}_\infty (\phi).
\end{split}
\end{equation}
Finally, \nobs that \eqref{eq:loss:gradient} and \eqref{lem:grad:est:eq1} show that
\begin{equation} \label{eq:lem:gradient:est4}
| \mathcal{G} _ { 3 H + 1 } ( \phi ) | ^2 = 4 \rbr*{ \int_0^1 (\realapprox{\phi}{\infty} (x) - \alpha ) \d x } ^{\! 2 } \leq 4 \mathcal{L}_\infty ( \phi ).
\end{equation}
Combining \eqref{eq:lem:gradient:est1}--\eqref{eq:lem:gradient:est4} yields
\begin{equation}
\begin{split}
\norm{ \mathcal{G} ( \phi ) } ^2 &\leq \br*{ \textstyle\sum_{j=1}^H \rbr*{ 4 (v_j) ^2 + 4 (v_j) ^2 + 8 (w_j) ^2 + 8 (b_j) ^2 } } \mathcal{L}_\infty ( \phi ) + 4 \mathcal{L}_\infty ( \phi ) \\
&\leq (8 \norm{ \phi } ^2 + 4) \mathcal{L}_\infty ( \phi ).
\end{split}
\end{equation}
The proof of \cref{lem:gradient:est} is thus complete.
\end{proof}
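A quick randomized spot-check of \cref{lem:gradient:est} (illustrative only; it reuses the identifiers from the sketch in the introduction, with the exact integrals replaced by the quadrature rule used there):
\begin{verbatim}
# continues the sketch from the introduction; checks
# ||G(phi)||^2 <= (8 ||phi||^2 + 4) L_infty(phi) at random points
for _ in range(1000):
    phi = rng.standard_normal(3 * H + 1)
    lhs = np.sum(limiting_gradient(phi) ** 2)
    rhs = (8 * (phi @ phi) + 4) * risk(phi)
    assert lhs <= rhs * (1 + 1e-9)
\end{verbatim}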
\begin{cor} \label{cor:g:bounded}
Assume \cref{setting:const} and let $K \subseteq \mathbb{R}^{3 H + 1}$ be a compact set. Then $\sup_{\phi \in K} \norm{ \mathcal{G} ( \phi ) } < \infty$.
\end{cor}
\begin{proof} [Proof of \cref{cor:g:bounded}]
\Nobs that the fact that $\mathcal{L}_\infty$ is continuous ensures that $\sup_{\phi \in K} \mathcal{L}_\infty ( \phi ) < \infty$. Combining this with \cref{lem:gradient:est} completes the proof of \cref{cor:g:bounded}.
\end{proof}
\subsection{Properties of Lyapunov type functions} \label{subsection:lyapunov}
\begin{prop} \label{prop:lyapunov:norm}
Assume \cref{setting:const} and let $\phi \in \mathbb{R}^{3 H + 1}$. Then
\begin{equation}
\norm{ \phi } ^2 \leq V(\phi) \leq 3 \norm{ \phi } ^2 + 8 \alpha ^2.
\end{equation}
\end{prop}
\begin{proof} [Proof of \cref{prop:lyapunov:norm}]
\Nobs that
$V(\phi) = \norm{ \phi } ^2 + ( \c{\phi} - 2 \alpha ) ^2 \geq \norm{ \phi } ^2$.
Furthermore, \nobs that the fact that $\forall \, x , y \in \mathbb{R} \colon (x - y )^2 \leq 2(x^2 + y^2)$ establishes that
\begin{equation}
V (\phi) \leq \norm{ \phi } ^2 + 2 (\c{\phi}) ^2 + 8 \alpha ^2 \leq 3 \norm{ \phi } ^2 + 8 \alpha ^2.
\end{equation}
This completes the proof of \cref{prop:lyapunov:norm}.
\end{proof}
\begin{prop} \label{prop:v:gradient}
Assume \cref{setting:const} and let $\phi , \psi \in \mathbb{R}^{3 H + 1}$. Then
\begin{equation} \label{eq:prop:v:gradient}
(\nabla V)(\phi) - (\nabla V)(\psi) = 2(\phi - \psi ) + \rbr[\big]{ 0, 0, \ldots, 0, 2(\c{\phi} - \c{\psi}) }.
\end{equation}
\end{prop}
\begin{proof} [Proof of \cref{prop:v:gradient}]
\Nobs that for all $\varphi \in \mathbb{R}^{3 H + 1}$ it holds that
\begin{equation}
(\nabla V ) ( \varphi) = 2 \varphi + \rbr[\big]{ 0, 0, \ldots, 0, 2(\c{\varphi} - 2 \alpha ) } .
\end{equation}
This establishes \eqref{eq:prop:v:gradient}. The proof of \cref{prop:v:gradient} is thus complete.
\end{proof}
\begin{prop} \label{prop:lyapunov:gradient}
Assume \cref{setting:const}, let $\mathcal{V}_1, \mathcal{V}_2 \in C ( \mathbb{R}^{3 H + 1 } , \mathbb{R} )$ satisfy for all $\phi \in \mathbb{R}^{3 H + 1}$ that
$\mathcal{V}_1(\phi) = (\c{\phi}) ^2 - 2 \alpha \c{\phi} + \sum_{j=1}^H (\v{\phi}_j ) ^2 $
and
$\mathcal{V}_2(\phi) = (\c{\phi})^2 - 2 \alpha \c{\phi} + \sum_{j=1}^H \br[\big]{ (\w{\phi}_j)^2 + (\b{\phi}_j)^2 } $,
and let $\phi \in \mathbb{R}^{3 H + 1}$. Then
\begin{enumerate} [(i)]
\item \label{prop:lyapunov:gradient:item1} it holds that $\langle (\nabla \mathcal{V}_1) ( \phi) , \mathcal{G}(\phi) \rangle = 4 \mathcal{L}_\infty (\phi)$,
\item \label{prop:lyapunov:gradient:item2} it holds that $\langle (\nabla \mathcal{V}_2) ( \phi) , \mathcal{G}(\phi) \rangle = 4 \mathcal{L}_\infty (\phi)$, and
\item \label{prop:lyapunov:gradient:item3} it holds that $\langle (\nabla V) ( \phi) , \mathcal{G}(\phi) \rangle = 8 \mathcal{L}_\infty (\phi)$.
\end{enumerate}
\end{prop}
\begin{proof} [Proof of \cref{prop:lyapunov:gradient}]
Throughout this proof let $w_1, \ldots, w_{H}, b_1, \ldots, b_{H}, v_1, \ldots, v_{H}, c \in \mathbb{R}$ satisfy $\phi = (w_1, \ldots, w_{H}, b_1, \ldots, b_{H}, v_1, \ldots, \allowbreak v_{H}, c)$.
\Nobs that
\begin{equation}
(\nabla \mathcal{V}_1) ( \phi ) = 2 \rbr[\big]{ \underbrace{0,0, \ldots, 0}_{2 H}, v_1, v_2, \ldots, v_{H}, c - \alpha } .
\end{equation}
This and \eqref{eq:loss:gradient} imply that
\begin{equation}
\begin{split}
&\langle (\nabla \mathcal{V}_1) ( \phi) , \mathcal{G}(\phi) \rangle \\
&= 4 \br[\Bigg]{ \sum_{j=1}^H v_j \int_0^1 [\sigma_\infty (w_j x + b_j) ] ( \realapprox{\phi}{\infty}(x) - \alpha) \d x } + 4(c - \alpha) \int_0^1 (\realapprox{\phi}{\infty} (x) - \alpha ) \d x \\
& = 4 \int_0^1 \rbr*{ \br*{ \textstyle\sum_{j=1}^H v_j \sigma_\infty ( w_j x + b_j) } + c - \alpha } (\realapprox{\phi}{\infty} (x) - \alpha ) \d x \\
&= 4 \int_0^1 (\realapprox{\phi}{\infty} (x) - \alpha )^2 \d x = 4 \mathcal{L}_\infty ( \phi ).
\end{split}
\end{equation}
This proves item \eqref{prop:lyapunov:gradient:item1}.
Next \nobs that
\begin{equation}
(\nabla \mathcal{V}_2 ) ( \phi ) = 2 \rbr[\big]{ w_1, w_2, \ldots, w_{H}, b_1, b_2, \ldots, b_{H}, \underbrace{0, 0, \ldots, 0}_{H }, c -\alpha }.
\end{equation}
Combining this and \eqref{eq:loss:gradient} demonstrates that
\begin{equation}
\begin{split}
&\langle (\nabla \mathcal{V}_2) ( \phi) , \mathcal{G}(\phi) \rangle \\
&= 4 \br[\Bigg]{ \sum_{j=1}^H v_j \int_{I_j^\phi} (w_j x + b_j) (\realapprox{\phi}{\infty} (x) - \alpha ) \d x } + 4(c - \alpha) \int_0^1 (\realapprox{\phi}{\infty} (x) - \alpha ) \d x \\
&= 4 \br[\Bigg]{ \sum_{j=1}^H v_j \int_0^1 \br{\sigma_\infty(w_j x + b_j) } ( \realapprox{\phi}{\infty}(x) - \alpha) \d x} + 4(c - \alpha) \int_0^1 \rbr{\realapprox{\phi}{\infty} (x) - \alpha } \d x \\
& = 4 \int_0^1 \rbr*{ \br*{ \textstyle\sum_{j=1}^H v_j \sigma_\infty ( w_j x + b_j) } + c - \alpha } (\realapprox{\phi}{\infty} (x) - \alpha ) \d x \\
&= 4 \int_0^1 (\realapprox{\phi}{\infty} (x) - \alpha )^2 \d x = 4 \mathcal{L}_\infty ( \phi ).
\end{split}
\end{equation}
This establishes item \eqref{prop:lyapunov:gradient:item2}.
Furthermore, \nobs that $\mathcal{V}_1 ( \phi ) + \mathcal{V} _2 ( \phi ) = V ( \phi ) - 4 \alpha ^2$. This ensures that $(\nabla \mathcal{V}_1 ) ( \phi ) + ( \nabla \mathcal{V}_2 ) ( \phi ) = ( \nabla V ) ( \phi)$, which proves item \eqref{prop:lyapunov:gradient:item3}.
The proof of \cref{prop:lyapunov:gradient} is thus complete.
\end{proof}
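To make item \eqref{prop:lyapunov:gradient:item3} concrete, the identity $\langle (\nabla V)(\phi), \mathcal{G}(\phi) \rangle = 8 \mathcal{L}_\infty(\phi)$ can also be checked numerically. The following Python sketch is an illustration only and relies on reconstructions rather than on verbatim definitions from \cref{setting:const}: it assumes the representation $V(\phi) = \norm{\phi}^2 + (\c{\phi} - 2\alpha)^2$ (which is consistent with \cref{prop:v:gradient} and with the identity $\mathcal{V}_1(\phi) + \mathcal{V}_2(\phi) = V(\phi) - 4\alpha^2$ above), it evaluates $\mathcal{G}$ through the gradient formula \eqref{eq:defgradient:gen} below specialized to the constant target $f \equiv \alpha$, and it replaces all integrals over $[0,1]$ by a midpoint rule.
\begin{verbatim}
# Numerical sanity check of <(grad V)(phi), G(phi)> = 8 L_infty(phi).
# Assumed (reconstructed) definitions: V(phi) = |phi|^2 + (c-2*alpha)^2
# and G as in the general gradient formula with constant target alpha.
import numpy as np

H, alpha = 5, 0.7
rng = np.random.default_rng(0)
phi = rng.normal(size=3 * H + 1)
w, b, v, c = phi[:H], phi[H:2*H], phi[2*H:3*H], phi[3*H]

M = 100000
xs = (np.arange(M) + 0.5) / M                 # midpoint rule on [0, 1]
pre = np.outer(w, xs) + b[:, None]            # w_j x + b_j
res = c + v @ np.maximum(pre, 0.0) - alpha    # realization minus target
act = (pre > 0.0).astype(float)               # ReLU derivative (a.e.)

G = np.concatenate([2 * v * np.mean(xs * act * res, axis=1),   # w-part
                    2 * v * np.mean(act * res, axis=1),        # b-part
                    2 * np.mean(np.maximum(pre, 0.0) * res, axis=1),
                    [2 * np.mean(res)]])                       # c-part
grad_V = 2 * phi + np.concatenate([np.zeros(3 * H), [2 * (c - 2 * alpha)]])
L_inf = np.mean(res ** 2)
print(grad_V @ G, 8 * L_inf)    # the two values agree up to quadrature error
\end{verbatim}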
\begin{cor} \label{cor:critical:points}
Assume \cref{setting:const} and let $\phi \in \mathbb{R}^{3 H + 1}$. Then it holds that $ \norm{ \mathcal{G}(\phi) } = 0$ if and only if $\mathcal{L}_\infty ( \phi ) = 0 $.
\end{cor}
\begin{proof} [Proof of \cref{cor:critical:points}]
Assume first that $\norm{ \mathcal{G}( \phi ) } = 0$.
Then \cref{prop:lyapunov:gradient} implies that $8 \mathcal{L}_\infty (\phi) = \langle (\nabla V ) ( \phi) , \mathcal{G}(\phi) \rangle = 0$.
Next assume $\mathcal{L}_\infty (\phi) = \int_0^1 (\realapprox{\phi}{\infty} (x) - \alpha)^2 \d x = 0$.
The fact that $\realapprox{\phi}{\infty} \in C(\mathbb{R} , \mathbb{R})$ then implies that it holds for all $x \in [0,1]$ that $\realapprox{\phi}{\infty}(x) = \alpha$.
Hence, \eqref{eq:loss:gradient} demonstrates that $\mathcal{G}(\phi) = 0 \in \mathbb{R}^{3 H + 1 }$ and therefore $\norm{ \mathcal{G} ( \phi ) } = 0$.
This completes the proof of \cref{cor:critical:points}.
\end{proof}
\section{Convergence analysis for gradient flow processes}
\label{section:gradientflow}
In this section we employ the findings from \cref{section:risk:regularity} to establish in \cref{theo:flow} below that the risks of the considered time-continuous gradient flow processes converge to zero. Our proof of \cref{theo:flow} uses the deterministic It\^{o} type formula for the Lyapunov function $V \colon \mathbb{R}^{ 3 H + 1 } \to \mathbb{R}$ from \cref{setting:const}, which we establish in \cref{lem:flow:lyapunov} in \cref{subsection:ito:lyapunov} below, as well as the deterministic It\^{o} type formula for the risk function $\mathcal{L}_{ \infty } \colon \mathbb{R}^{ 3 H + 1 } \to \mathbb{R}$ from \cref{setting:const}, which we establish in \cref{lem:loss:integral} in \cref{subsection:ito:risk} below.
Our proof of the deterministic It\^{o} type formula for the Lyapunov function $V \colon \mathbb{R}^{ 3 H + 1 } \to \mathbb{R}$ in \cref{lem:flow:lyapunov}, in turn, is based on the fact that the function $V \colon \mathbb{R}^{ 3 H + 1 } \to \mathbb{R}$ from \cref{setting:const} satisfies the Lyapunov property in item \eqref{prop:lyapunov:gradient:item3} in \cref{prop:lyapunov:gradient} as well as on the well-known deterministic It\^{o}-type formula for continuously differentiable functions in \cref{lem:chainrule:gen} in \cref{subsection:ito:lyapunov} below. We include in this section a detailed proof for \cref{lem:chainrule:gen} only for completeness.
In contrast to \cref{lem:flow:lyapunov},
the deterministic It\^{o} type formula
for the risk function $\mathcal{L}_{ \infty } \colon \mathbb{R}^{ 3 H + 1 } \allowbreak \to \mathbb{R}$
in \cref{lem:loss:integral} cannot be proved through
an application of \cref{lem:chainrule:gen} as the risk function
$\mathcal{L}_{ \infty } \colon \mathbb{R}^{ 3 H + 1 } \to \mathbb{R}$ fails to be differentiable.
Instead we prove \cref{lem:loss:integral} through an approximation
argument by employing the mollified
rectifier functions $\sigma_r \in C^{ \infty }( \mathbb{R}, \mathbb{R} )$, $r \in [1,\infty)$, and their corresponding
risk functions $\mathcal{L}_r \colon \mathbb{R}^{ 3 H + 1 } \to \mathbb{R}$, $r \in [1,\infty)$,
from \cref{setting:const}.
\subsection{Deterministic It\^{o} formulas for Lyapunov type functions}
\label{subsection:ito:lyapunov}
\begin{lemma} \label{lem:chainrule:gen}
Let $T \in (0, \infty)$, $n \in \mathbb{N}$, $\Theta \in C ( [ 0, T ] , \mathbb{R}^n )$, $F \in C^1 ( \mathbb{R}^n, \mathbb{R})$, let $\vartheta \colon [0, T] \to \mathbb{R}^n$ be a bounded measurable function, and assume for all $t \in [0, T]$ that
\begin{equation}
\Theta_t = \Theta_0 + \int_0^t \vartheta_s \d s.
\end{equation}
Then it holds for all $t \in [0, T]$ that
\begin{equation}
F ( \Theta_t ) = F ( \Theta_0 ) + \int_0^t \rbr[\big]{ F' ( \Theta_s ) } \vartheta_s \d s.
\end{equation}
\end{lemma}
\begin{proof} [Proof of \cref{lem:chainrule:gen}]
\Nobs that the fact that $\vartheta$ is bounded proves that $\Theta$ is Lipschitz continuous. Combining this and Rademacher's theorem shows that there exists a measurable set $E \subseteq [0,T]$ which satisfies that $\int_{[0, T] \backslash E} 1 \d s = 0 $, which satisfies for all $t \in E$ that $[0, T ] \ni s \mapsto \Theta_s \in \mathbb{R}^n$ is differentiable at $t$, and which satisfies for all $t \in E$ that $\frac{\d}{\d t} \Theta_t = \vartheta_t$. This and the chain rule demonstrate that for all $t \in E$ it holds that $[0, T ] \ni s \mapsto F ( \Theta_s ) \in \mathbb{R}$ is differentiable at $t$ and that $\frac{\d}{\d t} ( F ( \Theta_t ) ) = ( F' ( \Theta_t ) ) \vartheta_t$. Furthermore, \nobs that the fact that $\Theta$ is Lipschitz continuous and the fact that $F$ is continuously differentiable establish that $[0, T ] \ni t \mapsto F ( \Theta_t ) \in \mathbb{R}$ is Lipschitz continuous. Hence, we obtain that $[0, T ] \ni t \mapsto F ( \Theta_t ) \in \mathbb{R}$ is absolutely continuous. This shows for all $t \in [0,T]$ that
\begin{equation}
F ( \Theta_t ) = F ( \Theta_0 ) + \int_0^t \rbr[\big]{ F' ( \Theta_s ) } \vartheta_s \d s.
\end{equation}
The proof of \cref{lem:chainrule:gen} is thus complete.
\end{proof}
\begin{lemma} \label{lem:flow:lyapunov}
Assume \cref{setting:const}, let $T \in (0, \infty)$, and let $\Theta \in C([0, T] , \mathbb{R}^{3 H + 1})$ satisfy for all $t \in [0, T]$ that $\Theta_t = \Theta_0 - \int_0^t \mathcal{G} ( \Theta_s ) \d s$.
Then it holds for all $t \in [0, T]$ that $V(\Theta_t) = V(\Theta_0) - 8 \int_0^t \mathcal{L}_\infty (\Theta_s) \d s$.
\end{lemma}
\begin{proof} [Proof of \cref{lem:flow:lyapunov}]
\Nobs that \cref{cor:g:bounded} and the assumption that $\Theta \in C([0,T] , \mathbb{R}^{3 H + 1 } )$ imply that $[0, T ] \ni t \mapsto \mathcal{G} ( \Theta_t ) \in \mathbb{R}^{3 H + 1}$ is bounded.
Combining this, the fact that $V \in C^\infty ( \mathbb{R}^{3 H + 1} , \mathbb{R} )$, \cref{prop:limit:lr}, \cref{lem:chainrule:gen}, and \cref{prop:lyapunov:gradient} demonstrates that for all $t \in [0,T]$ we have that
\begin{equation}
V(\Theta_t) - V(\Theta_0) = - \int_0^t \langle ( \nabla V ) (\Theta_s), \mathcal{G}(\Theta_s) \rangle \d s =- 8 \int_0^t \mathcal{L}_\infty (\Theta_s) \d s.
\end{equation}
The proof of \cref{lem:flow:lyapunov} is thus complete.
\end{proof}
\begin{cor} \label{cor:flow:stability}
Assume \cref{setting:const} and let $\Theta \in C([0, \infty) , \mathbb{R}^{3 H + 1})$ satisfy for all $t \in [0, \infty)$ that $\Theta_t = \Theta_0 - \int_0^t \mathcal{G} ( \Theta_s ) \d s$.
Then $\sup_{t \in [ 0, \infty)} \norm{ \Theta_t } \leq \br{ V ( \Theta_0 ) } ^{1/2} < \infty$.
\end{cor}
\begin{proof} [Proof of \cref{cor:flow:stability}]
\Nobs that \cref{prop:lyapunov:norm} implies for all $t \in [ 0, \infty)$ that $\norm{ \Theta_t } \leq \br{ V(\Theta_t) }^{1/2} $.
Furthermore, \nobs that \cref{lem:flow:lyapunov} and the fact that $\forall \, \phi \in \mathbb{R}^{3 H + 1} \colon \mathcal{L}_\infty ( \phi) \geq 0$ demonstrate for all $t \in [ 0, \infty)$ that $V(\Theta_t) \leq V(\Theta_0)$. This completes the proof of \cref{cor:flow:stability}.
\end{proof}
\subsection{Deterministic It\^{o} formulas for risk functions}
\label{subsection:ito:risk}
\begin{lemma} \label{lem:lr:bounded}
Assume \cref{setting:const} and let $K \subseteq \mathbb{R}^{3 H + 1}$ be a compact set. Then $\sup_{\phi \in K} \allowbreak \sup_{r \in [1 , \infty) } \allowbreak \norm{ ( \nabla \mathcal{L}_r ) ( \phi ) } < \infty$.
\end{lemma}
\begin{proof} [Proof of \cref{lem:lr:bounded}]
\Nobs that \cref{prop:relu:approximation} demonstrates for all $r \in [1 , \infty)$, $\phi = (w_1, \ldots, \allowbreak w_{H}, b_1, \ldots, b_{H}, v_1, \ldots, \allowbreak v_{H}, c) \in \mathbb{R}^{3 H + 1 }$, $x \in [0,1]$ that
\begin{equation}
\abs{ \realapprox{\phi}{r} ( x) }
\leq | c | + \textstyle\sum_{j=1}^H | v_j | ( \sigma_\infty ( w_j x + b_j ) + 1 )
\leq | c | + \textstyle\sum_{j=1}^H | v_j | ( | w_j | + | b_j | + 1 ).
\end{equation}
Hence, we obtain for all $r \in [1, \infty)$, $\phi = (w_1, \ldots, w_{H}, b_1, \ldots, b_{H}, v_1, \ldots, \allowbreak v_{H}, c) \in \mathbb{R}^{3 H + 1 }$ that
\begin{equation}
\mathcal{L}_r ( \phi )
\leq \int_0^1 \rbr[\big]{ | \alpha | + | \realapprox{\phi}{r} ( x)| } ^2 \d x
\leq \rbr*{ | \alpha | + | c | + \textstyle\sum_{j=1}^H | v_j | ( | w_j | + | b_j | + 1 ) }^2.
\end{equation}
This implies that $\sup_{\phi \in K} \sup_{r \in [1 , \infty)} \mathcal{L}_r ( \phi ) < \infty$. Next \nobs that \eqref{eq:approx:loss:gradient} and the Cauchy-Schwarz inequality demonstrate that for all $r \in [1, \infty)$, $\phi = (w_1, \ldots, w_{H}, b_1, \ldots, b_{H}, v_1, \ldots, \allowbreak v_{H}, c) \in \mathbb{R}^{3 H + 1 }$ it holds that
\begin{equation} \label{eq:lr:bounded:1}
\abs*{ \rbr*{ \tfrac{\partial }{ \partial c} \mathcal{L}_r } ( \phi ) }
\leq 2 \int_0^1 | \realapprox{\phi}{r}(x) - \alpha | \d x \leq 2 \sqrt{\mathcal{L}_r ( \phi)}.
\end{equation}
Furthermore, \nobs that the Cauchy-Schwarz inequality, \cref{prop:relu:approximation}, and \eqref{eq:approx:loss:gradient} prove that for all $r \in [1, \infty)$, $\phi = (w_1, \ldots, w_{H}, b_1, \ldots, b_{H}, v_1, \ldots, \allowbreak v_{H}, c) \in \mathbb{R}^{3 H + 1 }$, $j \in \{1, 2, \ldots, H \}$ it holds that
\begin{equation} \label{eq:lr:bounded:2}
\begin{split}
\abs[\big]{ \rbr[\big]{ \tfrac{\partial } { \partial w_j } \mathcal{L}_r } ( \phi ) }
&\leq 2 | v_j | \int_0^1 |x (\sigma_r) ' ( w_j x + b_j )| | \realapprox{\phi}{r}(x) - \alpha | \d x \\
&\leq 2 | v_j | \int_0^1 | \realapprox{\phi}{r}(x) - \alpha | \d x
\leq 2 | v_j | \sqrt{\mathcal{L}_r ( \phi ) }
\end{split}
\end{equation}
and
\begin{equation} \label{eq:lr:bounded:3}
\begin{split}
\abs[\big]{ \rbr[\big]{ \tfrac{\partial } { \partial b_j } \mathcal{L}_r } ( \phi ) }
&\leq 2 | v_j | \int_0^1 | (\sigma_r) ' ( w_j x + b_j )| | \realapprox{\phi}{r}(x) - \alpha | \d x \\
& \leq 2 | v_j | \int_0^1 | \realapprox{\phi}{r}(x) - \alpha | \d x
\leq 2 | v_j | \sqrt{\mathcal{L}_r ( \phi ) } .
\end{split}
\end{equation}
In addition, \nobs that the Cauchy-Schwarz inequality, \cref{prop:relu:approximation}, and \eqref{eq:approx:loss:gradient} demonstrate that for all $r \in [1, \infty)$, $\phi = (w_1, \ldots, w_{H}, b_1, \ldots, b_{H}, v_1, \ldots, \allowbreak v_{H}, c) \in \mathbb{R}^{3 H + 1 }$, $j \in \{1, 2, \ldots, H \}$ it holds that
\begin{equation}
\begin{split}
\abs[\big]{ \rbr[\big]{ \tfrac{\partial }{ \partial v_j} \mathcal{L}_r } ( \phi ) }
& \leq 2 \int_0^1 [ \sigma_r ( w_j x + b_j) ] | \realapprox{\phi}{r}(x) - \alpha | \d x \\
&\leq 2 ( 1 + | w_j | + | b_j | )\int_0^1 | \realapprox{\phi}{r}(x) - \alpha | \d x \\ &
\leq 2 ( 1 + | w_j | + | b_j | ) \sqrt{ \mathcal{L}_r ( \phi ) }.
\end{split}
\end{equation}
This, \eqref{eq:lr:bounded:1}, \eqref{eq:lr:bounded:2}, and \eqref{eq:lr:bounded:3} show that for all $r \in [1, \infty)$, $\phi = (w_1, \ldots, w_{H}, b_1, \ldots, b_{H}, v_1, \ldots, \allowbreak v_{H}, c) \in \mathbb{R}^{3 H + 1 }$ it holds that
\begin{equation}
\norm{ (\nabla \mathcal{L}_r) ( \phi ) } ^2 \leq \br*{ 4 + \textstyle\sum_{j=1}^H \rbr*{ 8 (v_j) ^2 + 4 ( 1 + | w_ j| + | b_j | ) ^2 } } \mathcal{L}_r ( \phi) .
\end{equation}
Combining this with the fact that $\sup_{\phi \in K} \sup_{r \in [1 , \infty) } \mathcal{L}_r ( \phi ) < \infty$ establishes that
\begin{equation}
\sup\nolimits_{\phi \in K} \sup\nolimits_{ r \in [1 , \infty) } \norm{ (\nabla \mathcal{L}_r) ( \phi ) } ^2 < \infty.
\end{equation}
The proof of \cref{lem:lr:bounded} is thus complete.
\end{proof}
\begin{lemma} \label{lem:loss:integral}
Assume \cref{setting:const}, let $T \in (0, \infty)$, and let $\Theta \in C([0, T ] , \mathbb{R}^{3 H + 1} )$ satisfy for all $t \in [0,T] $ that $\Theta_t = \Theta_0 - \int_0^t \mathcal{G} ( \Theta_s ) \d s$.
Then it holds for all $t \in [0,T]$ that $\mathcal{L}_\infty (\Theta_t) = \mathcal{L}_\infty (\Theta_0) - \int_0^t \norm{ \mathcal{G}( \Theta_s ) } ^2 \d s$.
\end{lemma}
\begin{proof} [Proof of \cref{lem:loss:integral}]
\Nobs that \cref{lem:chainrule:gen} and item \eqref{prop:limit:lr:1} in \cref{prop:limit:lr} demonstrate that for all $r \in [1 , \infty)$, $t \in [0,T]$ it holds that
\begin{equation} \label{eq:lem:loss:integral}
\mathcal{L}_r ( \Theta_t) - \mathcal{L}_r ( \Theta_0) = - \int_0^t \langle (\nabla \mathcal{L}_r) ( \Theta_s), \mathcal{G} ( \Theta_s ) \rangle \d s.
\end{equation}
Next \nobs that \cref{prop:limit:lr} proves that for all $t \in [0,T]$ it holds that $\lim_{r \to \infty} ( \mathcal{L}_r ( \Theta_t) - \mathcal{L}_r ( \Theta_0)) = \mathcal{L}_\infty ( \Theta_t) - \mathcal{L}_\infty ( \Theta_0)$.
Furthermore, \nobs that \cref{prop:limit:lr} ensures that for all $s \in [0,T]$ we have that $\lim_{r \to \infty} \langle ( \nabla \mathcal{L}_r ) ( \Theta_s), \mathcal{G} ( \Theta_s ) \rangle = \langle \mathcal{G} ( \Theta_s), \mathcal{G} ( \Theta_s ) \rangle = \norm{ \mathcal{G} ( \Theta_s ) } ^2$.
In addition, \nobs that the assumption that $\Theta \in C([0,T], \mathbb{R}^{3 H + 1 })$ implies that there exists a compact set $K \subseteq \mathbb{R}^{3 H + 1}$ such that $\forall \, s \in [0, T] \colon \Theta_s \in K$.
Combining this, the Cauchy-Schwarz inequality, \cref{cor:g:bounded}, and \cref{lem:lr:bounded} shows that
\begin{equation}
\begin{split}
&\sup\nolimits_{r \in [1 , \infty)} \sup\nolimits_{s \in [0,T]} | \langle ( \nabla \mathcal{L}_r) ( \Theta_s), \mathcal{G} ( \Theta_s ) \rangle | \\
&\leq \sup\nolimits_{r \in [1 , \infty)} \sup\nolimits_{\phi \in K} | \langle (\nabla \mathcal{L}_r ) ( \phi), \mathcal{G} ( \phi ) \rangle | \\
&\leq \sup\nolimits_{r \in [1 , \infty)} \sup\nolimits_{\phi \in K} \rbr[\big]{ \norm{ (\nabla \mathcal{L}_r) ( \phi ) } \norm{ \mathcal{G} ( \phi ) } } < \infty.
\end{split}
\end{equation}
The dominated convergence theorem hence proves that for all $t \in [0,T]$ we have that
\begin{equation}
\lim_{r \to \infty} \br*{ \int_0^t \langle (\nabla \mathcal{L}_r) ( \Theta_s), \mathcal{G} ( \Theta_s ) \rangle \d s }
= \int_0^t \br*{ \lim_{r \to \infty} \langle (\nabla \mathcal{L}_r) ( \Theta_s), \mathcal{G} ( \Theta_s ) \rangle } \d s
= \int_0^t \norm{ \mathcal{G} ( \Theta_s ) } ^2 \d s.
\end{equation}
Combining this with \eqref{eq:lem:loss:integral} completes the proof of \cref{lem:loss:integral}.
\end{proof}
\subsection{Convergence of the risks of gradient flow processes}
\begin{lemma} \label{lem:loss:decreasing}
Assume \cref{setting:const} and let $\Theta \in C([0, \infty) , \mathbb{R}^{3 H + 1})$ satisfy for all $t \in [0, \infty)$ that $\Theta_t = \Theta_0 - \int_0^t \mathcal{G} ( \Theta_s ) \d s$. Then it holds that $[0, \infty) \ni t \mapsto \mathcal{L}_\infty ( \Theta_t) \in [0, \infty)$ is non-increasing.
\end{lemma}
\begin{proof} [Proof of \cref{lem:loss:decreasing}]
This is an immediate consequence of \cref{lem:loss:integral}.
\end{proof}
\begin{theorem} \label{theo:flow}
Assume \cref{setting:const} and let $\Theta \in C([0, \infty) , \mathbb{R}^{3 H + 1})$ satisfy for all $t \in [0, \infty)$ that $\Theta_t = \Theta_0 - \int_0^t \mathcal{G} ( \Theta_s ) \d s$. Then
\begin{enumerate} [(i)]
\item \label{theo:flow:item1} it holds that $\sup_{t \in [0, \infty)} \norm{ \Theta_t } \leq \br{ V(\Theta_0 )} ^{1/2} < \infty$,
\item \label{theo:flow:item2} it holds for all $t \in (0, \infty)$ that $\mathcal{L}_\infty ( \Theta_t ) \leq \frac{V ( \Theta_0 ) }{8 t}$, and
\item \label{theo:flow:item3} it holds that $\limsup_{t \to \infty} \mathcal{L}_\infty (\Theta_t) = 0$.
\end{enumerate}
\end{theorem}
\begin{proof} [Proof of \cref{theo:flow}]
\Nobs that \cref{cor:flow:stability} establishes item \eqref{theo:flow:item1}. Next \nobs that \cref{lem:flow:lyapunov} and \cref{lem:loss:decreasing} prove that for all $t \in [0, \infty)$ it holds that
\begin{equation}
t \mathcal{L}_\infty ( \Theta_t ) = \int_0^t \mathcal{L}_\infty ( \Theta_t ) \d s \leq \int_0^t \mathcal{L}_\infty ( \Theta_s) \d s = \frac{V(\Theta_0) - V ( \Theta_t) }{8} \leq \frac{ V( \Theta_0)}{8} < \infty.
\end{equation}
Hence, we obtain for all $t \in (0, \infty)$ that
\begin{equation}
\mathcal{L}_\infty ( \Theta_t ) \leq \frac{V ( \Theta_0 ) }{8 t}.
\end{equation}
This establishes items \eqref{theo:flow:item2} and \eqref{theo:flow:item3}. The proof of \cref{theo:flow} is thus complete.
\end{proof}
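The quantitative decay in item \eqref{theo:flow:item2} can be observed on a forward Euler discretization of the gradient flow. The sketch below is again an illustration only: $\mathcal{G}$, $\mathcal{L}_\infty$, and $V$ are the same reconstructed midpoint-rule versions as in the sketch after \cref{prop:lyapunov:gradient}, and the Euler step size is a heuristic choice taken safely below the step size restriction of \cref{theo:gd:loss} below.
\begin{verbatim}
# Forward Euler discretization of the gradient flow, illustrating
# L_infty(Theta_t) <= V(Theta_0) / (8 t) from item (ii).
import numpy as np

H, alpha, M = 4, 0.5, 2000
xs = (np.arange(M) + 0.5) / M                 # midpoint rule on [0, 1]

def G_and_risk(theta):                        # reconstructed G and risk
    w, b, v, c = theta[:H], theta[H:2*H], theta[2*H:3*H], theta[3*H]
    pre = np.outer(w, xs) + b[:, None]
    res = c + v @ np.maximum(pre, 0.0) - alpha
    act = (pre > 0.0).astype(float)
    grad = np.concatenate([2 * v * np.mean(xs * act * res, axis=1),
                           2 * v * np.mean(act * res, axis=1),
                           2 * np.mean(np.maximum(pre, 0.0) * res, axis=1),
                           [2 * np.mean(res)]])
    return grad, np.mean(res ** 2)

theta = np.random.default_rng(1).normal(size=3 * H + 1)
V0 = theta @ theta + (theta[3*H] - 2 * alpha) ** 2   # assumed form of V
dt = 0.5 / (4 * V0 + 2)                       # heuristic Euler step size
for n in range(1, 4001):
    g, _ = G_and_risk(theta)
    theta -= dt * g
    if n % 1000 == 0:                         # risk vs. flow bound V0/(8t)
        print(n * dt, G_and_risk(theta)[1], V0 / (8 * n * dt))
\end{verbatim}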
\section{Convergence analysis for gradient descent processes}
\label{section:gradientdescent}
In this section we use the findings from \cref{section:risk:regularity} to prove in \cref{theo:gd:loss} in \cref{subsection:theorem:gd} below that the risks of the considered time-discrete GD processes converge to zero. Our proof of \cref{theo:gd:loss} uses the fact that the function $V \colon \mathbb{R}^{ 3 H + 1 } \to \mathbb{R}$ from \cref{setting:const} is also a Lyapunov function for the considered time-discrete GD processes, which we establish in \cref{lem:loss:decreasing} below. Moreover, in \cref{subsection:gd:random:initialization} below we apply \cref{theo:gd:loss} to establish in \cref{cor:gd:random} that also the expectations of risks of the time-discrete GD processes with random initializations do converge to zero.
\subsection{Lyapunov type estimates for gradient descent processes}
\label{subsection:gd:lyapunov}
\begin{lemma} \label{lem:est:vtheta_n}
Assume \cref{setting:const}, let $\gamma \in (0, \infty)$, and let $\Theta = (\Theta_n)_{n \in \mathbb{N}_0} = ( ( \Theta_n^1, \ldots, \allowbreak \Theta_n^{3 H + 1}))_{n \in \mathbb{N}_0} \colon \allowbreak \mathbb{N}_0 \to \mathbb{R}^{3 H + 1}$ satisfy for all $n \in \mathbb{N}_0$ that $\Theta_{n+1} = \Theta_n - \gamma \mathcal{G} ( \Theta_n)$. Then it holds for all $n \in \mathbb{N}_0$ that
\begin{equation}
V(\Theta_{n+1}) - V ( \Theta_n) \leq - 8 \gamma \mathcal{L}_\infty (\Theta_n) + 2\gamma ^2 \norm{ \mathcal{G} ( \Theta_n ) } ^2.
\end{equation}
\end{lemma}
\begin{proof} [Proof of \cref{lem:est:vtheta_n}]
Throughout this proof let $n \in \mathbb{N}_0$ be arbitrary and let $g \colon \mathbb{R} \to \mathbb{R}$ satisfy for all $t \in \mathbb{R}$ that $g(t) = V ( t \Theta_{n+1} + ( 1-t) \Theta_n )$. The fact that $V$ is continuously differentiable establishes that $g$ is continuously differentiable. The fundamental theorem of calculus and the chain rule hence ensure that
\begin{equation}
\begin{split}
&V(\Theta_{n+1}) - V(\Theta_n) =g(1)-g(0) = \int_0^1 g'(t) \d t\\
&= \int_0^1 \langle (\nabla V) (t \Theta_{n+1} + (1-t) \Theta_n), \Theta_{n+1} - \Theta_n \rangle \d t \\
&= - \gamma \int_0^1 \langle (\nabla V) (t \Theta_{n+1} + (1-t) \Theta_n), \mathcal{G} ( \Theta_n ) \rangle \d t \\
&= - \gamma \int_0^1 \langle (\nabla V ) (\Theta_n) , \mathcal{G}(\Theta_n) \rangle \d t \\
&\quad - \gamma \int_0^1 \langle (\nabla V) (t \Theta_{n+1} + (1-t) \Theta_n) - (\nabla V) ( \Theta_n), \mathcal{G} ( \Theta_n ) \rangle \d t.
\end{split}
\end{equation}
Next \nobs that \cref{prop:lyapunov:gradient} implies that $\langle (\nabla V ) (\Theta_n) , \mathcal{G}(\Theta_n) \rangle = 8 \mathcal{L}_\infty (\Theta_n)$. Furthermore, \nobs that \cref{prop:v:gradient} establishes for all $t \in [0, 1]$ that
\begin{equation}
\begin{split}
&\langle (\nabla V) (t \Theta_{n+1} + (1-t) \Theta_n) - (\nabla V ) ( \Theta_n), \mathcal{G} ( \Theta_n ) \rangle \\
&= \langle (\nabla V) (t (\Theta_{n+1} - \Theta_n) + \Theta_n ) - (\nabla V ) ( \Theta_n), \mathcal{G} ( \Theta_n ) \rangle \\
&= 2t \langle \Theta_{n+1} - \Theta_n, \mathcal{G}(\Theta_n) \rangle + 2t (\Theta_{n+1}^{3 H + 1} - \Theta_n^{3 H + 1})\mathcal{G}_{3 H +1}(\Theta_n) \\
&= -2 t \gamma \norm{ \mathcal{G} ( \Theta_n ) } ^2 - 2 t \gamma |\mathcal{G}_{3 H +1}(\Theta_n) | ^2 \geq - 4 t \gamma \norm{ \mathcal{G} ( \Theta_n ) } ^2.
\end{split}
\end{equation}
Hence, we obtain that
\begin{equation}
\begin{split}
V(\Theta_{n+1}) - V(\Theta_n)
&\leq - 8 \gamma \mathcal{L}_\infty (\Theta_n) + 4 \gamma ^2 \int_0^1 t \norm{ \mathcal{G} ( \Theta_n ) } ^2 \d t \\
&= - 8 \gamma \mathcal{L}_\infty (\Theta_n) + 2\gamma ^2 \norm{ \mathcal{G} ( \Theta_n ) } ^2.
\end{split}
\end{equation}
The proof of \cref{lem:est:vtheta_n} is thus complete.
\end{proof}
\begin{cor} \label{cor:est:vtheta_n}
Assume \cref{setting:const}, let $\gamma \in (0, \infty)$, and let $\Theta = (\Theta_n)_{n \in \mathbb{N}_0} \colon \mathbb{N}_0 \to \mathbb{R}^{3 H + 1}$ satisfy for all $n \in \mathbb{N}_0$ that $\Theta_{n+1} = \Theta_n - \gamma \mathcal{G} ( \Theta_n)$. Then it holds for all $n \in \mathbb{N}_0$ that
\begin{equation}
V(\Theta_{n+1}) - V ( \Theta_n) \leq 8 \rbr*{ - \gamma + \gamma ^2 ( 2 V(\Theta_n) + 1) } \mathcal{L}_\infty ( \Theta _ n) .
\end{equation}
\end{cor}
\begin{proof} [Proof of \cref{cor:est:vtheta_n}]
\Nobs that \cref{lem:gradient:est} and \cref{prop:lyapunov:norm} imply for all $n \in \mathbb{N}_0$ that
\begin{equation}
\begin{split}
\norm{ \mathcal{G} ( \Theta_n ) } ^2
&\leq ( 8 \norm{ \Theta_n } ^2 + 4) \mathcal{L}_\infty (\Theta_n)
= 4 ( 2 \norm{ \Theta_n } ^2 + 1 ) \mathcal{L}_\infty ( \Theta_n ) \\
&\leq 4 (2 V ( \Theta_n) + 1 ) \mathcal{L}_\infty (\Theta_n).
\end{split}
\end{equation}
Combining this and \cref{lem:est:vtheta_n} ensures that for all $n \in \mathbb{N}_0$ we have that
\begin{equation}
\begin{split}
V(\Theta_{n+1}) - V ( \Theta_n) &\leq - 8 \gamma \mathcal{L}_\infty ( \Theta_n ) + 8 \gamma^2 ( 2 V ( \Theta_n) + 1 ) \mathcal{L}_\infty ( \Theta_n ) \\
&= 8 \rbr*{ - \gamma + \gamma ^2 ( 2 V(\Theta_n) + 1) } \mathcal{L}_\infty ( \Theta _ n).
\end{split}
\end{equation}
The proof of \cref{cor:est:vtheta_n} is thus complete.
\end{proof}
\begin{lemma} \label{lem:vthetan:decreasing}
Assume \cref{setting:const}, let $\gamma \in (0, \infty)$, and let $\Theta = (\Theta_n)_{n \in \mathbb{N}_0} \colon \mathbb{N}_0 \to \mathbb{R}^{3 H + 1}$ satisfy for all $n \in \mathbb{N}_0$ that $\Theta_{n+1} = \Theta_n - \gamma \mathcal{G} ( \Theta_n)$ and $\gamma \leq (4 V ( \Theta_0) + 2 )^{-1}$. Then it holds for all $n \in \mathbb{N}_0$ that $V (\Theta_{n+1}) - V ( \Theta_n) \leq - 4 \gamma \mathcal{L}_\infty (\Theta_n) \leq 0$.
\end{lemma}
\begin{proof} [Proof of \cref{lem:vthetan:decreasing}]
We prove the statement by induction on $n \in \mathbb{N}_0$. \Nobs that \cref{cor:est:vtheta_n} implies that
\begin{equation}
\begin{split}
V(\Theta_1) - V(\Theta_0) &\leq \rbr*{ - 8 \gamma + 8\gamma ^2 ( 2 V(\Theta_0) + 1 ) } \mathcal{L}_\infty ( \Theta _ 0) \\
&\leq \rbr*{ - 8 \gamma + 8 \gamma \br*{ \tfrac{2 V(\Theta_0) + 1}{4 V(\Theta_0) + 2} } } \mathcal{L}_\infty ( \Theta _ 0) = - 4 \gamma \mathcal{L}_\infty (\Theta_0) \leq 0.
\end{split}
\end{equation}
This establishes the assertion in the base case $n=0$. For the induction step let $n \in \mathbb{N}$ satisfy for all $m \in \{0, 1, \ldots, n-1\}$ that
\begin{equation} \label{eq:induction:1}
V( \Theta_{m + 1}) - V ( \Theta_{m} ) \leq - 4 \gamma \mathcal{L}_\infty ( \Theta_{m} ) \leq 0.
\end{equation}
\Nobs that \eqref{eq:induction:1} shows that $V(\Theta_n) \leq V(\Theta_{n-1}) \leq \cdots \leq V(\Theta_0)$. The assumption that $\gamma \leq (4 V ( \Theta_0 ) + 2 ) ^{-1}$ hence ensures that $\gamma \leq (4 V ( \Theta_0) + 2 )^{-1} \leq ( 4 V ( \Theta_n) + 2 )^{-1}$. Combining this and \cref{cor:est:vtheta_n} demonstrates that
\begin{equation}
\begin{split}
V(\Theta_{n+1}) - V(\Theta_n) &\leq \rbr*{ - 8 \gamma + 8\gamma ^2 ( 2 V(\Theta_n) + 1 ) } \mathcal{L}_\infty ( \Theta _ n) \\
&\leq \rbr*{ - 8 \gamma + 8 \gamma \br*{ \tfrac{2 V(\Theta_n) + 1}{4 V(\Theta_n) + 2} } } \mathcal{L}_\infty ( \Theta _ n) = - 4 \gamma \mathcal{L}_\infty (\Theta_n) \leq 0.
\end{split}
\end{equation}
This completes the proof of \cref{lem:vthetan:decreasing}.
\end{proof}
\subsection{Convergence of the risks of gradient descent processes}
\label{subsection:theorem:gd}
\begin{theorem} \label{theo:gd:loss}
Assume \cref{setting:const}, let $\gamma \in (0, \infty)$, and let $\Theta = (\Theta_n)_{n \in \mathbb{N}_0} \colon \mathbb{N}_0 \to \mathbb{R}^{3 H + 1}$ satisfy for all $n \in \mathbb{N}_0$ that $\Theta_{n+1} = \Theta_n - \gamma \mathcal{G} ( \Theta_n)$ and $\gamma \leq (4 V ( \Theta_0) + 2 )^{-1}$. Then
\begin{enumerate} [(i)]
\item \label{theo:gd:item1} it holds that $\sup_{n \in \mathbb{N}_0} \norm{ \Theta_n } \leq \br{ V( \Theta_0 ) } ^{1/2} < \infty$ and
\item \label{theo:gd:item2} it holds that $\limsup_{n \to \infty} \mathcal{L}_\infty (\Theta_n) = 0$.
\end{enumerate}
\end{theorem}
\begin{proof} [Proof of \cref{theo:gd:loss}]
\Nobs that \cref{lem:vthetan:decreasing} proves that for all $n \in \mathbb{N}_0$ we have that $V(\Theta_n ) \leq V ( \Theta_{n-1}) \leq \cdots \leq V(\Theta_0)$. This and the fact that $\forall \, n \in \mathbb{N}_0 \colon \norm{ \Theta_n } \leq \br{ V ( \Theta_n ) }^{1/2}$ establish item \eqref{theo:gd:item1}. Next \nobs that \cref{lem:vthetan:decreasing} implies for all ${N \in \mathbb{N}}$ that
\begin{equation}
\sum_{n=0}^{N - 1} \rbr[\big]{ 4 \gamma \mathcal{L}_\infty (\Theta_n ) } \leq \sum_{n = 0}^{N - 1} \rbr[\big]{ V(\Theta_{n}) - V(\Theta_{n+1}) } = V(\Theta_0) - V( \Theta_N) \leq V(\Theta_0).
\end{equation}
Hence, we have that
\begin{equation}
\sum_{n=0}^\infty \mathcal{L}_\infty (\Theta_n) \leq \frac{V ( \Theta_0 )}{4 \gamma} < \infty.
\end{equation}
This shows that $\limsup_{n \to \infty} \mathcal{L}_\infty ( \Theta_n ) = 0$.
The proof of \cref{theo:gd:loss} is thus complete.
\end{proof}
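\cref{theo:gd:loss} can be tried out directly with the admissible step size $\gamma = (4 V(\Theta_0) + 2)^{-1}$. The following sketch is an illustration only and uses the same reconstructed $\mathcal{G}$ and $V$ as in the earlier sketches; it also asserts the monotonicity of $V$ from \cref{lem:vthetan:decreasing}. \Nobs that the Lyapunov computations above only use linearity and the Cauchy-Schwarz inequality and therefore remain valid verbatim when the integrals over $[0,1]$ are replaced by averages over a fixed grid, so the assertion can only fail due to floating point roundoff.
\begin{verbatim}
# Gradient descent with the admissible step size of the theorem.
import numpy as np

H, alpha, M = 4, 0.5, 2000
xs = (np.arange(M) + 0.5) / M

def G_and_risk(theta):                        # reconstructed G and risk
    w, b, v, c = theta[:H], theta[H:2*H], theta[2*H:3*H], theta[3*H]
    pre = np.outer(w, xs) + b[:, None]
    res = c + v @ np.maximum(pre, 0.0) - alpha
    act = (pre > 0.0).astype(float)
    grad = np.concatenate([2 * v * np.mean(xs * act * res, axis=1),
                           2 * v * np.mean(act * res, axis=1),
                           2 * np.mean(np.maximum(pre, 0.0) * res, axis=1),
                           [2 * np.mean(res)]])
    return grad, np.mean(res ** 2)

def V(theta):                                 # assumed form of V
    return theta @ theta + (theta[3*H] - 2 * alpha) ** 2

theta = np.random.default_rng(3).normal(size=3 * H + 1)
v0 = V(theta)
gamma = 1.0 / (4 * v0 + 2)                    # gamma <= (4 V(Theta_0)+2)^{-1}
risks = []
for n in range(20000):
    g, risk = G_and_risk(theta)
    risks.append(risk)
    v_before = V(theta)
    theta -= gamma * g
    assert V(theta) <= v_before + 1e-9        # V non-increasing along GD
print(risks[0], risks[-1])                    # the risk decays towards zero
print(4 * gamma * sum(risks) <= v0 + 1e-9)    # telescoped bound: True
\end{verbatim}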
\begin{cor} \label{cor:gd:main}
Assume \cref{setting:const}, let $\gamma \in (0, \infty)$, and let $\Theta = (\Theta_n)_{n \in \mathbb{N}_0} \colon \mathbb{N}_0 \to \mathbb{R}^{3 H + 1}$ satisfy for all $n \in \mathbb{N}_0$ that $\Theta_{n+1} = \Theta_n - \gamma \mathcal{G} ( \Theta_n)$ and $\gamma \leq \br{ 12 \norm{ \Theta_0 } ^2 + 32 \alpha ^2 + 2 }^{-1}$. Then
\begin{enumerate} [(i)]
\item it holds that $\sup_{n \in \mathbb{N}_0} \norm{ \Theta_n } \leq \br{ V ( \Theta_0 ) }^{1/2} < \infty$ and
\item it holds that $\limsup_{n \to \infty} \mathcal{L}_\infty (\Theta_n) = 0$.
\end{enumerate}
\end{cor}
\begin{proof} [Proof of \cref{cor:gd:main}]
\Nobs that \cref{prop:lyapunov:norm} proves that $ 4 V ( \Theta_0 ) + 2 \leq 12 \norm{ \Theta_0 } ^2 + 32 \alpha ^2 + 2 $. Hence, we have that $\gamma \leq (4 V ( \Theta_0) + 2 )^{-1}$. Combining this with \cref{theo:gd:loss} completes the proof of \cref{cor:gd:main}.
\end{proof}
\subsection{Gradient descent processes with random initializations}
\label{subsection:gd:random:initialization}
\begin{cor} \label{cor:gd:random}
Assume \cref{setting:const}, let $c, \gamma \in (0, \infty)$, let $(\Omega, \mathcal{F}, \P)$ be a probability space, let $\Theta = ( \Theta_n)_{n \in \mathbb{N}_0} \colon \Omega \times \mathbb{N}_0 \to \mathbb{R}^{3 H + 1}$ be a stochastic process, assume $\Theta_0(\Omega) \subseteq [-c ,c ]^{3 H + 1 }$, assume for all $n \in \mathbb{N}_0$ that $\Theta_{n+1} = \Theta_n - \gamma \mathcal{G} ( \Theta_n )$, and assume $\gamma \leq \br{12 c^2 (3 H + 1) + 32 \alpha ^2 + 2}^{-1}$. Then
\begin{enumerate} [(i)]
\item \label{cor:gd:random:item1}
it holds that $\sup_{\omega \in \Omega} \sup_{n \in \mathbb{N}_0} \norm{ \Theta_n ( \omega)} \leq \br{3c^2 (3 H + 1 ) + 8 \alpha ^2 } ^{1/2} < \infty$,
\item \label{cor:gd:random:item2}
it holds for all $\omega \in \Omega$ that $\limsup_{n \to \infty} \mathcal{L}_\infty (\Theta_n ( \omega)) = 0$, and
\item \label{cor:gd:random:item3}
it holds that $\limsup_{n \to \infty} \mathbb{E} [ \mathcal{L}_\infty (\Theta_n) ] = 0$.
\end{enumerate}
\end{cor}
\begin{proof} [Proof of \cref{cor:gd:random}]
\Nobs that \cref{prop:lyapunov:norm} demonstrates for all $\phi \in [-c , c]^{3 H + 1}$ that
\begin{equation} \label{eq:gd:random:1}
V(\phi) \leq 3 \| \phi \| ^2 + 8 \alpha ^2 \leq 3 c^2 ( 3 H + 1) + 8 \alpha ^2.
\end{equation}
Hence, we have for all $\phi \in [-c , c]^{3 H + 1}$ that
\begin{equation}
\gamma \leq \br{12 c^2 (3 H + 1) + 32 \alpha ^2 + 2}^{-1} \leq \br{4 V ( \phi ) + 2}^{-1}.
\end{equation}
This demonstrates for all $\omega \in \Omega$ that $\gamma \leq ( 4 V(\Theta_0 ( \omega)) + 2)^{-1}$.
\cref{lem:vthetan:decreasing} and \eqref{eq:gd:random:1} hence prove that for all $\omega \in \Omega$, $n \in \mathbb{N}_0$ we have that $\norm{\Theta_n ( \omega)} \leq \br{ V ( \Theta_n ( \omega)) }^{1/2} \leq \br{V ( \Theta_0 ( \omega)) }^{1/2} \leq \br{3c^2 ( 3 H + 1 ) + 8 \alpha^2 }^{1/2}$. This establishes item \eqref{cor:gd:random:item1}. Next \nobs that \cref{theo:gd:loss} shows for all $\omega \in \Omega$ that $\limsup_{n \to \infty} \mathcal{L}_\infty (\Theta_n ( \omega)) = 0 $, which proves item \eqref{cor:gd:random:item2}. Furthermore, \nobs that \cref{lem:vthetan:decreasing} assures that for all $\omega \in \Omega$, $N \in \mathbb{N}$ it holds that
\begin{equation}
\sum_{n=0}^{N-1} \rbr[\big]{ 4 \gamma \mathcal{L}_\infty ( \Theta_n ( \omega )) } \leq \sum_{n=0}^{N-1} \rbr[\big]{V(\Theta_{n} ( \omega) ) - V ( \Theta_{n+1} ( \omega )) } \leq V ( \Theta_0 ( \omega )).
\end{equation}
\cref{prop:lyapunov:norm} hence shows that for all $\omega \in \Omega$ we have that
\begin{equation}
\sum_{n=0}^\infty \mathcal{L}_\infty (\Theta_n ( \omega) ) \leq \frac{V ( \Theta_0 ( \omega ))}{4\gamma} \leq \frac{3 \norm{\Theta_0 ( \omega ) } ^2 + 8 \alpha ^2}{4 \gamma}.
\end{equation}
Combining this, item \eqref{cor:gd:random:item2}, and the dominated convergence theorem establishes item \eqref{cor:gd:random:item3}.
The proof of \cref{cor:gd:random} is thus complete.
\end{proof}
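For \cref{cor:gd:random}, the decay of the expected risk in item \eqref{cor:gd:random:item3} can be approximated by a Monte Carlo average over independent initializations. In the sketch below (an illustration only), the initialization is drawn uniformly from $[-c, c]^{3H+1}$; this distribution is an arbitrary admissible choice, as the corollary only requires that $\Theta_0(\Omega) \subseteq [-c, c]^{3H+1}$.
\begin{verbatim}
# Monte Carlo illustration of item (iii): the mean risk of GD with a
# random initialization in [-c, c]^{3H+1} decays with n.
import numpy as np

H, alpha, c_box, M = 3, 0.5, 1.0, 500
xs = (np.arange(M) + 0.5) / M
gamma = 1.0 / (12 * c_box**2 * (3 * H + 1) + 32 * alpha**2 + 2)

def G_and_risk(theta):                        # reconstructed G and risk
    w, b, v, c = theta[:H], theta[H:2*H], theta[2*H:3*H], theta[3*H]
    pre = np.outer(w, xs) + b[:, None]
    res = c + v @ np.maximum(pre, 0.0) - alpha
    act = (pre > 0.0).astype(float)
    grad = np.concatenate([2 * v * np.mean(xs * act * res, axis=1),
                           2 * v * np.mean(act * res, axis=1),
                           2 * np.mean(np.maximum(pre, 0.0) * res, axis=1),
                           [2 * np.mean(res)]])
    return grad, np.mean(res ** 2)

rng = np.random.default_rng(4)
checkpoints, runs = [0, 50, 500, 5000], 20
mean_risk = np.zeros(len(checkpoints))
for _ in range(runs):                         # uniform initialization
    theta = rng.uniform(-c_box, c_box, size=3 * H + 1)
    for n in range(checkpoints[-1] + 1):
        g, risk = G_and_risk(theta)
        if n in checkpoints:
            mean_risk[checkpoints.index(n)] += risk / runs
        theta -= gamma * g
print(dict(zip(checkpoints, mean_risk)))      # empirical E[L_infty(Theta_n)]
\end{verbatim}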
\section{A priori estimates for general target functions}
\label{section:apriori:gen}
The key ingredients in our convergence proofs for gradient flow and GD processes in Sections \ref{section:gradientflow} and \ref{section:gradientdescent} are suitable a priori estimates for the gradient flow processes (see \cref{lem:flow:lyapunov} in \cref{subsection:ito:lyapunov}) and the GD processes (see \cref{lem:vthetan:decreasing} in \cref{subsection:gd:lyapunov}). To initiate further research activities of this kind, we derive in this section related a priori bounds in the case of general target functions. For details we refer to \eqref{prop:gen:apriori:eq1} and \eqref{prop:gen:apriori:eq2} in \cref{prop:gen:apriori} below.
\begin{prop} \label{prop:gen:apriori}
Let $H \in \mathbb{N}$, $f \in C ( [0, 1 ], \mathbb{R})$,
let $\mathfrak{w} = (( \w{\phi} _ 1 , \ldots, \w{\phi} _ H ))_{ \phi \in \mathbb{R}^{3 H + 1}} \colon \mathbb{R}^{3 H + 1} \to \mathbb{R}^{H}$,
$\mathfrak{b} = (( \b{\phi} _ 1 , \ldots, \b{\phi} _ H ))_{ \phi \in \mathbb{R}^{3 H + 1}} \colon \mathbb{R}^{3 H + 1} \to \mathbb{R}^{H}$,
$\mathfrak{v} = (( \v{\phi} _ 1 , \ldots, \v{\phi} _ H ))_{ \phi \in \mathbb{R}^{3 H + 1}} \colon \mathbb{R}^{3 H + 1} \to \mathbb{R}^{H}$,
$\mathfrak{c} = (\c{\phi})_{\phi \in \mathbb{R}^{3 H + 1 }} \colon \mathbb{R}^{3 H + 1} \to \mathbb{R}$,
and $\norm{ \cdot} \colon \mathbb{R}^{3 H + 1 } \to [0, \infty)$
satisfy for all $\phi = ( \phi_1 , \ldots, \phi_{3 H + 1}) \in \mathbb{R}^{3 H + 1}$, $j \in \{1, 2, \ldots, H \}$ that
$\w{\phi}_j = \phi_j$, $\b{\phi}_j = \phi_{H + j}$,
$\v{\phi}_j = \phi_{2H + j}$, $\c{\phi} = \phi_{3 H + 1}$,
and $\norm{ \phi } = [ \sum_{i=1}^{3 H + 1 } | \phi_i | ^2 ] ^{ 1/2 }$,
let $\mathscr{N} = (\realization{\phi})_{\phi \in \mathbb{R}^{3 H + 1}} \colon \mathbb{R}^{3 H + 1 } \to C(\mathbb{R} , \mathbb{R})$ and $\mathcal{L} \colon \mathbb{R}^{3 H + 1 } \to \mathbb{R}$ satisfy for all $\phi \in \mathbb{R}^{3 H + 1}$, $x \in \mathbb{R}$ that $\realization{\phi} (x) = \c{\phi} + \sum_{j=1}^H \v{\phi}_j \max \{ \w{\phi}_j x + \b{\phi}_j , 0 \}$ and $\mathcal{L} (\phi) = \int_0^1 (\realization{\phi} (y) - f ( y ) )^2 \d y$,
let $V \colon \mathbb{R}^{3 H + 1 } \to \mathbb{R}$ and $\mathcal{G} = (\mathcal{G}_1 , \ldots, \mathcal{G}_{3 H +1} ) \colon \mathbb{R}^{3 H + 1 } \to \mathbb{R}^{3 H + 1}$ satisfy for all $\phi \in \mathbb{R}^{3 H + 1}$, $j \in \{1,2, \ldots, H \}$ that $V ( \phi ) = \norm{\phi} ^2 + \abs{ \c{\phi} } ^2$ and
\begin{equation} \label{eq:defgradient:gen}
\begin{split}
\mathcal{G}_j ( \phi) &= 2\v{\phi}_j \int_{0}^1 x ( \realization{\phi} (x) - f ( x )) \indicator{(0, \infty )} ( \w{\phi}_j x + \b{\phi}_j) \d x, \\
\mathcal{G}_{H + j} ( \phi) &= 2 \v{\phi}_j \int_{0}^1 (\realization{\phi} (x) - f ( x ) ) \indicator{(0, \infty )} ( \w{\phi}_j x + \b{\phi}_j) \d x, \\
\mathcal{G}_{2 H + j} ( \phi) &= 2 \int_0^1 [\max \{ \w{\phi}_j x + \b{\phi}_j , 0 \}] ( \realization{\phi}(x) - f ( x )) \d x, \\
\mathcal{G}_{3 H + 1} ( \phi) &= 2 \int_0^1 (\realization{\phi} (x) - f ( x ) ) \d x,
\end{split}
\end{equation}
and let $\Theta \in C([0, \infty) , \mathbb{R}^{3 H + 1})$ satisfy for all $t \in [0, \infty)$ that $\Theta_t = \Theta_0 - \int_0^t \mathcal{G} ( \Theta_s ) \d s$.
Then
\begin{enumerate} [(i)]
\item \label{prop:apriori:item1} it holds for all $t \in [ 0, \infty)$ that
\begin{equation} \label{prop:gen:apriori:eq1}
V( \Theta_t ) = V ( \Theta_0 ) - 8 \int_0^t \int_0^1 \realization{\Theta_s} ( x ) ( \realization{\Theta_s} ( x ) - f ( x ) ) \d x \d s \leq V ( \Theta_0 ) + 2 t \int_0^1 \abs{ f(x) } ^2 \d x
\end{equation}
and
\item \label{prop:apriori:item2} it holds for all $t \in [0, \infty)$ that
\begin{equation} \label{prop:gen:apriori:eq2}
\norm{ \Theta_t } \leq ( V ( \Theta_0 ) )^{1/2} + \br*{2 \textstyle\int_0^1 \abs{ f (x) } ^2 \d x}^{1/2} t^{1/2} .
\end{equation}
\end{enumerate}
\end{prop}
\begin{proof}[Proof of \cref{prop:gen:apriori}]
Throughout this proof let $\langle \cdot , \cdot \rangle \colon \mathbb{R}^{3 H + 1 } \times \mathbb{R}^{3 H + 1 } \to \mathbb{R}$ satisfy for all $\phi = ( \phi_1 , \ldots , \phi_{3 H + 1 })$, $\psi = ( \psi_1 , \ldots , \psi_{3 H + 1 } ) \in \mathbb{R}^{3 H + 1 }$ that $\langle \phi , \psi \rangle = \sum_{i=1}^{3 H + 1 } \phi_i \psi_i$.
\Nobs that for all $\phi \in \mathbb{R}^{3 H + 1}$ it holds that
\begin{equation}
(\nabla V) ( \phi ) = 2 \rbr[\big]{ \w{\phi}_1, \w{\phi}_2 , \ldots, \w{\phi}_H , \b{\phi}_1 , \b{\phi}_2 , \ldots, \b{\phi}_H , \v{\phi}_1, \v{\phi}_2, \ldots, \v{\phi}_{H}, 2 \c{\phi} } .
\end{equation}
This implies for all $\phi \in \mathbb{R}^{3 H + 1}$ that
\begin{equation} \label{apriori:gen:eq1}
\begin{split}
&\langle ( \nabla V ) ( \phi) , \mathcal{G}(\phi) \rangle \\
&= 4 \br[\Bigg]{ \sum_{j=1}^H \v{\phi}_j \int_0^1 [\max \{ \w{\phi}_j x + \b{\phi}_j , 0 \}] ( \realization{\phi} (x) - f (x) ) \d x }
+ 8 \c{\phi} \br*{ \int_0^1 (\realization{\phi} (x) - f (x) ) \d x } \\
&+ 4 \br[\Bigg]{ \sum_{j=1}^H \v{\phi}_j \int_{0}^1 (\w{\phi}_j x + \b{\phi}_j) (\realization{\phi} (x) - f (x)) \indicator{(0, \infty )} ( \w{\phi}_j x + \b{\phi}_j) \d x } \\
& = 8 \br*{\int_0^1 \rbr*{ \textstyle\sum_{j=1}^H \v{\phi}_j [ \max \{ \w{\phi}_j x + \b{\phi}_j , 0 \} ] }(\realization{\phi} ( x ) - f (x) ) \d x }
+ 8 \c{\phi} \br*{ \int_0^1 (\realization{\phi} (x) - f (x) ) \d x } \\
&= 8 \int_0^1 \realization{\phi}(x) (\realization{\phi} (x) - f (x) ) \d x.
\end{split}
\end{equation}
Next \nobs that the fact that for all $x , y \in \mathbb{R}$ it holds that $x ( x - y ) = (x - \frac{y}{2})^2 -\frac{1}{4} y ^2 \geq -\frac{1}{4} y ^2$ ensures that for all $x \in [0,1]$, $\phi \in \mathbb{R}^{3 H + 1}$ it holds that $\realization{\phi} (x) (\realization{\phi} (x) - f (x) ) \geq - \frac{1}{4} (f (x))^2$. Hence, we have for all $\phi \in \mathbb{R}^{3 H + 1}$ that
\begin{equation}
\langle ( \nabla V ) ( \phi) , \mathcal{G}(\phi) \rangle \geq - 2 \int_0^1 | f (x) | ^2 \d x.
\end{equation}
This, \eqref{apriori:gen:eq1}, the fact that $V \in C^\infty ( \mathbb{R}^{3 H + 1 }, \mathbb{R})$, and \cref{lem:chainrule:gen} show for all $t \in [ 0, \infty)$ that
\begin{equation}
\begin{split}
V ( \Theta_t ) - V ( \Theta_0 )
&= - \int_0^t \langle ( \nabla V ) ( \Theta_s ) , \mathcal{G} ( \Theta_s) \rangle \d s \\
&= - 8 \int_0^t \int_0^1 \realization{\Theta_s} ( x ) ( \realization{\Theta_s} ( x ) - f ( x ) ) \d x \d s \\
&\leq 2 \int_0^t \int_0^1 | f (x) | ^2 \d x \d s
= 2 t \int_0^1 | f (x) | ^2 \d x.
\end{split}
\end{equation}
This proves item \eqref{prop:apriori:item1}. Next \nobs that item \eqref{prop:apriori:item1} and the fact that $\forall \, \phi \in \mathbb{R}^{3 H + 1 } \colon \norm{ \phi } ^2 \leq V ( \phi)$ demonstrate that for all $t \in [0, \infty)$ it holds that
\begin{equation}
\norm{ \Theta_t } \leq ( V ( \Theta_t ) )^{1/2} \leq \br*{ V ( \Theta_0 ) + 2 t \textstyle\int_0^1 \abs{ f (x) } ^2 \d x}^{1/2}.
\end{equation}
Combining this and the fact that $\forall \, x , y \in [0, \infty) \colon (x + y )^{1/2} \leq x ^{1/2} + y ^{1/2}$ ensures that for all $t \in [0, \infty)$ we have that
\begin{equation}
\norm{ \Theta_t } \leq (V ( \Theta_0 ) )^{1/2} + \br*{2 \textstyle\int_0^1 \abs{ f (x) } ^2 \d x}^{1/2} t^{1/2}.
\end{equation}
This establishes item \eqref{prop:apriori:item2}.
The proof of \cref{prop:gen:apriori} is thus complete.
\end{proof}
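A discretized analogue of the a priori bound \eqref{prop:gen:apriori:eq2} can also be observed numerically for non-constant targets. In the following sketch (an illustration only) the target $f(x) = \sin(2\pi x)$ is an arbitrary choice, the gradient flow is approximated by forward Euler steps, and all integrals by a midpoint rule; $V$ and $\mathcal{G}$ are implemented as in the statement of \cref{prop:gen:apriori}.
\begin{verbatim}
# A priori bound for a non-constant target: f(x) = sin(2*pi*x) is an
# arbitrary choice; the flow is approximated by forward Euler steps.
import numpy as np

H, M = 4, 2000
xs = (np.arange(M) + 0.5) / M                 # midpoint rule on [0, 1]
f = np.sin(2 * np.pi * xs)

def G(theta):                                 # gradient from the proposition
    w, b, v, c = theta[:H], theta[H:2*H], theta[2*H:3*H], theta[3*H]
    pre = np.outer(w, xs) + b[:, None]
    res = c + v @ np.maximum(pre, 0.0) - f
    act = (pre > 0.0).astype(float)
    return np.concatenate([2 * v * np.mean(xs * act * res, axis=1),
                           2 * v * np.mean(act * res, axis=1),
                           2 * np.mean(np.maximum(pre, 0.0) * res, axis=1),
                           [2 * np.mean(res)]])

theta = np.random.default_rng(5).normal(size=3 * H + 1)
v0 = theta @ theta + theta[3*H] ** 2          # V(phi) = |phi|^2 + |c_phi|^2
dt, int_f2 = 2e-3, np.mean(f ** 2)            # int_0^1 f(x)^2 dx ~ 1/2
worst = 0.0
for n in range(1, 5001):
    theta -= dt * G(theta)
    bound = v0 ** 0.5 + (2 * int_f2 * n * dt) ** 0.5
    worst = max(worst, np.linalg.norm(theta) / bound)
print(worst)                                  # < 1 in this run: item (ii) holds
\end{verbatim}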
\subsection*{Acknowledgments}
Benno Kuckuck is gratefully acknowledged for several helpful suggestions.
This work has been funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy EXC 2044-390685587, Mathematics Münster: Dynamics-Geometry-Structure.
\input{gd_const.bbl}
\end{document}
| {
"timestamp": "2021-02-22T02:17:21",
"yymm": "2102",
"arxiv_id": "2102.09924",
"language": "en",
"url": "https://arxiv.org/abs/2102.09924",
"abstract": "Gradient descent optimization algorithms are the standard ingredients that are used to train artificial neural networks (ANNs). Even though a huge number of numerical simulations indicate that gradient descent optimization methods do indeed convergence in the training of ANNs, until today there is no rigorous theoretical analysis which proves (or disproves) this conjecture. In particular, even in the case of the most basic variant of gradient descent optimization algorithms, the plain vanilla gradient descent method, it remains an open problem to prove or disprove the conjecture that gradient descent converges in the training of ANNs. In this article we solve this problem in the special situation where the target function under consideration is a constant function. More specifically, in the case of constant target functions we prove in the training of rectified fully-connected feedforward ANNs with one-hidden layer that the risk function of the gradient descent method does indeed converge to zero. Our mathematical analysis strongly exploits the property that the rectifier function is the activation function used in the considered ANNs. A key contribution of this work is to explicitly specify a Lyapunov function for the gradient flow system of the ANN parameters. This Lyapunov function is the central tool in our convergence proof of the gradient descent method.",
"subjects": "Numerical Analysis (math.NA); Machine Learning (cs.LG); Statistics Theory (math.ST)",
"title": "A proof of convergence for gradient descent in the training of artificial neural networks for constant target functions",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.987946217644559,
"lm_q2_score": 0.8006919949619792,
"lm_q1q2_score": 0.7910406279209636
} |
https://arxiv.org/abs/0712.2182 | Optimal codes for correcting a single (wrap-around) burst of errors | In 2007, Martinian and Trott presented codes for correcting a burst of erasures with a minimum decoding delay. Their construction employs [n,k] codes that can correct any burst of erasures (including wrap-around bursts) of length n-k. They raised the question whether such [n,k] codes exist for all integers k and n with 1<= k <= n and all fields (in particular, for the binary field). In this note, we answer this question affirmatively by giving two recursive constructions and a direct one. | \section{Introduction}
In \cite{MaTr}, Martinian and Trott present codes for correcting a
burst of erasures with a minimum decoding delay. Their
construction employs $[n,k]$ codes that can correct any burst of
erasures (including wrap-around bursts) of length $n-k$. Examples
of such codes are MDS codes and cyclic codes.
The question is raised in \cite{MaTr} whether such $[n,k]$ codes exist
for all integers $k$ and $n$ with $1\leq k\leq n$ and all fields
(in particular, over the binary field). In this note, we answer
this question affirmatively by giving two recursive constructions
and a direct one.
Throughout this note, all matrices and codes
are over the (fixed but arbitrary) finite field $\mathbb{F}$, and
we restrict ourselves to linear codes. \\ Obviously, a code of
length $n$ can correct a pattern $E$ of erasures if and only if
any codeword can be uniquely recovered from its values in the
$(n-|E|)$ positions outside $E$. As a consequence, if an $[n,k]$
code can correct a pattern $E$ of erasures, then $n-|E|\geq k$,
{\em i.e.}, $|E|\leq n-k$. We call an $[n,k]$ code {\em optimal}
if it can correct any burst of erasures (including wrap-around
bursts) of length $n-k$.\footnote{A more precise terminology would
be ``optimal for the correction of a single (wrap-around) burst of
erasures'', but we opted for just ``optimal'' for notational
convenience.}
Equivalently, an $[n,k]$ code is
optimal if knowledge of any $k$ (cyclically) consecutive symbols
from a codeword allows one to uniquely recover that codeword, or,
in coding parlance, if each of the $n$ sets of $k$ (cyclically)
consecutive codeword positions forms an information set.
We call a $k\times n$ matrix $G$ {\em good} if any $k$ cyclically
consecutive columns of $G$ are independent. It is easy to see that
a code is optimal if and only if it has a good generator matrix.
\\ Throughout this note, we
denote by $I_k$ the $k\times k$ identity matrix, and by $X^T$
the transpose of the matrix $X$. \\
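Goodness of a concrete matrix is easy to test mechanically, which is convenient for checking small examples. The following Python sketch is an illustration only; it tests goodness over the binary field, computing ranks over ${\rm GF}(2)$ by Gaussian elimination on column bitmasks. The same rank routine is reused in the later sketches.
\begin{verbatim}
# Check goodness of a 0/1 matrix over GF(2): every window of k
# cyclically consecutive columns must have full rank k.
import numpy as np

def gf2_rank(vectors):
    """Rank over GF(2) of integer bitmasks, via Gaussian elimination."""
    pivots = {}                       # highest set bit -> basis vector
    for v in vectors:
        while v:
            h = v.bit_length() - 1
            if h not in pivots:
                pivots[h] = v
                break
            v ^= pivots[h]
    return len(pivots)

def is_good(G):
    k, n = G.shape
    cols = [int("".join(str(x) for x in G[:, j]), 2) for j in range(n)]
    return all(gf2_rank([cols[(s + i) % n] for i in range(k)]) == k
               for s in range(n))

print(is_good(np.array([[1, 0, 1], [0, 1, 1]])))  # True: a good [3,2] code
print(is_good(np.array([[1, 1, 0], [0, 0, 1]])))  # False: two equal columns
\end{verbatim}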
\section{A recursive construction of optimal codes}
In this section, we give a recursive construction of good
matrices, and hence of optimal codes. We start with a simple
duality result.
\begin{lem}
Let $C$ be an $[n,k]$ code, and let $C^{\perp}$ be its
dual. If $I\subset\{1,\ldots ,n\}$ has size $k$ and is an
information set for $C$, then $I^{\ast}=\{1,\ldots ,n\}\setminus I$ is an
information set for $C^{\perp}$.
\end{lem}
\begin{proof}
By contradiction. Suppose that $I^{\ast}$
is not an information set for $C^{\perp}$. Then there is a non-zero word {\bf x} in $C^{\perp}$
that is zero in the positions indexed by $I^{\ast}$. As ${\bf x}$
is in $C^{\perp}$, for any word {\bf c}$\in$$C$ we have that
\[ 0 = \sum_{i=1}^n x_ic_i = \sum_{i\in I} x_i c_i . \]
As a consequence, the projections of the words of $C$ onto the positions in $I$
all lie in a proper subspace of $\mathbb{F}^k$, so not every $k$-tuple occurs
there, contradicting that $I$ is an information set. We conclude that $I^{\ast}$
is an information set for $C^{\perp}$. \end{proof} As a
consequence, we have the following.
\begin{cor}\label{dual}
A linear code is optimal if and only if its dual is optimal.
\end{cor}
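Over the binary field, Corollary~\ref{dual} can be illustrated by an exhaustive check on small parameters: for $G = (I_k\; P)$, the matrix $(-P^T\; I_{n-k})$ is a generator matrix of the dual code (this fact is used in the proof of Theorem~\ref{thm2} below), and goodness of a generator matrix depends only on the code it generates. The following sketch is an illustration only; the rank routine is the one from the sketch above.
\begin{verbatim}
# Corollary check over GF(2): (I_k P) is good iff the dual generator
# (P^T I_{n-k}) is good (over GF(2) we have -P^T = P^T).
import numpy as np
from itertools import product

def gf2_rank(vectors):
    pivots = {}
    for v in vectors:
        while v:
            h = v.bit_length() - 1
            if h not in pivots:
                pivots[h] = v
                break
            v ^= pivots[h]
    return len(pivots)

def is_good(G):
    k, n = G.shape
    cols = [int("".join(str(x) for x in G[:, j]), 2) for j in range(n)]
    return all(gf2_rank([cols[(s + i) % n] for i in range(k)]) == k
               for s in range(n))

ok = True
for bits in product([0, 1], repeat=4):        # all binary 2 x 2 blocks P
    P = np.array(bits).reshape(2, 2)
    G = np.hstack([np.eye(2, dtype=int), P])
    Hd = np.hstack([P.T, np.eye(2, dtype=int)])
    ok = ok and (is_good(G) == is_good(Hd))
print(ok)                                     # True
\end{verbatim}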
Our first theorem shows how to construct a good $k\times (k+n)$
matrix from a good $k\times n$ matrix.
\begin{teor}\label{thm1}
Let $G=(I_k\; P)$ be a good $k\times n$ matrix. Then
$G^{\prime}=(I_k\;I_k\;P)$ is a good $k\times (k+n)$ matrix.
\end{teor}
\begin{proof}
Any $k$ cyclically consecutive columns in $G^{\prime}$ either are
$k$ different unit vectors, or $k$ cyclically consecutive columns
of $G$.
\end{proof}
Our next theorem shows how to construct a good $n\times (2n-k)$
matrix from a good $k\times n$ matrix.
\begin{teor}\label{thm2}
Let $G=\left(
I_k \;P\right)$ be a good $k\times n$ matrix. Then the following
$n\times (2n-k)$ matrix $G^{\prime}$ is good:
\[ G^{\prime} = \left( \matrix{ I_{n-k} & 0 & I_{n-k} \cr
0 & I_k & P} \right) . \]
\end{teor}
\begin{proof} As $G$ is good, Corollary~\ref{dual} implies that the generator
matrix $(-P^T \; I_{n-k})$ of the dual of the code generated by
$G$ is good. By cyclically shifting the columns of this matrix
over
$(n-k)$ positions to the right, we obtain the good matrix $(I_{n-k}\;\;-P^T)$. \\
Theorem~\ref{thm1} implies that $(I_{n-k}\; I_{n-k}\;\; -P^T)$ is good, and
so the matrix $H=\left( I_{n-k}\;\;-P^T \;I_{n-k}\right)$ obtained
by cyclically shifting the columns of the former matrix over $n$
positions, is good. Clearly, after multiplying the columns of a
good matrix with non-zero field elements, we obtain a good matrix;
as a consequence, $H^{\prime}= \left(\matrix{-I_{n-k}\;\; -P^T \;
I_{n-k}}\right)$ is good. As $H^{\prime}$ is a good full-rank
parity check matrix of the code generated by $G^{\prime}$, this
latter matrix is good.
\end{proof}
{\bf Remark} The construction from Theorem~\ref{thm2} also occurs
in the proof of \cite[Thm.1]{MaTr}. \\ \\
The construction from Theorem~\ref{thm1} increases the code length
and fixes its dimension; the construction from Theorem~\ref{thm2}
also increases the code length, but fixes its redundancy. These
constructions can be combined to give a recursive construction of
optimal $[n,k]$ codes for all $k$ and $n$. The following definition
is instrumental in making this explicit.
\begin{defi}
For positive integers $r$ and $k$, we recursively define the
$k\times r$ matrix $P_{k,r}$ as follows:
\[ P_{k,r} = \left\{ \begin{array}{cl}
\left( \matrix{ I_r \cr P_{k-r,r}}\right) & \mbox{ if } 1\leq r <
k , \\
I_k & \mbox{ if } r=k, \\
\left( \matrix{ I_{k}\; P_{k,r-k}}\right) & \mbox{ if } r > k.
\end{array} \right. \]
\end{defi}
\begin{teor}\label{thm3}
For each positive integer $k$, the matrix $I_k$ is good. \\
For all integers $k$ and $n$ with 1$\leq k <n$, the $k\times n$
matrix
$\left( I_k \; P_{k,n-k}\right)$ is good.
\end{teor}
\begin{proof} The first statement is obvious. \\
The second statement will be proved by induction on $k+n$. It is
easily verified that it is true for $k+n=3$. Now assume that the
statement is true for all integers $a,b$ with $1\leq a\leq b$ and
$a+b < k+n$. We consider three cases.
\\
If $2k < n$, then by induction hypothesis $(I_k \; P_{k,n-2k})$ is
good. By Theorem~\ref{thm1}, $\left( I_k \; I_k \; P_{k,n-2k}
\right) = \left( I_k \; P_{k,n-k}\right)$ is also good. \\ If
$2k=n$, then $\left( I_k \; P_{k,n-k} \right) = \left( I_k \;
P_{k,k} \right) = \left( I_k\; I_k\right)$, which obviously is a
good matrix.
If $k<n$ and $2k>n$, the induction hypothesis implies that
$(I_{2k-n} P_{2k-n,n-k})$ is a good
$(2k-n)\times k$ matrix. By Theorem~\ref{thm2},
\[ \left( \matrix{
I_{n-k} & 0 & I_{n-k} \cr
0 & I_{2k-n} & P_{2k-n,n-k}} \right) =
\left( I_k P_{k,n-k}\right) \]
is also good.
\end{proof}
\begin{exam}
\begin{rm}
Theorem~$\ref{thm3}$ implies that $(I_{28} P_{28,17})$ is a good
$28\times 45$ matrix. \\
According to the definition, $P_{28,17}=\left( \matrix{ I_{17}\cr
P_{11,17}}\right)$. \\
Again according to the definition, $P_{11,17}=(I_{11}P_{11,6})$. \\
Continuing in this fashion, $P_{11,6}= \left(\matrix{I_6 \cr
P_{5,6}}\right)$. \\
Finally, $P_{5,6}=(I_5 P_{5,1})$, and, as can be readily seen by
induction on $k$, $P_{k,1}$ is the all-one vector of height $k$.
\\ Putting this all together, we find that the following $28\times 45$ matrix $G$ is good:
\[ G = \left( \begin{array}{ccc|cc|ccc}
I_{6} & 0 & 0 & 0 & 0 & I_{6} & 0 & 0 \\
0 & I_5 & 0 & 0 & 0 & 0 & I_5 & 0 \\
0 & 0 & I_6 & 0 & 0 & 0 & 0 & I_6 \\
\hline
0 & 0 & 0 & I_6 & 0 & I_6 & 0 & I_6 \\
0 & 0 & 0 & 0 & I_5 & 0 & I_5 & P_{5,6}
\end{array}
\right) , \]
where $P_{5,6} = \left( I_5 {\bf 1}\right)$, where ${\bf 1}$
denotes the all-one column vector.
\end{rm}
\end{exam}
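The recursive definition of $P_{k,r}$ translates directly into a short program. The sketch below (an illustration only) rebuilds $P_{28,17}$ and confirms by exhaustive rank computations that the $28\times 45$ matrix $G$ above is good over the binary field, in accordance with Theorem~\ref{thm3}; the ${\rm GF}(2)$ rank routine is the one from the sketch in the Introduction.
\begin{verbatim}
# Rebuild P_{k,r} from its recursive definition and confirm that the
# 28 x 45 matrix (I_28  P_{28,17}) of the example is good over GF(2).
import numpy as np

def P(k, r):
    if r < k:
        return np.vstack([np.eye(r, dtype=int), P(k - r, r)])
    if r == k:
        return np.eye(k, dtype=int)
    return np.hstack([np.eye(k, dtype=int), P(k, r - k)])

def gf2_rank(vectors):                        # as in the earlier sketch
    pivots = {}
    for v in vectors:
        while v:
            h = v.bit_length() - 1
            if h not in pivots:
                pivots[h] = v
                break
            v ^= pivots[h]
    return len(pivots)

def is_good(G):
    k, n = G.shape
    cols = [int("".join(str(x) for x in G[:, j]), 2) for j in range(n)]
    return all(gf2_rank([cols[(s + i) % n] for i in range(k)]) == k
               for s in range(n))

G = np.hstack([np.eye(28, dtype=int), P(28, 17)])
print(is_good(G))                             # True
\end{verbatim}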
To close this section, we remark that with an induction argument
it can be shown that for all positive integers $k$ and $r$, we
have $P_{k,r}=P_{r,k}^T$.
\section{Adding one column to a good matrix}
In Theorem~\ref{thm1}, we added $k$ columns to a good $k\times n$
matrix to obtain a good $k\times (k+n)$ matrix. In this section,
we will show that it is always possible to add a single column to
a good $k\times n$ matrix in such a way that the resulting
$k\times (n+1)$ matrix is good; we also show that in the
binary case, there is a {\em unique} column that can be added. The
desired result is a direct consequence of the following
observation, which may be of independent interest.
\begin{lem}\label{dual2}
Let $\mathbb{F}$ be any field, and let $a_1, a_2, \ldots,
a_{2k-2}$ be a sequence of vectors in $\mathbb{F}^k$ such that
$a_i, a_{i+1}, \ldots, a_{i+k-1}$ are independent over
$\mathbb{F}$ for $i=1, \ldots, k-1$. For $i=1, \ldots, k$, let
$b_i$ be a nonzero vector orthogonal to $a_i, a_{i+1}, \ldots,
a_{i+k-2}$. Then $b_1, \ldots, b_k$ are independent over
$\mathbb{F}$.
\end{lem}
\begin{proof} For $i=1, \ldots ,k$, we define
\[V_i:={\rm span}\{a_i, \ldots, a_{i+k-2}\}. \]
For an interval $[i+1, i+s]:=\{i+1, i+2, \ldots, i+s\}$, with
$0\leq i <i+s\leq k$, we let
\[ V_{[i+1,i+s]}=V_{i+1}\cap \cdots \cap V_{i+s}\]
denote the intersection of $V_{i+1}, \ldots, V_{i+s}$.
Note that by definition
\[V_{[i,i]}=V_i=b_i^\perp.\]
We claim that
\[V_{[i+1,i+s]}={\rm span}\{a_{i+s}, \ldots, a_{i+k-1}\}.\]
This is easily proven by induction on $s$: obviously, the claim is true for
$s=1$; if it holds for all $s'\leq s$, then
\begin{eqnarray*} V_{[i+1,i+s+1]}&=&V_{[i+1,i+s]}\cap V_{i+s+1}\\
&=&{\rm span} \{a_{i+s}, \ldots, a_{i+k-1}\} \cap {\rm
span}\{a_{i+s+1},
\ldots, a_{i+s +k-1}\},
\end{eqnarray*}
hence $V_{[i+1,i+s+1]}$ certainly contains $a_{i+s+1}, \ldots, a_{i+k-1}$;
conversely, any element of this intersection can be written as
$\lambda a_{i+s}+w$ with $w\in{\rm span}\{a_{i+s+1}, \ldots, a_{i+k-1}\}$,
so that $\lambda a_{i+s}\in{\rm span}\{a_{i+s+1}, \ldots, a_{i+s+k-1}\}$, and
hence $\lambda=0$, since by assumption $a_{i+s}\notin {\rm
span}\{a_{i+s+1},
\ldots, a_{i+s +k-1}\}$. This proves the claim for $s+1$.
So by our claim it follows that
\[ \{0\}= V_{[1,k]}=V_1\cap \cdots \cap V_k = b_1^\perp \cap \cdots \cap b_k^\perp, \]
hence $b_1, \ldots, b_k$ are independent.
\end{proof}
As an immediate consequence, we have the following.
\begin{teor}\label{addcolumn-alt}
Let $M$ be a good $k\times n$ matrix over ${\rm GF}(q)$. There are
precisely $(q-1)^k$ vectors $x\in{\rm GF}(q)^k$ such that the matrix
$(M x)$ is good.
\end{teor}
\begin{proof} Let $M=(m_0,m_1,\ldots ,m_{n-1})$ have columns $m_0,\ldots,
m_{n-1}\in {\rm GF}(q)^k$. We want to find all vectors $x\in{\rm GF}(q)^k$
with the property that the $k$ vectors \beql{vecs} m_{n-i},
\ldots, m_{n-1}, x, m_0,\ldots, m_{k-i-2}\end{equation} are independent,
for all $i=k-1, k-2, \ldots, 0$. So, for $i=k-1, k-2, \ldots, 0$,
let $b_i$ be a nonzero vector orthogonal to $m_{n-i}, \ldots,
m_{n-1},m_0,\ldots, m_{k-i-2}$; since $M$ is good, the $k-1$
vectors $m_{n-i}, \ldots, m_{n-1},m_0,\ldots, m_{k-i-2}$ are
independent, and hence the vectors in (\ref{vecs}) are independent
if and only if $(x,b_i)=\lambda_i\neq0$. Again since $M$ is good,
the $2k-2$ vectors
\[m_{n-k+1}, \ldots, m_{n-1}, m_0, \ldots, m_{k-2}\]
satisfy the conditions in Lemma~\ref{dual2}, hence the vectors
$b_0, \ldots, b_{k-1}$ are independent. So for each choice of
$\lambda=(\lambda_0, \ldots, \lambda_{k-1})$ with $\lambda_i\neq0$
for each $i$, there is a unique vector $x$ for which
$(x,b_i)=\lambda_i$, and these vectors $x$ are precisely the ones
for which $(M x)$ is good. \end{proof}
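For the binary field, Theorem~\ref{addcolumn-alt} predicts that a good matrix admits exactly $(q-1)^k = 1$ extension column. For small parameters this can be confirmed by exhaustive search, as in the following illustrative sketch (the rank routine is as before).
\begin{verbatim}
# Exhaustive confirmation over GF(2): a good k x n matrix admits
# exactly (q-1)^k = 1 extension column keeping it good.
import numpy as np
from itertools import product

def gf2_rank(vectors):
    pivots = {}
    for v in vectors:
        while v:
            h = v.bit_length() - 1
            if h not in pivots:
                pivots[h] = v
                break
            v ^= pivots[h]
    return len(pivots)

def is_good(G):
    k, n = G.shape
    cols = [int("".join(str(x) for x in G[:, j]), 2) for j in range(n)]
    return all(gf2_rank([cols[(s + i) % n] for i in range(k)]) == k
               for s in range(n))

M = np.array([[1, 0, 1],
              [0, 1, 1]])                     # a good 2 x 3 matrix
ext = [x for x in product([0, 1], repeat=2)
       if is_good(np.hstack([M, np.array(x)[:, None]]))]
print(ext)                                    # [(0, 1)]: exactly one column
\end{verbatim}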
\section{Explicit construction of good matrices}
By starting with the $k\times k$ identity matrix, and repeatedly
applying Theorem~\ref{addcolumn-alt}, we find that for each field
$\mathbb{F}$ and all positive integers $k$ and $n$ with $n\geq k$,
there exists a $k\times n$ matrix
$G$ such that \\
(1) the $k$ leftmost columns of $G$ form the $k\times k$
identity matrix, and \\
(2) for each $j$, $k\leq j\leq n$, the $j$ leftmost columns of
$G$ form a good $k\times j$ matrix. \\
Note that Theorem~\ref{addcolumn-alt} implies that for the binary
field, these matrices are unique. It turned out that they have a
simple recursive structure, which inspired our general
construction.
In this section, we give, for all positive integers $k$ and $n$
with $k\leq n$, an explicit construction of $k\times n$ matrices
over $\mathbb{Z}_p$, the field of integers modulo $p$, that
satisfy the above properties (1) and (2). Note that such matrices
also satisfy (1) and (2) for extension fields of $\mathbb{Z}_p$.
We start by describing the result for $p=2$. Let $M_1$ be the
matrix
\begin{equation} \label{M1}
M_1 = \left( \matrix{ 1 & 0 \cr 1 & 1} \right) ,
\end{equation} and for
$m\geq 1$, let $M_{m+1}$ be given as
\begin{equation} \label{binrecur}
M_{m+1} = \left( \matrix{M_m & 0 \cr M_m & M_m} \right) .
\end{equation}
Clearly, $M_m$ is a binary $2^m\times 2^m$ matrix. The relevance
of the matrix $M_m$ to our problem is explained in the following
theorem.
\begin{teor}\label{binresult}
Let $k$ and $r$ be two positive integers, and let $m$ be the
smallest integer such that $2^m\geq k$ and $2^m\geq r$. Let $Q$ be
the $k\times r$ matrix residing in the lower left corner of
$M_{m}$. Then for each integer $j$ for which $k\leq j\leq k+r$,
the $j$ leftmost columns of the matrix $(I_k \; Q)$ form a good
binary $k\times j$ matrix.
\end{teor}
Theorem~\ref{binresult} is a consequence of our results for the
general case in the remainder of this section. \\ \\
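Theorem~\ref{binresult} can also be verified by machine for small parameters. The following sketch (an illustration only) builds $M_m$ from the recursion (\ref{binrecur}) and checks the stated prefix property for all $k, r \leq 8$; for simplicity it always uses some $m\geq 1$, which is harmless as the lower left corners of the matrices $M_m$ are nested. The ${\rm GF}(2)$ rank routine is as before.
\begin{verbatim}
# Build M_m by the recursion M_{m+1} = [[M_m, 0], [M_m, M_m]] and test
# the prefix-goodness claim of the theorem for all small k and r.
import numpy as np

def gf2_rank(vectors):
    pivots = {}
    for v in vectors:
        while v:
            h = v.bit_length() - 1
            if h not in pivots:
                pivots[h] = v
                break
            v ^= pivots[h]
    return len(pivots)

def is_good(G):
    k, n = G.shape
    cols = [int("".join(str(x) for x in G[:, j]), 2) for j in range(n)]
    return all(gf2_rank([cols[(s + i) % n] for i in range(k)]) == k
               for s in range(n))

def M(m):
    A = np.array([[1, 0], [1, 1]])
    for _ in range(m - 1):
        A = np.block([[A, np.zeros_like(A)], [A, A]])
    return A

def theorem_holds(k, r):
    m = 1
    while 2 ** m < max(k, r):                 # m >= 1; corners are nested
        m += 1
    Q = M(m)[-k:, :r]                         # lower left k x r corner
    G = np.hstack([np.eye(k, dtype=int), Q])
    return all(is_good(G[:, :j]) for j in range(k, k + r + 1))

print(all(theorem_holds(k, r) for k in range(1, 9) for r in range(1, 9)))
\end{verbatim}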
We now define the matrices that are relevant for constructing good
matrices over $\mathbb{Z}_p$.
\begin{defi}\label{expldef} Let $p$ be a prime number, and let $k,r$ be positive integers.
Let $m$ be the smallest integer such that $p^m\geq r$ and $p^m\geq k$.
The $k\times r$ matrix $Q_{k,r}$ is defined as
\[ Q_{k,r}(i,j) = {p^m-k+i-1 \choose j-1} \;\;\mbox{ for }
1\leq i\leq k, 1\leq j\leq r . \]
\end{defi}
In Theorem~\ref{exppf} we will show that the matrix $(I_k\;
Q_{k,r})$ is good over $\mathbb{Z}_p$. But first, we derive a
recursive property of the $Q$-matrices. To this aim, we need some
well-known results on binomial coefficients modulo $p$.
\begin{lem}\label{binop}
Let $p$ be a prime number, and let $m$ be a positive integer. For
any integer $i$ with $1\leq i\leq p^m-1$, we have that ${p^m
\choose i}\equiv 0 \bmod p$.
\end{lem}
\begin{proof}
The following proof was pointed out to us by our colleague Ronald
Rietman. \\
Let $1\leq i\leq p^m-1$. We have that
\[ {p^m\choose i} = \frac{p^m{p^m-1\choose i-1}}{i}. \]
In the above representation of ${p^m\choose i}$, the numerator
contains at least $m$ factors $p$, while the denominator contains
at most $m-1$ factors $p$.
\end{proof}
\begin{lem}\label{Lucas}
Let $p$ be a prime number, and let $m$ be a positive integer.
Moreover, let $i,j,k,\ell$ be integers such that $0\leq i,k\leq
p-1$ and $0\leq j,\ell\leq p^m-1$. Then we have that
\[ {ip^m + j \choose kp^m+\ell}\equiv {i\choose k}{j\choose \ell}
\bmod p. \] \end{lem}
\begin{proof}
This is a direct consequence of Lucas' theorem (see for example
\cite[Thm.\ 13.3.3]{Blahut}). We give a short direct proof.
Clearly, ${ip^m+j \choose kp^m+\ell}$ is the coefficient of
$z^{kp^m+\ell}$ in $(1+z)^{ip^m+j}$. Now we note that
\[ (1+z)^{ip^m+j} = (1+z)^{ip^m}(1+z)^j = \left[
(1+z)^{p^m}\right]^i (1+z)^{j}. \] It follows from
Lemma~\ref{binop} that $(1+z)^{p^m}\equiv 1+z^{p^m} \bmod p$, and
so
\[ (1+z)^{ip^m+j} \equiv (1+z^{p^m})^i (1+z)^j \bmod p . \]
Hence, modulo $p$, the coefficient of $z^{kp^m+\ell}$ in
$(1+z)^{ip^m+j}$ equals ${i\choose k}{j\choose \ell}$.
\end{proof}
\begin{cor}\label{recur}
Let $p$ be a prime, and let $m$ be a positive integer. Let
$a,b,c,d$ be integers such that $0\leq a,c\leq p-1$ and $1\leq
b,d\leq p^m$. Then we have
\[ Q_{p^{m+1},p^{m+1}}(ap^m+b,cp^m+d) \equiv {a\choose c}
Q_{p^m,p^m}(b,d) \bmod p . \]
\end{cor}
\begin{proof}
According to the definition of $Q_{p^{m+1},p^{m+1}}$, we have that
\[ Q_{p^{m+1},p^{m+1}}(ap^m+b,cp^m+d) =
{ap^m+b-1 \choose cp^m+d-1}, \mbox{ and } Q_{p^m,p^m}(b,d) ={b-1 \choose d-1}. \]
The corollary is now obtained by application of Lemma~\ref{Lucas}.
\end{proof}
In words, Corollary~\ref{recur} states that $Q_{p^{m+1},p^{m+1}}$
can be considered as a $p\times p$ block matrix, for which each
block is a multiple of $Q_{p^m,p^m}$. For example, for $p=3$, we
obtain
\[ Q_{3^{m+1},3^{m+1}}=
\left( \matrix{ {0\choose 0} & {0\choose 1} & {0\choose 2} \cr
{1\choose 0} & {1\choose 1} & {1\choose 2} \cr
{2\choose 0}& {2\choose 1} & {2\choose
2}} \right)
\times Q_{3^m,3^m} =
\left( \matrix{ Q_{3^m,3^m} & 0 & 0 \cr
Q_{3^m,3^m} & Q_{3^m,3^m} & 0 \cr
Q_{3^m,3^m} & 2Q_{3^m,3^m} & Q_{3^m,3^m}}
\right) . \]
For $p=2$, we obtain the relation in (\ref{binrecur}).
Taking $a=p-1$ and $c=0$ in Corollary~\ref{recur}, we see that over
$\mathbb{Z}_p$, the $p^m\times p^m$ block in the lower left hand
corner of $Q_{p^{m+1},p^{m+1}}$ equals $Q_{p^m,p^m}$.
Definition~\ref{expldef} implies that $Q_{k,r}$ is the $k\times r$
matrix residing in the lower left hand corner of $Q_{p^m,p^m}$,
where $m$ is the smallest integer such that $p^m\geq k$ and
$p^m\geq r$. The above observations imply that whenever
$k^{\prime}\geq k$ and $r^{\prime}\geq r$, then over
$\mathbb{Z}_p$, the matrix $Q_{k,r}$ is the $k\times r$ submatrix
in the lower left hand corner of $Q_{k^{\prime},r^{\prime}}$. In
particular, $Q_{k,r+1}$ can be obtained by adding a column to
$Q_{k,r}$.
We now state and prove results on the invertibility in
$\mathbb{Z}_p$ of certain submatrices of $Q_{k,r}$, that will be
used to prove our main result in Theorem~\ref{exppf}.
\begin{lem}\label{intinv}
Let $n\geq 0$ and $b\geq 1$. The $b\times b$ matrix $V_b$ with
$V_b(i,j)={n+i-1\choose j-1}$ for $1\leq i,j\leq b$ has an integer
inverse.
\end{lem}
\begin{proof} By induction on $b$.
For $b=1$, this is obvious. \\
Next, let $b\geq 2$. Let $S$ be the $b\times b$ matrix with
\[ S(i,j) = \left\{ \begin{array} {ll}
1 & \mbox{ if } i=j, \\
-1 & \mbox{ if } i\geq 2 \mbox{ and } i=j+1, \\
0 & \mbox{ otherwise.} \end{array} \right. \]
The matrix $S$ has an integer inverse: it is easy to check that
$S^{-1}(i,j)=1$ if $i\geq j$, and 0 otherwise. We have that
\[ (SV_b)(1,j)=V_b(1,j)={n \choose j-1} , \mbox{ and } \]
\[(SV_b)(i,j) = V_b(i,j)-V_b(i-1,j)={n+i-1\choose j-1} - {n+i-2\choose j-1} =
{n+i-2 \choose j-2} \mbox{ for } 2\leq i \leq b. \]
In other words, $SV_b$ is of the form
\[ SV_b = \left( \matrix{ 1 & A \cr 0 & V_{b-1}}\right) . \]
By induction hypothesis, $V_{b-1}$ has an integer inverse, and so
$SV_b$ has an integer inverse (namely the matrix $\left( \matrix{
1 & -A V_{b-1}^{-1} \cr 0 & V_{b-1}^{-1}}\right)$). As $S$ has an
integer inverse, we conclude that $V_b$ has an integer inverse.
\end{proof}
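The proof in fact shows that $\det V_b = \det V_{b-1} = \cdots =
\det V_1 = 1$, since $\det S = 1$ and $SV_b$ is block triangular. This
can be confirmed with a computer algebra system; a small sketch of ours
using SymPy (our choice of tool, not the paper's):
\begin{verbatim}
# Check that V_b has determinant 1, hence an integer inverse.
from sympy import Matrix, binomial

n, b = 5, 4
# 0-indexed: V[i, j] = C(n + i, j), i.e., V_b(i, j) = C(n + i - 1, j - 1).
V = Matrix(b, b, lambda i, j: binomial(n + i, j))
assert V.det() == 1
print(V.inv())   # all entries are integers
\end{verbatim}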
\begin{lem}\label{modpinv2}
Let $p$ be a prime number, let $m$ be a positive integer, and let
$a\geq 0$ and $b\geq 1$ be integers such that $a+b\leq p^m$. The
$b\times b$ matrix $W_b$ with $W_b(i,j) = {p^m-1+i-b \choose a+j-1}$
for $1\leq i,j\leq b$ is invertible over $\mathbb{Z}_p$.
\end{lem}
\begin{proof}
Similarly to the proof of Lemma~\ref{intinv}, we apply
induction on $b$. \\
For $b=1$, we have the $1\times 1$ matrix with entry ${p^m-1 \choose
a}$. By induction on $i$, using that ${p^m-1 \choose i}
={p^m\choose i}-{p^m-1 \choose i-1}$ and employing
Lemma~\ref{binop}, we readily find that ${p^m-1 \choose i} \equiv
(-1)^i \bmod p \mbox{ for } 0\leq i\leq p^m-1$. As a consequence,
the lemma is true for
$b=1$. \\
Now let $b\geq 2$. We define the $b\times b$ matrix $T$ by
\[ T(i,j) = \left\{ \begin{array}{ll} 1 & \mbox{ if } i=j, \\
1 & \mbox{ if } j\geq 2 \mbox{ and } i=j-1, \\
0 & \mbox{ otherwise.}
\end{array} \right. \]
It is easy to check $T$ has an integer inverse, and that
$T^{-1}(i,j)=(-1)^{i-j}$ if $i\leq j$ and 0 otherwise. In order to
show that $W_b$ is invertible in $\mathbb{Z}_p$, it is thus
sufficient to show that $W_bT$ is invertible in $\mathbb{Z}_p$. By
direct computation, we have that $(W_bT)(i,1)=W_b(i,1)$, and
\[ (W_bT)(i,j) = W_b(i,j)+W_b(i,j-1) =
{p^m-1+i-b \choose a+j-1} + {p^m-1+i-b \choose a+j-2} =
{p^m+i-b \choose a+j-1} \mbox{ for } 2\leq j\leq b . \]
In particular, $(W_bT)(b,1)={p^m-1 \choose a}\equiv (-1)^a \bmod
p$, and for $2\leq j\leq b$, we have that $(W_bT)(b,j)={p^m
\choose a+j-1}\equiv 0\bmod p$. We thus have that
\[ W_bT \equiv \left(\matrix{A & W_{b-1} \cr (-1)^a & 0} \right) \bmod p . \]
As $W_{b-1}$ is invertible over $\mathbb{Z}_p$, the matrix $W_bT$
(and hence the matrix $W_b$) is invertible over $\mathbb{Z}_p$.
\end{proof}
{\bf Remark} The matrix in Lemma~\ref{modpinv2} need not have an
integer inverse. For example, take $p=2, m=2, a=1$ and $b=2$. The
matrix $W_2$ equals
\[ \left( \matrix{{2\choose 1} & {2\choose 2} \cr {3\choose 1} &
{3\choose 2}}\right) = \left( \matrix{ 2 & 1 \cr 3 & 3}\right),
\] and so $W_2^{-1}= \left( \matrix{1 & -\frac{1}{3} \cr -1 &
\frac{2}{3}}\right)$. Note that modulo 2, $W_2$ equals $\left(
\matrix{0 & 1 \cr 1 & 1}\right)$, confirming that $W_2$ does have
an inverse in the integers modulo $p=2$.
\vspace{0.5cm}
We are now in a position to prove the main result of this section.
\begin{teor}\label{exppf}
Let $k$ and $r$ be positive integers. For $j=k,k+1,\ldots,k+r$,
the matrix consisting of the $j$ leftmost columns of the matrix
$(I_k \;Q_{k,r})$ is good over $\mathbb{Z}_p$.
\end{teor}
\begin{proof}
We denote the matrix $(I_k \; Q_{k,r})$ by $G$, and the $i$-th
column of $G$ by {\bf g}$_i$. Let $k\leq j\leq k+r$. To show that
the matrix consisting of the columns 1,2,\ldots, $j$ of $G$ is
good, we show that for $1\leq i\leq j$, the vectors ${\bf
g}_i,{\bf g}_{i+1},\ldots ,{\bf g}_{i+k-1}$ are independent over
$\mathbb{Z}_p$, where the indices are counted modulo $j$. This is
obvious if $j=k$ and if $i=1$, so we assume that $j\geq k+1$ and
$i\geq 2$. We distinguish between two cases.
\\
(1) $2\leq i\leq k$. \\
The vectors to consider are ${\bf e}_i,\ldots ,{\bf e}_k, {\bf
g}_{k+1},\ldots ,{\bf g}_{i+k-1}$ (if $i+k-1\leq j$), or ${\bf
e}_i,\ldots ,{\bf e}_{k},\\ {\bf g}_{k+1},\ldots ,{\bf g}_{j},{\bf
e}_1,\ldots ,{\bf e}_{k-j+i-1}$ (if $i+k-1\geq j+1$). We define
$b:=\min(i-1,j-k)$. The vectors under consideration are
independent if the $b\times b$ matrix consisting of the $b$
leftmost columns of $Q_{k,r}$, restricted to rows
$i-b,i-b+1,\ldots ,i-1$, is invertible in $\mathbb{Z}_p$.
This follows from Lemma~\ref{intinv}. \\
(2) $i\geq k+1$. \\
The vectors to consider are ${\bf g}_{i},\ldots ,{\bf g}_{i+k-1}$
(if $i+k-1\leq j$), or ${\bf g}_{i},\ldots ,{\bf g}_{j},{\bf
e}_1,\ldots ,{\bf e}_{k-j+i-1}$ (if $i+k-1\geq j+1$). We define
$b:=\min(k,j-i+1)$. The vectors under consideration are
independent if the $b\times b$ matrix consisting of the $b$
bottom entries of the columns $i-k,i-k+1,\ldots,i-k+b-1$ of
$Q_{k,r}$ is invertible in $\mathbb{Z}_p$. This follows from
Lemma~\ref{modpinv2}.
\end{proof}
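The property established in the proof (every $k$ cyclically
consecutive columns among the first $j$ columns are independent over
$\mathbb{Z}_p$) can be checked by brute force for small parameters.
The following Python sketch of ours, with a naive rank computation
modulo $p$, is meant only as an illustration.
\begin{verbatim}
# Verify, for small p, k, r, that every k cyclically consecutive columns
# of the first j columns of (I_k Q_{k,r}) are independent over Z_p.
from math import comb

def rank_mod_p(M, p):          # naive Gaussian elimination modulo p
    M = [row[:] for row in M]
    rank = 0
    for c in range(len(M[0])):
        piv = next((r for r in range(rank, len(M)) if M[r][c] % p), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][c] % p, -1, p)
        M[rank] = [x * inv % p for x in M[rank]]
        for r in range(len(M)):
            if r != rank and M[r][c] % p:
                f = M[r][c]
                M[r] = [(a - f * b) % p for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

p, k, r = 2, 3, 4
m = 0
while p**m < max(k, r):
    m += 1
G = [[int(i == t) for t in range(k)] +                    # I_k
     [comb(p**m - k + i, j) % p for j in range(r)]        # Q_{k,r}
     for i in range(k)]
for j in range(k, k + r + 1):
    for i in range(j):
        cols = [(i + t) % j for t in range(k)]
        sub = [[G[row][c] for c in cols] for row in range(k)]
        assert rank_mod_p(sub, p) == k
print("every window of k cyclically consecutive columns is independent")
\end{verbatim}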
| {
"timestamp": "2007-12-13T17:33:59",
"yymm": "0712",
"arxiv_id": "0712.2182",
"language": "en",
"url": "https://arxiv.org/abs/0712.2182",
"abstract": "In 2007, Martinian and Trott presented codes for correcting a burst of erasures with a minimum decoding delay. Their construction employs [n,k] codes that can correct any burst of erasures (including wrap-around bursts) of length n-k. The raised the question if such [n,k] codes exist for all integers k and n with 1<= k <= n and all fields (in particular, for the binary field). In this note, we answer this question affirmatively by giving two recursive constructions and a direct one.",
"subjects": "Information Theory (cs.IT)",
"title": "Optimal codes for correcting a single (wrap-around) burst of errors",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9910145723089617,
"lm_q2_score": 0.7981867849406659,
"lm_q1q2_score": 0.7910147353006391
} |
https://arxiv.org/abs/1309.0920 | Topology of geometric joins | We consider the geometric join of a family of subsets of the Euclidean space. This is a construction frequently used in the (colorful) Carathéodory and Tverberg theorems, and their relatives. We conjecture that when the family has at least $d+1$ sets, where $d$ is the dimension of the space, then the geometric join is contractible. We are able to prove this when $d$ equals $2$ and $3$, while for larger $d$ we show that the geometric join is contractible provided the number of sets is quadratic in $d$. We also consider a matroid generalization of geometric joins and provide similar bounds in this case. | \section{Introduction}
The purpose of this paper is to introduce the notion of a geometric join, and to study its topological connectedness. The geometric join is a natural object which appears in the proof of the colorful Carath\'eodory theorem~\cite{bar1982} and Tverberg's theorem~\cite{tver1966}; see chapter 8 in \cite{mat2002} for a detailed explanation. Recently, it was also shown in \cite{hol2014} that the colorful version of Hadwiger's transversal theorem \cite{aro2009} is closely related to the connectedness of the geometric join.
\begin{definition}
Let $X_1, \dots, X_m$ be subsets of the Euclidean space $\R^d$. The \df{geometric join} of $X_1$, $\dots$, $X_m$ is the set of all convex combinations $t_1x_1 + \cdots + t_mx_m \in \R^d$ where $x_i \in X_i$, $t_i\ge 0$, and $\sum_{i=1}^{m} t_i = 1$. The geometric join of the subsets $X_1, \dots, X_m$ of $\R^d$ is denoted by $\xm$.
\end{definition}
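For finite color classes, membership of a point in $\xm$ can be tested by brute force: enumerate all colorful tuples and solve a small linear feasibility problem for each. The Python sketch below is our own illustration (it assumes NumPy and SciPy are available) and is exponential in $m$, so it is suitable only for tiny instances.
\begin{verbatim}
# Test whether q lies in the geometric join of the given color classes
# by solving one feasibility LP per colorful tuple (illustration only).
import itertools
import numpy as np
from scipy.optimize import linprog

def in_geometric_join(q, classes):
    for pts in itertools.product(*classes):   # one point per color class
        m = len(pts)
        # Feasibility: t_i >= 0, sum t_i = 1, sum t_i x_i = q.
        A_eq = np.vstack([np.array(pts, float).T, np.ones(m)])
        b_eq = np.append(np.asarray(q, float), 1.0)
        res = linprog(np.zeros(m), A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * m)
        if res.success:
            return True
    return False

X1 = [(0.0, 0.0), (2.0, 0.0)]
X2 = [(1.0, 1.0), (1.0, -1.0)]
print(in_geometric_join((0.5, 0.5), [X1, X2]))  # True: on segment (0,0)-(1,1)
print(in_geometric_join((1.0, 0.0), [X1, X2]))  # False: on no colorful segment
\end{verbatim}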
\begin{remark}
In this paper we will consider the case when the subsets $X_1, \dots ,X_m$ are finite, but our results can easily be extended to the case when the $X_i$ are arbitrary compact subsets of $\R^d$.
\end{remark}
The subsets $X_1$, $\dots$, $X_m$ are often referred to as \df{color classes}, and a subset $Y\subset X_1$ $\cup$ $\cdots$ $\cup X_m$ is called \df{colorful} if $|Y\cap X_i|\leq 1$ for every $i$. The convex hull of a colorful subset is called a \df{colorful simplex}. In other words, the geometric join $\xm$ is the union of all colorful simplices spanned by $X_1\cup \cdots \cup X_m$.
\medskip
Let us start by pointing out some simple examples. Consider a point set $X \subset \R^d$. Carath{\'e}odory's theorem \cite{Car1907} states that for any point $p$ in the convex hull of $X$, i.e., $p\in \conv X$, there exists a subset $Y\subset X$ such that $|Y|\leq d+1$ and $p\in \conv Y$. This means that if we color each point in $X$ by $d+1$ distinct colors, then every subset $Y\subset X$ with $|Y|\leq d+1$ spans a colorful simplex, so in this case the geometric join is the same as $\conv X$. Using our notation, this means that if $X = X_1 = \cdots = X_{d+1}$, then $X_{[d+1]} = \conv X$. A well-known generalization of Carath{\'e}odory's theorem is the colorful Carath{\'e}odory theorem due to B{\'a}r{\'a}ny \cite{bar1982} which states that if $X_1$, $\dots$, $X_{d+1}$ are subsets of $\R^d$ and a point $p$ is contained in $\conv X_i$ for all $1\leq i \leq d+1$, then the point $p$ is contained in a colorful simplex spanned by $X_1$, $\dots$, $X_{d+1}$. Equivalently, this can be stated as: $\bigcap_{i=1}^{d+1} \conv X_i \subset \xm$. In fact the stronger statement $\bigcap_{i\neq j} \conv (X_i\cup X_j) \subset \xm$ also holds \cite{aro2009,hol2008}.
\medskip
Here we consider the following problem.
\begin{probs} \label{join-conn-prob}
Give sufficient conditions in terms of $m$ and $d$ for the contractibility or $k$-connectedness of $\xm$.
\end{probs}
We may compare the geometric join with the \emph{abstract join} (see~\cite{mat2003}), which can be regarded as a geometric join after putting the $X_i$'s into $\mathbb R^D$, with sufficiently large $D$, so that the affine hulls of the $X_i$'s are in general position. It is known that an abstract join of $m$ finite sets, each of cardinality greater than one, is homotopy equivalent to a wedge of $(m-1)$-dimensional spheres. The geometric join can be regarded as a piecewise linear image of the abstract join in the ambient space $\mathbb R^d$, where the $X_i$'s reside, and its homotopy type may be different from that of the abstract join.
In the subsequent sections we show that the geometric join has certain connectivity for sufficiently large $m$, depending on $d$. These are partial results towards establishing the following.
\begin{conjecture} \label{conj-main}
The geometric join $\xm$ is contractible whenever $m\geq d+1$.
\end{conjecture}
This conjecture is open even when $m=d+1$ and each $X_i$ consists of two elements. If any of the $X_i$ is a singleton, then $X_{[m]}$ is trivially contractible. Actually, the authors do not agree on whether the conjecture should be true or false, and perhaps it is better to look for a counterexample.
\medskip
In section~\ref{star-sect} we show that $\xm$ is starshaped whenever $m>d(d+1)$. This is a simple consequence of Tverberg's theorem \cite{tver1966}, and of course implies contractibility of $\xm$.
Section~\ref{morse-dist-sec} introduces a technique for studying the homotopy type of compact subsets of $\R^d$ via an analogue of the Morse theory to the distance function. As a consequence we can show that $\xm$ is $(k-2)$-connected whenever $m>\frac{dk}{2}$. This is done in section \ref{join-sec}. Note that this implies that $\xm$ is contractible whenever $m>\frac{d(d+1)}{2}$, which is a slight improvement on the approach of section \ref{star-sect}.
In section~\ref{simple} we apply the nerve theorem to show that the geometric join in $\mathbb{R}^d$ is simply connected whenever $m>\frac{d+2}{2}$, and it is easily seen that this bound is best possible. This implies that our Conjecture holds when $d=2$. It should be noted that the case $d=2$ was previously verified in \cite{bois1991,tot2010}.
Section~\ref{dim2-3-sec} gives a proof of our Conjecture for $d = 3$. Our proof uses some basic observations about geometric joins in $\R^2$ together with the ``strong'' colorful Carath\'{e}odory theorem \cite{aro2009,hol2008}.
In section \ref{matroid} we generalize the notion of geometric joins by replacing the color classes by an arbitrary matroid. This gives rise to a generalization of our main Problem, and it turns out that many of our methods also work in this more general setting.
\section{Starshapedness of the geometric join} \label{star-sect}
We start with an observation that the geometric join is {\em starshaped} for sufficiently large $m$, which obviously implies contractibility.
\begin{theorem} \label{join-starshaped}
If $m > d(d+1)$, then $\xm$ is starshaped.
\end{theorem}
\begin{proof}
For each $1\leq i \leq m$, choose one element $x_i\in X_i$, and let $T = \{x_1\dots,x_m\}$. By Tverberg's theorem~\cite{tver1966} there is a partition $T = T_1 \cup\cdots\cup T_{d+1}$ and a point $t\in \R^d$ such that $t \in \conv T_j$ for each $j$. We will show that every point $x \in \xm$ can be ``seen'' from $t$.
It suffices to consider the case when $x$ belongs to the boundary of $\xm$, so by Carath\'eodory's theorem we may assume $x$ belongs to a colorful simplex of dimension at most $d-1$. Thus $x$ is contained in the convex hull of a colorful subset $Y$ with $|Y| \leq d$. By the pigeon-hole principle there exists some $T_j$ such that $T_j\cup Y$ is a colorful subset. Therefore the closed segment $[t,x]$ is contained in $\conv(T_j\cup Y)$ which is contained in $\xm$.
\end{proof}
\begin{remark}
In the previous argument the Tverberg point can be replaced by any point in the $d$-core of $X = \bigcup X_i$, which is the intersection of the convex hulls of all sets $X\setminus Y$ where $|Y| = d$. Here in fact, it suffices to take all sets $X\setminus Y$ where $Y$ is a colorful subset with $|Y|=d$.
\end{remark}
\begin{remark}
Actually, Krasnoselskii's theorem (that a compact set $C$ is starshaped iff every $d+1$ of its points can be seen from some common point of $C$, see~\cite{dgk1963}) implies that $\xm$ is starshaped when $m \geq (d+1)^2+1$.
\end{remark}
\begin{remark}
It should be noted that in $\R^2$ the geometric join $\xm$ is starshaped for $m\geq 3$ \cite{bois1991,tot2010}, but in $\R^3$ the geometric join $X_{[4]}$ is not necessarily starshaped as was shown in \cite{tot2010}.
\end{remark}
\section{Topology of subsets of $\mathbb R^d$ through the
distance function} \label{morse-dist-sec}
Suppose we have a compact set $S\subset \mathbb R^d$ and we want to study its homotopy type. One possible way to do this (see also~\cite{eh2010}, where such methods are widely discussed) is to apply an analogue of the Morse theory to the distance function
\[\rho_S : \mathbb R^d \to \mathbb R,\]
which is $\rho_S(x) = \dist(x, S)$. The sets $$ S(t) = \{x\in \mathbb R^d : \rho_S(x) \le t\} $$ in this case are just $t$-neighborhoods of $S$. For $t=0$ the set $S(0)$ is equal to $S$ and $S(t) \subset S(t')$ whenever $t\leq t'$.
If the function $\rho_S$ (which we simply denote by $\rho$) were a smooth function (which can only happen in the trivial case of convex $S$) we could study the problem using the ordinary Morse theory, however, in general the differential $d\rho$ is not always defined. For a given $x_0\not\in S$ let $P(x_0)$ denote the set of points in $S$ which are closest to $x_0$. If $P(x_0)$ consists of a single point then (for reasonable sets $S$) the differential $d\rho$ is the unit normal in the direction of $x_0-P(x_0)$. Otherwise the function $\rho$ has no differential at $x_0$.
The next informal observation is the following: If $\conv P(x_0)$ does not contain $x_0$ then varying $t$ in a neighborhood of $t_0 = \rho(x_0)$ does not influence the topology of $S(t)$ near $x_0$. This is because $S(t_0)$ in the first order approximation is the complement of the convex cone
\[\{ x\in \mathbb R^d : \langle x - x_0, x_0 - y\rangle \ge 0 \mbox{ for all } y\in P(x_0)\},\]
which has nonempty interior; and $S(t)$ in a neighborhood of $x_0$ looks similarly when $t$ is close to $t_0$.
Of course, the above argument is informal and we want to give a
rigorous proof in the following useful case.
\begin{theorem} \label{morse-dist-contr}
Let $S$ be the union of a finite number of compact convex sets in $\R^d$. If for any $x_0\not\in S$ we have $x_0\not\in \conv P(x_0)$, then $S$ is contractible.
\end{theorem}
\begin{remark}
In~\cite{pan2001}, under the same condition $x_0\not\in \conv P(x_0)$, it was proved that $\mathbb R^d\setminus X$ is contractible for sets $X$ of another kind. The methods of~\cite{pan2001} also seem to imply Theorem~\ref{morse-dist-contr} but we provide a different proof here, which is short and self-contained.
\end{remark}
\begin{proof}
Let $S = \bigcup_{i=1}^m C_i$ where $F= \{C_1, \dots, C_m\}$ is a family of compact convex sets in $\R^d$. By the nerve theorem (see~\cite{bor1948} or~\cite[Corollary~4G.3]{hat2002}) the homotopy type of $S$ is determined by the nerve of $F$, which we denote by $N$. This is the abstract simplicial complex with vertex set $[m]$ where $\sigma\subset [m]$ is a simplex of $N$ if and only if $\bigcap_{i\in\sigma}C_{i} \neq \emptyset$. To be more precise, the nerve theorem implies that $S$ is homotopy equivalent to the geometric realization of $N$. For $t\geq 0$, let $N(t)$ denote the nerve of the family $F(t) = \{C_1(t), \dots, C_m(t)\}$. The idea is to show that the homotopy type of the nerve $N(t)$ does not change as the parameter $t$ increases. This will prove the claim of the theorem, since $N(t)$ is an $(m-1)$-dimensional simplex for all sufficiently large $t$. Alternatively, by thinking of this process in reverse, we show that $N$ can be obtained from the $(m-1)$-dimensional simplex by a sequence of simplicial collapses.
We will prove the theorem for the case when the members of $F$ are smooth, strictly convex, and in some suitable ``general position'', which will be explained below. Any other configuration can be reduced to this case by approximating every body in the family by a smooth and strictly convex body, maintaining the other general position assumptions, such that the nerve of the approximating family remains the same. Since the nerve lemma applies for every such approximating family and the nerve remains the same, we conclude that the contractibility of the union of the approximating family implies the contractibility of the union of the original family.
Now consider the situation when the nerve $N(t)$ changes. This means there is some subfamily $G(t) \subset F(t)$ which is intersecting for all $t \geq t_0>0$, but not intersecting for any $t<t_0$. Here we impose the additional ``general position'' assumption, namely, that this is the only change in $N(t)$ which happens for all $t$ sufficiently close to $t_0$. Since the members of $F$ are smooth and strictly convex it follows that the members of $G(t_0)$ intersect in a unique point $x_0 \in \R^d$. Clearly we have $\rho(x_0) \leq t_0$, and we claim that the inequality must be strict. Define the sets \[I = \{i \in [m] \: : \: \dist(x_0, C_i) < t_0\} \: \mbox{ and } \: J = \{j \in [m] \: : \: \dist(x_0, C_j) = t_0\}.\] If $\rho(x_0) = t_0$, then $I=\emptyset$ and the set $P(x_0)$ contains a unique point $y_j\in C_j$ for every $j\in J$. Since the sets $C_j(t_0)$ have a single point, $x_0$, in common, the vectors $y_j-x_0$ contain the origin in their convex hull and therefore $x_0$ is contained in $\conv P(x_0)$. This contradicts the hypothesis. Therefore we may assume that $\rho(x_0) < t_0$, which implies $I\neq\emptyset$, and therefore $I \cup J$ is a partition of $[m]$.
The point $x_0$ is contained in the interior of the set $C_i(t_0)$ for every $i\in I$, while it is on the boundary of the set $C_j(t_0)$ for every $j\in J$. Let $\hat{C}_j(t_0)$ be the set obtained from $C_j(t_0)$ by cutting off, by a hyperplane, a small cap centered at $x_0$, for every $j\in J$. Since the bodies $C_j(t_0)$ are strictly convex, these small caps can be chosen arbitrarily close to $x_0$, and since we are cutting off by hyperplanes, the bodies $\hat{C}_j(t_0)$ remain convex so the nerve theorem still applies. The resulting family $\{\hat{C}_j(t_0)\}_{j\in J}$ will have empty intersection, and if the removed caps are chosen sufficiently small, we will have \[\bigcup_{i\in [m]} C_i(t_0) = \left( \bigcup_{i\in I} C_i(t_0) \right) \: \cup \: \bigcup_{j\in J}\hat{C}_j(t_0).\] Let $\hat{N}(t_0)$ denote the nerve of the family $\{C_i(t_0)\}_{i \in I} \cup \{\hat{C}_j(t_0)\}_{j\in J}$. By the nerve theorem, $N(t_0)$ and $\hat{N}(t_0)$ are homotopy equivalent, and clearly $\hat{N}(t_0) = N(t)$ for all $t < t_0$ which are sufficiently close to $t_0$.
\end{proof}
For a weaker conclusion than contractibility, we have the following.
\begin{theorem} \label{morse-dist-conn}
Let $S$ be the union of a finite number of compact convex sets in $\R^d$. If for any $x_0\not\in S$ we have $x_0\notin \conv Y$ where $Y\subset P(x_0)$ with $|Y|\le k$, then $S$ is $(k-2)$-connected.
\end{theorem}
\begin{proof}
The proof of Theorem \ref{morse-dist-contr} goes through as before, using one additional observation: Each time the nerve $N(t)$ changes, as $t$ increases, either there is no change in the homotopy type, or the new simplex being added has at least $k+1$ vertices, so in both cases $\pi_i(N(t))$ is preserved for $i\le k-2$.
\end{proof}
\section{Contractibility of the geometric join} \label{join-sec}
We now apply the methods from the previous section to our main Problem concerning the connectivity of $\xm$. Our first result is the following.
\begin{theorem} \label{contr-dsq}
If $m > \frac{d(d+1)}{2}$, then $\xm$ is contractible.
\end{theorem}
\begin{proof}
The results of Section~\ref{morse-dist-sec} can be applied to $\xm$ since it is the union of the colorful simplices spanned by $X_1\cup \cdots \cup X_m$. We assume that $\xm$ is not contractible and obtain an upper bound on $m$, that is, the number of distinct colors. From Theorem~\ref{morse-dist-contr}, there is a point $x_0\not\in \xm$ that is contained in the convex hull of the set of its closest points $P(x_0)$. By Carath\'eodory's theorem there is a set $\{y_1,\ldots,y_k\} \subset P(x_0)$ with $k \le d+1$ such that $x_0 \in \conv \{y_1,\ldots,y_k\}$ and each $y_i \in \conv S_i$ where $S_i$ is some colorful subset with $|S_i|\leq d$. Therefore $|S_1\cup \cdots \cup S_{k}| \leq d(d+1)$. Now we will show that each color appears in $S_1\cup \cdots \cup S_k$ at least twice. Assuming the contrary, we have to consider two cases:
\begin{itemize}
\item {\em Some color $j$ is not used.} Define open halfspaces
\[H_i^+ = \{x\in \mathbb R^d : \langle x - y_i, x_0- y_i\rangle > 0\}.\]
These open halfspaces cover $\mathbb R^d$ because $x_0\in \conv\{y_1, \ldots, y_k\}$ and every $H_i^+$ is disjoint from its respective $S_i$. A point $p$ of color $j$, which exists by our assumption, is contained in some $H_i^+$, and therefore the segment $[y_i, p]$ is closer to $x_0$ than $y_i$. This is a contradiction since $S_i \cup \{p\}$ is a colorful subset whose convex hull contains the segment $[y_i, p]$.
\item {\em Some color $j$ is used only once}. Let $p$ denote the unique point in $S_1\cup \cdots \cup S_k$ of color $j$. Again, $p \in H_i^+$ for some $i$, and the set $S_i$ does not contain the color $j$ since $p$ was the only point of this color. Therefore $S_i\cup \{p\}$ is a colorful subset whose convex hull is closer to $x_0$ than the $y_i$'s, and again we obtain a contradiction.
\end{itemize}
Therefore there are at most $\frac{d(d+1)}{2}$ distinct colors. \end{proof}
Similarly we prove:
\begin{theorem} \label{conn-dsq}
If $m > \frac{dk}{2}$, then $\xm$ is $(k-2)$-connected. \end{theorem}
\begin{proof} The previous proof goes through as before, but we need only consider subsets $\{y_1,\ldots, y_\ell\}\subseteq P(x_0)$ consisting of at most $k$ points in view of Theorem~\ref{morse-dist-conn}. The rest of the proof is the same.
\end{proof}
\begin{remark}
If the points in $X_1\cup\dots\cup X_m$ are in appropriate general position, then the sum of codimensions of $S_i$ must be at least $d+1$, otherwise perturbations will destroy the inclusion $x_0\in\conv \{y_1,\ldots, y_\ell\}$. It follows that the number of vertices of the $S_i$ will be at most $(k-1)(d+1)$ and the inequality in Theorem~\ref{conn-dsq} can be relaxed to $m > \frac{(k-1)(d+1)}{2}$ in this case.
\end{remark}
\section{An improved bound for simple connectedness}
\label{simple}
We have a feeling that our application of Morse theory for the distance function is not the optimal approach for attacking our main Problem. To illustrate this we give an improved sufficient condition for simple connectedness.
\begin{theorem} \label{simp-conn}
If $m>\frac{d+2}{2}$, then $\xm$ is simply connected.
\end{theorem}
\begin{proof}
We think of the geometric join as a map $f: Y \to \mathbb{R}^d$ where $Y = Y_1 *\cdots * Y_m$ is the abstract join and $f$ is linear on every simplex of $Y$. Thus, $X_i = f(Y_i)$, $\xm = f(Y)$, and for $m\geq 2$, $\xm$ and $Y$ are connected. The case $d=1$ is obvious, so we suppose $d\geq 2$.
By the nerve theorem, a path in $Y$ from $p$ to $q$ can be regarded as a sequence $(\sigma_1, \dots ,\sigma_k)$ where the $\sigma_i \in Y$ are $(m-1)$-dimensional simplices, $p\in\sigma_1$, $q\in\sigma_k$, and $\sigma_i \cap \sigma_{i+1} \neq \emptyset$ for every $1\leq i <k$. Likewise, a path in $\xm$ from $f(p)$ to $f(q)$ can be regarded as a sequence $(f(\sigma_1), \dots , f(\sigma_k))$ where the $\sigma_i \in Y$ are $(m-1)$-simplices, $p\in \sigma_1$, $q\in\sigma_k$, and $f(\sigma_i)\cap f(\sigma_{i+1}) \neq \emptyset$ for every $1\leq i <k$.
Now consider a path $(f(\sigma_1),\ldots, f(\sigma_k))$ in $\xm$. Suppose there are consecutive simplices $\sigma_i$ and $\sigma_{i+1}$ such that $\sigma_i\cap\sigma_{i+1} = \emptyset$, but $f(\sigma_i)\cap f(\sigma_{i+1}) \neq \emptyset$. Since $m>\frac{d+2}{2}$, there is a proper face $\tau$ of either $\sigma_i$ or $\sigma_{i+1}$ such that $f(\sigma_i)\cap f(\tau)\cap f(\sigma_{i+1}) \neq \emptyset$. Therefore $\tau$ is contained in an $(m-1)$-simplex $\tau_{i} \in Y$ such that $\sigma_i \cap \tau_i \neq\emptyset$, $\tau_i\cap \sigma_{i+1}\neq \emptyset$, and $f(\sigma_i) \cap f(\tau_i) \cap f(\sigma_{i+1}) \neq \emptyset$. The nerve theorem implies that the paths $(f(\sigma_1), \dots, f(\sigma_i), f(\sigma_{i+1}), \dots, f(\sigma_k))$ and $(f(\sigma_1), \dots, f(\sigma_i), f(\tau_i), f(\sigma_{i+1}), \dots, f(\sigma_k))$
are homotopic in $\xm$. For each consecutive pair $f(\sigma_i)$, $f(\sigma_{i+1})$ such that $\sigma_i\cap\sigma_{i+1} = \emptyset$, this procedure can be repeated, thereby removing all such pairs. Therefore for any element $\gamma \in \pi_1(\xm)$ there is a $\gamma'\in \pi_1(Y)$ such that $\gamma$ and $f(\gamma')$ are homotopic. Since $m\geq 3$ we have $\pi_1(Y) = 0$, which completes the proof.
\end{proof}
\begin{remark} The inequality $m>\frac{d+2}{2}$ is tight which can be seen by the following example. Let $d=2k$ and $Y = Y_1* \cdots *Y_{k+1}$ where $|Y_i|=2$. Then $Y \cong S^k$. We can map $Y$ into $\mathbb{R}^{d}$ so that a single pair of opposite $k$-simplices of $Y$ intersect in an interior point, resulting in a space homeomorphic to a $k$-sphere with a single pair of antipodal points identified. Such a space is not simply connected.
\end{remark}
\section{Dimensions 2 and 3} \label{dim2-3-sec}
It was established in \cite{bois1991}, and independently in \cite{tot2010}, that for $d=2$ the geometric join $X_{[3]}$ is starshaped, which implies that Conjecture \ref{conj-main} holds for $d=2$. It is easily seen that their arguments extend to $m > 3$. Here we establish the next case of our Conjecture.
\begin{theorem}\label{conj:d3}
If $d=3$ and $m\geq 4$, then $\xm$ is contractible.
\end{theorem}
The proof of Theorem \ref{conj:d3} is based on two observations. The first one is the ``strong'' colorful Carath{\'e}odory theorem, established independently in \cite{aro2009} and \cite{hol2008}.
\begin{lemma}\label{obs:colorcara}
Let $m\geq d+1$ and suppose that the origin is not contained in $\xm$. Then there exists $1\leq i < j \leq m$ and an affine hyperplane that strictly separates $X_i\cup X_j$ from the origin.
\end{lemma}
\begin{remark}
In fact there exists a subset $I\subset [m]$ with $|I|\geq m-d+1$ and an affine hyperplane which strictly separates $\cup_{i\in I}X_i$ from the origin, but for our purpose we only need Lemma \ref{obs:colorcara} as stated.
\end{remark}
The second observation extends the fact that the geometric join in the plane is starshaped. Let $X_1$ and $X_2$ be finite sets in $\R^2$, and consider their geometric join $X_{[2]}$. The complement $\mathbb{R}^2\setminus X_{[2]}$ is a collection of open regions, one of which is unbounded. Let $\tilde{X}$ denote the complement of the (unique) unbounded region. Obviously $\tilde{X}$ is a compact region, but the following also holds.
\begin{claim}\label{obs:starshape}
For finite sets $X_1$ and $X_2$ in $\mathbb{R}^2$, the set $\tilde{X}$ is starshaped.
\end{claim}
\begin{proof} The claim is obvious if $|X_1|=1$ or $|X_2|=1$. The geometric join $X_{[2]}$ can be regarded as a drawing of a complete bipartite graph in the plane where edges are drawn as straight segments. Direct each edge so that it goes from a vertex $v\in X_1$ to a vertex $w\in X_2$. Then we can assign a unique angle $\alpha \in [0,\pi]$ to each pair of edges. Choose a pair of edges $e_1, e_2$ which
\begin{itemize}
\item have a point in common (which may or may not be a vertex), and
\item maximize the angle $\alpha_0$.
\end{itemize}
We consider the case where $e_1$ and $e_2$ intersect in interior points and $\alpha_0<\pi$. (The cases when $e_1$ and $e_2$ intersect in a vertex or when $\alpha_0=\pi$ are treated similarly.) Suppose $e_1 = [v_1, w_1]$, $e_2 = [v_2, w_2]$, $v_i\in X_1$, $w_i\in X_2$, and $x = e_1\cap e_2$. We show that any edge $e = [v,w]$, where $v\in X_1$ and $w\in X_2$, is visible from $x$ within a bounded region enclosed by some cycle of $X_{[2]}$. This will prove that $\tilde{X}$ is starshaped from $x$. By the maximality of the angle $\alpha_0$ the points of $X_1\cup X_2$ must be contained in the closed antipodal sectors bounded by $\angle v_1xv_2$ and $\angle w_1xw_2$. Up to symmetries there are two cases to consider.
\begin{enumerate}
\item $e$ is contained in the closed sector bounded by $\angle v_1xv_2$, and the line containing $e$ intersects the ray from $x$ through $v_1$.
\item $e$ crosses the closed sector bounded by $\angle v_1xw_2$.
\end{enumerate}
Case (1) splits into two subcases: (a) The ray from $v$ through $w$
intersects the ray from $x$ through $v_1$. Then the edge $e$ is
visible from $x$ within the cycle $v w v_1 w_1$. See figure below
(left). (b) The ray from $w$ through $v$ intersects the ray from $x$
through $v_1$. Then the edge $e$ is visible from $x$ within the cycle
$v w v_2 w_2$. See figure below (right).
\begin{center}
\begin{tikzpicture}
\begin{scope}[scale=.9]
\begin{scope}[rotate = 20]
\begin{scope}
\fill[blue, opacity = .12] (.2,1.2) -- (.8,1.8) -- (310:1.8cm) --
(130:2.2cm) -- cycle;
\draw[blue, opacity =.8] (.2,1.2) -- (130:2.2cm) (.8,1.8) --
(310:1.8cm);
\fill[gray, opacity =.4] (0,0) -- (.2,1.2) -- (.8,1.8) -- cycle;
\end{scope}
\begin{scope}
\fill (0:2cm) circle [radius = .07];
\fill (180:2.4cm) circle [radius = .07];
\fill (130:2.2cm) circle [radius = .07];
\fill (310:1.8cm) circle [radius = .07];
\draw (0:2cm) -- (180:2.4cm);
\draw (130:2.2cm) -- (310:1.8cm);
\fill[white] (0,0) circle [radius = .07];
\draw (0,0) circle [radius = .07];
\end{scope}
\begin{scope}
\draw[dotted] (-2,-1) --(.2,1.2) (.8,1.8) -- (1.2,2.2);
\fill (.2,1.2) circle [radius = .07];
\fill (.8,1.8) circle [radius = .07];
\draw (.2,1.2) -- (.8,1.8);
\end{scope}
\begin{scope}
\node [below] at (0,0) {\small $x$};
\node [left] at (180:2.4cm) {\small $w_2$};
\node [left] at (130:2.2cm) {\small $v_1$};
\node [right] at (0:2cm) {\small $v_2$};
\node [right] at (310:1.8cm) {\small $w_1$};
\node at (0.15,1.5) {\small $w$};
\node at (.8,2.05) {\small $v$};
\end{scope}
\end{scope}
\begin{scope}[xshift = 7cm]
\begin{scope}[rotate = 20]
\begin{scope}
\fill[blue, opacity = .12] (.2,1.2) -- (.8,1.8) -- (0:2cm) --
(180:2.4cm) -- cycle;
\draw[blue, opacity = .8] (.2,1.2) -- (180:2.4cm) (.8,1.8) -- (0:2cm);
\fill[gray, opacity =.4] (0,0) -- (.2,1.2) -- (.8,1.8) -- cycle;
\end{scope}
\begin{scope}
\fill (0:2cm) circle [radius = .07];
\fill (180:2.4cm) circle [radius = .07];
\fill (130:2.2cm) circle [radius = .07];
\fill (310:1.8cm) circle [radius = .07];
\draw (0:2cm) -- (180:2.4cm);
\draw (130:2.2cm) -- (310:1.8cm);
\fill[white] (0,0) circle [radius = .07];
\draw (0,0) circle [radius = .07];
\end{scope}
\begin{scope}
\draw[dotted] (-2,-1) --(.2,1.2) (.8,1.8) -- (1.2,2.2);
\fill (.2,1.2) circle [radius = .07];
\fill (.8,1.8) circle [radius = .07];
\draw (.2,1.2) -- (.8,1.8);
\end{scope}
\begin{scope}
\node [below] at (0,0) {\small $x$};
\node [left] at (180:2.4cm) {\small $w_2$};
\node [left] at (130:2.2cm) {\small $v_1$};
\node [right] at (0:2cm) {\small $v_2$};
\node [right] at (310:1.8cm) {\small $w_1$};
\node at (0.15,1.5) {\small $v$};
\node at (.8,2.05) {\small $w$};
\end{scope}
\end{scope}
\end{scope}
\end{scope}
\end{tikzpicture}
\end{center}
Case (2) splits into two subcases: (a) $v$ is contained in the closed sector bounded by $\angle v_1xv_2$. See figure below (left). (b) $w$ is contained in the closed sector bounded by $\angle v_1xv_2$. See figure below (right). In both cases the edge $e$ is visible from $x$ within the cycle $v w v_2 w_1$.
\end{proof}
\begin{center}
\begin{tikzpicture}
\begin{scope}[scale=.9]
\begin{scope}[rotate = 20]
\begin{scope}
\fill[blue, opacity = .12] (.2,1.2) -- (-1.5,-.5) -- (0:2cm) --
(310:1.8cm) -- cycle;
\draw[blue, opacity =.8] (-1.5,-.5) -- (0:2cm) -- (310:1.8cm)
-- (.2,1.2);
\fill[gray, opacity =.4] (0,0) -- (-1.5,-.5) -- (.2,1.2) -- cycle;
\end{scope}
\begin{scope}
\fill (0:2cm) circle [radius = .07];
\fill (180:2.4cm) circle [radius = .07];
\fill (130:2.2cm) circle [radius = .07];
\fill (310:1.8cm) circle [radius = .07];
\draw (0:2cm) -- (180:2.4cm);
\draw (130:2.2cm) -- (310:1.8cm);
\fill[white] (0,0) circle [radius = .07];
\draw (0,0) circle [radius = .07];
\end{scope}
\begin{scope}
\draw[dotted] (-2,-1) --(-1.5,-.5) (.2,1.2) -- (.7,1.7);
\fill (.2,1.2) circle [radius = .07];
\fill (-1.5,-.5) circle [radius = .07];
\draw (-1.5,-.5) -- (.2,1.2);
\end{scope}
\begin{scope}
\node [below] at (0,0) {\small $x$};
\node [left] at (180:2.4cm) {\small $w_2$};
\node [left] at (130:2.2cm) {\small $v_1$};
\node [right] at (0:2cm) {\small $v_2$};
\node [right] at (310:1.8cm) {\small $w_1$};
\node at (0.15,1.5) {\small $v$};
\node at (-1.3,-.8) {\small $w$};
\end{scope}
\end{scope}
\begin{scope}[xshift = 7cm]
\begin{scope}[rotate = 20]
\begin{scope}
\fill[blue, opacity = .12] (.2,1.2) -- (-1.5,-.5) --
(310:1.8cm) -- (0:2cm) -- cycle;
\draw[blue, opacity =.8] (-1.5,-.5) -- (310:1.8cm) -- (0:2cm)
-- (.2,1.2);
\fill[gray, opacity =.4] (0,0) -- (-1.5,-.5) -- (.2,1.2) -- cycle;
\end{scope}
\begin{scope}
\fill (0:2cm) circle [radius = .07];
\fill (180:2.4cm) circle [radius = .07];
\fill (130:2.2cm) circle [radius = .07];
\fill (310:1.8cm) circle [radius = .07];
\draw (0:2cm) -- (180:2.4cm);
\draw (130:2.2cm) -- (310:1.8cm);
\fill[white] (0,0) circle [radius = .07];
\draw (0,0) circle [radius = .07];
\end{scope}
\begin{scope}
\draw[dotted] (-2,-1) --(-1.5,-.5) (.2,1.2) -- (.7,1.7);
\fill (.2,1.2) circle [radius = .07];
\fill (-1.5,-.5) circle [radius = .07];
\draw (-1.5,-.5) -- (.2,1.2);
\end{scope}
\begin{scope}
\node [below] at (0,0) {\small $x$};
\node [left] at (180:2.4cm) {\small $w_2$};
\node [left] at (130:2.2cm) {\small $v_1$};
\node [right] at (0:2cm) {\small $v_2$};
\node [right] at (310:1.8cm) {\small $w_1$};
\node at (0.15,1.5) {\small $w$};
\node at (-1.3,-.8) {\small $v$};
\end{scope}
\end{scope}
\end{scope}
\end{scope}
\end{tikzpicture}
\end{center}
Theorem \ref{conj:d3} will be deduced from the following slightly stronger claim.
\begin{claim} \label{claim:r3}
Let $m\geq 4$ and suppose the origin is not contained in $\xm$. Then there exists an infinite ray $R\subset \R^3$ from the origin such that $\xm \cap R = \emptyset$.
\end{claim}
\begin{proof}
We may suppose the $X_i$'s are on the unit sphere centered at the origin and argue using spherical convexity. Suppose $X_1$ and $X_2$ are the sets found in Claim \ref{obs:colorcara}, so they are contained in some open hemisphere. We may therefore regard the geometric join of $X_1$ and $X_2$ as a planar geometric join, and let $\tilde{X}$ denote the starshaped set from Claim \ref{obs:starshape}.
Notice that the boundary of $\tilde{X}$ cannot separate the set $(-X_{3})\cup\cdots\cup (-X_m)$. If there exist points $p,q \in (-X_{3})\cup\cdots\cup (-X_m)$ such that $p\in \tilde{X}$ and $q\notin\tilde{X}$, then there also exist $v\in X_{i} $ and $w\in X_j$, $3\leq i<j\leq m$, such that $-v$ and $-w$ are separated by the boundary of $\tilde{X}$. In this case the geodesic connecting $-v$ to $-w$ intersects the boundary of $\tilde{X}$, and a boundary segment of $\tilde{X}$ is made up of a geodesic connecting $a\in X_1$ and $b\in X_2$, which implies that the simplex spanned by $a, b, v, w$ contains the origin, contradicting the assumption that the origin is not contained in $\xm$.
We may therefore assume that $p\notin \tilde{X}$ for every $p\in (-X_{3})\cup\cdots\cup (-X_m)$. If not, we reverse the argument and define $\tilde{X}$ for a pair $X_i$ and $X_j$ with $3\leq i < j \leq m$.
Let $c$ be a point from which $\tilde{X}$ is starshaped. We show that $-c$ is the direction we are looking for. Suppose the contrary, that $-c$ is contained in some triangle $x_ix_jx_k$ spanned by points from distinct color classes $X_i$, $X_j$, and $X_k$, respectively. This implies that the origin is contained in the simplex $cx_ix_jx_k$. Consider the following cases:
\begin{enumerate}
\item If $i=1$, $j=2$, and $k>2$, then $-x_k$ is contained in the triangle $cx_1x_2$. This is a contradiction since the triangle $cx_1x_2$ is contained in $\tilde{X}$.
\item If $i=1$ and $k>j>2$, then the geodesic connecting $-x_j$ and $-x_k$ intersects the geodesic connecting $c$ and $x_1$. But this implies that the geodesic connecting $-x_j$ and $-x_k$ intersects the boundary of $\tilde{X}$, which cannot happen, as explained two paragraphs above.
\item If $k>j>i>2$, then consider a point $v\in X_1$. Either the simplex $vx_ix_jx_k$ contains the origin, or $-c$ is covered by a triangle involving the vertex $v$, which puts us in case (2) above.
\end{enumerate}
\end{proof}
We are now in position to prove Theorem \ref{conj:d3}.
\begin{proof}
Since $m\geq 4 > \frac{d+2}{2}$, Theorem \ref{simp-conn} implies that $\xm$ is simply connected, and Claim \ref{claim:r3} implies that the second homology group of $\xm$ vanishes. It follows that $\xm$ is contractible.
\end{proof}
\section{Geometric joins of matroids} \label{matroid}
Kalai and Meshulam \cite{kalmesh} showed that the color classes in the colorful Helly theorem can be replaced by an arbitrary matroid. A similar generalization was given for the ``strong'' colorful Carath\'{e}odory theorem in \cite{hol-int} and for the Colorful Hadwiger transversal theorem in \cite{hol2014}. The purpose of this last section is to show that the notion of geometric joins can be generalized in the same way, and that most of our methods from the previous chapters work in this more general setting.
Let us recall that a matroid $M$ on a finite set $E$ can be defined as a non-empty family of subsets of $E$ called the \df{independent sets} which satisfy the following properties:
\begin{itemize}
\item If $B$ is independent and $A\subset B$, then $A$ is independent.
\item If $A$ and $B$ are independent and $|A| < |B|$, then there exists an element $b\in B\setminus A$ such that $A\cup \{b\}$ is independent.
\end{itemize}
The second condition is often called the {\em independence augmentation axiom} for matroids. We will assume that the union of all independent sets equals the ground set $E$, which is the same as restricting ourselves to matroids which are loopless. For a subset $S\subset E$ the rank of $S$, denoted by $\mbox{rk}(S)$, is the maximum cardinality of an independent set contained in $S$, and the rank of the matroid equals $\mbox{rk}(E)$. Notice that the independent sets of a matroid form an abstract simplicial complex which is often referred to as the \df{independence complex} of the matroid.
\begin{definition}
Let $E\subset \R^d$ be a finite set and let $M$ be a matroid defined on $E$. The \df{geometric join} of $M$ is the set of all convex combinations $t_1x_1 + \cdots + t_kx_k \in \R^d$ where $t_i \ge 0$, $\sum_{i=1}^{k} t_i = 1$, and $\{x_1, \dots, x_k\}$ is independent in $M$. The geometric join of a matroid $M$ of rank $r$ defined on a finite set of points in $\R^d$ is denoted by $\mrd$.
\end{definition}
Our previous definition of geometric join is obtained by noticing that the colorful subsets of a family of finite sets $X_1, \dots, X_m$ form the independent sets of a matroid. In the general case of a matroid, the convex hull of an independent set will be called an \df{independent simplex}. In other words, $\mrd$ is the union of all independent simplices of $M$.
As before, we can think of the geometric join of a matroid as a piecewise linear image of the independence complex into the ambient space $\R^d$ where the ground set of the matroid resides. It is a well-known fact that the independence complex of a matroid of rank $r$ is $(r-2)$-connected (see for instance \cite{BjKoLo}), but the homotopy type of $\mrd$ might be different from that of its independence complex.
It should be clear from the discussion above that our main Problem can be studied in the setting of arbitrary matroids.
\begin{probs}
\label{mat-join-conn-prob} Give sufficient conditions in terms of $r$ and $d$ for the contractibility or $k$-connectedness of $\mrd$.
\end{probs}
It seems tempting to conjecture that $\mrd$ is contractible whenever $r>d$, which would imply our main Conjecture, but we have very little evidence to support this. We do however have the following generalization of Theorem \ref{join-starshaped}.
\begin{theorem}
If $r > d(d+1)$, then $\mrd$ is starshaped.
\end{theorem}
\begin{proof}
The proof is identical to the proof of Theorem \ref{join-starshaped}. Suppose the rank of $M$ is greater than $d(d+1)$ and let $T$ be an independent set of size $d(d+1)+1$. By Tverberg's theorem there exists a partition $T = T_1 \cup \cdots \cup T_{d+1}$ and a point $t\in \R^d$ such that $t \in \conv T_i$ for every $i$. We will show that every point $x\in \mrd$ is visible from $t$.
It suffices to consider the case when $x$ belongs to the boundary of $\mrd$, so by Carath\'eodory's theorem we may assume $x$ belongs to an independent simplex of dimension at most $d-1$. Thus $x$ is contained in the convex hull of an independent set $Y$ with $|Y| \leq d$. By repeated application of the independence augmentation axiom, there is a subset $S\subset T$ such that $S\cap Y = \emptyset$ and $\mbox{rk}(S\cup Y) = |S\cup Y| = d(d+1)+1$, that is, $S\cup Y$ is independent. Therefore $|S|> d^2$, and by the pigeon-hole principle there exists some $T_j\subset S$. This implies that the closed segment $[t,x]$ is contained in $\conv (S\cup Y)$ which is contained in $\mrd$.
\end{proof}
We also have the following generalization of Theorem \ref{simp-conn}.
\begin{theorem}
If $r>\frac{d+2}{2}$, then $\mrd$ is simply connected.
\end{theorem}
\begin{proof}
The same proof as the one for Theorem \ref{simp-conn} works here. We think of $\mrd$ as the image of a map $f:Y \to \R^d$ where $Y$ is the independence complex of $M$ and $f$ is linear on every simplex of $Y$. The case $d=1$ is trivial since the independence complex of a matroid of rank $r\geq 2$ is connected, so we assume $d\geq 2$. Let $\sigma_1$ and $\sigma_2$ be two $(r-1)$-simplices of $Y$ such that $\sigma_1\cap \sigma_2 = \emptyset$ and $f(\sigma_1) \cap f(\sigma_2) \neq \emptyset$. Then there is a proper face $\tau$ of either $\sigma_1$ or $\sigma_2$ such that $f(\sigma_1)\cap f(\tau) \cap f(\sigma_2) \neq \emptyset$, and by the independence augmentation axiom, $\tau$ is contained in an $(r-1)$-simplex $\tau_0\in Y$ such that $\sigma_1\cap \tau_0 \neq \emptyset$, $\sigma_2\cap \tau_0 \neq \emptyset$, and $f(\sigma_1)\cap f(\tau_0) \cap f(\sigma_2) \neq\emptyset$. Therefore the same argument as in the proof of Theorem \ref{simp-conn} shows that any path in $\mrd$ is homotopic to the image of some path in $Y$. The result now follows from the fact that the independence complex of a matroid of rank $r\geq 3$ is simply connected.
\end{proof}
\begin{remark}
It is natural to ask whether there are other reasonable classes of simplicial complexes for which our main Problem might yield interesting results.
\end{remark}
\section{Acknowledgments}
The authors are grateful to two anonymous referees for helpful comments and suggestions.
I.~B. was partially supported by ERC Advanced Research Grant no 267165 (DISCONV), and by Hungarian National Research Grant K 83767.
A.~F.~H. was supported by the Basic Science Research Program through the National Research Foundation of Korea funded by the Ministry of Education, Science and Technology (NRF-2010-0021048).
R.~K. was supported by the Dynasty foundation.
| {
"timestamp": "2015-01-08T02:09:40",
"yymm": "1309",
"arxiv_id": "1309.0920",
"language": "en",
"url": "https://arxiv.org/abs/1309.0920",
"abstract": "We consider the geometric join of a family of subsets of the Euclidean space. This is a construction frequently used in the (colorful) Carathéodory and Tverberg theorems, and their relatives. We conjecture that when the family has at least $d+1$ sets, where $d$ is the dimension of the space, then the geometric join is contractible. We are able to prove this when $d$ equals $2$ and $3$, while for larger $d$ we show that the geometric join is contractible provided the number of sets is quadratic in $d$. We also consider a matroid generalization of geometric joins and provide similar bounds in this case.",
"subjects": "Metric Geometry (math.MG)",
"title": "Topology of geometric joins",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9910145715128202,
"lm_q2_score": 0.7981867729389246,
"lm_q1q2_score": 0.791014722771269
} |
https://arxiv.org/abs/2006.01718 | Proximity in Concave Integer Quadratic Programming | A classic result by Cook, Gerards, Schrijver, and Tardos provides an upper bound of $n \Delta$ on the proximity of optimal solutions of an Integer Linear Programming problem and its standard linear relaxation. In this bound, $n$ is the number of variables and $\Delta$ denotes the maximum of the absolute values of the subdeterminants of the constraint matrix. Hochbaum and Shanthikumar, and Werman and Magagnosc showed that the same upper bound is valid if a more general convex function is minimized, instead of a linear function. No proximity result of this type is known when the objective function is nonconvex. In fact, if we minimize a concave quadratic, no upper bound can be given as a function of $n$ and $\Delta$. Our key observation is that, in this setting, proximity phenomena still occur, but only if we consider also approximate solutions instead of optimal solutions only. In our main result we provide upper bounds on the distance between approximate (resp., optimal) solutions to a Concave Integer Quadratic Programming problem and optimal (resp., approximate) solutions of its continuous relaxation. Our bounds are functions of $n, \Delta$, and a parameter $\epsilon$ that controls the quality of the approximation. Furthermore, we discuss how far from optimal are our proximity bounds. | \section{Introduction}
The relationship between an Integer Linear Programming problem and its standard linear relaxation plays a crucial role in many theoretical and computational aspects of the field, including perfect formulations, cutting planes, and branch-and-bound.
Proximity results study one of the most fundamental questions regarding this relationship:
Is it possible to bound the distance between optimal solutions to an Integer Linear Programming problem and its standard linear relaxation?
A classic result by Cook, Gerards, Schrijver, and Tardos~\cite{CooGerSchTar86}
provides the upper bound $n \Delta$ for this distance, where $n$ is the number of variables and $\Delta$ denotes the maximum of the absolute values of the
subdeterminants of the constraint matrix.
This bound has been recently extended to the mixed-integer case by Paat et al.~\cite{PaaWeiWel18} to $p \Delta$, where $p$ is the number of integer variables.
For other recent proximity results in Integer Linear Programming, we refer the reader to \cite{EisWei18,XuLee19,AliHenOer19}.
Granot and Skorin-Kapov~\cite{GraSko90} show that the upper bound $n \Delta$ is still valid if we minimize a separable convex quadratic objective function over the integer points in a polyhedron.
This result has been further extended to separable convex objective functions by Hochbaum and Shanthikumar~\cite{HocSha90} and by Werman and Magagnosc \cite{WerMag91}.
All the above results feature a convex objective function to be minimized.
Therefore, a natural question is whether proximity phenomena only occur in the presence of convexity.
The next example seems to indicate that this is indeed the case.
In fact it shows that,
with a concave objective,
the distance between optimal solutions of
the discrete and continuous problems
cannot be bounded by any function of $n, \Delta$.
\begin{example}
\label{ex no bound}
Consider the following
optimization problem for every $t \in \mathbb Z$ with $t \ge 0$:
\begin{align}
\label{pr ex}
\begin{split}
\min \ &
- \pare{x-\frac{1}{4}}^2 \\
\textnormal{s.t.} \ & -t\le x \le t+ \frac{3}{4}\\
&x\in \mathbb Z.
\end{split}
\end{align}
Note that all these problems have dimension one ($n=1$) and $\Delta = 1$.
Clearly, the unique optimal solution to \eqref{pr ex} is $x^d := -t.$
If we drop the integer constraint, then the unique optimal solution is
$x^c := t + \frac{3}{4}.$
We have
$\abs{x^d-x^c} = 2t+\frac{3}{4},$
which goes to infinity as $t$ approaches infinity. $\hfill \diamond$
\end{example}
Example~\ref{ex no bound} explains the lack of proximity results in the nonconvex setting.
However, a key observation
is that the solution $x^* = t$, while not optimal to \eqref{pr ex}, is `almost' optimal.
Furthermore, its distance from $x^c$ is always $\frac 34$.
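In fact, anticipating the definition of $\epsilon$-approximate solution given in the next subsection, the quality of $x^*$ is easy to quantify. Over the feasible integer points the maximum of the objective is attained at $x=0$, so for $t\geq 1$ a direct computation gives
\[
\frac{f(x^*)-f(x^d)}{f_{\max}-f(x^d)}
= \frac{\pare{t+\frac{1}{4}}^2-\pare{t-\frac{1}{4}}^2}{\pare{t+\frac{1}{4}}^2-\frac{1}{16}}
= \frac{t}{t^2+\frac{t}{2}} = \frac{1}{t+\frac{1}{2}},
\]
that is, $x^*$ is an $\epsilon$-approximate solution with $\epsilon = \frac{1}{t+1/2}$, which tends to $0$ as $t$ grows.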
This simple observation leads us to the question that is at the basis of this work:
Is it possible
to bound the distance between \emph{approximate} (resp., \emph{optimal}) solutions to
a Nonconvex Integer Programming problem
and \emph{optimal} (resp., \emph{approximate}) solutions of its continuous relaxation?
This paper provides the first answers to the posed question.
The optimization problem in Example~\ref{ex no bound} belongs to perhaps the simplest class of Nonconvex Integer Programming problems, formed by Integer Quadratic Programming problems with separable concave objective functions.
Therefore, in this paper we focus on this class of optimization problems.
For these problems, we answer our question in the affirmative and provide explicit upper bounds.
Our bounds are functions of $n, \Delta$, and a parameter $\epsilon$ that controls the quality of the approximation.
Furthermore, we discuss how far from optimal are our proximity bounds.
In the remainder of this section we formally introduce Separable Concave Integer Quadratic Programming and $\epsilon$-approximate solutions.
With the notation in place, we then formally state our results.
\subsection{Separable Concave Integer Quadratic Programming}
In this paper we denote by \eqref{pr IQP} the Separable Concave Integer Quadratic Programming problem
\begin{align}
\label{pr IQP}
\tag{IQP}
\begin{split}
\min \ & \sum_{i=1}^k -q_ix_i^2+h^\mathsf T x \\
\textnormal{s.t.} \ & Ax\le b \\
& x \in \mathbb Z^n.
\end{split}
\end{align}
In this formulation we assume that $q_i > 0$ for every $i = 1,\dots, k$.
Furthermore, we assume that the matrix $A$ is integer, while the remaining data is real.
Clearly \eqref{pr IQP} subsumes Integer Linear Programming, which can be obtained by setting $k=0$.
We refer the reader to \cite{dPWei14,dP16,dPDeyMol17,dP18,dP20} for recent theoretical results on \eqref{pr IQP}.
We denote by \eqref{pr QP} the Separable Concave Quadratic Programming problem obtained from \eqref{pr IQP} by dropping the integer constraints, i.e.,
\begin{align}
\label{pr QP}
\tag{QP}
\begin{split}
\min \ & \sum_{i=1}^k -q_ix_i^2+h^\mathsf T x \\
\textnormal{s.t.} \ & Ax\le b \\
& x \in \mathbb R^n.
\end{split}
\end{align}
Throughout this paper, we denote by
$f(x) := \sum_{i=1}^k -q_ix_i^2+h^\mathsf T x$
the objective function of \eqref{pr IQP} and \eqref{pr QP}, which is quadratic, concave, and separable.
Furthermore, we let $P$ be the polyhedron defined by $P := \{x \in \mathbb R^n \mid Ax\le b\},$
and we denote by $\Delta$ the largest absolute value of the subdeterminants of $A$.
\subsection{$\epsilon$-approximate solution}
In order to state our proximity results, we give the definition of $\epsilon$-approximate solution.
Consider an instance of an optimization problem of the form $\min \{ f(x) \mid x \in S\}$, where $S \subseteq \mathbb R^n$.
We assume that this problem has an optimal solution, and we denote it by $x^\textnormal{opt}$.
Let $f_{\max}$ be the maximum value of $f(x)$ on the feasible region $S$.
For $\epsilon \in [0,1]$, we say that a feasible point $x^*$ is an \emph{$\epsilon$-approximate solution} if
\begin{equation*}
f(x^*) - f(x^\textnormal{opt}) \le \epsilon \cdot \pare{f_{\max} - f(x^\textnormal{opt})}.
\end{equation*}
An intuitive way to interpret this definition is as follows:
If we let $[\alpha,\beta]$ be the smallest interval containing the image of $S$ under $f$, then $f(x^*)$ should lie in the interval $[\alpha, \alpha + \epsilon(\beta - \alpha)]$.
Observe that any feasible point is a $1$-approximation, and only an optimal solution is a $0$-approximation.
If $f(x)$ has no upper bound on the feasible region, our definition loses its value because any feasible point is an $\epsilon$-approximation for any $\epsilon > 0$.
Our definition of approximation has been used in earlier works, and we refer to \cite{NemYud83,Vav92c,BelRog95,KleLauPar06} for more details.
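For a finite feasible set, the smallest $\epsilon$ for which a given feasible point is an $\epsilon$-approximate solution can be computed directly from the definition. The following small Python helper is our own illustration, applied to Example~\ref{ex no bound} with $t=10$ (it assumes $f_{\max} > f(x^\textnormal{opt})$).
\begin{verbatim}
# Smallest epsilon for which x is an epsilon-approximate solution over
# a finite feasible set S (illustration; assumes f_max > f_opt).
def eps_quality(f, S, x):
    vals = [f(s) for s in S]
    f_opt, f_max = min(vals), max(vals)
    return (f(x) - f_opt) / (f_max - f_opt)

t = 10
f = lambda x: -(x - 0.25)**2
S = range(-t, t + 1)           # feasible integers of Example 1
print(eps_quality(f, S, t))    # 1/(t + 1/2) = 0.0952...
\end{verbatim}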
In this work we consider $\epsilon$-approximate solutions to \eqref{pr IQP} and to \eqref{pr QP}.
Clearly, the optimal solution $x^\textnormal{opt}$ and the quantity $f_{\max}$ in the definition of $\epsilon$-approximate solution differ for the two problems because the feasible regions are different.
To avoid confusion, throughout the paper we denote by $x^d$ an optimal solution to \eqref{pr IQP} and by $x^c$ an optimal solution to \eqref{pr QP}.
Similarly, we denote by $f^d_{\max}$ the value $f_{\max}$ in the definition of $\epsilon$-approximate solution to \eqref{pr IQP} and by $f^c_{\max}$ the value $f_{\max}$ in the definition of $\epsilon$-approximate solution to \eqref{pr QP}.
The definition of $\epsilon$-approximate solution is natural for these general problems, and has several useful properties.
It is well known that, for continuous optimization problems, the definition is insensitive to translations or dilations of the objective function, and that it is preserved under affine linear transformations of the problem.
Similar invariance properties
hold for discrete optimization problems,
and are formalized in Lemma~\ref{lem tans} in Section~\ref{sec lemmas}.
\subsection{Our results}
We are ready to state our proximity result for Separable Concave Integer Quadratic Programming.
\begin{theorem}
\label{th main}
Consider a problem \eqref{pr IQP}, and the corresponding continuous problem \eqref{pr QP}.
Suppose that both problems
have an optimal solution.
Then:
\begin{enumerate}[label={(\roman*)}]
\item
\label{th main 1}
Let $x^c$ be an optimal solution to \eqref{pr QP}.
Then, $\forall \epsilon \in (0,1]$, there is
an $\epsilon$-approximate solution $x^*$ to \eqref{pr IQP}
such that
$\norminf{x^c-x^*} \le n\Delta \pare{\frac{10\Delta}{\epsilon}+1}^k.$
\item
\label{th main 2}
Let $x^d$ be an optimal solution to \eqref{pr IQP}.
Then, $\forall \epsilon \in (0,1]$, there is an $\epsilon$-approximate solution $x^\star$ to \eqref{pr QP} such that
$\norminf{x^\star-x^d} \le n\Delta \pare{\frac{10\Delta}{\epsilon}+1}^k.$
\end{enumerate}
\end{theorem}
In particular, note that the bounds in Theorem~\ref{th main} do not depend on the right-hand side vector $b$ in \eqref{pr IQP} and \eqref{pr QP}.
The proof of Theorem~\ref{th main} is given in Section~\ref{sec proof}.
Since $k \le n$, Theorem~\ref{th main} implies that, for every optimal solution to one of the two problems, and for every $\epsilon \in (0,1]$, there is an $\epsilon$-approximate solution to the other problem at distance bounded by a function of $n,\Delta,\epsilon$ alone.
In particular, this distance is independent of the objective function and of the vector $b$.
Note that, for $k=0$, problem \eqref{pr IQP} is an Integer Linear Programming problem, while \eqref{pr QP} is its standard linear relaxation.
In this setting, our bounds in Theorem~\ref{th main} reduce to $n\Delta$ for every $\epsilon > 0$.
Therefore, the proximity bound by Cook et al.~\cite{CooGerSchTar86} can be obtained as a corollary to Theorem~\ref{th main}.
In Section~\ref{sec tight}, we discuss how far from optimal our upper bounds in Theorem~\ref{th main} are.
At the heart of our tightness results lies a special polytope, denoted by $\bar P$, and which is used with several different objective functions.
In particular, using the notation of Theorem~\ref{th main}, we show that any upper bound on $\norminf{x^c-x^*}$ or on $\norminf{x^\star-x^d}$ must grow at least linearly with $\frac 1 \epsilon,$ $n$, and $\Delta$.
Furthermore, we show that the neighborhood of $x^c$ considered by Cook et al.~\cite{CooGerSchTar86}, namely $\{x \in P \cap \mathbb Z^n \mid \norminf{x^c-x} \le n\Delta\}$, might contain only arbitrarily bad solutions to \eqref{pr IQP}, i.e., vectors that are not $\epsilon$-approximate solutions to \eqref{pr IQP} for any $\epsilon \in (0,1)$.
The polytope $\bar P$ also allows us to show that the Integer Linear Programming bound $n \Delta$ by Cook et al.
is best possible.
To the best of our knowledge this tightness result was known only for $\Delta = 1$
(see
page 241 in \cite{SchBookIP}).
\section{Three simple lemmas}
\label{sec lemmas}
In this section we present three lemmas that will be used in the proof of Theorem~\ref{th main}.
Our first lemma formalizes the invariance properties of $\epsilon$-approximate solutions to optimization problems with integer constraints.
The proof is standard.
This result will allow us to greatly simplify the notation in the main proof.
\begin{lemma}
\label{lem tans}
Consider an optimization problem of the form
\begin{align}
\label{pr O}
\tag{O}
\begin{split}
\min \ & f(x) \\
\textnormal{s.t.} \ & x \in S \cap \mathbb Z^n,
\end{split}
\end{align}
where $S \subseteq \mathbb R^n$.
Let $M \in \mathbb Z^{n\times n}$ be a unimodular matrix, and let $t\in \mathbb Z^n$.
For any $\alpha, \beta \in \mathbb R$ with $\alpha > 0$, consider the optimization problem
\begin{align}
\label{pr O'}
\tag{O'}
\begin{split}
\min \ & \alpha f(M^{-1} (y-t)) + \beta \\
\textnormal{s.t.} \ & y\in U(S) \cap \mathbb Z^n,
\end{split}
\end{align}
where $U(S) := \{y \in \mathbb R^n \mid y=Mx+t, \ x\in S\}$.
Then, for every $\epsilon$-approximate solution to $\eqref{pr O}$, denoted by $x^*$, the vector $Mx^* + t$ is an $\epsilon$-approximate solution to $\eqref{pr O'}$.
Vice versa, for every $\epsilon$-approximate solution to $\eqref{pr O'}$, denoted by $y^*$, the vector $M^{-1}(y^*-t)$ is an $\epsilon$-approximate solution to $\eqref{pr O}$.
\end{lemma}
\begin{prf}
We prove the first statement of the lemma, the second one being symmetric.
Let $x^*$ be an $\epsilon$-approximate solution to $\eqref{pr O}$.
We show that the vector $y^* := Mx^* + t$ is an $\epsilon$-approximate solution to $\eqref{pr O'}$.
Since $M$ and $t$ are integer, for every feasible solution $x$ to $\eqref{pr O}$, the vector $y = Mx+t$ is feasible to $\eqref{pr O'}$.
Vice versa, since $M^{-1}$ and $t$ are integer, for every feasible solution $y$ to $\eqref{pr O'}$, the vector $x = M^{-1}(y-t)$ is feasible to $\eqref{pr O}$.
In both cases, the relation between the cost of $x$ and $y$ is given by $g(y) = \alpha f(x) + \beta$,
where $g(y) := \alpha f(M^{-1} (y-t)) + \beta$ denotes the objective function of \eqref{pr O'}.
Let $x^d$ be an optimal solution to $\eqref{pr O}$, and let $y^d$ be an optimal solution to $\eqref{pr O'}$.
Furthermore, let $f_{\max}$ be the maximum value of $f(x)$ on the feasible region of $\eqref{pr O}$, and let $g_{\max}$ be the maximum value of $g(y)$ on the feasible region of $\eqref{pr O'}$.
Since $\alpha > 0$, the above argument in particular implies $g(y^d) = \alpha f(x^d) + \beta$, and $g_{\max} = \alpha f_{\max} + \beta$.
If $g_{\max} = g(y^d)$, then $y^*$ is an optimal solution to $\eqref{pr O'}$ and we are done. Otherwise, we have
\begin{equation*}
\frac{g(y^*)-g(y^d)}{g_{\max}-g(y^d)}
=\frac{\pare{\alpha f(x^*)+\beta}-\pare{\alpha f(x^d)+\beta}}{\pare{\alpha f_{\max}+\beta}-\pare{\alpha f(x^d)+\beta}}
=\frac{f(x^*)-f(x^d)}{f_{\max}-f(x^d)}\le \epsilon.
\end{equation*}
Thus $y^*$ is an $\epsilon$-approximate solution to \eqref{pr O'}.
\end{prf}
Next, we define a polyhedral cone which will be heavily used in the proof of Theorem~\ref{th main}, and we present some of its properties.
We remark that this cone has been used in several papers to obtain proximity results, including \cite{CooGerSchTar86,GraSko90,HocSha90,dP20}.
Let $A$ be a matrix with $n$ columns and let $x^a,x^b \in \mathbb R^n$.
Let $A_1$ be the matrix that contains all rows $u$ in $A$ for which $ux^a \le ux^b$.
Similarly, let $ A_2$ be the matrix that contains all rows $u$ in $A$ for which $ ux^a \ge ux^b$.
We define the polyhedral cone
\begin{align*}
T(A, x^a, x^b) := \bra{x \in \mathbb R^n \mid A_1 x \le 0, \ A_2 x \ge 0}.
\end{align*}
From the definition of the cone, we obtain $x^a-x^b \in T(A, x^a, x^b).$
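As an informal aid, the splitting of the rows of $A$ into $A_1$ and $A_2$, and the membership test for $T(A,x^a,x^b)$, can be sketched in a few lines of Python (the helper names are ours; ties $ux^a = ux^b$ are placed in both blocks, as the definition allows):
\begin{verbatim}
import numpy as np

def cone_blocks(A, xa, xb):
    """Rows u of A with u xa <= u xb go to A1, those with u xa >= u xb to A2,
    so that T(A, xa, xb) = {x : A1 x <= 0, A2 x >= 0}."""
    pa, pb = A @ xa, A @ xb
    return A[pa <= pb], A[pa >= pb]

def in_cone(A, xa, xb, x, tol=1e-9):
    A1, A2 = cone_blocks(A, xa, xb)
    return bool(np.all(A1 @ x <= tol) and np.all(A2 @ x >= -tol))

# Sanity check of the remark above: xa - xb always lies in T(A, xa, xb).
rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(5, 3)).astype(float)
xa, xb = rng.standard_normal(3), rng.standard_normal(3)
print(in_cone(A, xa, xb, xa - xb))  # True
\end{verbatim}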
The next lemma is well-known, see, e.g., \cite{CooGerSchTar86}.
Since we were unable to find a complete proof in the literature, we present one here.
\begin{lemma}
\label{lem 1}
Let $A$ be an integer matrix with $n$ columns, let $\Delta$ be the largest absolute value of the
subdeterminants of $A$, and let $x^a,x^b \in \mathbb R^n$.
Then there exists a finite subset $V$ of $\mathbb Z^n$ such that $T(A, x^a, x^b) = \operatorname{cone} V,$ and for every $v \in V$, we have $\norminf{v} \le \Delta$.
\end{lemma}
\begin{prf}
Let $T := T(A, x^a, x^b).$
We cover $T$ by pointed polyhedral cones by intersecting it with the $2^n$ orthants of $\mathbb R^n$, which we denote by $O_1, \dots, O_{2^n}$.
Namely, we let $T_i := T \cap O_i$, for $i=1,\dots,2^n$, and observe that $T = \bigcup_{i=1}^{2^n} T_i.$
In order to prove the lemma, it suffices to show that, for every $i=1,\dots,2^n$, there exists a finite subset $V_i$ of $\mathbb Z^n$ such that $T_i = \operatorname{cone} V_i,$ and for every $v \in V_i$, we have $\norminf{v} \le \Delta$.
This is because the set $V = \bigcup_{i=1}^{2^n} V_i$ then satisfies the statement of the lemma.
Let us now consider a single $T_i$, for some $i \in \{1,\dots,2^n\}$.
We assume that $T_i$ arises from the intersection of $T$ with the nonnegative orthant, i.e., $T_i = \{x \in \mathbb R^n \mid x \in T, \ x \ge 0\}$, the other cases being symmetric.
The set $T_i$ is a pointed polyhedral cone.
Since $A$ is integer, $T_i$ is a rational cone.
Therefore, there exists a finite set of vectors $V_i = \{r^1,\dots,r^m\} \subset \mathbb R^n$ such that $T_i = \operatorname{cone} V_i.$
Here we can assume that for every $j=1,\dots,m,$ the vector $r^j$ is not a proper conic combination of other vectors in $V_i$, that is to say, each $r^j$ is an extreme ray of $T_i.$
Let us now consider a single vector $r^j$, for some $j \in \{1,\dots,m\}$.
We now show that we can scale $r^j$ so that it is integer and with infinity norm at most $\Delta.$
From Theorem 3.35 in \cite{ConCorZamBook}, we know that $r^j$ satisfies at equality $n-1$ linearly independent inequalities in the system $A_1 x \le 0$, $A_2 x \ge 0$, $x \ge 0$.
Let $e_k$ be a vector of the standard basis of $\mathbb R^n$ that is linearly independent of the rows of $A_1$, $A_2$, and of the identity matrix that correspond to the $n-1$ linearly independent inequalities.
Note that we have $r^j_k \neq 0,$ since otherwise we obtain $r^j = 0$, which contradicts the fact that $r^j$ is an extreme ray of $T_i.$
Since $T_i$ is contained in the nonnegative orthant, we have $r^j_k>0.$
Denote by $Dx=e_1$ the system of equations consisting of $x_k=1$ and the $n-1$ equations obtained by setting to equality the $n-1$ linearly independent inequalities discussed above, where $e_1$ is the first vector of the standard basis of $\mathbb R^n$.
Note that the matrix $D$ is invertible.
The vector $r := D^{-1} e_1$ is a solution to the system, and is a scaling of the vector $r^j$.
Note that each entry of $r$ coincides with an entry of the matrix $D^{-1}$.
By Cramer's rule, each entry of $D^{-1}$ is a fraction with denominator $\det(D)$ and numerator with absolute value at most $\Delta.$
Thus, the vector $\abs{\det(D)} \cdot r$ is a scaling of $r^j$ that is integer and with $\norminf{\abs{\det(D)} \cdot r} \le \Delta.$
Hence, we can assume that each vector $r^j$ is integer and with infinity norm at most $\Delta.$
\end{prf}
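The Cramer's-rule scaling at the end of the proof is concrete enough to compute. The following Python sketch (ours; the matrix \texttt{D} below is a hypothetical example of an invertible integer system built from the tight rows) carries out the step $r := D^{-1}e_1$ followed by the scaling by $\abs{\det(D)}$.
\begin{verbatim}
import numpy as np

def integral_ray(D):
    """Solve D r = e_1 and scale by |det D| to obtain an integer ray,
    as in the Cramer's-rule step of the proof above."""
    D = np.asarray(D, dtype=float)
    e1 = np.zeros(D.shape[0]); e1[0] = 1.0
    r = np.linalg.solve(D, e1)
    return np.rint(abs(np.linalg.det(D)) * r).astype(int)

D = np.array([[1, 0, 0],    # x_k = 1 (here k = 1)
              [2, -1, 3],   # two tight inequalities, set to equality
              [0, 1, -2]])
print(integral_ray(D))      # an integer scaling of the extreme ray
\end{verbatim}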
The next lemma will often be used in the proof of Theorem~\ref{th main} to show that a given vector is in our polyhedron $P$.
\begin{lemma} \label{lem 2}
Let $P = \{x\in \mathbb R^n \mid A x \le b\}$ be a polyhedron,
let $x^a,x^b \in P$.
Let $x^\circ$ be a vector in $\mathbb R^n$ that can be written in the following two ways:
\begin{align*}
x^\circ=x^1+\sum_{i=1}^m \alpha_i v^i, \qquad
x^\circ=x^2-\sum_{i=1}^m \beta_i v^i,
\end{align*}
where $x^1, x^2 \in P$
and, for $i =1,\dots,m$, $\alpha_i, \beta_i,$ are nonnegative numbers and $v^i \in T(A, x^a, x^b)$.
Then $x^\circ \in P.$
\end{lemma}
\begin{prf}
Let $A_1, A_2$ be as in the definition of $T(A, x^a, x^b)$, and let $b_1, b_2$ be the corresponding subvectors of $b.$
Since $x^1,x^2 \in P,$ we obtain
\begin{align*}
& A_1 x^\circ
= A_1 \pare{x^1+\sum_{i=1}^m \alpha_i v^i}
\le b_1+\sum_{i=1}^m \alpha_i A_1 v^i
\le b_1\\
& A_2 x^\circ
= A_2 \pare{x^2-\sum_{i=1}^m \beta_i v^i}
\le b_2-\sum_{i=1}^m \beta_i A_2 v^i
\le b_2,
\end{align*}
where the last inequalities follow because $A_1 v^i \le 0$ and $A_2 v^i \ge 0$ from the definition of $T(A, x^a, x^b)$.
This implies that $Ax^\circ \le b,$ hence $x^\circ \in P.$
\end{prf}
We are now ready to present our proof of Theorem~\ref{th main}.
\section{Proof of Theorem~\ref{th main}}
\label{sec proof}
Let $x^c$ be an optimal solution to \eqref{pr QP}, let $x^d$ be an optimal solution to \eqref{pr IQP}, and let $\epsilon >0$. In this section, we present our proof of Theorem~\ref{th main}. To do so, we will construct an $\epsilon$-approximate solution $ x^* $ to \eqref{pr IQP} and an $\epsilon$-approximate solution $ x^\star $ to \eqref{pr QP}.
We first give a brief outline of our proof.
In Section~\ref{sec algorithm}, we design a recursive algorithm which takes as input $x^c, x^d, P$ and outputs a point $ x^\ell \in P $. In the algorithm, we repeatedly use cones of the form $T(A, \cdot ,\cdot)$ to construct a path inside $ P $, which starts at $ x^c $, ends at $ x^\ell $, and contains at most $ k+1 $ points. The special structure of this path enables us to upper bound $ \norminf{x^c-x^\ell} $ by a function of $ n,\Delta,k,\epsilon.$
In Section~\ref{sec property}, we study some properties of $x^\ell$ and we consider separately two cases. In the first case, $ \norminf{x^\ell-x^d} $ can be bounded by a function of $ n,\Delta,k,\epsilon.$ In this case, we can then also bound $ \norminf{x^c-x^d} $ by a function of $ n,\Delta,k,\epsilon.$ As a consequence, we can conclude the proof in the first case by choosing $ x^* $ to be $ x^d $ and by choosing $ x^\star $ to be $ x^c $.
In the second case, $ \abs{x^\ell_i-x^d_i} $ is large for every index $ i \in \{1,\dots,k \}$ such that $ x^\ell_i-x^d_i \neq 0 .$ In this case, in Section~\ref{sec integral point} we use vectors $x^\ell,x^d$ to construct an integer vector $x^*$ which is close to $x^\ell$, that is, $\norminf{x^\ell-x^*} \le n\Delta.$ This in particular implies that $\norminf{x^c-x^*}$ can be bounded by a function of $ n,\Delta,k,\epsilon.$ In Section~\ref{sec property 2} we further study the vector $x^*$. The properties obtained allow us to prove, in Section~\ref{sec analysis 1}, that $x^*$ is an $\epsilon$-approximate solution to \eqref{pr IQP}. This is done by providing an upper bound on $ f(x^*)-f(x^d) $ and a lower bound on $ f^d_{\max}-f(x^d) .$ In Section~\ref{sec star} we define a vector $ x^\star $, based on $ x^* $, with the property that $ \norminf{x^\star-x^d}=\norminf{x^c-x^*} $. Next, in Section~\ref{sec analysis 2}, we prove that $ x^\star $ is an $ \epsilon$-approximate solution to \eqref{pr QP}. This concludes the proof in the second case, and our outline of the proof of Theorem~\ref{th main}. We are now ready to present the full proof.
In order to simplify the notation in the remainder of the proof, in the next claim we employ Lemma~\ref{lem tans}.
\begin{claim}
\label{claim eqt}
We can assume without loss of generality that $x^d$ is the origin and that $f(x^d)=0$.
\end{claim}
\begin{prf}
We apply Lemma~\ref{lem tans} as follows:
Problem \eqref{pr O} is \eqref{pr IQP}, the matrix $M$ is the identity, $t := -x^d$, $\alpha := 1$, and $\beta := -f(x^d)$.
Problem \eqref{pr O'} in Lemma~\ref{lem tans} then takes the form
\begin{align}
\label{pr O' inproof}
\begin{split}
\min \ & f(y+x^d)-f(x^d) \\
\textnormal{s.t.} \ & Ay\le b-Ax^d\\
& y \in \mathbb Z^n.
\end{split}
\end{align}
The objective function of \eqref{pr O' inproof} can then be explicitly written as
$\sum_{i=1}^k - q_i y_i^2 + {h'}^\mathsf T y,$
where the vector $h'$ is defined by $h'_i := h_i - 2q_i x_i^d$ for $i=1,\dots,k$ and $h'_i := h_i$ for $i=k+1,\dots,n$.
In particular, the coefficients $q_i$ of the quadratic monomials are identical in \eqref{pr IQP} and in \eqref{pr O' inproof}.
Furthermore, note that the constraint matrix of \eqref{pr O' inproof} is the same as that of \eqref{pr IQP}, and so the two problems have the same $\Delta$.
The transformation that maps problem \eqref{pr IQP} into \eqref{pr O' inproof} is $y = x - x^d$.
Consider vectors $\bar x, \tilde x, \bar y, \tilde y \in \mathbb R^n$ such that $\bar y = \bar x - x^d$ and $\tilde y = \tilde x - x^d$.
Then distances are preserved, since $\bar y - \tilde y = \bar x - \tilde x$.
Lemma~\ref{lem tans} implies that $\epsilon$-approximate solutions are mapped to $\epsilon$-approximate solutions, thus, without loss of generality we can consider problem \eqref{pr O' inproof} instead of \eqref{pr IQP}.
The optimal solution $x^d$ of \eqref{pr IQP} is mapped to the origin, which is then an optimal solution of \eqref{pr O' inproof}, and the optimal cost is zero.
\end{prf}
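The change of variables in the claim is mechanical; the following Python sketch (our illustration, with $q$, $h$, $x^d$ as plain arrays) computes the shifted linear coefficients $h'$ and verifies the identity $g(y) = f(y+x^d) - f(x^d)$ on a small example.
\begin{verbatim}
import numpy as np

def translate_to_origin(q, h, xd):
    """Shift variables by y = x - xd for f(x) = -sum q_i x_i^2 + h^T x
    (q has length k; coordinates k+1..n are purely linear). Returns the
    linear coefficients h' of the shifted objective f(y + xd) - f(xd)."""
    q, h, xd = (np.asarray(v, dtype=float) for v in (q, h, xd))
    k = len(q)
    h_new = h.copy()
    h_new[:k] = h[:k] - 2.0 * q * xd[:k]   # h'_i = h_i - 2 q_i x^d_i
    return h_new

# Consistency check against the formula in the text (k = 2, n = 3):
q, h, xd = [1.0, 2.0], [3.0, 0.0, 1.0], [1.0, -1.0, 2.0]
f = lambda x: -sum(qi * xi ** 2 for qi, xi in zip(q, x[:2])) + np.dot(h, x)
hp = translate_to_origin(q, h, xd)
g = lambda y: -sum(qi * yi ** 2 for qi, yi in zip(q, y[:2])) + np.dot(hp, y)
y = np.array([0.5, 0.25, -1.0])
print(np.isclose(g(y), f(y + np.array(xd)) - f(np.array(xd))))  # True
\end{verbatim}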
\subsection{Construction of the vector $x^\ell$}\label{sec algorithm}
This section of the proof is devoted to the construction of a special vector in $P$ that we denote by $x^\ell$.
The vector $x^\ell$ is obtained via a recursive algorithm which utilizes the vectors $x^c$ and $x^d.$
To begin with, we introduce a claim which will play a key role in the iterative step of our algorithm.
\begin{claim}
\label{claim onestep}
Let $x^a \in P$ and let
$Z := \{i \in \{1,\dots, k\} \mid x^a_i=0\}$.
Assume $Z \neq \{1,\dots,k\}$,
and let
$s$ be an index such that $\abs{x^a_s}=\min \{\abs{x^a_i} \mid i \in \{1,\dots,k\} \setminus Z\}.$
Assume that
$\norminf{x^a} > \Delta \abs{x^a_s}$.
Then there exists a vector $x^b \in P$ such that
\begin{enumerate}[label={(\roman*)}, leftmargin=*]
\item
\label{cond onestep1}
$x^b_i = 0$ for every $i \in Z \cup \{s\}$;
\item
\label{cond onestep2}
$\norminf{x^a-x^b} \le \Delta \abs{x^a_s}$;
\item
\label{cond onestep3}
For $i =1,\dots,m$, there exist nonnegative scalars $\alpha_i,$ $\beta_i$ and vectors $v^i \in T(A,x^a,x^d)$ such that $x^b$ can be expressed in the following two ways:
\begin{align*}
x^b =x^d+\sum_{i=1}^m \alpha_i v^i, \qquad
x^b =x^a-\sum_{i=1}^m \beta_i v^i.
\end{align*}
\end{enumerate}
\end{claim}
\begin{prf}
Our first task is to define the vector $x^b$.
Denote by $\tilde A x\le \tilde b$ the system obtained from $Ax\le b$ by adding the inequalities
$x_i \le 0$, $-x_i \le 0$, for all $i \in Z$.
We remark that the largest absolute value of a
subdeterminant
of $\tilde A$ is $\Delta$.
Let $\tilde P := \{x \in \mathbb R^n \mid \tilde A x \le \tilde b \}$, and note that the vectors $x^a$ and $x^d$ are in $\tilde P$.
Let $\tilde T := T(\tilde A, x^a, x^d)$.
From Lemma~\ref{lem 1}, applied to $\tilde A$, $x^a$, and $x^d$, we know that there exists a finite subset $\tilde V$ of $\mathbb Z^n$ such that $\tilde T = \operatorname{cone} \tilde V,$ and for every $v \in \tilde V$, we have $\norminf{v} \le \Delta$.
Since $x^a - x^d \in \tilde T$, Caratheodory's theorem implies that there exist $m\le n$ vectors $v^1,\dots,v^m \in \tilde V$ and $m$ positive scalars $\alpha_1,\dots, \alpha_m$ such that
\begin{align}
\label{eq op}
x^a = x^a-x^d=\sum_{i=1}^{m}\alpha_iv^i.
\end{align}
We pick those vectors $v^{i}$ such that $v^{i}_s$ has the same sign as $x^a_s$. Without loss of generality, we can assume that these vectors are $v^{1},\dots,v^{r}$, where $r \le m$.
We now show that there exist nonnegative scalars $\lambda_{1}, \dots, \lambda_{r}$ that satisfy $\lambda_{1}\le \alpha_{1},\dots, \lambda_{r}\le \alpha_{r},$ and such that
\begin{align}
\label{eq also this}
x^a_s = \sum_{i=1}^{r}\lambda_{i}v^i_s.
\end{align}
From \eqref{eq op}, we obtain
$x^a_s = \sum_{i=1}^m \alpha_i v^i_s.$
Since for $i=1,\dots,r,$ $v^i_s$ has the same sign as $x^a_s$ and for $i=r+1,\dots,m,$ $v^i_s$ has the opposite sign of $x^a_s$ or $v^i_s=0,$ we have
\begin{align*}
0<\abs{x^a_s}=\abs{\sum_{i=1}^{r}\alpha_{i}v^i_s}-\abs{\sum_{i=r+1}^{m}\alpha_{i}v^i_s} \le \abs{\sum_{i=1}^{r}\alpha_{i}v^i_s}.
\end{align*}
Using continuity, we know that there exist nonnegative scalars $\lambda_{1}\le \alpha_{1},\dots, \lambda_{r}\le \alpha_{r},$ such that
$\abs{x^a_s}=\abs{\sum_{i=1}^{r}\lambda_{i}v^i_s}.$
Since each $v^i_s$ above has the same sign as $x^a_s$ and each $\lambda_i \ge 0,$ we know that \eqref{eq also this} holds.
We are finally ready to define the vector $x^b$ as
\begin{align}
\label{np}
x^b:=x^a-\sum_{i=1}^{r}\lambda_{i}v^{i}.
\end{align}
From~\eqref{eq op}, we can write $x^b$ in the form
\begin{align}
\label{newnp}
x^b=
x^d + \sum_{i=1}^{r}(\alpha_{i}-\lambda_{i})v^{i}+\sum_{i=r+1}^{m}\alpha_{i}v^{i}.
\end{align}
Since $x^a,x^d \in \tilde P$, from Lemma~\ref{lem 2} we know that $x^b$ is in $\tilde P$ as well.
Since $\tilde P \subseteq P$, we obtain that $x^b \in P.$
Next we show that \ref{cond onestep1}, \ref{cond onestep2}, \ref{cond onestep3} hold.
\smallskip
\ref{cond onestep1}.
Note that $\tilde P$ satisfies equations $x_i=0$, for $i\in Z,$ hence $x^b_i=0$ for every $i \in Z.$
Furthermore, from the definition of $x^b$, and using \eqref{eq also this}, we have
\begin{align*}
x^b_s=x^a_s-\sum_{i=1}^{r}\lambda_{i}v^i_s
=x^a_s-x^a_s=0.
\end{align*}
\smallskip
\ref{cond onestep2}.
From the definition of $x^b$ we have
$\norminf{x^a-x^b}
=\norminf{\sum_{i=1}^{r}\lambda_{i}v^{i}}.$
Denote by $l$ the index such that $\norminf{\sum_{i=1}^{r}\lambda_{i}v^{i}} = \abs{(\sum_{i=1}^{r}\lambda_{i}v^{i})_l}$.
Then we have
\begin{align*}
\norminfL{\sum_{i=1}^{r}\lambda_{i}v^{i}}
= \absL{\sum_{i=1}^{r}\lambda_{i}v^{i}_l}
= \absL{\sum_{i=1}^{r}\lambda_{i} \frac{v^{i}_l}{v^{i}_s}v^{i}_s}
\le \sum_{i=1}^{r}\lambda_{i} \absL{\frac{v^{i}_l}{v^{i}_s}} \abs{v^{i}_s}.
\end{align*}
Since, for every $i=1,\dots,r$, the vector $v^{i}$ is integer and $\norminf{v^{i}} \le \Delta$, we know that $\absL{\frac{v^{i}_l}{v^{i}_s}} \le \Delta.$
Thus,
\begin{align*}
\sum_{i=1}^{r}\lambda_{i} \absL{\frac{v^{i}_l}{v^{i}_s}} \abs{v^i_s}
\le \Delta \sum_{i=1}^{r}\lambda_{i}\abs{v^{i}_s}
= \Delta \absL{\sum_{i=1}^{r}\lambda_{i}v^{i}_s}
= \Delta \abs{x^a_s}.
\end{align*}
The first equality holds because all $v^{1}_s,\dots,v^{r}_s$ have the same sign, and the last equality follows from \eqref{eq also this}.
This completes the proof of \ref{cond onestep2}.
\ref{cond onestep3}.
We notice that $\tilde T \subseteq T(A, x^a, x^d),$ which implies that $v^1, \dots, v^m \in T(A, x^a, x^d).$
So \ref{cond onestep3} follows directly from \eqref{np} and \eqref{newnp}.
\end{prf}
We are now ready to state our algorithm that constructs the vector $x^\ell.$
We recursively define a sequence of vectors in $P$ denoted by $x^{0},x^{1},x^{2},\dots$.
The last vector in this sequence is indeed the vector $x^\ell$ that we wish to obtain.
To define this sequence of vectors,
we first recursively define the following $k$ scalars:
\begin{align*}
\chi_1 &:= \frac{8n\Delta}{\epsilon}+2n\Delta \\
\chi_j &:= 2n\Delta +\frac{8}{\epsilon} \pare{\sum_{i=1}^{j-1}\Delta\chi_i+n\Delta} && j=2,\dots,k.
\end{align*}
For every vector $x^j$ in the sequence, it will be useful to partition the set $\{1,\dots,k\}$ into the two sets
\begin{align*}
Z^j := \bra{i \in \{1,\dots, k\} \mid x^{j}_i=0}, \qquad
N^j := \bra{i \in \{1,\dots, k\} \mid x^{j}_i \neq 0}.
\end{align*}
We start the sequence by setting $x^{0} := x^c.$
Now assume that we have constructed the vectors $x^{0},x^1,\dots, x^{j}.$
We state the next iteration of the algorithm.
In this iteration, either the algorithm sets $\ell := j$ and terminates, or it constructs the next vector $x^{j+1}.$
If $x^{j}$ satisfies $\abs{x^{j}_i}> \chi_{j+1}$ for every $i \in N^j$ then we set $\ell := j$ and terminate.
Otherwise, we have $N^j \neq \emptyset$ and $\abs{x^{j}_s} \le \chi_{j+1}$, where $s$ is an index such that $\abs{x^{j}_s}=\min \{\abs{x^{j}_i} \mid i \in N^j\}.$
If $\norminf{x^{j}}\le \Delta\abs{x^{j}_s}$, we set $\ell := j$ and terminate.
Otherwise, we have $\norminf{x^{j}}> \Delta\abs{x^{j}_s}.$
Then $x^{j}$ satisfies the assumptions of Claim~\ref{claim onestep}.
The next vector $x^{j+1}$ in the sequence is then defined to be the vector $x^b \in P$ in the statement of Claim~\ref{claim onestep}, invoked with $x^a = x^j$.
\smallskip
This concludes our definition of the sequence $x^{0},x^1,\dots, x^{\ell}$ of vectors in $P$.
Note that the sequence contains at most $k+1$ points, i.e., $\ell \le k$.
In fact, according to Claim~\ref{claim onestep}\ref{cond onestep1}, we know that $Z^{j+1}$ has at least one more element than $Z^{j}$, for every $j=0,1,\dots$.
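The control flow of the construction can be summarized in Python-style pseudocode. The sketch below is schematic only (ours, not part of the formal development): the step of Claim~\ref{claim onestep} is existential rather than algorithmic as stated, so we abstract it as a callable \texttt{one\_step}.
\begin{verbatim}
def build_x_ell(xc, chi, k, Delta, one_step):
    """Schematic rendering of the recursive construction of x^ell.
    chi is the list [chi_1, ..., chi_k]; one_step(xj) stands for the vector
    x^b of the claim above, applied with x^a = xj."""
    xj = list(xc)
    for j in range(k + 1):
        N = [i for i in range(k) if xj[i] != 0]
        # Termination 1: every nonzero entry among the first k coordinates
        # exceeds chi_{j+1} (vacuously true when N is empty).
        if all(abs(xj[i]) > chi[j] for i in N):
            return xj
        s = min(N, key=lambda i: abs(xj[i]))
        # Termination 2: the whole vector is already small.
        if max(abs(t) for t in xj) <= Delta * abs(xj[s]):
            return xj
        # Otherwise the claim applies: zero out coordinate s while moving
        # by at most Delta * |xj_s| in the infinity norm.
        xj = one_step(xj)
    return xj  # unreachable: Z^j grows at each step, so ell <= k
\end{verbatim}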
\subsection{Properties of the vector $x^\ell$}\label{sec property}
For ease of notation, we define
\begin{align*}
\psi_j &:= \sum_{i=1}^j \Delta \chi_i && j=1,\dots,k,
\end{align*}
and obtain an upper bound on $\psi_k$.
\begin{claim}\label{bound}
We have $\psi_k \le n\Delta(\frac{10\Delta}{\epsilon}+1)^k - n\Delta.$
\end{claim}
\begin{prfc}
The number $\psi_j+n\Delta$ can be upper bounded as follows:
\begin{align*}
\psi_j+n\Delta
&= 2n\Delta^2+\pare{\frac{8\Delta}{\epsilon}+1}(\psi_{j-1}+n\Delta) \\
& \le\Delta(\psi_{j-1}+n\Delta)+\pare{\frac{8\Delta}{\epsilon}+1}(\psi_{j-1}+n\Delta) \\
& \le \pare{\frac{9\Delta}{\epsilon}+1}(\psi_{j-1}+n\Delta).
\end{align*}
The equality holds by definition of $\psi_j$ and $\chi_j$.
The first inequality follows from the fact that $n\Delta \le \psi_i$ for every $i=1,\dots,k,$ while the second inequality is correct because $\Delta \le \frac{\Delta}{\epsilon}.$
Then we have
\begin{align*}
\psi_k+n\Delta \le \pare{\frac{9\Delta}{\epsilon}+1}^{k-1}(\psi_1+n\Delta).
\end{align*}
Since
\begin{align*}
\psi_1+n\Delta
=\frac{8n\Delta^2}{\epsilon}+2n\Delta^2+n\Delta
=n\Delta\pare{\frac{8\Delta}{\epsilon}+2\Delta+1}
\le n\Delta\pare{\frac{10\Delta}{\epsilon}+1},
\end{align*}
we get
\begin{equation*}
\psi_k+n\Delta
\le n\Delta\pare{\frac{9\Delta}{\epsilon}+1}^{k-1}\pare{\frac{10\Delta}{\epsilon}+1}
\le n\Delta\pare{\frac{10\Delta}{\epsilon}+1}^k. \tag*{\qed}
\end{equation*}
\end{prfc}
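The recursion for $\chi_j$ and $\psi_j$ and the closed-form bound of Claim~\ref{bound} are easy to check numerically; the following Python sketch (our own sanity check) evaluates both sides for a few parameter choices.
\begin{verbatim}
def psi_bound_check(n, Delta, k, eps):
    """Compute psi_k from the recursive definitions of chi_j and psi_j,
    and compare with the bound n*Delta*(10*Delta/eps + 1)**k - n*Delta."""
    psi = 0.0
    for _ in range(k):
        # chi_j = 2*n*Delta + (8/eps) * (psi_{j-1} + n*Delta)
        chi_j = 2 * n * Delta + (8.0 / eps) * (psi + n * Delta)
        psi += Delta * chi_j            # psi_j = psi_{j-1} + Delta * chi_j
    bound = n * Delta * (10.0 * Delta / eps + 1) ** k - n * Delta
    return psi, bound, psi <= bound

for (n, D, k, e) in [(3, 2, 4, 0.5), (5, 1, 3, 0.1), (2, 4, 6, 1.0)]:
    print(psi_bound_check(n, D, k, e))  # third entry is always True
\end{verbatim}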
In the next claim we highlight some properties of $x^\ell$ that will be used later.
\begin{claim}
\label{claim xell}
The vector $x^\ell$ satisfies the following properties:
\begin{enumerate}[label={(\alph*)}]
\item
\label{xell a}
$x^c-x^\ell \in P;$
\item
\label{xell b}
$\norminf{x^c-x^\ell} \le \psi_\ell;$
\item
\label{xell c}
At least one of the following holds:
\begin{enumerate}[label={(c-\arabic*)}]
\item
\label{xell c-1}
$\norminf{x^\ell} \le \Delta\chi_{\ell+1},$ with $\ell \le k-1;$
\item
\label{xell c-2}
$\abs{x^\ell_i} > \chi_{\ell+1}$ for every $i \in N^\ell$.
\end{enumerate}
\end{enumerate}
\end{claim}
\begin{prf}
\ref{xell a}.
We prove the stronger statement that $x^c-x^{j} \in P$ for every $j = 0,\dots,\ell,$ by induction on $j$.
The base case is $j=0$, and it holds since $x^c-x^{0} = 0 = x^d \in P$.
Next, we show the inductive step.
We assume that the result is true for $j=t,$ and we prove it for $j=t+1.$
From our definition of the sequence $x^{0},x^1,\dots, x^{\ell}$, the vector $x^{t+1}$ is obtained from $x^t$ as described in Claim~\ref{claim onestep}, where $x^a = x^{t}$ and $x^b = x^{t+1}$.
Claim~\ref{claim onestep}\ref{cond onestep3} implies that for $i =1,\dots,m$ there exist nonnegative scalars $\alpha_i,$ $\beta_i$ and vectors $v^i \in T(A, x^t,x^d)$ such that
\begin{align*}
x^c-x^{t+1} & = (x^c-x^{t})+(x^{t}-x^{t+1})=(x^c-x^{t})+\sum_{i=1}^m \beta_i v^i \\
x^c-x^{t+1} & = x^c-x^d-\sum_{i=1}^m \alpha_i v^i= x^c-\sum_{i=1}^m \alpha_i v^i.
\end{align*}
Clearly $x^c \in P$ and, from the induction hypothesis, $x^c-x^t \in P$ as well.
Then Lemma~\ref{lem 2} implies that $x^c-x^{t+1}\in P.$
This concludes our proof that $x^c-x^{j} \in P$ for every $j = 0,\dots,\ell.$
Therefore $x^c-x^\ell \in P$, concluding the proof of \ref{xell a}.
\smallskip
\ref{xell b}.
According to Claim~\ref{claim onestep}\ref{cond onestep2},
and from the definition of the sequence,
we know that
\begin{align*}
\norminf{x^{j-1}-x^{j}}\le \Delta\abs{x_s^{j-1}}\le \Delta\chi_j \qquad j=1, \dots, \ell.
\end{align*}
Thus, we have
\begin{align*}
\norminf{x^c-x^\ell}
\le \sum_{j=1}^{\ell}\norminf{x^{j-1}-x^{j}}
\le \sum_{j=1}^{\ell}\Delta\chi_j
=\psi_\ell.
\end{align*}
\smallskip
\ref{xell c}.
The claim follows from the definition of $x^\ell.$
In fact, since $x^\ell$ is the last point in the sequence, it must satisfy at least one of the two termination conditions.
If $\abs{x^\ell_i} > \chi_{\ell+1}$ for every $i \in N^\ell$, then \ref{xell c-2} holds and we are done.
Note that, if $\ell = k$, then $Z^\ell=\{1,\dots,k\},$ and this termination condition is triggered.
Otherwise, we have $\norminf{x^\ell}\le \Delta\abs{x^\ell_s}$ and $\ell \le k-1.$
Observing that $\abs{x^\ell_s} \le \chi_{\ell+1}$ from the construction of the sequence, we obtain \ref{xell c-1}.
\end{prf}
From \ref{xell c}, the vector $x^\ell$ satisfies at least one of the two properties \ref{xell c-1}, \ref{xell c-2}.
Next we show that, if $x^\ell$ satisfies \ref{xell c-1},
then Theorem~\ref{th main} holds with $x^* = x^d$ and $x^\star = x^c$.
So assume that the vector $x^\ell$ satisfies property \ref{xell c-1}.
We obtain
\begin{align*}
\norminf{x^c-x^d}
\le \norminf{x^c-x^\ell}+ \norminf{x^\ell-x^d}
\le \psi_\ell + \Delta\chi_{\ell+1}
=\psi_{\ell+1}\le \psi_k,
\end{align*}
where the second inequality follows from \ref{xell b} and \ref{xell c-1}.
Hence the distance between $x^d$ and $x^c$ is upper bounded by $\psi_k$, which is at most $n\Delta(\frac{10\Delta}{\epsilon}+1)^k$ from Claim~\ref{bound}.
As a consequence, in this case, we conclude the proof of Theorem~\ref{th main}\ref{th main 1} with $x^* = x^d$ and of Theorem~\ref{th main}\ref{th main 2} with $x^\star = x^c$.
Therefore, in the remainder of the proof, we assume that $x^\ell$ satisfies \ref{xell c-2}.
\subsection{Construction of the vector $x^*$}\label{sec integral point}
This section of the proof is devoted to the construction of the vector $x^*$ in the statement of Theorem~\ref{th main}\ref{th main 1}.
In particular, $x^*$ lies in a neighborhood of the vector $x^\ell$.
Denote by $\bar A x\le \bar b $ the system obtained from $Ax\le b$ by adding the inequalities
$x_i \le 0$, $-x_i \le 0$, for all $i \in Z^\ell$.
Note that the largest absolute value of a subdeterminant of $\bar A$ is $\Delta$.
Let
\begin{align*}
\bar P := \bra{x \in \mathbb R^n \mid \bar A x \le \bar b }.
\end{align*}
Note that $\bar P \subseteq P$ and that the vectors $x^\ell$ and $x^d$ are in $\bar P$.
Denote by
\begin{align*}
\bar T := T(\bar A,x^\ell,x^d).
\end{align*}
From Lemma~\ref{lem 1}, applied to $\bar A$, $x^\ell$, and $x^d$, we know that there exists a finite subset $\bar V$ of $\mathbb Z^n$ such that $\bar T = \operatorname{cone} \bar V,$ and for every $v \in \bar V$, we have $\norminf{v} \le \Delta$.
Since $x^\ell-x^d \in \bar T$, Caratheodory's theorem implies that there exist $m\le n$ vectors $v^1,\dots,v^m \in \bar V$ and $m$ positive scalars $\gamma_1,\dots, \gamma_m$ such that
\begin{align}
\label{will need this}
x^\ell-x^d = x^\ell =\sum_{i=1}^{m}\gamma_{i}v^{i}.
\end{align}
The following simple observation will be used twice in our proof.
\begin{observation}
\label{ob1}
For $i=1,\dots,m,$ let $\lambda_i \in \mathbb R$, such that $0\le \lambda_i \le \gamma_i$.
Then the vector $\sum_{i=1}^{m}\lambda_iv^i$ is in $\bar P.$
\end{observation}
\begin{prf}
Let $x:=\sum_{i=1}^{m}\lambda_iv^i$.
Since $x^d$ is the origin, we can write $x= x^d + \sum_{i=1}^{m}\lambda_iv^i$.
Using \eqref{will need this}, we can write $x$ also in the form
$
x=x^\ell-\sum_{i=1}^{m}(\gamma_i-\lambda_i) v^i.
$
From Lemma~\ref{lem 2}, applied with $\bar P$ and $\bar T$, we obtain $x \in \bar P.$
\end{prf}
We are now ready to define the vector $x^*$ as
\begin{align*}
x^*:=\sum_{i=1}^{m}\floor{\gamma_i} v^{i}.
\end{align*}
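The rounding step can be written out directly; in the Python sketch below (ours), \texttt{gamma} and the rows of \texttt{V} stand for the Caratheodory coefficients and generators of \eqref{will need this}.
\begin{verbatim}
import numpy as np

def round_conic_point(gamma, V):
    """Given x^ell = sum_i gamma_i v^i with integer generators v^i (rows of V),
    return x^* = sum_i floor(gamma_i) v^i; each fractional part is < 1, so
    ||x^ell - x^*||_inf <= sum_i ||v^i||_inf <= m * Delta."""
    floors = np.floor(np.asarray(gamma, dtype=float)).astype(int)
    return floors @ np.asarray(V, dtype=int)

gamma = [2.7, 0.4, 1.0]
V = np.array([[1, 0], [-1, 2], [0, 1]])
x_ell = np.array(gamma) @ V
x_star = round_conic_point(gamma, V)
print(x_star, np.max(np.abs(x_ell - x_star)))  # integer point near x_ell
\end{verbatim}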
\subsection{Properties of the vector $x^*$}
\label{sec property 2}
Note that $x^* \in \mathbb Z^n$ because $\floor{\gamma_i}$ and $v^{i}$ are all integer.
From Observation~\ref{ob1}, we have $x^* \in \bar P$.
The next claim introduces several properties of $x^*$ that will be used later.
\begin{claim}
\label{claim xstar}
The vector $x^*$ satisfies the following properties:
\begin{enumerate}[label={(\alph*)}]
\setcounter{enumi}{3}
\item
\label{xstar d}
$\abs{x_i^*} \ge \chi_{\ell+1}-n\Delta$ for every $i \in N^\ell$;
\item
\label{xstar e}
$\{i \in \{1,\dots,k\} \mid x^*_i=0\} = Z^\ell;$
\item
\label{xstar f}
$\norminf{x^c-x^*} \le \psi_\ell+n\Delta$;
\item
\label{xstar g}
$x^c-x^* \in P$.
\end{enumerate}
\end{claim}
\begin{prf}
In this proof we will be using the upper bound on $\norminf{x^\ell-x^*}$ given by
\begin{align}
\label{eq cnc7}
\norminf{x^\ell-x^*}=\norminfL{\sum_{i=1}^{m}(\gamma_i-\floor{\gamma_i}) v^i} \le \sum_{i=1}^{m}\norminf{v^i}\le m\Delta\le n\Delta.
\end{align}
Next, we prove the properties of $x^*$ in the statement of the claim.
\smallskip
\ref{xstar d}.
If $Z^\ell=\{1,\dots,k\}$ we are done, thus we assume $N^\ell \neq \emptyset.$
Let $i \in N^\ell$.
We have
\begin{align*}
\abs{x_i^\ell} = \abs{(x^\ell_i-x_i^*)+x^*_i} \le \abs{x^\ell_i-x_i^*} + \abs{x^*_i}.
\end{align*}
According to our assumption \ref{xell c-2}, we have $\abs{x^\ell_i} > \chi_{\ell+1}$, thus
\begin{align*}
\abs{x_i^*}
\ge \abs{x^\ell_i}-\abs{x^\ell_i-x_i^*}
> \chi_{\ell+1}-\norminf{x^\ell-x^*}
\ge \chi_{\ell+1}-n\Delta,
\end{align*}
where the last inequality holds by \eqref{eq cnc7}.
\smallskip
\ref{xstar e}.
Since the inequalities $x_i=0$, for $i \in Z^\ell,$ are valid for $\bar P$, and $x^* \in \bar P$, we know $\{i \in \{1,\dots,k\} \mid x^*_i=0\} \supseteq Z^\ell.$
On the other hand, given any index $i \in N^\ell$, we know from \ref{xstar d} that $\abs{x^*_i}\ge \chi_{\ell+1}-n\Delta>0$.
So $\{i \in \{1,\dots,k\} \mid x^*_i=0\} \subseteq Z^\ell.$
Thus we conclude that $\{i \in \{1,\dots,k\} \mid x^*_i=0\} = Z^\ell.$
\smallskip
\ref{xstar f}. This property follows directly from \ref{xell b} and \eqref{eq cnc7} as follows:
\begin{align*}
\norminf{x^c-x^*}\le\norminf{x^c-x^\ell}+\norminf{x^\ell-x^*} \le \psi_\ell+n\Delta.
\end{align*}
\smallskip
\ref{xstar g}.
Using the definition of $x^*$, the vector $x^c-x^*$ can be written as
\begin{align*}
& x^c-x^*
=x^c-\sum_{i=1}^m \floor{\gamma_i} v^i\\
& x^c-x^*
=(x^c-x^\ell)+(x^\ell-x^*)
=(x^c-x^\ell)+\sum_{i=1}^m (\gamma_i - \floor{\gamma_i}) v^i.
\end{align*}
Recall that we have $\gamma_i > 0$ for every $i=1,\dots,m$.
Furthermore, we have $x^c \in P$ and, from \ref{xell a}, $x^c-x^\ell \in P$.
Let
$T := T(A,x^\ell,x^d),$
and note that $\bar T \subseteq T,$ which implies $v^i \in T$ for every $i=1,\dots,m$.
Thus, from Lemma~\ref{lem 2}, applied with $P$ and $T$, we obtain that $x^c-x^* \in P.$
\end{prf}
Notice that \ref{xstar f} implies that $ \norminf{x^c-x^*} $ can be upper bounded by a function of $ n,\Delta,k,\epsilon $. In the next section, we will use \ref{xstar e} and \ref{xstar g} to show that the distance between $x^*$ and $x^d$ mainly depends on $ \abs{x^*_i} $ for $ i \in N^\ell $. In particular, when $ \abs{x^*_i} $ is large enough for every $ i \in N^\ell $, the vector $ x^* $ is a suitable approximation to $ x^d $. Together with \ref{xstar d}, this will imply that $ x^* $ is an $ \epsilon $-approximate solution to \eqref{pr IQP}.
\subsection{$x^*$ is an $\epsilon$-approximate solution to \eqref{pr IQP}}\label{sec analysis 1}
In this section we show that the vector $x^*$
is an $\epsilon$-approximate solution to \eqref{pr IQP}.
In Section~\ref{sec ub} we provide an upper bound for $ f(x^*)-f(x^d)$, while in Section~\ref{sec lb} we derive a lower bound for $ f_{\max}^d-f(x^d)$, where $ f_{\max}^d $ is the maximum value of $f$ on $P \cap \mathbb Z^n$.
In Section~\ref{sec ratio}, we use the two bounds to show that $x^*$ is an $\epsilon$-approximate solution to \eqref{pr IQP}.
\subsubsection{Upper bound on $ f(x^*)-f(x^d) $}
\label{sec ub}
\begin{claim}
\label{claim ratio 1}
We have $f(x^*)-f(x^d) \le 2(\psi_\ell+ n \Delta) \sum_{i \in N^\ell} q_i \abs{x^*_i}.$
\end{claim}
\begin{prf}
For ease of notation, let $\delta := x^c - x^*.$
We have
\begin{align*}
f(x^c)
& = f(\delta + x^*)
= \pare{ \sum_{i=1}^{k}-q_i \delta_i^2 + h^\mathsf T \delta }
+ \pare{ \sum_{i=1}^{k}-q_i (x^*_i)^2+h^\mathsf T x^* }
- 2\sum_{i=1}^{k} q_i\delta_i x^*_i \\
&=f(\delta)+f(x^*)- 2\sum_{i \in N^\ell} q_i\delta_i x^*_i
\ge f(x^c)+f(x^*)- 2\sum_{i \in N^\ell} q_i\delta_i x^*_i.
\end{align*}
In the last equality we used \ref{xstar e}.
Furthermore, in the last inequality we used $f(\delta) \ge f(x^c)$ since $\delta \in P$ from \ref{xstar g} and $x^c$ is an optimal solution to \eqref{pr QP}.
We obtain
\begin{align*}
f(x^*)
\le 2 \sum_{i \in N^\ell} q_i \delta_i x^*_i \le 2 \sum_{i \in N^\ell} q_i \abs{\delta_i} \abs{x^*_i}
\le 2 (\psi_\ell+n\Delta) \sum_{i \in N^\ell} q_i \abs{x^*_i},
\end{align*}
where the last inequality holds from \ref{xstar f}.
The claim follows by recalling that $f(x^d) = 0$ by Claim~\ref{claim eqt}.
\end{prf}
\subsubsection{Lower bound on $ f_{\max}^d-f(x^d) $}
\label{sec lb}
In this section we give a lower bound on $f_{\max}^d-f(x^d)$.
In our derivation, a fundamental role is played by the midpoint of $x^d$ and $x^*$, which we denote by $x^\triangle$, i.e.,
\begin{align*}
x^\triangle
:= \frac{x^d+x^*}{2}.
\end{align*}
We first give a lower bound on $f(x^\triangle)-f(x^d)$.
\begin{claim}
\label{claim a}
We have $f(x^\triangle)-f(x^d)\ge \frac{1}{4}\sum_{i \in N^\ell} q_i(x^*_i)^2.$
\end{claim}
\begin{prf}
The claim can be derived as follows:
\begin{align*}
f(x^\triangle) & = f \pare{ \frac{x^*}{2} }
= \frac 14 \sum_{i=1}^{k}-q_i (x^*_i)^2 + \frac 12 h^\mathsf T x^* \\
& = \pare{ \frac{1}{2}\sum_{i=1}^{k}-q_i(x^*_i)^2+\frac{1}{2}h^\mathsf T x^* } +\frac{1}{4}\sum_{i=1}^{k}q_i(x^*_i)^2 \\
& = \frac{1}{2}f(x^*) + \frac{1}{4}\sum_{i \in N^\ell} q_i(x^*_i)^2
\ge \frac{1}{4}\sum_{i \in N^\ell} q_i (x^*_i)^2.
\end{align*}
In the last equality we used \ref{xstar e}.
In the last inequality, we used $f(x^*) \ge f(x^d) = 0$, which holds because $x^d$ is optimal to \eqref{pr IQP} and $x^*$ is feasible to the same problem.
\end{prf}
Recall that the goal of this section is to obtain a lower bound on $f_{\max}^d-f(x^d)$.
Since both $x^d$ and $x^*$ are in $P$, the vector $x^\triangle$ is in $P$ as well.
Therefore, if $x^\triangle \in \mathbb Z^n$, then $f_{\max}^d \ge f(x^\triangle)$, and the bound of Claim~\ref{claim a} yields a bound on $f_{\max}^d-f(x^d)$.
However, $x^\triangle$ is not always an integer vector.
Thus we define two integer points $x^l$ and $x^r$ whose midpoint is $x^\triangle$:
\begin{align*}
x^l & :=
x^d+\sum_{i \mid \floor{\gamma_i} \text{ odd}} \frac{\floor{\gamma_i}-1}{2} v^i
+\sum_{i \mid \floor{\gamma_i} \text{ even}} \frac{\floor{\gamma_i}}{2} v^i\\
x^r & :=
x^d+\sum_{i \mid \floor{\gamma_i} \text{ odd}} \frac{\floor{\gamma_i}+1}{2} v^i
+\sum_{i \mid \floor{\gamma_i} \text{ even}} \frac{\floor{\gamma_i}}{2} v^i.
\end{align*}
We now show that both $x^l$ and $x^r$ are in $\bar P \cap \mathbb Z^n$.
Clearly, $0 \le \frac{\floor{\gamma_i}}{2} \le \gamma_i$.
Furthermore, if $\floor{\gamma_i}$ is odd, we have $\floor{\gamma_i} \ge 1$, which implies $0 \le \frac{\floor{\gamma_i}-1}{2} \le \frac{\floor{\gamma_i}+1}{2} \le \gamma_i.$
By Observation~\ref{ob1}, we know that both $x^l$ and $x^r$ are in $\bar P$.
Since all coefficients $\frac{\floor{\gamma_i} \pm 1}{2}$ and $\frac{\floor{\gamma_i}}{2}$ are integer, we conclude that both $x^l$ and $x^r$ are in $\bar P \cap \mathbb Z^n$.
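The parity-based splitting can also be made concrete; in the sketch below (ours; recall that $x^d$ is the origin by Claim~\ref{claim eqt}), \texttt{gamma} and \texttt{V} are as in the previous sketch, and we verify that $x^l + x^r = x^d + x^* = 2x^\triangle$.
\begin{verbatim}
import math
import numpy as np

def split_midpoints(gamma, V):
    """Integer points x^l, x^r (with x^d = 0) whose midpoint is
    x^triangle = (1/2) * sum_i floor(gamma_i) v^i."""
    V = np.asarray(V, dtype=int)
    cl, cr = [], []
    for g in gamma:
        fg = math.floor(g)
        if fg % 2 == 1:          # odd floor: coefficients (fg-1)/2, (fg+1)/2
            cl.append((fg - 1) // 2)
            cr.append((fg + 1) // 2)
        else:                    # even floor: both coefficients equal fg/2
            cl.append(fg // 2)
            cr.append(fg // 2)
    return np.array(cl) @ V, np.array(cr) @ V

gamma = [2.7, 3.2, 1.0]
V = np.array([[1, 0], [-1, 2], [0, 1]])
xl, xr = split_midpoints(gamma, V)
x_star = np.floor(gamma).astype(int) @ V
print(np.array_equal(xl + xr, x_star))  # midpoint of x^l, x^r is x^triangle
\end{verbatim}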
Let $D \subset \mathbb R^n$ be the smallest box containing $x^l$ and $x^r$, i.e.,
\begin{align*}
D := [\min\{x^l_1,x^r_1\}, \max\{x^l_1,x^r_1\}] \times \dots \times [\min\{x^l_n,x^r_n\}, \max\{x^l_n,x^r_n\}].
\end{align*}
In the remainder of the proof we denote by $q : \mathbb R^n \to \mathbb R$ the quadratic part of the objective function $f$, i.e.,
\begin{align*}
q(x):=\sum_{i=1}^{k}-q_ix_i^2.
\end{align*}
We also define the affine function $\lambda : \mathbb R^n \to \mathbb R$ which achieves the same value as $q$ at the vertices of the box $D$:
\begin{align*}
\lambda(x) := \sum_{i=1}^{k} -q_i \big( (x^l_i+x^r_i)x_i - x^l_ix^r_i \big).
\end{align*}
We have the following claim.
\begin{claim}
\label{claim lambdabound}
For every $x\in D$ we have
$
\lambda(x)\le q(x)
\le \lambda(x)+\frac{(n\Delta)^2}{4}\sum_{i \in N^\ell} q_i.
$
\end{claim}
\begin{prf}
Since $\lambda$ achieves the same value as $q$ at each vertex of $D$ and $q$ is a concave function, we have
$\lambda(x) \le q(x)$, for every $x \in D$.
Using the definitions of $q$ and $\lambda$ we obtain
\begin{align*}
q(x)-\lambda(x) &
= \sum_{i=1}^{k} -q_i \pare{ x_i^2 - (x^l_i+x^r_i)x_i + x^l_ix^r_i }
=\sum_{i=1}^{k}-q_i(x_i-x^l_i)(x_i-x^r_i) \\
& \le \frac{1}{4}\sum_{i \in N^\ell} q_i(x^l_i-x^r_i)^2.
\end{align*}
The inequality holds because, for each $i=1,\dots,k$, the univariate quadratic function $-q_i(x_i-x^l_i)(x_i-x^r_i)$ achieves its maximum at $ \frac{x^l_i+x^r_i}{2}.$
In particular, if $i \in Z^\ell,$ the maximum is 0.
This is because both $x^l$ and $x^r$ are in $\bar P,$ which implies $x_i^l = x_i^r = 0$ for every $i \in Z^\ell$.
From the definition of $x^l$ and $x^r$, we obtain $\norminf{x^r-x^l} = \norminfL{\sum_{i \mid \floor{\gamma_i} \text{ odd}} v^i} \le \sum_{i=1}^{m}\norminf{v^i} \le m\Delta \le n\Delta$.
Therefore, we have $q(x)\le \lambda(x)+ \frac{(n\Delta)^2}{4} \sum_{i \in N^\ell} q_i$.
\end{prf}
\begin{claim}
\label{claim b}
There exists $\tilde x \in \{x^l, x^r\}$ such that
$
f(\tilde{x}) - f(x^\triangle) \ge - \frac{(n\Delta)^2}{4}\sum_{i \in N^\ell} q_i.
$
\end{claim}
\begin{prfc}
Let $g : \mathbb R^n \to \mathbb R$ be defined by $g(x):=\lambda(x)+h^\mathsf T x$.
Claim~\ref{claim lambdabound} implies that, for every $x\in D$, we have
\begin{equation*}
g(x)\le f(x) \le g(x)+\frac{(n\Delta)^2}{4}\sum_{i \in N^\ell}q_i.
\end{equation*}
Since $g$ is a linear function and $x^\triangle$ is the midpoint of $x^l$ and $x^r$, we know that $g(x^\triangle) \le g(\tilde x)$ for some $\tilde x \in \{ x^l, x^r \}$.
We derive the following relation:
\begin{equation*}
f(x^\triangle) \le g(x^\triangle)+\frac{(n\Delta)^2}{4}\sum_{i \in N^\ell}q_i
\le g(\tilde{x})+\frac{(n\Delta)^2}{4}\sum_{i \in N^\ell}q_i
\le f(\tilde{x})+\frac{(n\Delta)^2}{4}\sum_{i \in N^\ell}q_i. \tag*{\qed}
\end{equation*}
\end{prfc}
We are finally ready to state our lower bound on $f_{\max}^d-f(x^d)$.
\begin{claim}
\label{claim ratio 2}
We have
$
f_{\max}^d-f(x^d)\ge \frac{1}{4} \sum_{i \in N^\ell} q_i \big( (x^*_i)^2-(n\Delta)^2 \big).
$
\end{claim}
\begin{prfc}
Combining Claim~\ref{claim a} and Claim~\ref{claim b}, we have
\begin{align*}
f_{\max}^d - f(x^d)
& \ge f(\tilde{x}) - f(x^d)
= \pare{ f(\tilde{x})-f(x^\triangle) } + \pare{ f(x^\triangle)-f(x^d) } \\
& \ge \frac{1}{4} \sum_{i \in N^\ell}
q_i \pare{ (x^*_i)^2 - (n\Delta)^2 }. \tag*{\qed}
\end{align*}
\end{prfc}
\subsubsection{$x^*$ is an $\epsilon$-approximate solution}
\label{sec ratio}
In order to prove that $x^*$ is an $\epsilon$-approximate solution, we
first prove the following observation.
\begin{observation}\label{ob2}
Let $a_i,b_i>0$, for $i=1,\dots,k.$
Then $ \frac{\sum_{i=1}^{k}a_i}{\sum_{i=1}^{k}b_i}\le \max_{i = 1,\dots,k}\frac{a_i}{b_i} $.
\end{observation}
\begin{prf}
To prove this statement, let $j \in \{1,\dots,k\}$ such that $\frac{a_j}{b_j} = \max_{i = 1,\dots,k}\frac{a_i}{b_i}$.
Then,
\begin{align*}
\frac{\sum_{i=1}^{k}a_i}{\sum_{i=1}^{k}b_i}-\frac{a_j}{b_j}
=\frac{\sum_{i=1}^{k}(b_j a_i-a_j b_i)}{b_j \sum_{i=1}^{k}b_i}.
\end{align*}
We only need to show that the right-hand side of the latter equation is nonpositive.
To see this, notice that,
for $i=1,\dots,k$, we have $\frac{a_i}{b_i} \le \frac{a_j}{b_j}$, thus
$b_j a_i - a_j b_i\le 0.$
\end{prf}
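Observation~\ref{ob2} is the classical mediant inequality; the following one-off numerical check in Python (ours) exercises it on random positive data.
\begin{verbatim}
import random

random.seed(1)
for _ in range(1000):
    a = [random.uniform(0.1, 10) for _ in range(5)]
    b = [random.uniform(0.1, 10) for _ in range(5)]
    # sum(a)/sum(b) never exceeds the largest componentwise ratio a_i/b_i
    assert sum(a) / sum(b) <= max(x / y for x, y in zip(a, b)) + 1e-12
print("mediant inequality verified on random positive data")
\end{verbatim}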
\begin{claim}
\label{claim solution}
The vector $x^*$ is an $\epsilon$-approximate solution to \eqref{pr IQP}.
\end{claim}
\begin{prf}
Consider first the case $Z^\ell=\{1,\dots,k\}.$
Then by \ref{xstar e}, we know that $x^*_i=0$, for $i=1,\dots,k.$
In this case, from Claim~\ref{claim ratio 1}, we know that $f(x^*)\le 0.$
By Claim~\ref{claim eqt}, this implies that $x^*$ is an optimal solution to \eqref{pr IQP}.
Now assume that $Z^\ell \subset \{1,\dots,k\}$, i.e., $N^\ell \neq \emptyset$.
Observe that the quantity $f_{\max}^d-f(x^d)$ in the definition of $\epsilon$-approximate solution is positive.
This follows from Claim~\ref{claim ratio 2}, since for $i \in N^\ell$, we have $q_i > 0$ by assumption and $\abs{x_i^*} > n \Delta$ from \ref{xstar d}.
Therefore, we consider the ratio $\frac{f({x^*}) - f(x^d)}{f_{\max}^d-f(x^d)}$, and our aim is to show that it is upper bounded by $\epsilon$.
Using Claim~\ref{claim ratio 1}, Claim~\ref{claim ratio 2}, and Observation~\ref{ob2}, we derive the following bound:
\begin{align*}
\frac{f({x^*}) - f(x^d)}{f_{\max}^d-f(x^d)}
& \le 8(\psi_\ell+ n \Delta) \frac{\sum_{i \in N^\ell}q_i\abs{x_i^*}}{\sum_{i \in N^\ell}q_i((x^*_i)^2-(n\Delta)^2 )} \\
& \le 8(\psi_\ell+ n \Delta) \max_{i \in N^\ell} \frac{\cancel{q_i}\abs{x_i^*}}{\cancel{q_i}((x^*_i)^2-(n\Delta)^2)}.
\end{align*}
In particular, the latter max can be written in the form
\begin{align*}
\max_{i \in N^\ell}\frac{\abs{x_i^*}}{(x^*_i)^2-(n\Delta)^2}
= \max_{i \in N^\ell} \frac{1}{\abs{x^*_i} - \frac{(n\Delta)^2}{\abs{x^*_i}}}.
\end{align*}
In the right-hand side, the denominator is always positive due to $\abs{x_i^*} > n \Delta$.
Let $s$ be an index in $N^\ell$ that achieves $\min_{i \in N^\ell}\abs{x_i^*}$.
Note that such an index exists because of our assumption $N^\ell \neq \emptyset$.
Then the max is achieved at the index $s$.
In fact, in the denominator of the right-hand side, the term $\abs{x^*_i}$ is minimized by $s$, while the term $\frac{(n\Delta)^2}{\abs{x^*_i}}$ is maximized by $s$.
From \ref{xstar d}, we have
$\abs{x_s^*} \ge \chi_{\ell+1}-n\Delta= \frac{8}{\epsilon}(\psi_\ell+n\Delta) + n\Delta,$ where the equality can be obtained using the definition of $\chi_{\ell+1}$ and of $\psi_\ell$.
We obtain
\begin{align*}
\frac{f({x^*}) - f(x^d)}{f_{\max}^d-f(x^d)}
& \le
\frac{8(\psi_\ell+ n \Delta)}{\abs{x^*_s} - \frac{(n\Delta)^2}{\abs{x^*_s}}}
\le \frac{8(\psi_\ell+ n \Delta)}{\frac{8}{\epsilon}(\psi_\ell+ n \Delta)+n\Delta-\frac{(n\Delta)^2}{\frac{8}{\epsilon}(\psi_\ell+ n \Delta)+n\Delta} } \\
&
=\frac{\cancel{8(\psi_\ell+n\Delta)}\pare{\frac{8}{\epsilon}(\psi_\ell+n\Delta)+n\Delta}}{\cancel{8(\psi_\ell+n\Delta)}\pare{\frac{8}{\epsilon^2}(\psi_\ell+n\Delta)+\frac{2}{\epsilon}n\Delta}} \\
& =\epsilon\frac{8(\psi_\ell+n\Delta)+n\Delta\epsilon}{8(\psi_\ell+n\Delta)+2n\Delta\epsilon} < \epsilon,
\end{align*}
where the first equality can be obtained by multiplying the numerator and the denominator by $\frac{8}{\epsilon}(\psi_\ell+ n \Delta)+n\Delta.$
This implies that $x^*$ is an $\epsilon$-approximate solution to \eqref{pr IQP}.
\end{prf}
From \ref{xstar f}, we know that
$\norminf{x^c-x^*}\le \psi_\ell+n\Delta \le \psi_k+n\Delta.$
Furthermore, from Claim~\ref{bound}, we have $\psi_k + n\Delta \le n\Delta(\frac{10\Delta}{\epsilon}+1)^k.$
This completes the proof of Theorem~\ref{th main}\ref{th main 1}.
Therefore, in the remainder of the proof we only need to show Theorem~\ref{th main}\ref{th main 2}.
\subsection{Construction of the vector $x^\star$}
\label{sec star}
In this section we introduce the vector $x^\star$ in the statement of Theorem~\ref{th main}\ref{th main 2}.
The point $x^\star$ is defined by
\begin{align*}
x^\star:=x^c-x^*.
\end{align*}
From the definition of $x^*$, we obtain
\begin{align*}
x^\star
=x^c-\sum_{i=1}^{m}\floor{\gamma_i} v^i=
x^c-x^\ell+\sum_{i=1}^{m}(\gamma_i-\floor{\gamma_i})v^i.
\end{align*}
We know that $x^c \in P,$ and, from \ref{xell a}, we know that $x^c-x^\ell \in P$ as well.
Thus, from Lemma~\ref{lem 2}, applied with $P$ and $T,$ we know that $x^\star \in P.$
\subsection{$x^\star$ is an $\epsilon$-approximate solution to \eqref{pr QP}}\label{sec analysis 2}
In this section we show that the vector $x^\star$ is an $\epsilon$-approximate solution to \eqref{pr QP}.
To do this, we first give an upper bound on $f(x^\star)-f(x^c)$, and then a lower bound on $f_{\max}^c-f(x^c)$, where $f_{\max}^c$ is the maximum value of $f$ on $P$.
The two bounds are then used to show that $x^\star$ is an $\epsilon$-approximate solution to \eqref{pr QP}.
\begin{claim}\label{rub}
We have $f(x^\star)-f(x^c) \le 2(\psi_\ell+ n \Delta) \sum_{i \in N^\ell} q_i \abs{x^*_i}.
$
\end{claim}
\begin{prf}
First, we derive an upper bound on $f(x^\star)$.
According to the definition of $x^\star$, we get
\begin{align*}
f(x^\star) & = f(x^c-x^*) \\
& = \pare{-\sum_{i=1}^{k}q_i(x^c_i)^2+h^\mathsf T x^c} + \pare{-\sum_{i=1}^{k}q_i(x^*_i)^2-h^\mathsf T x^*}+
2\sum_{i=1}^{k}q_ix^c_ix^*_i \\
& =f(x^c)+f(-x^*)+2\sum_{i \in N^\ell}q_ix^c_ix^*_i \\
& =f(x^c)+f(-x^*)+2\sum_{i \in N^\ell}q_i(x^*_i+x^\star_i)x^*_i,
\end{align*}
where in the third equality we used \ref{xstar e}.
To derive from the above formula an upper bound on $f(x^\star)-f(x^c)$, we need to upper bound $f(-x^*)$.
Since $x^d$ is the optimal solution to \eqref{pr IQP} and $x^* \in P\cap \mathbb Z^n,$ we know that
\begin{align*}
f(x^*)=-\sum_{i=1}^k q_i(x^*_i)^2+h^\mathsf T x^* \ge f(x^d)=0.
\end{align*}
Thus, we get
\begin{align*}
f(-x^*)
& =-\sum_{i=1}^{k}q_i(x^*_i)^2-h^\mathsf T x^*
\le f(x^*) - \sum_{i=1}^{k}q_i(x^*_i)^2-h^\mathsf T x^*\\
& =-2\sum_{i=1}^{k}q_i(x^*_i)^2
=-2\sum_{i \in N^\ell}q_i(x^*_i)^2.
\end{align*}
We obtain
\begin{align*}
f(x^\star)-f(x^c) &
\le -2\sum_{i \in N^\ell}q_i(x^*_i)^2+2\sum_{i \in N^\ell}q_i(x^*_i+x^\star_i)x^*_i
=2\sum_{i \in N^\ell}q_ix^\star_ix^*_i \\
& \le 2\sum_{i \in N^\ell}q_i \abs{x^\star_i} \abs{x^*_i}
\le 2(\psi_\ell+ n \Delta) \sum_{i \in N^\ell} q_i \abs{x^*_i}.
\end{align*}
The last inequality holds because, from \ref{xstar f}, we have $\norminf{x^\star}=\norminf{x^c-x^*}\le \psi_\ell+n\Delta.$
\end{prf}
\begin{claim}\label{rlb}
We have
$
f_{\max}^c-f(x^c)\ge \frac{1}{4}\sum_{i \in N^\ell}q_i(x^*_i)^2.
$
\end{claim}
\begin{prfc}
Define the midpoint of $x^c$ and $x^\star$ as
\begin{align*}
x^\diamond:=\frac{x^c+x^\star}{2}.
\end{align*}
Then
\begin{align*}
f(x^\diamond) & = f \pare{ \frac{x^c+x^\star}{2} }
= \frac{1}{4}\sum_{i=1}^{k}-q_i \big( (x^c_i)^2+2x^c_ix^\star_i+(x^\star_i)^2 \big) +\frac 12 h^\mathsf T ( x^c + x^\star) \\
& = \pare{ \frac{1}{2}\sum_{i=1}^{k}-q_i(x^c_i)^2+\frac{1}{2}h^\mathsf T x^c } + \pare{ \frac{1}{2}\sum_{i=1}^{k}-q_i(x^\star_i)^2+\frac{1}{2}h^\mathsf T x^\star } +\frac{1}{4}\sum_{i=1}^{k}q_i(x^\star_i-x^c_i)^2 \\
& = \frac{1}{2}f(x^c)+\frac{1}{2}f(x^\star) + \frac{1}{4}\sum_{i \in N^\ell}q_i(x^*_i)^2
\ge f(x^c) + \frac{1}{4}\sum_{i \in N^\ell} q_i (x^*_i)^2.
\end{align*}
In the last inequality, we used $f(x^\star) \ge f(x^c)$, which holds because $x^c$ is optimal to \eqref{pr QP} and $x^\star$ is feasible to the same problem.
Since $x^\diamond$ is the midpoint of $x^\star$ and $x^c$, we know that $x^\diamond \in P.$
Thus we have
\begin{equation*}
f_{\max}^c-f(x^c)\ge f(x^\diamond)-f(x^c) \ge \frac{1}{4}\sum_{i \in N^\ell} q_i (x^*_i)^2. \tag*{\qed}
\end{equation*}
\end{prfc}
\begin{claim}\label{rresult}
The vector $x^\star$ is an $\epsilon$-approximate solution to \eqref{pr QP}.
\end{claim}
\begin{prf}
As in the proof of Claim~\ref{claim solution}, it is simple to check that the quantity $f_{\max}^c-f(x^c)$ in the definition of $\epsilon$-approximate solution is positive.
This allows us to consider the ratio $\frac{f({x^\star}) - f(x^c)}{f_{\max}^c-f(x^c)}$, and our aim is to show that it is upper bounded by $\epsilon$.
Using Claim~\ref{rub}, Claim~\ref{rlb}, and Observation~\ref{ob2}, we can derive the following bound:
\begin{align*}
\frac{f({x^\star}) - f(x^c)}{f_{\max}^c-f(x^c)}
& \le 8(\psi_\ell+ n \Delta) \frac{\sum_{i \in N^\ell}q_i\abs{x_i^*}}{\sum_{i \in N^\ell}q_i(x^*_i)^2} \\
& \le 8(\psi_\ell+ n \Delta) \max_{i \in N^\ell} \frac{\cancel{q_i}\abs{x_i^*}}{\cancel{q_i}(x^*_i)^2}=\frac{8(\psi_\ell+ n \Delta)}{\abs{x_s^*}},
\end{align*}
where $s$ is an index such that $\abs{x^*_s} = \min \{\abs{x^*_i} \mid i \in N^\ell\}.$
From \ref{xstar d}, we have
$\abs{x_s^*} \ge \chi_{\ell+1}-n\Delta$, and using the definition of $\chi_{\ell+1}$ the latter quantity equals $n\Delta +\frac{8}{\epsilon}(\psi_\ell+n\Delta).$
We get
\begin{align*}
\frac{8(\psi_\ell+ n \Delta)}{\abs{x^*_s}}
\le \frac{8(\psi_\ell+ n \Delta)}{n\Delta +\frac{8}{\epsilon}(\psi_\ell+n\Delta)}<\frac{8(\psi_\ell+ n \Delta)}{\frac{8}{\epsilon}(\psi_\ell+n\Delta)}= \epsilon.
\end{align*}
This implies that $x^\star$ is an $\epsilon$-approximate solution to \eqref{pr QP}.
\end{prf}
From the definition of $x^\star$ and from \ref{xstar f}, we obtain
\begin{align*}
\norminf{x^\star-x^d}= \norminf{x^c-x^*}\le \psi_\ell+n\Delta \le \psi_k+n\Delta.
\end{align*}
Moreover, from Claim~\ref{bound}, we have $\psi_k+n\Delta \le n\Delta(\frac{10\Delta}{\epsilon}+1)^k.$
This completes the proof of Theorem~\ref{th main}\ref{th main 2}, and of Theorem~\ref{th main}.
\section{Lower bounds on the distance of solutions}
\label{sec tight}
In this section we discuss how far from optimal the proximity bounds in Theorem~\ref{th main} are.
The main ingredient in the derivation of our lower bounds is a polyhedron $\bar P$ that we introduce next.
\begin{definition}
\label{def poly}
For every
$n, \Delta, t \in \mathbb Z$ with
$n\ge 1$,
$\Delta \ge 1$,
$t \ge 0$,
and
$\beta \in (0, 1)$,
let $\bar P \subset \mathbb R^n$ be the polyhedron defined by the following inequalities:
\begin{align*}
& -t \le x_1-\Delta \sum_{i=2}^n x_i \le t \\
& 0 \le x_i \le \beta & i=2,\dots, n.
\end{align*}
\end{definition}
Clearly, the polyhedron $\bar P$ has dimension $n$ if $t \ge 1$.
Note that $\bar P$ can be obtained from the polytope
\begin{align*}
\bar P_y := \{(y_1, x_2, \dots,x_n) \mid -t \le y_1 \le t, \ 0 \le x_i \le \beta, i=2,\dots,n \}
\end{align*}
by replacing variable $y_1$ with $x_1 := y_1 + \Delta \sum_{i=2}^n x_i.$
The vertices of the polytope $\bar P_y$ are all
vectors with components $y_1 = \pm t$, and $x_i \in \{0,\beta\}$ for $i=2,\dots,n$.
Therefore, $\bar P$ is bounded and its vertices are all
vectors with components $x_i \in \{0,\beta\}$ for $i=2,\dots,n$, and component $x_1 = \pm t + \Delta \sum_{i=2}^n x_i$.
In particular, the vertex with the largest $x_1$ is
\begin{align*}
v:=\pare{t+(n-1)\beta\Delta,\beta,\dots,\beta},
\end{align*}
and will play an important role in our arguments.
Next, we focus on the integer points in $\bar P$.
Since $\beta < 1$, any vector in $\bar P \cap \mathbb Z^n$ satisfies $x_i=0$, for $i=2,\dots, n$, and its first component is one of $-t,-t+1,\dots,t$.
Since $t \ge 0$, the set $\bar P \cap \mathbb Z^n$ contains the origin and is therefore nonempty.
In particular, the integer point in $\bar P$ with the largest $x_1$ is
\begin{align*}
u:=(t,0,\dots,0).
\end{align*}
The vectors $u$ and $-u$ will often be used in the later proofs.
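For small parameters, the structure of $\bar P$ is easy to inspect by brute force; the Python sketch below (ours) enumerates the vertices via the change of variables above and the integer points directly, recovering the vectors $v$ and $u$.
\begin{verbatim}
from itertools import product

def pbar_vertices(n, Delta, t, beta):
    """Vertices of P_bar: x_i in {0, beta} for i >= 2, and
    x_1 = +-t + Delta * sum_{i>=2} x_i."""
    verts = []
    for tail in product([0.0, beta], repeat=n - 1):
        for s in (-t, t):
            verts.append((s + Delta * sum(tail),) + tail)
    return verts

def pbar_integer_points(n, t):
    """Integer points of P_bar (for beta < 1): x_i = 0 for i >= 2,
    x_1 in {-t, ..., t}."""
    return [(x1,) + (0,) * (n - 1) for x1 in range(-t, t + 1)]

n, Delta, t, beta = 3, 2, 1, 0.5
v = max(pbar_vertices(n, Delta, t, beta), key=lambda x: x[0])
u = max(pbar_integer_points(n, t))
print(v)   # (t + (n-1)*beta*Delta, beta, ..., beta) = (3.0, 0.5, 0.5)
print(u)   # (t, 0, ..., 0) = (1, 0, 0)
\end{verbatim}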
\begin{observation}
\label{obs delta}
Let $A$ be the constraint matrix defining $\bar P$.
Then each
subdeterminant of $A$ is in $\{0, \pm 1, \pm \Delta\}$.
\end{observation}
\begin{prf}
The constraint matrix of the system defining $\bar P$ is
\begin{align*}
A =
\begin{pmatrix}
1 & -\Delta_{n-1}^\mathsf T \\
-1 & \Delta_{n-1}^\mathsf T \\
0_{n-1} & I_{n-1} \\
0_{n-1} & -I_{n-1}
\end{pmatrix},
\end{align*}
where $I_{n-1}$ denotes the $(n-1) \times (n-1)$ identity matrix, and $0_{n-1}$ (resp.~$\Delta_{n-1}$) denotes the $(n-1)$-dimensional vector with all entries equal to zero (resp.~$\Delta$).
Let $d$ be the determinant of a square submatrix $M$ of $A$.
If $M$ has linearly dependent rows, then $d=0$.
Thus we now assume that $M$ does not have linearly dependent rows.
Up to multiplying rows of $M$ by $-1$, an operation that can only change the sign of the determinant $d$, the matrix $M$ is a submatrix of
\begin{align*}
\begin{pmatrix}
1 & -\Delta_{n-1}^\mathsf T \\
0_{n-1} & I_{n-1} \\
\end{pmatrix}.
\end{align*}
It is well known that appending unit rows to a matrix changes its set of possible subdeterminants only by adding $0$ and $\pm 1$ and by flipping signs.
Therefore, either $d \in \{0,\pm 1\}$, or, up to sign, $d$ is a subdeterminant of the matrix
$
\begin{pmatrix}
1 & -\Delta_{n-1}^\mathsf T
\end{pmatrix}.
$
The latter matrix has a single row, and its subdeterminants are its entries $1$ and $-\Delta$.
\end{prf}
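Observation~\ref{obs delta} can also be confirmed numerically for small $n$ by brute force over all square submatrices; the Python check below is our own illustration.
\begin{verbatim}
from itertools import combinations
import numpy as np

def all_subdets(A):
    """All determinants of square submatrices of A (integer entries)."""
    m, n = A.shape
    vals = set()
    for r in range(1, min(m, n) + 1):
        for rows in combinations(range(m), r):
            for cols in combinations(range(n), r):
                vals.add(int(round(np.linalg.det(A[np.ix_(rows, cols)]))))
    return vals

n, Delta = 3, 4
A = np.vstack([
    np.hstack([[1], -Delta * np.ones(n - 1)]),
    np.hstack([[-1], Delta * np.ones(n - 1)]),
    np.hstack([np.zeros((n - 1, 1)), np.eye(n - 1)]),
    np.hstack([np.zeros((n - 1, 1)), -np.eye(n - 1)]),
])
print(all_subdets(A) <= {0, 1, -1, Delta, -Delta})  # True
\end{verbatim}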
For brevity, in this section, we say that an instance of \eqref{pr IQP} or \eqref{pr QP} has \emph{subdeterminant $\Delta$} if the maximum of the absolute values of the subdeterminants of the constraint matrix $A$ is $\Delta$.
\subsection{Tightness in Integer Linear Programming}
In this section we consider our problems \eqref{pr IQP} and \eqref{pr QP} under the additional assumption $k=0$.
In this special case, \eqref{pr IQP} is a general Integer Linear Programming (ILP) problem, while \eqref{pr QP} is the corresponding Linear Programming (LP) problem, also known as the standard linear relaxation of (ILP).
We remark that, for $k=0$, Theorem~\ref{th main} reduces to the proximity bound by Cook et al.~\cite{CooGerSchTar86} for Integer Linear Programming.
In particular, this result yields the upper bound
\begin{align*}
\min \bra{ \norminf{x^c-x^d} \mid x^d \text{ opt.~to (ILP)}, \ x^c \text{ opt.~to (LP)} } \le n \Delta.
\end{align*}
The impact of the polytope $\bar P$
is immediately apparent, as it allows us to prove that the above upper bound
$n \Delta$ is asymptotically best possible.
To the best of our knowledge this tightness result was previously known only for $\Delta = 1$ \cite{SchBookIP,PaaWeiWel18}.
\begin{proposition}
\label{prop ILP ub}
For every
$n, \Delta \in \mathbb Z$ with
$n\ge 1$,
$\Delta \ge 1$, and
$\beta \in (0, 1)$,
there exists an instance of (ILP)
with subdeterminant $\Delta$ for which
\begin{align*}
\min \bra{ \norminf{x^c-x^d} \mid x^d \text{ opt.~to \textnormal{(ILP)}}, \ x^c \text{ opt.~to \textnormal{(LP)}}} = (n-1)\beta\Delta \in \Omega(n\Delta).
\end{align*}
\end{proposition}
\begin{prf}
Let $n$, $\Delta$, $\beta$ be as in the statement.
Consider the (ILP) problem
\begin{align}
\label{eq IQP tight}
\begin{split}
\max \ & x_1 \\
\textnormal{s.t.} \ & x \in \bar P \cap \mathbb Z^n,
\end{split}
\end{align}
where the parameter $t$ in the definition of $\bar P$ can be chosen to be any integer greater than or equal to zero.
From Observation~\ref{obs delta}, problem \eqref{eq IQP tight} has subdeterminant $\Delta$.
The unique optimal solution of \eqref{eq IQP tight} is the vector $u$, while the unique optimal solution of the corresponding (LP) is the vertex $v$ of $\bar P$.
We obtain $\norminf{v-u} = (n-1)\beta\Delta$.
\end{prf}
\subsection{Lower bounds in Integer Quadratic Programming}
We now return to the general case of \eqref{pr IQP}, where $k$ can be positive.
In this setting, even for $k=1$, problem \eqref{pr ex} in Example~\ref{ex no bound} shows that it is not possible to upper bound the distance
\begin{align*}
\min \bra{ \norminf{x^c-x^d} \mid x^d \text{ opt.~to } \eqref{pr IQP}, \ x^c \text{ opt.~to } \eqref{pr QP} }
\end{align*}
with a function that depends only on $n$ and $\Delta$.
Therefore, we focus instead on the two quantities
\begin{align*}
\delta^*_\epsilon &
:= \min \bra{ \norminf{x^c-x^*} \mid x^* \text{ $\epsilon$-approx.~to } \eqref{pr IQP}, \ x^c \text{ opt.~to } \eqref{pr QP} }, \\
\delta^\star_\epsilon & :=
\min \bra{ \norminf{x^\star - x^d} \mid x^d \text{ opt.~to } \eqref{pr IQP}, \ x^\star \text{ $\epsilon$-approx.~to } \eqref{pr QP} }.
\end{align*}
Our Theorem~\ref{th main} implies that both $\delta^*_\epsilon$ and $\delta^\star_\epsilon$ are upper bounded by
\begin{align*}
n\Delta \pare{\frac{10\Delta}{\epsilon}+1}^k
\quad \in O\pare{\frac{n \Delta^{k+1}}{\epsilon^k}}.
\end{align*}
In the next two sections we gain insight into how far from optimal our proximity results are.
This is done by providing lower bounds on both $\delta^*_\epsilon$, in Section~\ref{sec tight a}, and on $\delta^\star_\epsilon$, in Section~\ref{sec tight b}.
Note that our bounds can be further improved, as we are only interested here in the asymptotic behaviour of $\delta^*_\epsilon$ and $\delta^\star_\epsilon$.
We remark that the problems that we present in the following results are of the form \eqref{pr IQP} and \eqref{pr QP} with an additional constant in the objective function.
We decided to keep these constants to simplify the presentation, and we observe that the presence of these constants does not affect optimal or $\epsilon$-approximate solutions thanks to Lemma~\ref{lem tans}.
\subsubsection{Lower bounds on $\delta^*_\epsilon$}
\label{sec tight a}
To begin with, we present a special \eqref{pr IQP} problem, which will be useful in the subsequent discussion.
For every $n, \Delta, t \in \mathbb Z$ with $n\ge 1$, $\Delta \ge 1$, $t \ge 0$, and $a \in \mathbb R$, $\beta \in (0, 1)$, consider the \eqref{pr IQP}
\begin{align}
\label{pr tight}
\begin{split}
\min \ & f(x)=
-(x_1-a)^2
- \frac{(t+n\Delta)^2}{\beta^2} \sum_{i=2}^n x_i^2 \\
\textnormal{s.t.} \ & x \in \bar P \cap \mathbb Z^n,
\end{split}
\end{align}
where the polytope $\bar P$ is given in Definition~\ref{def poly}.
Note that problem \eqref{pr tight} has $k=n$ and, from Observation~\ref{obs delta}, subdeterminant $\Delta$.
The next lemma provides some information about \eqref{pr tight} and its corresponding \eqref{pr QP}.
We remind the reader that the vectors $u,v$ are defined right after Definition~\ref{def poly}.
\begin{lemma}
\label{optial QP}
If $0 < a < (n-1)\beta\Delta$, then the vector $-u$ is the unique optimal solution to \eqref{pr tight}, and the vector $v$ is the unique optimal solution to the corresponding \eqref{pr QP}.
\end{lemma}
\begin{prf}
Consider problem \eqref{pr tight} and assume $0 < a < (n-1)\beta\Delta$.
We first show that the vector $-u$ is the unique optimal solution to \eqref{pr tight}.
We have seen that any vector in $\bar P \cap \mathbb Z^n$ satisfies $x_i = 0$, for $i=2,\dots,n$, and its first component ranges over $-t,-t+1,\dots,t$.
Our assumption $a > 0$ then implies that the vector $-u = (-t,0,\dots,0)$ is the unique optimal solution to \eqref{pr tight}.
Next, we show that the vertex $v$ of $\bar P$ is the unique optimal solution to the corresponding \eqref{pr QP}.
Note that
\begin{align*}
f(v)& =-(t+(n-1)\beta \Delta -a)^2-(n-1)(t+n\Delta)^2.
\end{align*}
Since $\bar P$ is a polytope and the objective is concave, we only need to show that any other vertex $v'$ of $\bar P$ has cost strictly larger than that of $v$.
First, assume that $v'_i = \beta$ for every $i=2,\dots,n$.
Then $v'_1 = -t+(n-1)\beta \Delta.$
Notice that, if $t=0$, then $v'$ and $v$ are the same point.
Therefore, we assume that $t > 0$.
We have
\begin{align*}
f(v')& =-(-t+(n-1)\beta \Delta -a)^2-(n-1)(t+n\Delta)^2.
\end{align*}
Since $(n-1)\beta \Delta -a > 0$ and $t > 0$, we obtain $\abs{t+(n-1)\beta \Delta -a} > \abs{-t+(n-1)\beta \Delta -a}$.
We have therefore shown $f(v) < f(v')$.
We can now assume that $v'_i = 0$ for some $i=2,\dots,n$.
If we denote by $m$ the number of components among $v'_2,\dots,v'_n$ that are equal to $\beta$, then we have $m \le n-2$.
Hence
\begin{align*}
f(v')& =-(v'_1 - a)^2-m(t+n\Delta)^2
\ge -(v'_1 - a)^2-(n-2)(t+n\Delta)^2 \\
& > -(t+n\Delta)^2-(n-2)(t+n\Delta)^2
= - (n-1)(t+n\Delta)^2 \ge f(v).
\end{align*}
Now we explain how we obtain the second inequality.
From the definition of $\bar P,$ we know that $-t \le v'_1 \le t+ (n-1)\beta \Delta.$
Since $0\le a \le (n-1)\beta\Delta,$ we get
\begin{align*}
-t
-(n-1)\beta\Delta
\le v'_1 - a \le t+(n-1)\beta\Delta.
\end{align*}
We obtain
\begin{align*}
(v'_1 - a)^2 \le (t+(n-1)\beta\Delta)^2 < (t+n\Delta)^2,
\end{align*}
which implies the second inequality.
In this second case we have shown that $f(v) < f(v')$ holds for every $t \ge 0$.
This concludes the proof that $v$ is the unique optimal solution to the \eqref{pr QP} corresponding to \eqref{pr tight}.
\end{prf}
In the next proposition we highlight a key difference between Separable Concave Integer Quadratic Programming and Integer Linear Programming.
In more detail, we discuss an important difference between problem \eqref{pr IQP} with $k \ge 1$ and the same problem with $k=0$.
Consider a feasible instance of \eqref{pr IQP}, and let $x^c$ be an optimal solution to the corresponding \eqref{pr QP}.
According to Cook et al.~\cite{CooGerSchTar86}, we can always find integer points $x$ in $P$ with $\norminf{x^c - x} \le n\Delta.$
Furthermore, if $k=0$, one of these vectors is optimal to \eqref{pr IQP}.
However, this is not true for the case $k \ge 1$.
In fact, when
$k \ge 1$,
the set $\{x \in P \cap \mathbb Z^n \mid \norminf{x^c-x} \le n\Delta\}$ not only might contain no optimal solution to \eqref{pr IQP}, but it might also contain only arbitrarily bad solutions, i.e., vectors that are not $\epsilon$-approximate solutions to \eqref{pr IQP} for any $\epsilon \in (0,1)$.
\begin{proposition}
\label{prop IQP ub first}
For every $\Delta \in \mathbb Z$ with $\Delta \ge 1$, and
$\epsilon \in (0,1),$ there is an instance of \eqref{pr IQP}
with subdeterminant $\Delta$
and $k=n$
for which $\delta^*_\epsilon > n\Delta.$
\end{proposition}
\begin{prf}
Let $\Delta, \epsilon$ be as in the statement, and consider problem \eqref{pr tight} with $n := \ceil{\frac{4-3\sqrt{\epsilon}}{1-\sqrt{\epsilon}}} \ge 5$, $\beta := \frac{n-4}{n-3} \in [\frac 12 ,1)$, $a := (n-3)\beta\Delta$, and $t := \frac{n+(n-3)\beta}{2} \Delta \ge 3$.
Note that both $a = (n-4)\Delta$ and $t = (n-2)\Delta$ are integers.
From Lemma~\ref{optial QP}, we know that $x^d = -u$ is the unique optimal solution to \eqref{pr tight} and $x^c = v$ is the unique optimal solution to the corresponding \eqref{pr QP}.
Let
$S:=\{x \in \bar P \cap \mathbb Z^n \mid \norminf{x^c-x} \le n\Delta\}$.
It suffices to show that there is no $\epsilon$-approximate solution to \eqref{pr tight} in $S$.
Using the definition of $v$, we derive
\begin{align*}
S =\{x \in \mathbb R^n \mid t-(n-(n-1)\beta)\Delta \le x_1 \le t, \ x_1 \in \mathbb Z, \ x_i=0, i=2,\dots,n \},
\end{align*}
and it can be checked that the quantity $t-(n-(n-1)\beta) \Delta$ is in $(-t,t)$.
Using $n \ge 5$, one can further check that $a$ is smaller than the midpoint between the two points $t-(n-(n-1)\beta)\Delta$ and $t$.
Due to the concavity of the objective, and the fact that $t \in \mathbb Z$, this implies that the vector $u$ is a minimizer of the objective function $f(x)$ over the set $S$.
Therefore, it suffices to show that the vector $u$ is not an $\epsilon$-approximate solution to \eqref{pr tight}.
Let $u':=(a,0,\dots,0).$
Since $-t<a<t,$ we have $u' \in \bar P.$
Moreover, since $a$ is integer, it is simple to check that $f^d_{\max}=0$ and it is achieved at $u'.$
Furthermore, we have
\begin{align*}
f(u) & = -(t-a)^2 = -(t-(n-3)\beta\Delta)^2
=-\pare{\frac{n-(n-3)\beta}{2}\Delta}^2
=-4\Delta^2, \\
f(x^d) & = -(t+a)^2 =-(t+(n-3)\beta\Delta)^2
=-\pare{\frac{n+3(n-3)\beta}{2}\Delta}^2
=-(2n-6)^2\Delta^2.
\end{align*}
We obtain
\begin{align*}
\frac{f(u)-f(x^d)}{f^d_{\max}-f(x^d)}
=\frac{\cancel 4 (n-4)(n-2)\cancel{\Delta^2}}{\cancel 4 (n-3)^2\cancel{\Delta^2}}
>\frac{(n-4)^2}{(n-3)^2}
=\pare{1-\frac{1}{n-3}}^2 \ge \epsilon,
\end{align*}
where the last inequality follows since $n \ge \frac{4-3\sqrt{\epsilon}}{1-\sqrt{\epsilon}}$ implies $n-3 \ge \frac{1}{1-\sqrt{\epsilon}}$, that is, $1-\frac{1}{n-3} \ge \sqrt{\epsilon}$.
Therefore, the vector $u$ is not an $\epsilon$-approximate solution to \eqref{pr tight}.
\end{prf}
In particular, Proposition~\ref{prop IQP ub first}
shows that if $k=n$, then $\delta^*_\epsilon$ can grow at least linearly with respect to both $n$ and $\Delta$.
In the next proposition, we use Lemma~\ref{optial QP} to derive our main lower bound on $\delta^*_\epsilon$.
\begin{proposition}
\label{prop IQP ub}
For every $n, \Delta \in \mathbb Z$ with $n\ge 2$, $\Delta \ge 1$, and $\epsilon \in (0,1]$,
there exists an instance of \eqref{pr IQP}
with subdeterminant $\Delta$
and $k=n$
for which
\begin{align*}
\delta^*_\epsilon \ge 4\pare{\frac{1}{\epsilon}-1} + \frac 23 (n-1) \Delta
\quad \in \Omega\pare{\frac{1}{\epsilon} + n\Delta}.
\end{align*}
\end{proposition}
\begin{prf}
Let $n, \Delta, \epsilon$
be as in the statement, and consider problem \eqref{pr tight} with $a := \frac 12$,
$\beta := \frac 23$,
and $t := \ceil{\frac{2}{\epsilon}-1} -1 \ge 0$.
Our assumptions imply $0 < a < (n-1)\beta\Delta.$
Therefore, Lemma~\ref{optial QP} implies that $x^d = -u$ is the unique optimal solution to \eqref{pr tight} and that $x^c = v$ is the unique optimal solution to the corresponding \eqref{pr QP}.
To prove the proposition, it suffices to show that the only $\epsilon$-approximate solution to \eqref{pr tight} is the optimal solution $-u$.
In fact, this implies $\delta^*_\epsilon = \norminf{x^c-x^d} = \norminf{v+u}$, and the latter norm can be bounded as follows:
\begin{align*}
\norminf{v+u}
& =
2 t + (n-1) \beta \Delta
=
2\pare{\ceil{\frac{2}{\epsilon}-1}-1} + (n-1) \beta \Delta \\
& \ge
2\pare{\frac{2}{\epsilon}-2} + (n-1) \beta \Delta
=
4\pare{\frac{1}{\epsilon}-1} + (n-1) \beta \Delta.
\end{align*}
Therefore, in the remainder of the proof we show that the only $\epsilon$-approximate solution to \eqref{pr tight} is the optimal solution $-u$.
If $\epsilon =1$, this is immediate: our definition of $t$ gives $t=0$, so the origin is the unique feasible point of \eqref{pr tight}.
In the remainder of the proof we therefore assume $\epsilon \in (0,1)$.
Let $u':=(-t+1,0,\dots,0)$.
Note that $u'$ is in $\bar P$ since $\epsilon < 1$ implies $t \ge 1.$
We have
$f(u')=-\pare{t-\frac{1}{2}}^2=f(u).$
Furthermore, it is simple to see that any feasible vector for \eqref{pr tight} different from $\pm u,u'$ has cost strictly larger than $f(u)$.
It is simple to check that $f^d_{\max} = - \frac 14$, since the maximum is achieved at the origin.
The vector $u$ is an $\epsilon$-approximate solution to \eqref{pr tight} if and only if
\begin{align*}
\frac{f(u) - f(x^d)}{f^d_{\max} - f(x^d)}
=
\frac{f(u)-f(-u)}{f^d_{\max}-f(-u)}
=
\frac{-\pare{t-\frac 12}^2 + \pare{t+\frac 12}^2}{- \frac 1 4 + \pare{t+\frac 12}^2}
=
\frac{2}{t+1}
\le
\epsilon.
\end{align*}
Note that our definition of $t$ implies $t < \frac 2\epsilon -1$, thus $\frac{2}{t+1} > \epsilon$.
This shows that the vector $u$ is not an $\epsilon$-approximate solution to \eqref{pr tight}.
Since $-u$ is the only vector in $\bar P \cap \mathbb Z^n$ with cost strictly smaller than $f(u)$, the only $\epsilon$-approximate solution to \eqref{pr tight} is the optimal solution $-u$.
\end{prf}
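The instance constructed in this proof is small enough to verify by enumeration. The following Python sketch (our illustration, not part of the paper) applies the $\epsilon$-approximation ratio exactly as in the displayed inequality and confirms that $-u$ is the only $\epsilon$-approximate solution:
\begin{verbatim}
import math

eps, a = 0.25, 0.5
t = math.ceil(2 / eps - 1) - 1      # t = 6 for eps = 0.25

# feasible integer points have x_i = 0 for i >= 2, so the objective
# reduces to f(x_1) = -(x_1 - a)^2 with x_1 in {-t, ..., t}
f = lambda x1: -(x1 - a) ** 2
f_opt = f(-t)                       # optimum, attained at x^d = -u
f_max = f(0)                        # f^d_max = -1/4, attained at the origin

approx = [x1 for x1 in range(-t, t + 1)
          if (f(x1) - f_opt) / (f_max - f_opt) <= eps]
print(approx)                       # [-6]: only the optimum itself
\end{verbatim}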
Like Proposition~\ref{prop IQP ub first}, Proposition~\ref{prop IQP ub} implies that if $k=n$, then $\delta^*_\epsilon$ can grow at least linearly with respect to both $n$ and $\Delta$.
However, the bound in Proposition~\ref{prop IQP ub} is a function of $\epsilon$ as well.
It implies that $\delta^*_\epsilon$ can grow at least linearly with respect to $\frac 1 \epsilon.$
\subsubsection{Lower bounds on $\delta^\star_\epsilon$}
\label{sec tight b}
In this section we study the tightness of Theorem~\ref{th main}\ref{th main 2} by providing a lower bound on $\delta^\star_\epsilon$ that is a function of $n$, $\Delta$, and $\frac 1 \epsilon$.
\begin{proposition}
\label{prop new}
For every $n,\Delta \in \mathbb Z$ with $n\ge 2$, $\Delta \ge 2$ and $\epsilon \in (0,\frac 12)$, there exists an instance of \eqref{pr IQP}
with subdeterminant $\Delta$ and $k=1$
for which
\begin{align*}
\delta^\star_\epsilon
\ge \frac{(n-1)\Delta-1}{\epsilon}-2 \quad \in \Omega \pare{\frac{n\Delta}{\epsilon}}.
\end{align*}
\end{proposition}
\begin{prf}
Let $n,\Delta,\epsilon$ be as in the statement, and consider the \eqref{pr IQP}
\begin{align}
\label{pr tight in 8}
\begin{split}
\min \ & f(x)=
-x_1^2 \\
\textnormal{s.t.} \ & x \in \tilde P \cap \mathbb Z^n,
\end{split}
\end{align}
where $\tilde P:= \bar P \cap \{x\in \mathbb R^n\mid x_1-\Delta\sum_{i=2}^nx_i\le \beta-1 +t \}.$
Here, the polytope $\bar P$ is given in Definition~\ref{def poly} with $\beta := \frac 12$ and $t:=\floor{\frac{(n-1)\beta\Delta+\beta-1}{\epsilon}} \ge 1.$
Notice that the constraint matrix defining $\tilde P$ coincides with the one defining $\bar P$.
Therefore, Observation~\ref{obs delta} implies that problem \eqref{pr tight in 8} has subdeterminant $\Delta$.
Since $\tilde P \subseteq \bar P$, we have $\tilde P \cap \mathbb Z^n \subseteq \bar P \cap \mathbb Z^n$.
It can be easily checked that $-u \in \tilde P$, while $u \notin \tilde P$, therefore
$x^d = -u$ is the unique optimal solution to \eqref{pr tight in 8}.
Let $w := ((n-1)\beta\Delta+\beta-1+t,\beta,\dots,\beta)$.
Observe that $w$ is a vector in $\tilde P$ with the largest $x_1$.
In fact, we know that every $x \in \bar P$ satisfies $x_i \le \beta$, for $i=2,\dots,n,$ thus for every $x \in \tilde P$ we have
$x_1 \le \Delta\sum_{i=2}^{n}x_i + \beta-1+ t \le (n-1)\beta\Delta+\beta-1 + t$.
Furthermore, the vector $-u$
is a vector in $\tilde P$ with the smallest $x_1.$
In fact, we know that $-u$ is the vertex of $\bar P$ with the smallest $x_1$, and $-u \in \tilde P$.
It can be checked that
\begin{align*}
f(w) & = -((n-1)\beta\Delta+\beta-1+t)^2 < -t^2=f(-u),
\end{align*}
because $(n-1)\beta\Delta+\beta-1 > 2\beta-1=0. $
In particular, we conclude that $x^c = w$ is an optimal solution to the \eqref{pr QP} corresponding to \eqref{pr tight in 8}.
Next, we show that $-u$ is not an $\epsilon$-approximate solution to \eqref{pr QP}.
Notice that $f^c_{\max}=0$ as it is achieved at the origin.
We have
\begin{align*}
\frac{f(-u)-f(x^c)}{f^c_{\max}-f(x^c)}
&= 1-\frac{t^2}{((n-1)\beta\Delta+\beta-1+t)^2} \\
& =1-\frac{1}{1+\frac{((n-1)\beta\Delta+\beta-1)^2}{t^2}+2\frac{(n-1)\beta\Delta+\beta-1}{t}} \\
&\ge 1-\frac{1}{1+\epsilon^2+2\epsilon}=\epsilon\frac{2+\epsilon}{(1+\epsilon)^2}>\epsilon,
\end{align*}
where the first inequality holds because $t \le \frac{(n-1)\beta\Delta+\beta-1}{\epsilon},$ while the second inequality is correct because $\frac{2+\epsilon}{(1+\epsilon)^2} > 1,$ when $\epsilon \in (0,\frac 12).$
Thus, $-u$ is not an $\epsilon$-approximate solution to \eqref{pr QP}.
Since $-u$ is a vector in $\tilde P$ with the smallest $x_1$, and due to the form of the objective function, every $\epsilon$-approximate solution $x^\star$ to \eqref{pr QP} must satisfy $x_1^\star > u_1 = t,$ which implies
\begin{align*}
\norminf{x^\star - x^d} > 2t \ge 2\frac{(n-1)\beta\Delta+\beta-1}{\epsilon}-2 = \frac{(n-1)\Delta-1}{\epsilon}-2.
\end{align*}
This implies $\delta^\star_\epsilon \ge \frac{(n-1)\Delta-1}{\epsilon}-2.$
\end{prf}
In particular, Proposition~\ref{prop new} shows that, in Theorem~\ref{th main}\ref{th main 2}, the dependence of $\delta^\star_\epsilon$ on $n$ is tight.
Furthermore, it shows that $\delta^\star_\epsilon$ can grow at least linearly with respect to both $\Delta$ and $\frac 1 \epsilon$.
Therefore, in Theorem~\ref{th main}\ref{th main 2}, the dependence on $\epsilon$ is tight for $k=1$.
\ifthenelse {\boolean{MPA}}
{
\bibliographystyle{spmpsci}
}
{
\bibliographystyle{plain}
}
| {
"timestamp": "2021-04-16T02:03:49",
"yymm": "2006",
"arxiv_id": "2006.01718",
"language": "en",
"url": "https://arxiv.org/abs/2006.01718",
"abstract": "A classic result by Cook, Gerards, Schrijver, and Tardos provides an upper bound of $n \\Delta$ on the proximity of optimal solutions of an Integer Linear Programming problem and its standard linear relaxation. In this bound, $n$ is the number of variables and $\\Delta$ denotes the maximum of the absolute values of the subdeterminants of the constraint matrix. Hochbaum and Shanthikumar, and Werman and Magagnosc showed that the same upper bound is valid if a more general convex function is minimized, instead of a linear function. No proximity result of this type is known when the objective function is nonconvex. In fact, if we minimize a concave quadratic, no upper bound can be given as a function of $n$ and $\\Delta$. Our key observation is that, in this setting, proximity phenomena still occur, but only if we consider also approximate solutions instead of optimal solutions only. In our main result we provide upper bounds on the distance between approximate (resp., optimal) solutions to a Concave Integer Quadratic Programming problem and optimal (resp., approximate) solutions of its continuous relaxation. Our bounds are functions of $n, \\Delta$, and a parameter $\\epsilon$ that controls the quality of the approximation. Furthermore, we discuss how far from optimal are our proximity bounds.",
"subjects": "Optimization and Control (math.OC)",
"title": "Proximity in Concave Integer Quadratic Programming",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109534209825,
"lm_q2_score": 0.8031738034238807,
"lm_q1q2_score": 0.7909743591126287
} |
https://arxiv.org/abs/2206.10220 | Linear multistep methods and global Richardson extrapolation | In this work, we study the application of the classical Richardson extrapolation (RE) technique to accelerate the convergence of sequences resulting from linear multistep methods (LMMs) for solving initial-value problems of systems of ordinary differential equations numerically. The advantage of the LMM-RE approach is that the combined method possesses higher order and favorable linear stability properties in terms of $A$- or $A(\alpha)$-stability, and existing LMM codes can be used without any modification. | \section{Introduction}
Richardson extrapolation (RE) \cite{richardson1911, richardson1927} is a classical technique to accelerate the convergence of numerical sequences depending on a small parameter, by eliminating the lowest order error term(s) from the corresponding asymptotic expansion.
When the sequence is generated by a numerical method solving the initial-value problem
\begin{equation}\label{ODE}
y'(t)=f(t,y(t)), \quad y(t_0)=y_0,
\end{equation}
the parameter in RE can be chosen as the discretization step size $h>0$. The application of RE to sequences generated by one-step---e.g., Runge--Kutta---methods is described, for example, in \cite{butcher, hairernorsettwanner}. In \cite{zlatev}, global (also known as passive) or local (active) versions of RE are implemented with Runge--Kutta sequences. These combined methods can find applications in air pollution problems \cite{zlatev2022} or in machine learning \cite{falgout2021}, for example.
In this paper, we analyze the application of global Richardson extrapolation (GRE) to sequences generated by linear multistep methods (LMMs) approximating the solution of \eqref{ODE}.
We will refer to a $k$-step LMM as the underlying LMM, and its recursion has the usual form
\begin{equation}\label{multistepdefn}
\sum_{j=0}^k \alpha_j y_{n+j}=\sum_{j=0}^k h \beta_j f_{n+j},
\end{equation}
where $f_m:=f(t_m, y_m)$, and the numbers $\alpha_j\in\mathbb{R}$ and $\beta_j\in \mathbb{R}$ ($j=0, \ldots, k$) are the given method coefficients with $\alpha_k\ne 0$. The LMM is implicit if $\beta_k\ne 0$.
In Section \ref{sec:main}, given an underlying LMM, we define its extrapolated version, referred to as LMM-GRE, and investigate its convergence. Then, we carry out linear stability analysis for the LMM-GREs. In Section \ref{sec24}, we focus on the BDF family as underlying LMMs due to their good stability properties, although the results from Sections \ref{sec21}--\ref{sec23} are clearly applicable to other LMM families as well.
The numerical experiments in Section \ref{sec:numerics} demonstrate the expected convergence order---here, we use several different types of LMMs with GRE to solve \eqref{ODE}.
As a conclusion of this study, we see that\\
(i) to implement a LMM-GRE, existing LMM codes can directly be used, thanks to the simple linear combination appearing in definition \eqref{GREdef}; moreover, \\
(ii) the higher computational cost of a LMM-GRE is compensated by its higher convergence order and favorable linear stability properties.
\begin{rem} In \cite[Section 3.4]{zlatev}, the authors comment on the possible combination of LMMs and \emph{local} Richardson extrapolation. Working out the necessary details and convergence theorems for this case could be the subject of a future study.
\end{rem}
\section{Results for LMM-GREs}\label{sec:main}
\subsection{Definition}\label{sec21}
Let us assume that the function $f$ in \eqref{ODE} is sufficiently smooth, hence the initial-value problem has a unique smooth solution $y$, and we seek its approximation on an interval $[t_0,t_{\text{final}}]$. To this end, we apply a $k$-step LMM to \eqref{ODE} on a uniform grid $\{t_n\}$ to generate the sequence $y_n(h)$ according to \eqref{multistepdefn}. Here, $h:=t_{n+1}-t_n>0$ is the step size (or grid length), and $y_n(h)$ is supposed to approximate the exact solution at $t_n$, that is, $y_n(h)\approx y(t_n)$. We assume that the LMM is of order $p\ge 1$.
The idea of classical RE is to take a suitable linear combination of two approximations, one generated on a coarser grid and one on a finer grid, to obtain a better approximation of the solution $y$ of \eqref{ODE}. Here, we will only consider its simplest form and define
\begin{equation}\label{GREdef}
r_n(h):=\frac{2^p}{2^p-1}\cdot y_{2n}\left(\frac{h}{2}\right)-\frac{1}{2^p-1} \cdot y_n(h),
\end{equation}
that is, the coarser and finer grids have grid lengths $h$ and $h/2$, respectively.
Since the sequence $y_n(h)$ on the coarser grid and the sequence $y_{2n}\left(\frac{h}{2}\right)$ on the finer grid are computed independently (their linear combination is formed only in the last step), we refer to this procedure as \textit{global} (or passive) \textit{extrapolation}, or, in short, LMM-GRE.
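As a concrete illustration (our own, not taken from the paper), the following Python sketch applies \eqref{GREdef} with the two-step Adams--Bashforth method ($p=2$) as the underlying LMM to the test problem $y'=-y+\sin t$, $y(0)=1$, whose exact solution is $y(t)=\tfrac12(\sin t-\cos t)+\tfrac32 e^{-t}$; the single extra starting value comes from one Heun step, which is locally third-order accurate, as the convergence result in the next subsection requires. The observed order at $t_{\text{final}}$ rises from $2$ to $3$:
\begin{verbatim}
import numpy as np

def ab2(f, t0, y0, h, n):
    """Two-step Adams--Bashforth; one Heun step gives the O(h^3) start."""
    y = np.empty(n + 1); y[0] = y0
    k1 = f(t0, y[0]); k2 = f(t0 + h, y[0] + h * k1)
    y[1] = y[0] + 0.5 * h * (k1 + k2)
    for i in range(1, n):
        t = t0 + i * h
        y[i + 1] = y[i] + h * (1.5 * f(t, y[i]) - 0.5 * f(t - h, y[i - 1]))
    return y

def ab2_gre(f, t0, y0, h, n, p=2):
    """Global Richardson extrapolation: the two sequences are computed
    independently and combined only at the end, as in (GREdef)."""
    coarse = ab2(f, t0, y0, h, n)
    fine = ab2(f, t0, y0, h / 2, 2 * n)
    return (2**p * fine[::2] - coarse) / (2**p - 1)

f = lambda t, y: -y + np.sin(t)
y_exact = lambda t: 0.5 * (np.sin(t) - np.cos(t)) + 1.5 * np.exp(-t)

T, errs = 2.0, []
for n in (40, 80, 160, 320):
    errs.append(abs(ab2_gre(f, 0.0, 1.0, T / n, n)[-1] - y_exact(T)))
print(np.log2(np.array(errs[:-1]) / np.array(errs[1:])))  # about 3 = p+1
\end{verbatim}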
\subsection{Convergence}\label{sec22}
\begin{lem}\label{convergencelemma} Under the above assumptions on the function $f$ in \eqref{ODE} and on the LMM, further, if
the starting values $y_j(h)$ and $y_j\left(\frac{h}{2}\right)$ ($j=1,2,\ldots, k-1$) of the LMM are ${\cal{O}}(h^{p+1})$-close to the corresponding exact solution values, then the sequence $r_n(h)$ converges to the exact solution $y$ of \eqref{ODE}, and the order of convergence is at least $p+1$.
\end{lem}
\begin{proof}
The proof relies on the fact that---under the assumptions of the lemma---the global error $y_n(h)-y(t_n)$ of a LMM possesses an asymptotic expansion in $h$.
More precisely, according to, e.g., \cite[Section 6.3.4]{gautschi}, there exist a function $\mathbf{e}$ and a constant $C_{k,p}$ such that
\begin{equation}\label{globerr}
y_n(h)-y(t_n)=C_{k,p}\cdot h^p \cdot \mathbf{e}(t_n)+{\cal{O}}(h^{p+1})\quad\text{as }h\to 0^+,
\end{equation}
for any $n\in\mathbb{N}$ for which $t_n\in [t_0,t_{\text{final}}]$.
Here,
the function $\mathbf{e}$ depends only on $f$ in \eqref{ODE} (and not on the chosen LMM), while the error constant $C_{k,p}$ depends only on the $k$-step LMM (and not on \eqref{ODE} or on $h$). Then, by applying \eqref{globerr} on a grid with grid length $h/2$ and focusing on the same (i.e., $h$-independent) grid point $t_n=t^*$, we have
\begin{equation}\label{halfgloberr}
y_{2n}\left(\frac{h}{2}\right)-y(t^*)=C_{k,p}\cdot \left(\frac{h}{2}\right)^p \cdot \mathbf{e}(t^*)+{\cal{O}}(h^{p+1}).
\end{equation}
Combining \eqref{GREdef}--\eqref{halfgloberr}, we easily see that
$
r_n(h)-y(t^*)={\cal{O}}(h^{p+1})\quad\text{as }h\to 0^+.
$
\end{proof}
\subsection{Linear stability analysis}\label{sec23}
Let us now recall the definition of the region of absolute stability of a LMM---here, this
region will be denoted by $\mathcal{S}_\text{LMM}$.
It is known (see \cite{hairerwanner} or \cite[Section 2.3]{optsubs}) that $\mathcal{S}_\text{LMM}$ can be characterized by the following boundedness condition. Let us fix some $h>0$ and $\lambda\in\mathbb{C}$ such that for $\mu:=h\lambda$ one has $\alpha_k-\mu\beta_k\ne 0$. Suppose that the LMM \eqref{multistepdefn}, with step size $h$ and starting values $y_0, y_1, \ldots, y_{k-1}$, applied to the usual scalar linear test equation
\begin{equation}\label{Dahlquist}
y'(t)=\lambda y(t), \quad y(t_0)=y_0
\end{equation}
generates the sequence $y_n$ (${n\in\mathbb{N}}$). Then $\mu\in \mathcal{S}_\text{LMM}\subset\mathbb{C}$ if and only if the
sequence $y_n$ is bounded for any choice of the starting values $ y_0, y_1, \ldots, y_{k-1}$.
\begin{rem}\label{orderreductionremark}
Considering the differential equation \eqref{Dahlquist}, if $\mu\in\mathbb{C}$ is chosen such that $\alpha_k-\mu\beta_k=0$, then the order of the recursion generated by the LMM becomes strictly less than $k$, hence the starting values $ y_0, y_1, \ldots, y_{k-1}$ cannot be chosen arbitrarily (see also \cite[Remark 2.7]{optsubs}).
\end{rem}
We define the region of absolute stability, ${\mathcal{S}}_\text{GRE}\subset\mathbb{C}$, of the combined LMM-GRE method \eqref{GREdef} analogously to that of the underlying LMM.
Let us apply \eqref{GREdef} to the scalar linear test equation \eqref{Dahlquist} with some $h>0$ and $\lambda\in\mathbb{C}$.
Then ${\mathcal{S}}_\text{GRE}$ is defined to be\\
\textit{the set of numbers $\mu:=h\lambda$ for which the sequence $r_n(h)$ is bounded (in $n\in\mathbb{N}$) for any choice of the starting values of the sequence
$y_n(h)$ and for any choice of the starting values of the sequence $y_{m}\left(\frac{h}{2}\right)$, but excluding the values of $\mu$ for which $\alpha_k-\mu\beta_k= 0$ or $\alpha_k-\frac{\mu}{2}\beta_k= 0$.}\\
Now we can relate the stability region of the combined method to that of the underlying LMM as follows. For a set $S\subset\mathbb{C}$, we define $2\, S:=\{2z:z\in S\}$.
\begin{lem}\label{stabregionlemma} We have the inclusions $(\emph{i})\ {\mathcal{S}}_\text{\emph{LMM}}\cap \left(2\,{\mathcal{S}}_\text{\emph{LMM}}\right)\subseteq
{\mathcal{S}}_\text{\emph{GRE}}$,\ and $(\emph{ii})\ {\mathcal{S}}_\text{\emph{GRE}}\subseteq {\mathcal{S}}_\text{\emph{LMM}}$.
\end{lem}
\begin{proof}
Suppose that $h>0$ and $\lambda\in\mathbb{C}$ have been chosen such that $h\lambda\in{\mathcal{S}}_\text{{LMM}}\cap \left(2\,{\mathcal{S}}_\text{{LMM}}\right)$, and we apply the LMM-GRE method with this step size $h$ to \eqref{Dahlquist} with this $\lambda$. Then both sequences $y_n(h)$ and $y_{m}\left(\frac{h}{2}\right)$ are bounded for any choice of their respective $k$ starting values. Hence the sequence $r_n(h)$, as their linear combination, is also bounded. This proves $(\mathrm{i})$.
To prove $(\mathrm{ii})$, let us choose $h>0$ and $\lambda\in\mathbb{C}$ such that $h\lambda\in{\mathcal{S}}_\text{\textrm{GRE}}$. Then the sequence $r_n(h)$ is bounded. By choosing every starting value $0$, we can have that the sequence $y_{m}\left(\frac{h}{2}\right)$ is identically $0$. Hence $r_n(h)=-\frac{1}{2^p-1} \cdot y_n(h)$, so the sequence $y_n(h)$ is also bounded. Therefore $h\lambda\in{\mathcal{S}}_\text{\textrm{LMM}}$.
\end{proof}
\begin{rem}
The reasoning in the above proof of $(\emph{ii})$ could not be applied to prove
${\mathcal{S}}_\text{\emph{GRE}}\subseteq 2{\mathcal{S}}_\text{\emph{LMM}}$: although the boundedness of $r_n(h)$ implies (via a special choice of the starting values of the sequence $y_n(h)$) that the sequence $y_{2n}\left(\frac{h}{2}\right)$ is also bounded, this alone would be insufficient to guarantee the boundedness of the sequence
$y_{m}\left(\frac{h}{2}\right)$ ($m\in\mathbb{N}$).
\end{rem}
To conclude this section, we give a sufficient condition for the stability regions ${\mathcal{S}}_\text{{LMM}}$ and ${\mathcal{S}}_\text{{GRE}}$ to coincide.
As is well known, all practically relevant LMMs are zero-stable \cite{suli}.
\begin{lem}\label{convexlemma} Assume that the underlying LMM is zero-stable, and ${\mathcal{S}}_\text{\emph{LMM}}$ is convex. Then ${\mathcal{S}}_\text{\emph{GRE}}={\mathcal{S}}_\text{\emph{LMM}}$.
\end{lem}
\begin{proof} Zero-stability implies that $0\in{\mathcal{S}}_\text{{LMM}}$, so from the convexity of ${\mathcal{S}}_\text{{LMM}}$ we have that
${\mathcal{S}}_\text{{LMM}}\subseteq 2{\mathcal{S}}_\text{{LMM}}$. But this means that ${\mathcal{S}}_\text{{LMM}}\cap (2{\mathcal{S}}_\text{{LMM}})={\mathcal{S}}_\text{{LMM}}$, so from Lemma \ref{stabregionlemma} we get that ${\mathcal{S}}_\text{{GRE}}={\mathcal{S}}_\text{{LMM}}$.
\end{proof}
\begin{rem} By analyzing the root-locus curve \cite{hairerwanner} of the underlying LMM as a parametric curve, it can be proved that ${\mathcal{S}}_\text{\emph{LMM}}$ is convex, for example, for the Adams--Bashforth method with $k=2$ steps, or for the Adams--Moulton method with $k=2$ steps. However, for the Adams--Bashforth method with $k=3$ steps, ${\mathcal{S}}_\text{\emph{LMM}}$ is not convex.
\end{rem}
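Writing $\rho$ and $\sigma$ for the first and second characteristic polynomials of the LMM, membership $\mu\in{\mathcal{S}}_\text{LMM}$ can be probed numerically via the root condition: all roots of $\rho(\zeta)-\mu\sigma(\zeta)$ must lie in the closed unit disk, with roots of modulus one simple. The rough Python sketch below (our illustration; it uses a strict interior test, so genuine boundary points are misclassified, and the sampling window is ad hoc) samples ${\mathcal{S}}_\text{LMM}$ and $2\,{\mathcal{S}}_\text{LMM}$ for the AB2 and AB3 methods of the above remark and counts sampled points of ${\mathcal{S}}_\text{LMM}\setminus 2\,{\mathcal{S}}_\text{LMM}$, a set that is empty precisely when ${\mathcal{S}}_\text{LMM}\cap 2\,{\mathcal{S}}_\text{LMM}={\mathcal{S}}_\text{LMM}$:
\begin{verbatim}
import numpy as np

# (rho, sigma) coefficients, highest degree first
methods = {
    "AB2": (np.array([1.0, -1.0, 0.0]),
            np.array([0.0, 1.5, -0.5])),
    "AB3": (np.array([1.0, -1.0, 0.0, 0.0]),
            np.array([0.0, 23.0, -16.0, 5.0]) / 12.0),
}

def in_region(mu, rho, sig, tol=1e-8):
    """Strict root condition |zeta| < 1 for rho - mu*sigma."""
    return np.all(np.abs(np.roots(rho - mu * sig)) < 1.0 - tol)

xs = np.linspace(-2.5, 0.5, 121)
ys = np.linspace(-1.5, 1.5, 121)
Z = xs[None, :] + 1j * ys[:, None]

for name, (rho, sig) in methods.items():
    S = np.vectorize(lambda mu: in_region(mu, rho, sig))(Z)
    S2 = np.vectorize(lambda mu: in_region(mu / 2, rho, sig))(Z)  # 2*S_LMM
    print(name, "sampled points in S_LMM but not in 2 S_LMM:",
          int(np.sum(S & ~S2)))
\end{verbatim}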
\subsection{$\text{BDF}k$-GRE methods}\label{sec24}
We obtain an efficient family of LMM-GRE methods if the underlying LMM is a $k$-step BDF method (referred to as a
$\text{BDF}k$-method) with some $1\le k\le 6$ (recall that for zero-stability we need $k\le 6$). It is known that a $\text{BDF}k$-method has order $p=k$, see \cite{hairerwanner}.
Suppose that the sequences $y_n(h)$ and $y_{2n}\left(\frac{h}{2}\right)$ in \eqref{GREdef} are generated by a $\text{BDF}k$-method, and the starting values for both sequences are $(k+1)^\text{st}$-order accurate. Then, due to Lemma \ref{convergencelemma}, \emph{the sequence $r_n(h)$ with $p:=k$ converges to the solution of \eqref{ODE} with order $k+1$}.
To measure the size of the region of absolute stability of the $\text{BDF}k$-GRE methods, one can invoke the concepts of $A$-stability and $A(\alpha)$-stability \cite{hairerwanner}.
It is easily seen that scaling the region of absolute stability of the underlying method ${\mathcal{S}}_\text{{LMM}}$ by a factor of $2$ preserves the $A(\alpha)$-stability angles (see \cite[Figure 1]{optsubs} for an illustration). Hence, due to Lemma \ref{stabregionlemma}, \emph{the $\text{BDF}k$-GRE method has the same $A(\alpha)$-stability angle as that of the underlying $\text{BDF}k$-method}.
In Table \ref{tab:1}, we present the order of convergence and the $A(\alpha)$-stability angles for the $\text{BDF}k$-GRE methods. (For the \emph{exact} values of the angles $\alpha$, see, e.g., \cite[Table 1]{optsubs}.) The $\text{BDF}k$-GRE methods are particularly suitable for stiff problems.
\begin{table}[h]
\footnotesize
\caption{Convergence order and $A(\alpha)$-stability angles for the $\text{BDF}k$-GRE methods}
\label{tab:1}
\centerline{\begin{tabular}{l|l|l}
\hline\noalign{\smallskip}
$k$ & order & $A(\alpha)$-stability angle\\
\noalign{\smallskip}\hline\noalign{\smallskip}
1 & $p=2$ & $90^\circ$, $A$-stable \\
2 & $p=3$ & $90^\circ$, $A$-stable \\
3 & $p=4$ & $86.032^\circ$ \\
4 & $p=5$ & $73.351^\circ$ \\
5 & $p=6$ & $51.839^\circ$ \\
6 & $p=7$ & $17.839^\circ$ \\
\noalign{\smallskip}\hline
\end{tabular}}
\end{table}
Notice, in particular, that the $\text{BDF}2$-GRE method is a $3^\text{rd}$-order $A$-stable method (recall that, due to the classical Dahlquist theorem, no $3^\text{rd}$-order $A$-stable LMM can exist).
In terms of computational cost, due to the presence of the coarser and finer grids, the sequence $r_n(h)$ in \eqref{GREdef} corresponding to a LMM-GRE method is approximately three times as expensive to generate as the sequence $y_n(h)$ corresponding to the underlying LMM. However, the extra computing time is balanced by the higher order and $A(\alpha)$-stability; the $\text{BDF}5$-GRE method, for example, has order $6$, and its $A(\alpha)$-stability angle is approximately three times as large as the stability angle of the classical $6^\text{th}$-order $\text{BDF}6$-method.
\section{Numerical experiments}\label{sec:numerics}
To verify the rate of convergence of LMM-GREs, we chose some benchmark problems,
including a Lotka--Volterra system
\[
y'_1(t)=0.1y_1(t)-0.3y_1(t)y_2(t), \quad y'_2(t)=0.5(y_1(t)-1)y_2(t)
\]
for $t\in[0,62]$ with initial condition $y(0)=(1,1)^\top$; or the mildly stiff {van der Pol} equation
\[
y'_1(t)=y_2(t), \quad\quad y'_2(t)=2(1-y_1^2(t))y_2(t)-y_1(t)
\]
for $t\in[0,20]$ with initial condition $y(0)=(2,0)^\top$.
As underlying LMMs, we considered the $2^\text{nd}$- and $3^\text{rd}$-order Adams--Bashforth (AB), Adams--Moulton (AM), and BDF methods. The AM methods were implemented in predictor-corrector style. For starting methods, we chose the $2^\text{nd}$- and $3^\text{rd}$-order Ralston methods, having minimum error bounds \cite{ralston1962}. For the nonlinear algebraic equations arising in connection with implicit LMMs, we used MATLAB's \texttt{fsolve} command. Following \cite[Appendix A]{leveque}, the fine-grid solutions obtained by the classical $4^\text{th}$-order Runge--Kutta method with $2^{16}$ grid points are used to measure the global error in maximum norm and to estimate the corresponding convergence order. Table \ref{tab:2} and Figure \ref{fig:1} illustrate the expected order of convergence for all tested LMM-GREs.
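For concreteness, here is a hedged Python sketch of a BDF2-GRE run on the van der Pol problem above (our own re-implementation: SciPy's \texttt{fsolve} stands in for MATLAB's, one Heun step supplies the second starting value, a tight-tolerance \texttt{solve\_ivp} call replaces the fine-grid RK4 reference, and the grid sizes are illustrative only):
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve
from scipy.integrate import solve_ivp

def vdp(t, y):
    return np.array([y[1], 2.0 * (1.0 - y[0]**2) * y[1] - y[0]])

def bdf2(f, t0, y0, h, n):
    y = np.empty((n + 1, len(y0))); y[0] = y0
    k1 = f(t0, y[0]); k2 = f(t0 + h, y[0] + h * k1)   # Heun start
    y[1] = y[0] + 0.5 * h * (k1 + k2)
    for i in range(1, n):
        t2 = t0 + (i + 1) * h
        # BDF2: y_{n+2} = (4 y_{n+1} - y_n)/3 + (2h/3) f(t_{n+2}, y_{n+2})
        g = lambda Y: Y - (4 * y[i] - y[i - 1]) / 3 - (2 * h / 3) * f(t2, Y)
        y[i + 1] = fsolve(g, y[i])
    return y

def bdf2_gre(f, t0, y0, h, n, p=2):
    coarse = bdf2(f, t0, y0, h, n)
    fine = bdf2(f, t0, y0, h / 2, 2 * n)
    return (2**p * fine[::2] - coarse) / (2**p - 1)

T, y0 = 20.0, np.array([2.0, 0.0])
ref = solve_ivp(vdp, (0.0, T), y0, rtol=1e-12, atol=1e-12).y[:, -1]

errs = []
for n in (400, 800, 1600):
    errs.append(np.max(np.abs(bdf2_gre(vdp, 0.0, y0, T / n, n)[-1] - ref)))
print(np.log2(np.array(errs[:-1]) / np.array(errs[1:])))  # about 3
\end{verbatim}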
\begin{table}[ht!]
\begin{center}
\caption{The estimated order of convergence for the Lotka--Volterra system for different LMM-GREs with $64,128,\ldots,1024$ grid points}
\label{tab:2}
\begin{tabular}{ c|c|c|c|c|c}
\hline
$\text{AB}2$-GRE & $\text{AM}2$-GRE & $\text{BDF}2$-GRE & $\text{AB}3$-GRE &$\text{AM}3$-GRE & $\text{BDF}3$-GRE\\
\hline
3.7674 & 3.3545 & 3.2045 & 4.1864 & 3.7644 & 3.6254\\
3.5761 & 3.2095 & 3.1703 & 3.9928 & 3.8903 & 3.7630\\
3.1981 & 3.1068 & 3.1340 & 3.9873 & 3.9718 & 3.9411\\
3.0297 & 3.0520 & 3.0807 & 3.9975 & 3.9856 & 3.9896\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[ht!]
\includegraphics[width=0.9\textwidth]{van_der_pol_figure.eps}\caption{Results for the van der Pol equation for LMM-GREs, number of grid points versus the global error in maximum norm}\label{fig:1}
\end{figure}
\newpage
\subsection*{Acknowledgement}
The authors are indebted to an anonymous referee of the manuscript for their suggestions
that helped improve the presentation of the material, especially for suggesting Lemma \ref{convexlemma} and its proof.\\
The project ,,Application-domain specific highly reliable IT solutions'' has been implemented with the support provided from the National Research, Development and Innovation Fund of Hungary, financed under the Thematic Excellence Programme TKP2020-NKA-06 (National Challenges Subprogramme) funding scheme. I.~Fekete was supported by the J\'anos Bolyai Research Scholarship of the Hungarian Academy of Sciences, and also by the \'UNKP-21-5 New National Excellence Program of the Ministry for Innovation and Technology from the source of the National Research, Development and Innovation Fund.
\footnotesize
| {
"timestamp": "2022-06-22T02:45:06",
"yymm": "2206",
"arxiv_id": "2206.10220",
"language": "en",
"url": "https://arxiv.org/abs/2206.10220",
"abstract": "In this work, we study the application the classical Richardson extrapolation (RE) technique to accelerate the convergence of sequences resulting from linear multistep methods (LMMs) for solving initial-value problems of systems of ordinary differential equations numerically. The advantage of the LMM-RE approach is that the combined method possesses higher order and favorable linear stability properties in terms of $A$- or $A(\\alpha)$-stability, and existing LMM codes can be used without any modification.",
"subjects": "Numerical Analysis (math.NA)",
"title": "Linear multistep methods and global Richardson extrapolation",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109516378094,
"lm_q2_score": 0.803173801068221,
"lm_q1q2_score": 0.7909743553605514
} |
https://arxiv.org/abs/2205.00786 | Solving PDEs by Variational Physics-Informed Neural Networks: an a posteriori error analysis | We consider the discretization of elliptic boundary-value problems by variational physics-informed neural networks (VPINNs), in which test functions are continuous, piecewise linear functions on a triangulation of the domain. We define an a posteriori error estimator, made of a residual-type term, a loss-function term, and data oscillation terms. We prove that the estimator is both reliable and efficient in controlling the energy norm of the error between the exact and VPINN solutions. Numerical results are in excellent agreement with the theoretical predictions. | \section{Introduction} \label{sec1}
The possibility of using deep-learning tools for solving complex physical models has attracted the attention of many scientists over the last few years. We have in mind in this paper models that are mathematically described by partial differential equations, supplemented by suitable boundary and initial conditions. In the most general setting, if no information on the model is available except the knowledge of some of its solutions, the model may be completely surrogated by one or more neural networks, trained by data (i.e., by the known solutions). However, in most situations of interest, the mathematical model is known (e.g., the Navier-Stokes equations describing an incompressible flow), and such information may be suitably exploited in training the network(s): one gets the so-called Physics Informed Neural Networks (PINNs). This approach was first proposed in \cite{raissi2019physics}, and it inspired further works such as \cite{tartakovsky2018learning} and \cite{yang2019adversarial}, until the recent paper \cite{LanthalerMishraKarniadakis2021} which presents a very general framework for the solution of operator equations by deep neural networks. PINNs are trained by using the strong form of the differential equations, which are enforced at a set of points in the domain by suitably defining the loss function. In this sense, PINNs can be viewed as particular instances of least-squares/collocation methods.
Based on the weak formulation of the differential model, the so-called Variational Physics-Informed Neural Networks (VPINNs), proposed in \cite{kharazmi2019variational}, enforce the equations by means of suitably chosen test functions, not necessarily represented by neural networks \cite{khodayi2020varnet}; they are instances of least-squares/Petrov-Galerkin methods. While the construction of the loss function is generally less expensive for PINNs than for VPINNs, the latter allow for the treatment of models with less regular solutions, as well as an easier enforcement of boundary conditions. In addition, the error analysis for VPINNs takes advantage of the available results for the discretization of variational problems, in fulfilling the assumptions of the Lax--Richtmyer theorem `stability plus consistency imply convergence'. Actually, consistency results follow rather easily from the recently established approximation properties of neural networks in Sobolev spaces (see, e.g., \cite{elbrachter2021deep}, \cite{guhring2020error}, \cite{opschoor2020deep}, \cite{kutyniok2021theoretical}, \cite{opschoor2021exponential}, \cite{gonon2021deep}), whereas the derivation of stability estimates for the neural network solution appears to be a less trivial task: indeed, a neural network is identified by its weights, which usually far outnumber the conditions enforced in its training. In other words, the training of a neural network is functionally an ill-posed problem.
In this respect, we considered in \cite{BeCaPi2021} a Petrov-Galerkin framework in which trial functions are defined by means of neural networks, whereas test functions are made of continuous, piecewise linear functions on a triangulation of the domain. Relying on an inf-sup condition between spaces of piecewise polynomial functions, we derived an a priori error estimate in the energy norm between the exact solution of an elliptic boundary-value problem and a high-order interpolant of a deep neural network, which minimizes the loss function. Numerical results indicate that the error follows a similar behavior when the interpolation operator is turned off.
The purpose of the present paper is to perform an a posteriori error analysis for VPINNs, i.e., to get estimates on the error which only depend on the computed VPINN solution, rather than the unknown exact solution. This is important to get a practical and quantitative information on the quality of the approximation. After setting the model elliptic boundary-value problem in Sect. \ref{sec:setting}, and the corresponding VPINN discretization in Sect. \ref{sec:sub_discretization}, we define in Sect. \ref{sec:aposteriori-theory} a computable residual-type error estimator, and prove that it is both reliable and efficient in controlling the energy error between the exact solution and the VPINN solution. Reliability means that the global error is upper bounded by a constant times the estimator, efficiency means that the estimator cannot over-estimate the energy error, since the latter is lower bounded by a constant times the former up to data oscillation terms. The proposed estimator is obtained by summing up several terms: one is the classical residual-type estimator in finite elements, measuring the bulk error inside each element of the triangulation as well as the inter-element gradient jumps; another term accounts for the magnitude of the loss function after minimization is performed; the remaining terms measure data oscillations, i.e., the errors committed by locally projecting the equation's coefficients and right-hand side upon suitable polynomial spaces. The estimator can be written as a sum of elemental contributions, thereby allowing its use within an adaptive discretization strategy which refines the elements carrying the largest contributions to the estimator.
\section{The model boundary-value problem} \label{sec:setting}
Let $\Omega \subset \mathbb{R}^n$ be a bounded polygonal/polyhedral domain with Lipschitz boundary $\Gamma=\partial\Omega$.
Let us consider the model elliptic boundary-value problem
\begin{equation}\label{eq:model-pb}
\begin{cases}
Lu:=-\nabla \cdot (\mu \nabla u) + \boldsymbol{\beta}\cdot \nabla u + \sigma u =f & \text{in \ } \Omega\,, \\
u=0 & \text{on \ } \Gamma \,, \end{cases}
\end{equation}
where $\mu, \sigma \in {\rm L}^\infty(\Omega)$, $ \boldsymbol{\beta} \in ({\rm W}^{1,\infty}(\Omega))^n$ satisfy $\mu \geq \mu_0$, $\sigma - \frac12 \nabla \cdot \boldsymbol{\beta} \geq 0$ in $\Omega$ for some constant $\mu_0>0$, whereas $f \in L^2(\Omega)$.
\smallskip
Setting $V={\rm H}^1_{0}(\Omega)$, define the bilinear and linear forms
\begin{equation}\label{eq:form a}
a:V\times V \to \mathbb{R}\,, \qquad a(w,v)=\int_\Omega \mu \nabla w \cdot \nabla v + \boldsymbol{\beta}\cdot \nabla w \, v + \sigma w \, v\,,
\end{equation}
\begin{equation}\label{eq:forms F}
F:V\to \mathbb{R}\,, \qquad F(v)=\int_\Omega f \, v \,;
\end{equation}
denote by $\alpha \geq \mu_0$ the coercivity constant of the form $a$, and by $\Vert a \Vert$, $\Vert F \Vert$ the continuity constants of the forms $a$ and $F$. Problem \eqref{eq:model-pb} is formulated variationally as follows: {\it Find $u \in V $ such that}
\begin{equation}\label{eq:model-pb-var}
a(u,v)=F(v) \qquad \forall v \in V\,.
\end{equation}
\begin{remark}[Other boundary conditions]\label{rem:other-bcs}
{\rm
The forthcoming formulation of the discretized problem and the a posteriori error analysis can be extended without pain to cover the case of mixed Dirichlet-Neumann boundary conditions, namely $u=g$ on $\Gamma_D$, $\mu \partial_n u =\psi$ on $\Gamma_N$, with $\Gamma_D \cup \Gamma_N = \Gamma$. We just consider homogeneous Dirichlet conditions to avoid an excess of technicalities.
}
\end{remark}
\subsection{The VPINN discretization} \label{sec:sub_discretization}
We aim at approximating the solution of Problem \eqref{eq:model-pb} by a generalized Petrov-Galerkin strategy.
To define the subset of $V$ of trial functions, let us choose a fully-connected feed-forward neural network structure $\NN$, with $n$ input variables and 1 output variable, identified by the number of layers $L$, the layer widths $N_\ell$, $\ell=1, \dots, L$, and the activation function $\rho$. Thus, each choice of the weights ${\mathbf w} \in \mathbb{R}^N$ defines a mapping $w^\NN : \boldsymbol{x} \mapsto w(\boldsymbol{x},{\mathbf w})$, which we think of as restricted to the closed domain $\bar{\Omega}$; let us denote by $W^\NN$ the manifold containing all functions that can be generated by this neural network structure. We enforce the homogeneous Dirichlet boundary conditions by multiplying each $w$ by a fixed smooth function $\Phi \in V$ (we refer to \cite{sukumar2022} for a general strategy to construct this function); we assume that $v^\NN = \Phi w^\NN$ belongs to $V$ for any $w^\NN \in W^\NN$. In conclusion, our manifold of trial functions will be
$$
V^\NN = \{ v^\NN \in V : v^\NN=\Phi w^\NN \text{ for some }w^\NN \in W^\NN \}\,.
$$
To define the subspace of $V$ of test functions, let us introduce a conforming, shape-regular triangulation ${\cal T}_h= \{ E \}$ of $\bar{\Omega}$ with meshsize $h>0$ and let $V_h \subset V$ be the linear subspace formed by the functions which are piecewise linear polynomials over the triangulation ${\cal T}_h$. Furthermore, let us introduce computable approximations of the forms $a$ and $F$ by numerical quadratures. Precisely, for any $E \in {\cal T}_h$, let $\{(\xi^E_\iota,\omega^E_\iota) : \iota \in I^E\}$ be the nodes and weights of a quadrature formula of precision $q \geq 2$
on $E$. Then, assuming that all data $\mu$, $\boldsymbol{\beta}$, $\sigma$, $f$ are continuous in each element of the triangulation, we define the approximate forms
\begin{equation}\label{eq:def-ah}
a_h(w,v)= \sum_{E \in {\cal T}_h} \sum_{\iota \in I^E} [\mu \nabla w \cdot \nabla v + \boldsymbol{\beta}\cdot \nabla w \, v + \sigma w v](\xi^E_\iota) \,\omega^E_\iota\,,
\end{equation}
\begin{equation}\label{eq:def-Fh}
F_h(v) = \sum_{E \in {\cal T}_h} \sum_{\iota \in I^E} [ f v](\xi^E_\iota) \,\omega^E_\iota \,.
\end{equation}
With these ingredients at hand, we would like to approximate the solution of Problem \eqref{eq:model-pb-var} by some $u^{\cal N\!N} \in V^{\cal N\!N}$ satisfying
\begin{equation}\label{eq:PGproblem}
a_h(u^{\cal N\!N},v_h)=F_h(v_h) \qquad \forall v_h \in V_h\,.
\end{equation}
In order to handle this problem by the neural network, let us introduce a basis in $V_h$, say $V_h = \text{span}\{\varphi_i : i\in I_h\}$, and for any $w \in V $ let us define the residuals
\begin{equation}\label{eq:residuals}
r_{h,i}(w)=F_h(\varphi_i)-a_h(w,\varphi_i)\,, \qquad i \in I_h\,,
\end{equation}
as well as the loss function
\begin{equation}\label{eq:loss-function}
R_h^2(w) = \sum_{i \in I_h} r_{h,i}^2(w) \,.
\end{equation}
Then, we search for a global minimum of the loss function in $V^{\cal N\!N}$, i.e., we consider the following minimization problem: {\it Find $u^{\cal N\!N} \in V^{\cal N\!N}$ such that}
\begin{equation}\label{eq:min-prob}
u^{\cal N\!N} \in \displaystyle{\text{arg}\!\!\!\!\min_{w \in V^{\cal N\!N}}}\, R_h^2(w) \,.
\end{equation}
Note that any solution $u^{\cal N\!N}$ of \eqref{eq:PGproblem} annihilates the loss function, hence it is a solution of \eqref{eq:min-prob}; such a solution may not be unique, since the set of equations \eqref{eq:PGproblem} may be underdetermined (in particular, for $f=0$ one may obtain a non-zero $u^{\cal N\!N}$, see \cite[Sect. 6.3]{BeCaPi2021}). On the other hand, system \eqref{eq:PGproblem} may be overdetermined, and admit no solution; in this case, the loss function will have strictly positive minima.
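To make the construction concrete, the following self-contained PyTorch sketch (our illustration of \eqref{eq:min-prob}, not the authors' implementation) assembles the residuals \eqref{eq:residuals} and the loss \eqref{eq:loss-function} for the one-dimensional model problem $-u''=f$ on $(0,1)$, i.e., $\mu=1$, $\boldsymbol{\beta}=0$, $\sigma=0$, with $f(x)=\pi^2\sin(\pi x)$ so that $u(x)=\sin(\pi x)$; the test functions are the interior hat functions, the quadrature is a two-point Gauss rule per element ($q=3$), $\Phi(x)=x(1-x)$ enforces the boundary conditions, and the network size, optimizer, and iteration count are arbitrary choices:
\begin{verbatim}
import torch

torch.manual_seed(0)
M = 16                                   # number of mesh elements
nodes = torch.linspace(0.0, 1.0, M + 1)
h = 1.0 / M

# two-point Gauss--Legendre rule on each element (precision q = 3)
gp = torch.tensor([-1.0, 1.0]) / 3**0.5
gw = torch.tensor([1.0, 1.0])
mid = 0.5 * (nodes[:-1] + nodes[1:])
xq = (mid[:, None] + 0.5 * h * gp[None, :]).reshape(-1)  # quad points
wq = (0.5 * h * gw).repeat(M)                            # quad weights
xq.requires_grad_(True)

f = lambda x: torch.pi**2 * torch.sin(torch.pi * x)  # exact u = sin(pi x)

net = torch.nn.Sequential(
    torch.nn.Linear(1, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 1))

def trial(x):
    # v = Phi * w with Phi(x) = x(1 - x), so v(0) = v(1) = 0
    return x * (1 - x) * net(x[:, None]).squeeze(-1)

xc = nodes[torch.arange(1, M)]           # interior nodes carry hats

def hat_vals(x):                         # phi_i(x), one row per test fn
    return torch.clamp(1 - (x[None, :] - xc[:, None]).abs() / h, min=0.0)

def hat_grads(x):                        # phi_i'(x): +1/h then -1/h
    g = torch.zeros(M - 1, x.numel())
    g[(x[None, :] > xc[:, None] - h) & (x[None, :] <= xc[:, None])] = 1.0 / h
    g[(x[None, :] > xc[:, None]) & (x[None, :] < xc[:, None] + h)] = -1.0 / h
    return g

def loss():
    v = trial(xq)
    dv = torch.autograd.grad(v.sum(), xq, create_graph=True)[0]
    a = (hat_grads(xq) * dv[None, :] * wq[None, :]).sum(dim=1)    # a_h
    F = (hat_vals(xq) * f(xq)[None, :] * wq[None, :]).sum(dim=1)  # F_h
    return ((F - a) ** 2).sum()          # R_h^2

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for it in range(5000):
    opt.zero_grad(); L = loss(); L.backward(); opt.step()

with torch.no_grad():
    xs = torch.linspace(0.0, 1.0, 101)
    print(float((trial(xs) - torch.sin(torch.pi * xs)).abs().max()))
\end{verbatim}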
\begin{remark}[Discretization with interpolation] \label{rem:no-interp}{\rm
In order to reduce and control the randomic effects related to the use of a network depending upon a large number of weights, in \cite{BeCaPi2021} we proposed to locally project the neural network upon a space of polynomials, before computing the loss function.
To be precise, we have considered a conforming, shape-regular partition ${\cal T}_H=\{G\}$ of $\bar{\Omega}$, which is equal to or coarser than ${\cal T}_h$ (i.e., each element $E \in {\cal T}_h$ is contained in an element $G \in {\cal T}_H$) but compatible with ${\cal T}_h$ (i.e., its meshsize $H>0$ satisfies $H\lesssim h$). Let $V_H \subset V$ be the linear subspace formed by the functions which are piecewise polynomials of degree $k_\text{int}=q+1$ over the triangulation ${\cal T}_H$, and let ${\cal I}_H : {\rm C}^0(\bar{\Omega}) \to V_H$ be the associated element-wise Lagrange interpolation operator.
Given a neural network $w \in V^\NN$, let us denote by $w_H= {\cal I}_H w^\NN \in V_H$ its piecewise polynomial interpolant.
Then, the definition \eqref{eq:residuals} of local residuals is modified as
\begin{equation}\label{eq:residuals-tilde}
\tilde{r}_{h,i}(w)=F_h(\varphi_i)-a_h(w_H,\varphi_i)\,, \qquad i \in I_h\,;
\end{equation}
consequently, the loss function takes the form
\begin{equation}\label{eq:loss-function-tilde}
\tilde{R}_h^2(w) = \sum_{i \in I_h} \tilde{r}_{h,i}^2(w) \,,
\end{equation}
and we define a new approximation of the solution of Problem \eqref{eq:model-pb-var} by setting
\begin{equation}\label{eq:min-prob-tilde}
\tilde{u}^\NN_H = {\cal I}_H \tilde{u}^\NN \in V_H\,, \qquad \text{where} \quad \tilde{u}^{\cal N\!N} \in \displaystyle{\text{arg}\!\!\!\!\min_{w \in V^{\cal N\!N}}}\, \tilde{R}_h^2(w) \,.
\end{equation}
In \cite{BeCaPi2021} we derived an a priori error estimate for the error $\Vert u - \tilde{u}^\NN_H \Vert_V$, and we documented the error decay as $h \to 0$, which turns out to have a more regular behavior than the error $\Vert u - {u}^\NN \Vert_V$, although the latter is usually smaller.
The subsequent a posteriori error analysis could be extended to give a control on the error produced by $\tilde{u}^\NN_H $ as well. For the sake of simplicity, we do not pursue such a task here.
}
\end{remark}
\section{The a posteriori error estimator}\label{sec:aposteriori-theory}
In order to build an error estimator, let us first choose, for any $E \in {\cal T}_h$ and any $k \geq 0$, a projection operator $\Pi_{E,k} : L^2(E) \to \mathbb{P}_k(E)$ satisfying
\begin{equation}\label{eq:mean}
\int_E \Pi_{E,k} \varphi = \int_E \varphi \qquad \forall \varphi \in L^2(E) \,.
\end{equation}
This allows us to introduce approximate bilinear and linear forms
\begin{equation}\label{eq:form aPi}
a_\pi(w,v)=\sum_{E \in {\cal T}_h} \int_E \Pi_{E,q}\left( \mu \nabla w \right) \cdot \nabla v + \Pi_{E,q-1}\left( \boldsymbol{\beta}\cdot \nabla w + \sigma w\right) v\,,
\end{equation}
\begin{equation}\label{eq:forms FPi}
F_\pi (v)=\sum_{E \in {\cal T}_h} \int_E \left(\Pi_{E,q-1} f\right) v \,,
\end{equation}
which are useful in the forthcoming derivation. Indeed, the coercivity of the form $a$ allows us to bound the $V$-norm of the error as follows:
\begin{equation}\label{eq:inf-sup}
\vert u - u^\NN \vert_{1,\Omega} \leq \frac1\alpha \sup_{v \in V} \frac{a(u - u^\NN,v)}{\vert v \vert_{1,\Omega}} \,.
\end{equation}
We split the numerator as
\begin{equation}\label{eq:split-a}
\begin{split}
a(u - u^\NN,v) &= F(v) -a(u^\NN,v)
= \underbrace{F(v)-F_\pi(v)}_{(\text{I})} \ + \ \underbrace{F_\pi(v)-a_\pi(u^\NN,v)}_{(\text{III})}\\
& \quad + \ \underbrace{a_\pi(u^\NN,v) - a(u^\NN,v)}_{(\text{II})}
\end{split}
\end{equation}
and we proceed to bound each term on the right-hand side.
The terms $({\rm I})$ and $({\rm II})$ account for the element-wise projection error upon polynomial spaces; they are estimated in the next two Lemmas.
\begin{lemma}\label{lem:bound-I}
The quantity $({\rm I})$ defined in \eqref{eq:split-a} satisfies
\begin{equation}\label{eq:bound-I}
\vert ( {\rm I} ) \vert \lesssim \Big(\sum_{E \in {\cal T}_h} \eta_{{\rm rhs},1}^2(E) \Big)^{1/2} \vert v \vert_{1,\Omega}\,,
\end{equation}
with
\begin{equation}\label{eq:eta-f}
\eta_{{\rm rhs},1}(E) = h_E \Vert f - \Pi_{E,q-1} f \Vert_{0,E} \,.
\end{equation}
\end{lemma}
\proof Setting $m_E(v)=\frac1{\vert E \vert} \int_E v$ and using \eqref{eq:mean}, we get
$$
( {\rm I} ) = \sum_{E \in {\cal T}_h} \int_E \left( f - \Pi_{E,q-1} f \right)(v-m_E(v) ) \,,
$$
and we conclude using the bound $\Vert v - m_E(v) \Vert_{0,E} \lesssim h_E \vert v \vert_{1,E}$. \endproof
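For instance, in one space dimension the oscillation term \eqref{eq:eta-f} can be evaluated elementwise through an $L^2$-orthonormal Legendre basis (a projection which, in particular, satisfies \eqref{eq:mean}); the following numpy sketch (our own illustration) computes $\eta_{{\rm rhs},1}(E)$ on a single interval $E=(a,b)$:
\begin{verbatim}
import numpy as np
from numpy.polynomial import legendre

def eta_rhs1(f, a, b, q):
    """h_E * ||f - Pi_{E,q-1} f||_{0,E} on E = (a,b), with Pi_{E,q-1}
    the L2-orthogonal projection onto P_{q-1}(E)."""
    h = b - a
    x, w = legendre.leggauss(q + 8)          # high-order rule on [-1, 1]
    xe, we = a + 0.5 * h * (x + 1.0), 0.5 * h * w
    fe = f(xe)
    err2 = np.dot(we, fe**2)                 # ||f||^2 minus Fourier terms
    for j in range(q):                       # orthonormal Legendre basis on E
        c = np.zeros(j + 1); c[j] = 1.0
        Lj = legendre.legval(2 * (xe - a) / h - 1, c) * np.sqrt((2*j + 1) / h)
        err2 -= np.dot(we, fe * Lj) ** 2
    return h * np.sqrt(max(err2, 0.0))

print(eta_rhs1(np.sin, 0.0, 0.25, q=2))      # small: sin is smooth here
\end{verbatim}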
\begin{lemma}\label{lem:bound-III}
The quantity $({\rm II})$ defined in \eqref{eq:split-a} satisfies
\begin{equation}\label{eq:bound-III}
\vert ( {\rm II} ) \vert \lesssim \Big( \sum_{E \in {\cal T}_h} \big( \eta_{{\rm coef},1}^2(E) + \eta_{{\rm coef},2}^2(E) + \eta_{{\rm coef},3}^2(E) \big)
\Big)^{1/2} \vert v \vert_{1,\Omega}\,,
\end{equation}
with
\begin{equation}\label{eq:eta-coef-13}
\begin{split}
\eta_{{\rm coef},1}(E) &= \Vert \mu \nabla u^\NN - \Pi_{E,q} (\mu \nabla u^\NN) \Vert_{0,E} \,, \\[3pt]
\eta_{{\rm coef},2}(E) &= h_E \Vert \boldsymbol{\beta}\cdot \nabla u^\NN - \Pi_{E,q-1}( \boldsymbol{\beta}\cdot \nabla u^\NN) \Vert_{0,E} \,, \\[3pt]
\eta_{{\rm coef},3}(E) &= h_E \Vert \sigma u^\NN - \Pi_{E,q-1}( \sigma u^\NN) \Vert_{0,E} \,.
\end{split}
\end{equation}
\end{lemma}
\proof It holds
\begin{equation*}
\begin{split}
({\rm II}) &= \sum_{E \in {\cal T}_h} \int_E \Big( \mu \nabla u^\NN - \Pi_{E,q}(\mu \nabla u^\NN) \Big) \cdot \nabla v \\
& \quad + \sum_{E \in {\cal T}_h} \int_E \Big( \boldsymbol{\beta}\cdot \nabla u^\NN - \Pi_{E,q-1}( \boldsymbol{\beta}\cdot \nabla u^\NN) \Big) (v - m_E(v)) \\
& \quad + \sum_{E \in {\cal T}_h} \int_E \Big( \sigma u^\NN - \Pi_{E,q-1}( \sigma u^\NN) \Big) (v - m_E(v)) \,,
\end{split}
\end{equation*}
where we have used again \eqref{eq:mean}. We conclude as in the proof of Lemma \ref{lem:bound-I}.\endproof
Let us now focus on the quantity $({\rm III})$, which can be written as
\begin{equation}\label{eq:split-III}
({\rm III}) = \underbrace{F_\pi(v-v_h) - a_\pi(u^\NN,v-v_h)}_{(\text{IV})} + \underbrace{F_\pi(v_h) - a_\pi(u^\NN,v_h)}_{(\text{V})} \,, \qquad \forall v_h \in V_h\,;
\end{equation}
in turn, the quantity $({\rm V})$ can be written as
\begin{equation}\label{eq:split-V}
({\rm V}) = \underbrace{F_\pi(v_h) -F_h(v_h)}_{(\text{VII})} + \underbrace{F_h(v_h)-a_h(u^\NN,v_h)}_{(\text{VI})} + \underbrace{a_h(u^\NN,v_h)-a_\pi(u^\NN,v_h)}_{(\text{VIII})} \,.
\end{equation}
The bound of $({\rm IV})$ is standard in finite-element a posteriori error analysis: it involves the local bulk residuals
\begin{equation}\label{eq:def-bulk}
{\rm bulk}_E(u^\NN) = \Pi_{E,q-1}f +\nabla \cdot \Pi_{E,q} (\mu \nabla u^\NN) - \Pi_{E,q-1}( \boldsymbol{\beta}\cdot \nabla u^\NN + \sigma u^\NN)
\end{equation}
and the interelement jumps at each edge $e$ shared by two elements, say $E_1$ and $E_2$ with opposite normal unit vectors $\boldsymbol{n}_1$ and $\boldsymbol{n}_2$, namely
\begin{equation}\label{eq:def-jump}
{\rm jump}_e(u^\NN) = \Pi_{E_1,q}(\mu \nabla u^\NN)\cdot \boldsymbol{n}_1 + \Pi_{E_2,q}(\mu \nabla u^\NN)\cdot \boldsymbol{n}_2
\,;
\end{equation}
in addition, one sets ${\rm jump}_e(u^\NN) =0$ if $e \subset \partial \Omega$.
To derive the bound, the test function $v_h$ in \eqref{eq:split-III} is chosen as $v_h=I_h^C v$, the Cl\'ement interpolant of $v$ on ${{\cal T}_h}$ \cite{clement1975}, which satisfies
\begin{equation}\label{eq:clement}
\Vert v-I_h^C v \Vert_{k,E} \lesssim h_E^k \vert v \vert_{1, D_E}, \qquad k=0,1 \,,
\end{equation}
where $D_E = \cup \{E' \in {{\cal T}_h} : E \cap E' \not= \emptyset\}$.
\begin{lemma}\label{lem:bound-IV}
The quantity $({\rm IV})$ defined in \eqref{eq:split-III} satisfies
\begin{equation}\label{eq:bound-IV}
\vert ( {\rm IV} ) \vert \lesssim \Big(\sum_{E \in {\cal T}_h} \eta_{{\rm res}}^2(E) \Big)^{1/2} \vert v \vert_{1,\Omega}\,,
\end{equation}
where
\begin{equation}\label{eq:eta-res}
\eta_{{\rm res}}(E) = h_E \Vert \, {\rm bulk}_E(u^\NN) \, \Vert_{0,E} + h_E^{1/2} \sum_{e \subset \partial E} \Vert \,{\rm jump}_e(u^\NN) \, \Vert_{0,e} \,,
\end{equation}
with ${\rm bulk}_E(u^\NN)$ defined in \eqref{eq:def-bulk} and ${\rm jump}_e(u^\NN)$ defined in \eqref{eq:def-jump}.
\end{lemma}
\begin{proof} We refer e.g. to \cite{verfurth1996} for more details.
\end{proof}
Before considering the quantity $({\rm VI})$, let us state a useful result of equivalence of norms.
\begin{lemma}\label{lem:equi-norm}
For any $v_h = \sum_{i \in I_h} v_i \varphi_i \in V_h$, let $\boldsymbol{v} = (v_i)_{i \in I_h}$ be the vector of its coefficients. There exist
constants $0< c_h \leq C_h$, possibly depending on $h$ such that
\begin{equation}\label{eq:norm-equiv-Vh}
c_h \vert v_h \vert_{1, \Omega} \leq \Vert \boldsymbol{v} \Vert_2 \leq C_h \vert v_h \vert_{1, \Omega} \qquad \forall v_h \in V_h \,,
\end{equation}
where $\Vert \boldsymbol{v} \Vert_2 = \left( \sum_{i \in I_h} v_i^2 \right)^{1/2}$.
\end{lemma}
\begin{proof} The result expresses the equivalence of norms in finite-dimensional spaces. If the triangulation ${\cal T}_h$ is quasi-uniform, then one can prove by a standard reference-element argument that $c_h \simeq h^{1-n/2}$, whereas $C_h \simeq h^{-n/2}$.
\end{proof}
We are now able to bound the quantity $({\rm VI})$ in terms of the loss function introduced in \eqref{eq:loss-function}, as follows.
\begin{lemma}\label{lem:bound-VI}
The quantity $({\rm VI})$ defined in \eqref{eq:split-V} satisfies
\begin{equation}\label{eq:bound-VI}
\vert ( {\rm VI} ) \vert \lesssim \eta_{{\rm loss}} \vert v \vert_{1,\Omega}\,,
\end{equation}
where
\begin{equation}
\eta_{{\rm loss}} = C_h R_h(u^\NN)
\end{equation}
and the constant $C_h$ is defined in \eqref{eq:norm-equiv-Vh}.
\end{lemma}
\begin{proof}
Writing $v_h = \sum_{i \in I_h} v_i \varphi_i$, it holds
$$
({\rm VI}) = \sum_{i \in I_h} r_{h,i}(u^\NN) v_i \,,
$$
whence
$$
\vert ({\rm VI}) \vert \lesssim R_h(u^\NN) \Vert \boldsymbol{v} \Vert_2 \,.
$$
We conclude by using \eqref{eq:norm-equiv-Vh} and observing that
\begin{equation}\label{eq:clem-bound}
\vert v_h \vert_{1, \Omega} \lesssim \vert v \vert_{1, \Omega} \,,
\end{equation}
since we have chosen $v_h=I_h^C v$ and \eqref{eq:clement} holds.
\end{proof}
We are left with the problem of bounding the terms $({\rm VII})$ and $({\rm VIII})$ in \eqref{eq:split-V}. They are similar to the terms $({\rm I})$ and $({\rm II})$, respectively, but reflect the presence of the quadrature formula introduced in \eqref{eq:def-ah} and \eqref{eq:def-Fh}.
In the forthcoming analysis, it will be useful to introduce the following notation for the quadrature-based discrete (semi-)norm on $C^0 (E)$:
\begin{equation}\label{eq:discr-norm}
\Vert \varphi \Vert_{0, E, \omega}= \left(\sum_{\iota \in I^E} \varphi^2(\xi^E_\iota) \,\omega^E_\iota \right)^{1/2} \,.
\end{equation}
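For illustration, this discrete norm is immediate to evaluate once the quadrature nodes $\xi^E_\iota$ and weights $\omega^E_\iota$ on an element are available; the following is a minimal Python sketch (the function and array names are ours and purely illustrative):
\begin{verbatim}
import numpy as np

def discrete_norm(phi, nodes, weights):
    # ||phi||_{0,E,omega} = ( sum_i phi(xi_i)^2 * w_i )^(1/2),
    # with `nodes` an (m, 2) array of quadrature points on E and
    # `weights` the corresponding (m,) array of quadrature weights.
    vals = np.array([phi(x) for x in nodes])
    return np.sqrt(np.sum(vals**2 * weights))
\end{verbatim}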
Let us start with the quantity $({\rm VII})$. Recalling that the adopted quadrature rule has precision $q$ and that the test functions $v_h$ are piecewise linear polynomials, we have
\begin{equation}\label{eq:split-VII}
\begin{split}
({\rm VII}) &= \sum_{E \in {\cal T}_h} \left( \int_E (\Pi_{E,q-1} f) v_h - \sum_{\iota \in I^E} f(\xi^E_\iota) v_h(\xi^E_\iota) \,\omega^E_\iota \right) \\
&= \sum_{E \in {\cal T}_h} \left( \sum_{\iota \in I^E} (\Pi_{E,q-1} f - f) (\xi^E_\iota) v_h(\xi^E_\iota) \,\omega^E_\iota \right) \\
&= \underbrace{\sum_{E \in {\cal T}_h} \left( \sum_{\iota \in I^E} (\Pi_{E,q-1} f - f) (\xi^E_\iota) (v_h - m_E(v_h))(\xi^E_\iota) \,\omega^E_\iota \right)}_{(\text{VIIa})} \\
& \qquad + \underbrace{\sum_{E \in {\cal T}_h} \left( \sum_{\iota \in I^E} (\Pi_{E,q-1} f - f) (\xi^E_\iota) \,\omega^E_\iota m_E(v_h) \right)}_{(\text{VIIb})} \,.
\end{split}
\end{equation}
On the one hand, recalling the assumption $q \geq 2$ and inequality \eqref{eq:clem-bound}, one has
\begin{equation}
\begin{split}
\vert ({\rm VIIa}) \vert & \leq \sum_{E \in {\cal T}_h} \Vert f-\Pi_{E,q-1} f \Vert_{0, E, \omega} \Vert v_h - m_E(v_h) \Vert_{0, E, \omega} \\
& = \sum_{E \in {\cal T}_h} \Vert f-\Pi_{E,q-1} f \Vert_{0, E, \omega} \Vert v_h - m_E(v_h) \Vert_{0, E} \\
& \lesssim \sum_{E \in {\cal T}_h} h_E \Vert f-\Pi_{E,q-1} f \Vert_{0, E, \omega} \vert v_h \vert_{1,E} \\
& \lesssim \left( \sum_{E \in {\cal T}_h} h_E^2 \Vert f-\Pi_{E,q-1} f \Vert_{0, E, \omega}^2 \right)^{1/2} \vert v \vert_{1,\Omega} \,.
\end{split}
\end{equation}
On the other hand, we first observe that, by the exactness of the quadrature rule and \eqref{eq:mean}, we get
$$
\sum_{\iota \in I^E} (\Pi_{E,q-1} f )(\xi^E_\iota) \,\omega^E_\iota = \int_E \Pi_{E,q-1} f = \int_E f = \int_E \Pi_{E,q} f =
\sum_{\iota \in I^E} (\Pi_{E,q} f )(\xi^E_\iota) \,\omega^E_\iota.
$$
Hence,
\begin{equation}
\begin{split}
\vert ({\rm VIIb}) \vert & \leq \sum_{E \in {\cal T}_h} \Vert f-\Pi_{E,q} f \Vert_{0, E, \omega} \Vert m_E(v_h) \Vert_{0, E} \\
& \leq \sum_{E \in {\cal T}_h} \Vert f-\Pi_{E,q} f \Vert_{0, E, \omega} \Vert v_h \Vert_{0,E} \\
& \lesssim \left( \sum_{E \in {\cal T}_h} \Vert f-\Pi_{E,q} f \Vert_{0, E, \omega}^2 \right)^{1/2} \vert v \vert_{1,\Omega} \,.
\end{split}
\end{equation}
Summarizing, we obtain the following result, which is analogous to that in Lemma \ref{lem:bound-I}.
\begin{lemma}\label{lem:bound-VII}
The quantity $({\rm VII})$ defined in \eqref{eq:split-V} satisfies
\begin{equation}\label{eq:bound-VII}
\vert ( {\rm VII} ) \vert \lesssim \Big(\sum_{E \in {\cal T}_h} \eta_{{\rm rhs},2}^2(E) \Big)^{1/2} \vert v \vert_{1,\Omega}\,,
\end{equation}
with
\begin{equation}\label{eq:eta-f-2}
\eta_{{\rm rhs},2}(E) = h_E \Vert f - \Pi_{E,q-1} f \Vert_{0,E,\omega} + \Vert f - \Pi_{E,q} f \Vert_{0,E,\omega} \,.
\end{equation}
\end{lemma}
The last term in \eqref{eq:split-V}, $({\rm VIII})$, can be written as
\begin{equation}\label{eq:split-VIII}
\begin{split}
({\rm VIII}) &= \underbrace{\sum_{E \in {\cal T}_h} \left( \sum_{\iota \in I^E} (\mu \nabla u^\NN)(\xi^E_\iota) \cdot \nabla v_h \,\omega^E_\iota - \int_E \Pi_{E,q}(\mu \nabla u^\NN)\cdot \nabla v_h \right)}_{(\text{VIIIa})} \\
& \ \ + \underbrace{\sum_{E \in {\cal T}_h} \left( \sum_{\iota \in I^E} ( \boldsymbol{\beta}\cdot \nabla u^\NN)(\xi^E_\iota) \, v_h(\xi^E_\iota) \,\omega^E_\iota - \int_E \Pi_{E,q-1}(\boldsymbol{\beta}\cdot \nabla u^\NN) \, v_h \right)}_{(\text{VIIIb})} \\
& \ \ + \underbrace{\sum_{E \in {\cal T}_h} \left( \sum_{\iota \in I^E} (\sigma u^\NN)(\xi^E_\iota)\, v_h (\xi^E_\iota) \,\omega^E_\iota - \int_E \Pi_{E,q-1}(\sigma u^\NN) \, v_h \right)}_{(\text{VIIIc})} \,.
\end{split}
\end{equation}
Concerning $(\text{VIIIa})$, by the exactness of the quadrature rule and the fact that $\nabla v_h$ is piecewise constant, one has
$$
(\text{VIIIa}) = \sum_{E \in {\cal T}_h} \sum_{\iota \in I^E} \big(\mu \nabla u^\NN - \Pi_{E,q}(\mu \nabla u^\NN)\big)(\xi^E_\iota) \cdot \nabla v_h \,\omega^E_\iota \,,
$$
which easily gives
$$
\vert ( {\rm VIIIa} ) \vert \lesssim \left( \sum_{E \in {\cal T}_h} \Vert \mu \nabla u^\NN - \Pi_{E,q}(\mu \nabla u^\NN) \Vert_{0,E,\omega}^2 \right)^{1/2} \vert v \vert_{1,\Omega} \,.
$$
The terms $(\text{VIIIb})$ and $(\text{VIIIc})$ are similar to the term $(\text{VII})$ above, in which $f$ is replaced by $\boldsymbol{\beta}\cdot \nabla u^\NN$ and $\sigma u^\NN$, respectively. Hence, they can be bounded as done for $(\text{VII})$. Summarizing, we obtain the following result, which is analogous to that in Lemma \ref{lem:bound-III}.
\begin{lemma}\label{lem:bound-VIII}
The quantity $({\rm VIII})$ defined in \eqref{eq:split-V} satisfies
\begin{equation}\label{eq:bound-VIII}
\vert ( {\rm VIII} ) \vert \lesssim \Big( \sum_{E \in {\cal T}_h} \big( \eta_{{\rm coef},4}^2(E) + \eta_{{\rm coef},5}^2(E) + \eta_{{\rm coef},6}^2(E) \big)
\Big)^{1/2} \vert v \vert_{1,\Omega}\,,
\end{equation}
with
\begin{equation}\label{eq:eta-coef-46}
\begin{split}
\eta_{{\rm coef},4}(E) &= \Vert \mu \nabla u^\NN - \Pi_{E,q} (\mu \nabla u^\NN) \Vert_{0,E,\omega} \,, \\[3pt]
\eta_{{\rm coef},5}(E) &= h_E \Vert \boldsymbol{\beta}\cdot \nabla u^\NN - \Pi_{E,q-1}( \boldsymbol{\beta}\cdot \nabla u^\NN) \Vert_{0,E,\omega} \\[3pt]
& \qquad \qquad \qquad + \Vert \boldsymbol{\beta}\cdot \nabla u^\NN - \Pi_{E,q}( \boldsymbol{\beta}\cdot \nabla u^\NN) \Vert_{0,E,\omega} \,, \\[3pt]
\eta_{{\rm coef},6}(E) &= h_E \Vert \sigma u^\NN - \Pi_{E,q-1}( \sigma u^\NN) \Vert_{0,E,\omega} \\[3pt]
& \qquad \qquad \qquad + \Vert \sigma u^\NN - \Pi_{E,q}( \sigma u^\NN) \Vert_{0,E,\omega}\,.
\end{split}
\end{equation}
\end{lemma}
At this point, we are ready to derive the announced a posteriori error estimates. In order to get an upper bound of the error, we concatenate \eqref{eq:inf-sup}, \eqref{eq:split-a}, \eqref{eq:split-III}, \eqref{eq:split-V}, and use the bounds given in Lemmas \ref{lem:bound-I} to \ref{lem:bound-VIII}, arriving at the following result.
\begin{theorem}[a posteriori upper bound of the error]\label{theo:aposteriori-up}
Let $u^{\cal N\!N} \in V^{\cal N\!N}$ satisfy \eqref{eq:min-prob}. Then, the error $u-u^{\cal N\!N} $ can be estimated from above as follows:
\begin{equation}\label{eq:aposteriori1}
\vert u - u^\NN \vert_{1,\Omega} \lesssim \left( \eta_{\rm res} + \eta_{\rm loss} + \eta_{\rm coef} + \eta_{\rm rhs} \right) \,,
\end{equation}
where
\begin{equation}
\begin{split}
\eta_{\rm res}^2 &= \sum_{E \in {\cal T}_h} \eta_{\rm res}^2(E) \,, \quad \eta_{\rm coef}^2 = \sum_{E \in {\cal T}_h} \sum_{k=1}^6\eta_{{\rm coef},k}^2(E) \,, \quad \eta_{\rm rhs}^2 = \sum_{E \in {\cal T}_h} \sum_{k=1}^2\eta_{{\rm rhs},k}^2(E)\,.
\end{split}
\end{equation}
\end{theorem}
We observe that the global estimator $\eta= \eta_{\rm res} + \eta_{\rm loss} + \eta_{\rm coef} + \eta_{\rm rhs}$ is the sum of four contributions: $\eta_{\rm res}$ is the classical residual-based estimator; $\eta_{\rm loss}$ measures how small the minimized loss function is, i.e., how well the discrete variational equations \eqref{eq:PGproblem} are fulfilled; $\eta_{\rm coef}$ and $\eta_{\rm rhs}$ reflect the error in approximating elementwise the coefficients of the operator and the right-hand side by polynomials of degrees related to the precision of the quadrature formula.
\medskip
It is possible to derive from \eqref{eq:aposteriori1} an element-based a posteriori error estimator, which can be used to design an adaptive strategy of mesh refinement (see, e.g. \cite{NochettoCIME2012}). To this end, from now on we assume that the basis $\{\varphi_i : i\in I_h\}$ of $V_h$, introduced to define \eqref{eq:residuals}, is the canonical Lagrange basis associated with the nodes of the triangulation ${{\cal T}_h}$.
Given any $E \in {{\cal T}_h}$, we introduce the elemental index set $I_h^E =\{ i \in I_h : E \subset {\rm supp}\, \varphi_i\}$, where
${\rm supp}\, \varphi_i$ is the support of $\varphi_i$, and we define a local contribution to the term $\eta_{\rm loss}$ as follows:
\begin{equation}\label{eq:eta-local-loss}
\eta_{\rm loss}^2(E) = C_h^2 \sum_{i \in I_h^E} r_{h,i}^2(u^\NN) \,,
\end{equation}
which satisfies
$$
\eta_{\rm loss}^2 \leq \sum_{E \in {\cal T}_h} \eta_{\rm loss}^2(E) \,.
$$
With this definition at hand, we can introduce the following elemental error estimator.
\begin{definition}[elemental error estimator]\label{def:error-est}
For any $E \in {\cal T}_h$, let us set
\begin{equation}\label{eq:aposteriori3}
\eta^2(E) = \eta_{\rm res}^2(E) + \eta_{\rm loss}^2(E) + \sum_{k=1}^6\eta_{{\rm coef},k}^2(E) + \sum_{k=1}^2\eta_{{\rm rhs},k}^2(E) \,,
\end{equation}
where the addends in this sum are defined, respectively, in \eqref{eq:eta-res}, \eqref{eq:eta-local-loss}, \eqref{eq:eta-coef-13} and \eqref{eq:eta-coef-46}, \eqref{eq:eta-f} and \eqref{eq:eta-f-2}.
\end{definition}
Then, Theorem \ref{theo:aposteriori-up} can be re-formulated in terms of these quantities.
\begin{corollary}[localized a posteriori error estimator]\label{cor:aposteriori}
The error $u-u^{\cal N\!N} $ can be estimated as follows:
\begin{equation}\label{eq:aposteriori2}
\vert u - u^\NN \vert_{1,\Omega} \lesssim \Big( \sum_{E \in {\cal T}_h} \eta^2(E) \Big)^{1/2} \,.
\end{equation}
\end{corollary}
Inequality \eqref{eq:aposteriori2} guarantees the {\em reliability} of the proposed error estimator, namely, the estimator provides a computable upper bound of the discretization error. The next result ensures that the estimator is also {\em efficient}, namely, that it does not overestimate the error.
\begin{theorem}[a posteriori lower bound of the error]\label{theo:aposteriori-down}
Let $u^{\cal N\!N} \in V^{\cal N\!N}$ satisfy \eqref{eq:min-prob}. Then, the error $u-u^{\cal N\!N} $ can be locally estimated from below as follows: for any $E \in {\cal T}_h$ it holds
\begin{eqnarray}\label{eq:aposteriori3a}
\eta_{{\rm res}}(E) &\lesssim& \vert u - u^\NN \vert_{1,D_E} + \sum_{E' \subset D_E} \left(
\sum_{k=1}^3 \eta_{{\rm coef},k}^2(E') + \eta_{{\rm rhs},1}^2(E') \right)^{1/2} \,, \\
\frac{c_h}{C_h} \, \eta_{{\rm loss}}(E) &\lesssim & \vert u - u^\NN \vert_{1,D_E} + \sum_{E' \subset D_E} \left(
\sum_{k=1}^6 \eta_{{\rm coef},k}^2(E') + \sum_{k=1}^2\eta_{{\rm rhs},k}^2(E') \right)^{1/2} \,. \label{eq:aposteriori3b}
\end{eqnarray}
\end{theorem}
\proof
To derive \eqref{eq:aposteriori3a}, let us first consider the bulk contribution to the estimator. We apply a classical argument in a posteriori analysis, namely we introduce a non-negative bubble function $b_E \in V$ with support in $E$ and such that $\Vert \phi \Vert_{0,E} \simeq \Vert b_E^{1/2} \phi \Vert_{0,E} $ and $\Vert \phi \Vert_{0,E} \simeq (\Vert b_E \phi \Vert_{0,E} + h_E\vert b_E \phi \vert_{1,E})$ for all $\phi \in \mathbb{P}_q(E)$.
Let us set $w_E={\rm bulk}_E(u^\NN) b_E \in V$. Then,
$$
\Vert {\rm bulk}_E(u^\NN) \Vert_{0,E}^2 \lesssim \int_E {\rm bulk}_E(u^\NN)^2 b_E = \int_E {\rm bulk}_E(u^\NN) \, w_E \,.
$$
Writing
\begin{equation*}
\begin{split}
{\rm bulk}_E(u^\NN) &= (f-Lu^\NN) + \nabla \cdot \big(\Pi_{E,q} (\mu \nabla u^\NN)-\mu \nabla u^\NN\big) \\
& \ + \ \boldsymbol{\beta}\cdot \nabla u^\NN - \Pi_{E,q-1}( \boldsymbol{\beta}\cdot \nabla u^\NN)
\ + \ \sigma u^\NN - \Pi_{E,q-1}( \sigma u^\NN) \\
& \ + \ \Pi_{E,q-1}f-f \,, \\
\end{split}
\end{equation*}
we obtain
\begin{equation*}
\begin{split}
\int_E {\rm bulk}_E(u^\NN) \, w_E &= a(u-u^\NN, w_E) - \int_E (\Pi_{E,q} (\mu \nabla u^\NN)-\mu \nabla u^\NN)\cdot \nabla w_E \\
& \ + \ \int_E (\boldsymbol{\beta}\cdot \nabla u^\NN - \Pi_{E,q-1}( \boldsymbol{\beta}\cdot \nabla u^\NN))(w_E - m_E(w_E)) \\
& \ + \ \int_E (\sigma u^\NN - \Pi_{E,q-1}( \sigma u^\NN)) (w_E - m_E(w_E)) \\
& \ + \ \int_E (\Pi_{E,q-1}f-f)(w_E - m_E(w_E)) \,, \\
\end{split}
\end{equation*}
whence
$$
\Vert {\rm bulk}_E(u^\NN) \Vert_{0,E}^2 \lesssim \left( \vert u - u^\NN \vert_{1,E} + \sum_{k=1}^3 \eta_{{\rm coef},k}(E) + \eta_{{\rm rhs},1}(E) \right) \vert w_E \vert_{1,E} \,.
$$
Using $ \vert w_E \vert_{1,E} \lesssim h_E^{-1} \Vert {\rm bulk}_E(u^\NN) \Vert_{0,E}$, we arrive at
\begin{equation}\label{eq:aposteriori4}
h_E \Vert {\rm bulk}_E(u^\NN) \Vert_{0,E} \lesssim \vert u - u^\NN \vert_{1,E} + \sum_{k=1}^3 \eta_{{\rm coef},k}(E) + \eta_{{\rm rhs},1}(E) \,.
\end{equation}
Let us now turn to the jump contribution to the estimator. Given an edge $e \subset \partial E$ shared with the element $E'$, we introduce a non-negative bubble function $b_e \in V$, with support in $E \cup E'$ and such that $\Vert \phi \Vert_{0,e} \simeq \Vert b_e^{1/2} \phi \Vert_{0,e} $ and $ (h_E^{-1/2} \Vert b_e \phi \Vert_{0,E} + h_E^{1/2}\vert b_e \phi \vert_{1,E}) \lesssim \Vert \phi \Vert_{0,e}$ for all $\phi \in \mathbb{P}_q(E)$.
Let us extend the function ${\rm jump}_e(u^\NN)$ onto $E \cup E'$ to be constant in the direction normal to $e$, obtaining a polynomial of degree $q$ in each element. Let us set $w_e = {\rm jump}_e(u^\NN) b_e \in V$. Then, writing $E_1=E$ and $E_2=E'$, one has
\begin{equation*}
\begin{split}
\Vert {\rm jump}_e(u^\NN) \Vert_{0,e}^2 &\lesssim \int_e {\rm jump}_e(u^\NN)^2 b_e = \int_e {\rm jump}_e(u^\NN) \, w_e \\
& \ = \ \int_e {\rm jump}_e(u^\NN - u) \, w_e \\
& \ = \ \sum_{i=1}^2 \int_{E_i} \nabla \cdot [ (\Pi_{E_i,q}(\mu \nabla u^\NN) - \mu \nabla u) \, w_e ] \\
& \ = \ \sum_{i=1}^2 \int_{E_i} [ \nabla \cdot \Pi_{E_i,q}(\mu \nabla u^\NN) - \nabla \cdot (\mu \nabla u) ] w_e \\
& \ \quad + \ \sum_{i=1}^2 \int_{E_i} [ \Pi_{E_i,q}(\mu \nabla u^\NN) - \mu \nabla u ] \cdot \nabla w_e \,.
\end{split}
\end{equation*}
We now recall that
$$
\nabla \cdot \Pi_{E_i,q} (\mu \nabla u^\NN) = {\rm bulk}_{E_i}(u^\NN) - \Pi_{E_i,q-1}f + \Pi_{E_i,q-1}( \boldsymbol{\beta}\cdot \nabla u^\NN + \sigma u^\NN) \,,
$$
as well as $\nabla \cdot (\mu \nabla u) = - f + \boldsymbol{\beta}\cdot \nabla u + \sigma u$. We write $u= u^\NN + (u-u^\NN)$ and we proceed as in the proof of \eqref{eq:aposteriori4}, using now the bounds $ \Vert w_e \Vert_{0,E_i} \lesssim h_{E_i}^{1/2} \Vert {\rm jump}_e (u^\NN) \Vert_{0,e}$ and $ \vert w_e \vert_{1,E_i} \lesssim h_{E_i}^{-1/2} \Vert {\rm jump}_e (u^\NN) \Vert_{0,e}$, arriving at the bound
\begin{equation}\label{eq:aposteriori5}
\begin{split}
h_E^{1/2} \sum_{e \subset \partial E} \Vert \,{\rm jump}_e(u^\NN) \, \Vert_{0,e} &\lesssim \vert u - u^\NN \vert_{1,D_E} +
\sum_{E' \subset D_E} h_{E'} \Vert {\rm bulk}_{E'}(u^\NN) \Vert_{0,E'} \\
& \ \ \qquad + \sum_{E' \subset D_E} \left(
\sum_{k=1}^3 \eta_{{\rm coef},k}(E') + \eta_{{\rm rhs},1}(E') \right) \,.
\end{split}
\end{equation}
Together with \eqref{eq:aposteriori4}, this gives the bound \eqref{eq:aposteriori3a}.
In order to derive \eqref{eq:aposteriori3b}, we write \eqref{eq:eta-local-loss} as
$$
C_h^{-1} \eta_{\rm loss}(E) = \left( \sum_{i \in I_h^E} r_{h,i}^2(u^\NN) \right)^{1/2} =
\sup_{\boldsymbol{v} \ne \boldsymbol{0}} \frac1{\Vert \boldsymbol{v}\Vert_2} \sum_{i \in I_h^E} r_{h,i}(u^\NN) v_i \,,
$$
where $\boldsymbol{v} = (v_i) \in \mathbb{R}^{{\rm card} I_h^E }$. Defining the function $v_h^E = \sum_{i \in I_h^E} v_i \varphi_i \in V_h$, which is supported in $D_E$, and recalling \eqref{eq:residuals}, we have
$$
\sum_{i \in I_h^E} r_{h,i}(u^\NN) v_i = F_h(v_h^E) - a_h(u^\NN,v_h^E)\,.
$$
By the left-hand inequality in \eqref{eq:norm-equiv-Vh}, we obtain
$$
\frac{c_h}{C_h} \, \eta_{{\rm loss}}(E) \ \leq \ \sup_{v_h^E} \frac{F_h(v_h^E) - a_h(u^\NN,v_h^E)}{\vert v_h^E \vert_{1,D_E}}\,.
$$
Now we write
\begin{equation*}
\begin{split}
F_h(v_h^E) - a_h(u^\NN,v_h^E) & = F_h(v_h^E) - F(v_h^E) \\
& \quad + F(v_h^E) - a(u^\NN,v_h^E) \\
& \quad + a(u^\NN,v_h^E) - a_h(u^\NN,v_h^E) \,.
\end{split}
\end{equation*}
The term $ F_h(v_h^E) - F(v_h^E) =[F_h(v_h^E) - F_\pi(v_h^E)] + [F_\pi(v_h^E) - F(v_h^E)]$ can be bounded as done for the terms (I) and (VII) above, yielding
$$
\vert F_h(v_h^E) - F(v_h^E) \vert \lesssim \sum_{E' \subset D_E} \left(\eta_{{\rm rhs},1}(E')+ \eta_{{\rm rhs},2}(E') \right) \vert v_h^E \vert_{1,E'} \,.
$$
Similarly, the term $a(u^\NN,v_h^E) - a_h(u^\NN,v_h^E)$ can be handled as done for the terms (III) and (VIII) above, obtaining
$$
\vert a(u^\NN,v_h^E) - a_h(u^\NN,v_h^E) \vert \lesssim \sum_{E' \subset D_E} \left(\sum_{k=1}^6\eta_{{\rm coef},k}(E') \right) \vert v_h^E \vert_{1,E'} \,.
$$
Finally, one has $\vert F(v_h^E) - a(u^\NN,v_h^E) \vert \lesssim \vert u-u^\NN \vert_{1,D_E} \vert v_h^E \vert_{1,D_E}$, thereby concluding the proof of \eqref{eq:aposteriori3b}.
\endproof
\section{Numerical results}\label{sec:numerics}
Let us consider the two-dimensional domain $\Omega=(0,1)^2$ and the Poisson problem:
\begin{equation}\label{eq:model-pb-poisson}
\begin{cases}
-\Delta u = f & \text{in \ } \Omega\,, \\
\ \ \, u=g & \text{on \ } \Gamma \,, \end{cases}
\end{equation}
with the functions $f$ and $g$ such that the exact solution, represented in Fig. \ref{fig:solution6}, is
\begin{equation}\label{eq:sol6}
u(x,y) = \tanh\left[2\left(x^3 - y^4\right)\right].
\end{equation}
\begin{figure}[t!]
\centering
\captionsetup{justification=centering}
\includegraphics[width=0.75\linewidth]{figure1}
\caption{Graphical representation of the exact solution $u(x,y)$ in \eqref{eq:sol6}}
\label{fig:solution6}
\end{figure}
Problem \eqref{eq:model-pb-poisson} is numerically solved by the VPINN discretization described in Section \ref{sec:sub_discretization}, extended to handle non-homogeneous Dirichlet conditions as mentioned in Remark \ref{rem:other-bcs}. The VPINN is a feed-forward, fully connected neural network consisting of an input layer with input dimension $n=2$, three hidden layers with 50 neurons each, and an output layer with a single output variable; it thus contains 7851 trainable weights. In all layers except the output one, the activation function is the hyperbolic tangent. The VPINN output is modified as described in \cite{BeCaPi2021} to exactly impose the Dirichlet boundary conditions. Gaussian quadrature rules of order $q=3$ are used in the definition of the loss function.
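A minimal sketch of the network just described, assuming a PyTorch implementation (the actual code may differ in details not reported here):
\begin{verbatim}
import torch.nn as nn

# Feed-forward, fully connected network: 2 -> 50 -> 50 -> 50 -> 1,
# with tanh activation on all layers except the output one.
model = nn.Sequential(
    nn.Linear(2, 50), nn.Tanh(),
    nn.Linear(50, 50), nn.Tanh(),
    nn.Linear(50, 50), nn.Tanh(),
    nn.Linear(50, 1),
)

# The number of trainable weights can be inspected with
# sum(p.numel() for p in model.parameters()).
\end{verbatim}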
For ease of implementation, the orthogonal projection operators $\Pi_{E,k}$, defined in Section \ref{sec:aposteriori-theory}, are mimicked by interpolation operators as follows. Consider first the elemental Lagrange interpolation operator ${\cal I}_{E,k}:C^0(E)\rightarrow \mathbb P_k(E)$; then, to guarantee orthogonality to constants, the projection operator $\tilde{\Pi}_{E,k}:C^0(E)\rightarrow \mathbb P_k(E)$ is defined by setting
\[
\tilde{\Pi}_{E,k}\varphi := {\cal I}_{E,k}\varphi + \dfrac{\int_E \left(\varphi - {\cal I}_{E,k}\varphi\right)}{\vert E\vert}, \hspace{0.5cm}\forall \varphi \in C^0(E),
\]
where, in practice, the integral $\int_E \left(\varphi - {\cal I}_{E,k}\varphi\right)$ can be computed with quadrature rules that are more accurate than the ones used in the other operations. In this work we use quadrature rules of order 7 in each element.
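A sketch of this correction, assuming the values of $\varphi$ and of its Lagrange interpolant at the nodes of the high-order quadrature rule are available (all names are ours):
\begin{verbatim}
import numpy as np

def corrected_projection(phi_vals, interp_vals, weights, area):
    # tilde_Pi_{E,k} phi = I_{E,k} phi
    #                      + ( int_E (phi - I_{E,k} phi) ) / |E|,
    # returned through its values at the quadrature nodes; the
    # integral is computed with the (order-7) quadrature rule.
    correction = np.sum((phi_vals - interp_vals) * weights) / area
    return interp_vals + correction
\end{verbatim}
By construction, the corrected interpolant has the same integral over $E$ as $\varphi$ up to the quadrature error, which is the orthogonality-to-constants property used in the analysis.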
The VPINN is trained on different meshes and the corresponding error estimators $\left(\sum_{E \in {\cal T}_h} \eta^2(E) \right)^{1/2}$ are computed. Here too, when exact integrals are involved, they are approximated with higher-order quadrature rules. The obtained results are shown in Fig. \ref{fig:decays}, where the values of the $H^1$ error and of the a posteriori estimator are displayed for several meshes of stepsize $h$. Remarkably, the error estimators (red dots) behave very similarly to the corresponding energy errors (blue dots). Moreover, consistently with the results discussed in \cite{BeCaPi2021}, after an initial preasymptotic phase all dots are aligned on straight lines with slopes very close to 4 (the slope of the red line is 3.81, the slope of the blue line is 3.92).
\begin{figure}[t!]
\centering
\captionsetup{justification=centering}
\includegraphics[width=0.6\linewidth]{figure2}
\caption{$H^1$ errors (blue dots) obtained by training the same VPINN on different meshes, and corresponding error estimators (red dots)}
\label{fig:decays}
\end{figure}
It is also interesting to note that the terms appearing in the a posteriori estimator (recall \eqref{eq:aposteriori1}) exhibit different behaviors during the training of a single VPINN. This phenomenon is highlighted in Fig. \ref{fig:during_training}, which shows the evolution of the quantities $\eta_{\rm rhs}$, $\eta_{\rm coef}$, $\eta_{\rm res}$, $\eta_{\rm loss}$, $\eta$ and $\vert u - u^\NN \vert_{1,\Omega}$, where $\eta_\bullet = \left(\sum_{E \in {\cal T}_h} \eta_\bullet^2(E) \right)^{1/2}$ for each contribution. It can be observed that, during this training, while the value of the loss function decreases, the accuracy remains almost constant because other sources of error, independent of the neural network, prevail.
\begin{figure}[t!]
\centering
\captionsetup{justification=centering}
\includegraphics[width=0.6\linewidth]{figure3}
\caption{Evolution of the addends of the error estimator $\eta$ during training}
\label{fig:during_training}
\end{figure}
\section{Conclusions}\label{sec:conclusions}
We considered the discretization of a model elliptic boundary-value problem by variational physics-informed neural networks (VPINNs), in which test functions are continuous, piecewise linear functions on a triangulation of the domain. The scheme can be viewed as an instance of a least-squares/Petrov--Galerkin method.
We introduced an a posteriori error estimator, which sums up four contributions: the equation residual (measuring the elemental bulk residuals and the edge jumps, with coefficients and right-hand side replaced by their elementwise polynomial approximations), the oscillation of the coefficients, the oscillation of the right-hand side, and a scaled value of the loss function. The latter term accounts for the inexact solution of the algebraic system arising from the discretization of the variational equations.
The main result of the paper is the proof that the estimator provides a global upper bound and a local lower bound for the energy norm of the error between the exact and VPINN solutions. In other words, the a posteriori estimator is both reliable and efficient.
Numerical results show an excellent agreement with the theoretical predictions.
In a forthcoming paper, we will investigate the use of the proposed estimator to design an adaptive strategy of discretization.
\bigskip
\noindent
{\bf Acknowledgements.} The authors performed this research in the framework of the Italian MIUR Award ``Dipartimenti di Eccellenza 2018-2022" granted to the Department of Mathematical Sciences, Politecnico di Torino (CUP: E11G18000350001). The research leading to this paper has also been partially supported by the SmartData@PoliTO center for Big Data and Machine Learning technologies.
SB was supported by the Italian MIUR PRIN Project 201744KLJL-004, CC was supported by the Italian MIUR PRIN Project 201752HKH8-003.
The authors are members of the Italian INdAM-GNCS research group.
\bibliographystyle{siam}
| {
"timestamp": "2022-05-03T02:37:59",
"yymm": "2205",
"arxiv_id": "2205.00786",
"language": "en",
"url": "https://arxiv.org/abs/2205.00786",
"abstract": "We consider the discretization of elliptic boundary-value problems by variational physics-informed neural networks (VPINNs), in which test functions are continuous, piecewise linear functions on a triangulation of the domain. We define an a posteriori error estimator, made of a residual-type term, a loss-function term, and data oscillation terms. We prove that the estimator is both reliable and efficient in controlling the energy norm of the error between the exact and VPINN solutions. Numerical results are in excellent agreement with the theoretical predictions.",
"subjects": "Numerical Analysis (math.NA)",
"title": "Solving PDEs by Variational Physics-Informed Neural Networks: an a posteriori error analysis",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109489630493,
"lm_q2_score": 0.803173801068221,
"lm_q1q2_score": 0.7909743532122542
} |
https://arxiv.org/abs/2011.08364 | On Integer Balancing of Digraphs | A weighted digraph is balanced if the sums of the weights of the incoming and of the outgoing edges are equal at each vertex. We show that if these sums are integers, then the edge weights can be integers as well. | \section{Introduction}
Let $G = (V, E)$ be a strongly connected digraph on $n$ vertices, with vertex set $V$ and edge set $E$. We use $v_iv_j$ to denote an edge from $v_i$ to $v_j$. The digraph $G$ can have self-arcs. For a vertex $v_i$, let $N^-(v_i):=\{v_j \in V \mid v_iv_j\in E\}$ and $N^+(v_i):=\{v_k \in V\mid v_kv_i\in E\}$ be the sets of out-neighbors and in-neighbors of $v_i$, respectively.
We let $\mathbb{R}_+$ (resp. $\mathbb{Z}_+$) be the set of nonnegative real numbers (resp. nonnegative integers).
We assign $w_{ij}\in \mathbb{R}_+$ to edges $v_iv_j$, for $v_iv_j\in E$, and denote by ${\bm w}\in \mathbb{R}^{|E|}_+$ the collection of these $w_{ij}$. We call $(G,{\bm w})$ a weighted digraph.
\begin{Definition}\label{def:label}
The weighted digraph $(G,{\bm w})$ is said to be {\em balanced} if, for each vertex, the inflow equals the outflow:
\begin{equation}\label{eq:balance}
u_i:=\sum_{v_j \in N^-(v_i)} w_{ij} = \sum_{v_k\in N^+(v_i)} w_{ki}, \quad \forall v_i\in V.
\end{equation}
We call $u_i$ the {\em weight} of vertex $v_i$.
\end{Definition}
The vector ${\bm u}:=(u_1,\ldots,u_n)\in \mathbb{R}^n_+$ is said to be {\em feasible}
if there exists ${\bm w}\in \mathbb{R}^{|E|}_+$ such that~\eqref{eq:balance} holds.
Balanced digraphs have a host of applications in engineering and applied sciences, including the study of flocking behaviors~\cite{jadbabaie2003coordination}, sensor networks and distributed estimation~\cite{carli2008distributed}. While balancing over the real numbers is acceptable in some scenarios, others such as traffic management and fractional packing, require integer balancing~\cite{garg2007faster,plotkin1995fast,bertsekas1998network, hooi1970class,rikos2017distributed}.
If all the $w_{ij}$ are integers, then clearly every $u_i$ is an integer.
The question we are interested in is: given a feasible integer-valued ${\bm u}$, can we find an integer-valued ${\bm w}$? We show that the answer is affirmative:
\begin{Theorem}\label{th:main1}
Let $G$ be a strongly connected digraph and ${\bm u} \in \mathbb{Z}^n$ be any feasible vector. Then, there exist nonnegative integers $w_{ij}$ such that~\eqref{eq:balance} holds.
\end{Theorem}
The result also applies to the case of a weakly connected digraph $G$, based upon the fact that $(G,{\bm w})$ is balanced if and only if every strongly connected component of $(G, {\bm w})$ is balanced~\cite{hooi1970class}.
We provide below a constructive proof of Theorem~\ref{th:main1}.
\section{Algorithm, Propositions, and Proofs}
To proceed, we associate to the digraph $G = (V, E)$ an undirected bipartite graph $B=(X,Y,F)$ on $2n$ vertices, where $X\sqcup Y$ is the vertex set and $F$ is the edge set. Each of the two sets $X$ and $Y$ comprises $n$ vertices.
The edge set $F$ is defined as follows: there is an edge $(x_i, y_j)$ in $B$ if $v_iv_j$ is an edge of $G$. See Fig.~\ref{fig:onlyfig} for an illustration.
\begin{figure}
\centering
\subfloat[\label{sfig1:g}]{
\includegraphics{main-figure0.pdf}
}\qquad \qquad
\subfloat[\label{sfig2:g}]{
\includegraphics{main-figure1.pdf}
}
\caption{{\em Left}: A digraph $G$. {\em Right}: Its bipartite counterpart $B$. A cycle in $B$, and the corresponding edges in $G$, are marked in blue.}
\label{fig:onlyfig}
\end{figure}
Note that the directed edges in $G$ are in one-to-one correspondence with the undirected edges in $B$. Thus, we can assign the edge weights $w_{ij}$, for $v_iv_j\in E$, to the edges $(x_i, y_j)$ in $B$.
The balance relation~\eqref{eq:balance}, when applied to the bipartite representation of $G$, is now turned into
\begin{equation}\label{eq:balanceBipart}
u_i = \sum_{y_j\in N(x_i)} w_{ij} = \sum_{x_k\in N(y_i)} w_{ki}, \quad \forall i = 1,\ldots, n.
\end{equation}
If the above relations hold for some nonnegative real numbers $u_i$, then $(B,{\bm w})$ is said to be balanced with vertex weights $u_i$ for both $x_i$ and $y_i$. The following result is an immediate consequence of the above construction of $(B,{\bm w})$:
\begin{Lemma}
The digraph $(G,{\bm w})$ is balanced if and only if $(B,{\bm w})$ is balanced.
\end{Lemma}
Now, let ${\bm u}\in \mathbb{Z}^n$ be a feasible vector and ${\bm w}\in \mathbb{R}^{|E|}_+$ be such that~\eqref{eq:balanceBipart} is satisfied. In the sequel, we refer to elements of $\mathbb{R}_+\backslash \mathbb{Z}_+$ as {\it decimal} numbers.
Every cycle in $B$ has an {\em even} number of edges, and the number is at least 4.
A cycle in $B$ does not correspond to a (directed) cycle in $G$, as illustrated in Fig.~\ref{fig:onlyfig}.
Instead, if $x_{\alpha_1}y_{\beta_1}\cdots x_{\alpha_p}y_{\beta_p}x_{\alpha_1}$ is a cycle in $B$,
then each vertex $v_{\alpha_i}$ in $G$ has two {\em outgoing} edges $v_{\alpha_i}v_{\beta_i}$ and $v_{\alpha_i}v_{\beta_{i-1}}$ (with $\beta_0$ identified with $\beta_p$) while each vertex $v_{\beta_i}$ has two {\em incoming} edges $v_{\alpha_i}v_{\beta_i}$ and $v_{\alpha_{i+1}}v_{\beta_i}$ (with $\alpha_{p+1}$ identified with $\alpha_1$).
We next introduce the following definition:
\begin{Definition}\label{def:CPcycles}
An edge in $(B,{\bm w})$ is called {\em decimal} if its weight is a decimal number. A cycle $C$ in $(B,{\bm w})$ is {\em completely decimal} if all its edges are decimal.
\end{Definition}
Given a balanced $(B,{\bm w})$, we aim to obtain a set of integer edge weights $w^*_{ij}\in \mathbb{Z}_+$ that satisfy~\eqref{eq:balance}. We present below an algorithm that does so in a finite number of steps:
\vspace{.2cm}
\noindent{\bf Algorithm 1:}
\begin{enumerate}
\item If $(B,{\bm w})$ does not contain a completely decimal cycle, then the algorithm is terminated. Otherwise, select a completely decimal cycle in $B$: $$C=x_{\alpha_1}y_{\beta_1}\cdots x_{\alpha_p}y_{\beta_p}x_{\alpha_1}.$$
\item For the selected cycle $C$, find an edge whose weight has the smallest decimal part. Without loss of generality, we assume that the edge is $x_{\alpha_1}y_{\beta_1}$ and the decimal part is $\epsilon := w_{\alpha_1\beta_1} - \lfloor w_{\alpha_1\beta_1} \rfloor$. Update the weights along the cycle as follows:
\begin{equation}\label{eq:updaterule}
\begin{array}{rcl}
w_{{\alpha_i}{\beta_i}} & \leftarrow & w_{{\alpha_i}{\beta_i}} - \epsilon \\
w_{{\alpha_{i+1}}{\beta_i}} & \leftarrow & w_{{\alpha_{i+1}}{\beta_i}} + \epsilon
\end{array}
\quad \mbox{ for } 1 \leq i \leq p,
\end{equation}
where we identify $x_{\alpha_{p+1}}$ with $x_{\alpha_1}$.
All the other edge weights remain unchanged.
\end{enumerate}
Note that one can easily obtain a completely decimal cycle in $(B,{\bm w})$, if one exists. Let $x_i$ be a vertex incident to a decimal edge. Denote by $\lambda(x_i)$ the number of decimal edges incident to $x_i$. Since the vertex weight $u_i$ of $x_i$ is integer-valued, clearly $\lambda(x_i) \geq 2$.
Now fix a decimal edge $(x_i,y_j)$ in $(B,{\bm w})$. By the above argument, $\lambda(y_j) \geq 2$. Thus, there exists another decimal edge incident to $y_j$, say $(y_j,x_k)$, and, similarly, $\lambda(x_k)\geq 2$. Iterating this procedure, we must eventually return to some previously encountered vertex, since $B$ is finite. By construction, the vertices obtained in the process yield a completely decimal cycle.
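For concreteness, the cycle search just described and the update \eqref{eq:updaterule} admit a compact implementation; the following Python sketch assumes rational edge weights (so that decimal parts are exact) and stores the weights as a dictionary from edges $(x_i,y_j)$ of $B$ to values:
\begin{verbatim}
from fractions import Fraction

def frac_part(w):               # decimal part of a rational weight
    return w - (w.numerator // w.denominator)

def find_decimal_cycle(w):
    # Walk along decimal edges until a vertex repeats; returns the
    # completely decimal cycle as a list of vertices ('x',i)/('y',j),
    # or None if there is no decimal edge.
    decimal = [e for e, wt in w.items() if frac_part(wt) != 0]
    if not decimal:
        return None
    inc = {}                    # vertex -> incident decimal edges
    for (i, j) in decimal:
        inc.setdefault(('x', i), []).append((i, j))
        inc.setdefault(('y', j), []).append((i, j))
    i0, j0 = decimal[0]
    path, prev = [('x', i0), ('y', j0)], (i0, j0)
    while path[-1] not in path[:-1]:
        v = path[-1]
        e = next(f for f in inc[v] if f != prev)   # lambda(v) >= 2
        prev = e
        path.append(('y', e[1]) if v[0] == 'x' else ('x', e[0]))
    k = path.index(path[-1])
    return path[k:-1]

def balance_step(w):
    # One step of Algorithm 1; returns False when no completely
    # decimal cycle remains (all weights are then integers).
    cyc = find_decimal_cycle(w)
    if cyc is None:
        return False
    edges = [(a[1], b[1]) if a[0] == 'x' else (b[1], a[1])
             for a, b in zip(cyc, cyc[1:] + cyc[:1])]
    m = min(range(len(edges)), key=lambda i: frac_part(w[edges[i]]))
    eps = frac_part(w[edges[m]])
    for i, e in enumerate(edges):      # alternate -eps / +eps
        w[e] += -eps if (i - m) % 2 == 0 else eps
    return True

# Usage: repeat `while balance_step(w): pass`; e.g., the weights
# w = {(0,0): Fraction(3,2), (0,1): Fraction(1,2),
#      (1,0): Fraction(1,2), (1,1): Fraction(3,2)}
# (balanced, with all vertex weights equal to 2) become integers.
\end{verbatim}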
Theorem~\ref{th:main1} is then a direct consequence of the following result:
\begin{Theorem}\label{thm:algorithm}
Let $(B, {\bm w})$ be a balanced bipartite graph with integer-valued vertex weights ${\bm u}$ satisfying~\eqref{eq:balanceBipart}. Then, Algorithm 1 terminates in a finite number of steps and returns a nonnegative integer-valued solution ${\bm w}^*$ to~\eqref{eq:balanceBipart}, with ${\bm u}^* = {\bm u}$.
\end{Theorem}
We establish below Theorem~\ref{thm:algorithm} and start with the following proposition:
\begin{Proposition}\label{prop:1}
Let $(B,{\bm w})$ be balanced with vertex weights ${\bm u}$ and $C$ be a completely decimal cycle in $(B,{\bm w})$. Denote by $(B,{\bm w}')$ the bipartite graph obtained after a one-step update on ${\bm w}$ described by~\eqref{eq:updaterule}. Let ${\bm u}'$ be the vertex weights associated with $(B,{\bm w}')$. Then, $(B,{\bm w}')$ is balanced with ${\bm u}' = {\bm u}$.
\end{Proposition}
\begin{proof}
If a vertex $x_i$ does not belong to $C$, then none of the edges incident to it are updated. Hence, the summation $\sum_{y_j \in N(x_i)}w_{ij}$ is unchanged. The same argument applies to any vertex $y_j$ that does not belong to $C$.
Next, denote the cycle by $C=x_{\alpha_1}y_{\beta_1}\cdots x_{\alpha_p}y_{\beta_p}x_{\alpha_1}$. Every vertex in $C$ is incident to exactly two consecutive edges in $C$. For any vertex $x_{\alpha_i}$ in the cycle, we have that
\begin{equation}\label{eq:gimmeabreak}
\sum_{\gamma\in N(x_{\alpha_i})} (w'_{\alpha_i \gamma} - w_{\alpha_i \gamma}) = (w'_{\alpha_i\beta_{i-1}} + w'_{\alpha_{i} \beta_{i}}) -
(w_{\alpha_i\beta_{i-1}} + w_{{\alpha_{i}} {\beta_{i}}}).
\end{equation}
We identify $\beta_0$ with $\beta_p$ for the case $i = 1$.
By~\eqref{eq:updaterule}, the two expressions in parentheses on the right hand side of~\eqref{eq:gimmeabreak} are equal, so the difference is $0$.
The same arguments can be applied to vertices $y_{\beta_i}$.
Finally, because $\epsilon$ is the smallest decimal part of the weights on the edges along the cycle $C$, the updated edge weights are nonnegative.
\end{proof}
We next have the following proposition:
\begin{Proposition}\label{prop:2}
Let $(B,{\bm w})$ be a balanced bipartite graph, with integer-valued vertex weights. Then, the following statements are equivalent:
\begin{enumerate}
\item There is no decimal edge;
\item There is no completely decimal cycle;
\item The vector ${\bm w}$ is integer-valued.
\end{enumerate}
\end{Proposition}
\begin{proof}
From the discussion after Algorithm 1, we see that 2 implies 1. Furthermore, it is clear that 1 implies 2 and that 3 implies 1. It thus remains to show that 2 implies 3.
Assuming 2 holds, let $F'\subseteq F$ be the collection of decimal edges.
We show that $F'$ is an empty set. Suppose, to the contrary, that $F'$ is nonempty; then, we let $X'\subseteq X$ and $Y'\subseteq Y$ be the collections of vertices incident to the edges in $F'$. Consider the subgraph $B' = (X',Y',F')$ induced by $X'\sqcup Y'$. Because $(B,{\bm w})$ does not have a completely decimal cycle, $B'$ is acyclic.
Denote by $B'_1,\ldots, B'_m$ the connected components of $B'$, each of which is a tree. Pick an arbitrary tree $B'_k$ and a leaf of $B'_k$, say $x_i$. On the one hand, there exists one and only one edge $(x_i, y_j)$ in $B'_k$ such that the weight $w_{ij}$ is decimal. By construction, this edge is also the only decimal edge in $B$ incident to $x_i$. On the other hand, since $(B,{\bm w})$ is balanced, we have that
$$
w_{ij} = u_i - \sum_{y_{j'} \in N(x_i)\backslash\{y_j\}} w_{ij'}.
$$
The right hand side of the above expression is integer-valued, which is a contradiction.
\end{proof}
In fact, more can be said about decimal edges and completely decimal cycles:
\begin{Corollary}
Let $(B,{\bm w})$ be a balanced bipartite graph, with integer-valued vertex weights. Then, every decimal edge belongs to a completely decimal cycle.
\end{Corollary}
\begin{proof}
Suppose that $(x_i,y_j)$ is a decimal edge that is not contained in any completely decimal cycle; then, the weight $w_{ij}$ will not be affected by executing Algorithm 1. On the other hand, when Algorithm 1 terminates, there is no completely decimal cycle. By Prop.~\ref{prop:2}, there is then no decimal edge, which is a contradiction.
\end{proof}
Finally, note that every one-step operation of Algorithm 1 makes the cycle edge with the smallest decimal part integer-valued, while no integer-valued edge ever becomes decimal; hence each step reduces the number of decimal edges, and thus the number of completely decimal cycles, by at least one. Thus,
Theorem~\ref{thm:algorithm} follows as an immediate consequence of Propositions~\ref{prop:1} and~\ref{prop:2}.
\bibliographystyle{plain}
| {
"timestamp": "2020-11-20T02:07:34",
"yymm": "2011",
"arxiv_id": "2011.08364",
"language": "en",
"url": "https://arxiv.org/abs/2011.08364",
"abstract": "A weighted digraph is balanced if the sums of the weights of the incoming and of the outgoing edges are equal at each vertex. We show that if these sums are integers, then the edge weights can be integers as well.",
"subjects": "Optimization and Control (math.OC); Discrete Mathematics (cs.DM); Combinatorics (math.CO)",
"title": "On Integer Balancing of Digraphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109489630492,
"lm_q2_score": 0.803173801068221,
"lm_q1q2_score": 0.7909743532122541
} |
https://arxiv.org/abs/2109.09006 | Enumeration of self-reciprocal irreducible monic polynomials with prescribed leading coefficients over a finite field | A polynomial is called self-reciprocal (or palindromic) if the sequence of its coefficients is palindromic. In this paper we enumerate self-reciprocal irreducible monic polynomials over a finite field with prescribed leading coefficients. Asymptotic expression with explicit error bound is derived, which is used to show that such polynomials with degree $2n$ always exist provided that the number of prescribed leading coefficients is slightly less than $n/4$. Exact expressions are also obtained for fields with two or three elements and up to two prescribed leading coefficients. | \section{ Introduction}
In this paper we use our recent results from \cite{GKW21b} to enumerate self-reciprocal irreducible monic polynomials with prescribed leading coefficients. The following is a list of notations which will be used throughout the paper.
\begin{itemize}
\item ${\mathbb F}_{q}$ denotes the finite field with $q$ elements, where $q=p^r$ for some prime $p$ and positive integer $r$.
\item ${\cal M}_q$ denotes the set of monic polynomials over ${\mathbb F}_q$.
\item $\left[x^j\right]f(x)$ denotes the coefficient of $x^j$ in the polynomial $f(x)$.
\item For a polynomial $f$, $\deg(f)$ denotes the degree of $f$, and \\
$\displaystyle f^*(x)=x^{\deg(f)}f(1/x)$ is the {\em reciprocal} of $f$.
\item ${\cal P}_q$ denotes the set of polynomials in ${\cal M}_q$ with $f^*=f$. Polynomials in ${\cal P}_q$ are called {\em self-reciprocal} or {\em palindromic}.
\item ${\cal I}_q\subseteq {\cal M}_q$ denotes the set of irreducible monic polynomials.
\item ${\cal S}_q={\cal I}_q\cap {\cal P}_q$ denotes the set of self-reciprocal irreducible monic polynomials over ${\mathbb F}_q$.
\item ${\cal S}_q(d)=\{f: f\in {\cal S}_q, \deg(f)=d\}$.
\item ${\cal I}_q(d)=\{f: f\in {\cal I}_q, \deg(f)=d\}$.
\end{itemize}
Given non-negative integers $\ell,t$, and vectors $\vec{a}=(a_1,\ldots,a_{\ell})$ and $\vec{b}=(b_0,b_1,\ldots,b_{t-1})$, we also define
\begin{align*}
{\cal I}_q(d;\vec{a},\vec{b})&=\{f:f\in {\cal I}_q(d), \left[x^{d-j}\right]f(x)=a_{j}, 1\le j\le \ell, [x^j]f(x)=b_j, 0\le j\le t-1\},\\
{\cal S}_q(d;\vec{a})&=\{f:f\in {\cal S}_q(d), \left[x^{d-j}\right]f(x)=a_{j}, 1\le j\le \ell\}.
\end{align*}
Thus the vector $\vec{a}$ gives the $\ell$ {\em leading coefficients} of $f$, and $a_j$ is also called the $j$th {\em trace} of $f$. The vector $\vec{b}$ gives the $t$ {\em ending coefficients} of $f$ and $b_0$ is the {\em norm} of $f$.
There has been considerable interest in the study of ${\cal I}_q(d;\vec{a},\vec{b})$ and ${\cal S}_q(d;\vec{a})$, partly due to their applications in coding theory. Perhaps the most famous one is the Hansen-Mullen Conjecture \cite{HanMul92}, which says that irreducible polynomials with one coefficient prescribed (at a general position) always exist except for the two trivial forbidden cases. This conjecture was proved by Wan \cite{Wan97} for $d\ge 36$ or $q>19$ using character sums and Weil's bound. Similar questions were studied by Garefalakis and Kapetanakis \cite{GarKap12} for self-reciprocal irreducible monic polynomials.
In this paper, our main focus is on the cardinality of ${\cal S}_q(d;\vec{a})$, which can be expressed in terms of $I_q(d;\vec{a},\vec{b}):=|{\cal I}_q(d;\vec{a},\vec{b})|$ as shown in the next section.
It is clear that $I_q(d;\vec{a},\vec{b})=0$ when $d>1$ and $b_0=0$. Hence we assume $b_0\ne 0$ when $t\ge 1$. It is also easy to see that if $f(x)\in {\cal P}_q$ has odd degree then $f(-1)=0$, and consequently ${\cal S}_q(d)=\emptyset$ when $d>1$ is odd. Thus we shall focus on self-reciprocal irreducible monic polynomials of even degrees, and we set
\[
S_q(n;\vec{a}):=|{\cal S}_q(2n;\vec{a})|.
\]
The rest of the paper is organized as follows. In Section~\ref{main} we review some known results about self-reciprocal polynomials and use it to derive a formula for $S_q(n;\vec{a})$ in terms of $I_q(d;\vec{a},\vec{b})$. We also state a formula for $I_q(d;\vec{a},\vec{b})$, which was obtained in the recent paper \cite{GKW21b}. These results will be used in Section~3 to derive asymptotic estimate for
$S_q(n;\vec{a})$. In Sections~4 and 5 we obtain exact expressions for $S_q(n;\vec{a})$ with $q\in \{2,3\}$ and up to two prescribed leading coefficients. Section~6 concludes the paper. Some tables of numerical values are provided in the last section as an appendix.
\section{Properties of self-reciprocal irreducible monic polynomials} \label{main}
The following results about self-reciprocal polynomials can be found in \cite{Mey90}.
\begin{prop}\label{prop:facts}
Let $f\in{\cal P}_q$. Then
\begin{itemize}
\item[(a)] $f(x)$ has odd degree and is irreducible if and only if $f(x)=x+1$.
\item[(b)] $f$ has degree $2d$ if and only if there is a monic polynomial $g$ of degree $d$ with $g(0)=1$ such that
\begin{align}\label{eq:Quad}
f(x)=x^dg\left(x+\frac{1}{x}\right).
\end{align}
Moreover, when $d>1$, $g$ is irreducible if and only if either $f$ is irreducible or \begin{align}\label{eq:Quad1}
f(x)=\frac{1}{h(0)}h(x)h^*(x)
\end{align}
for some $h\in {\cal I}_q(d)\setminus {\cal S}_q(d)$.
\end{itemize}
\end{prop}
Comparing the coefficients of both sides of \eqref{eq:Quad}, we obtain
\begin{align}\label{eq:phi1}
f_k=\sum_{j\le k/2}{d+2j-k\choose j}g_{k-2j},~~0\le k\le d.
\end{align}
We shall use $\phi_d:{\mathbb F}_{q}^{\ell} \to {\mathbb F}_{q}^{\ell}$ to denote the mapping defined by \eqref{eq:phi1}
from $(g_1,\ldots,g_{\ell})$ to $(f_1,\ldots, f_{\ell})$.
It is easy to see that $\phi_d$ is one-to-one as $g_k$ can be computed from $f_k$ recursively:
\begin{align}\label{eq:phiinverse}
g_k=f_k-\sum_{0<j\le k/2}{d+2j-k\choose j}g_{k-2j}.
\end{align}
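For a prime field ${\mathbb F}_p$, the recursion \eqref{eq:phiinverse} is immediate to implement; a minimal Python sketch (restricted to prime $q=p$ to keep the field arithmetic elementary, and assuming $\ell\le d$):
\begin{verbatim}
from math import comb

def phi_inverse(f, d, p):
    # Recover (g_1,...,g_l) from f = [f_1,...,f_l] over F_p,
    # via g_k = f_k - sum_{0<j<=k/2} C(d+2j-k, j) g_{k-2j}.
    g = {0: 1}                          # g_0 = 1 (g is monic)
    for k in range(1, len(f) + 1):
        s = sum(comb(d + 2*j - k, j) * g[k - 2*j]
                for j in range(1, k // 2 + 1))
        g[k] = (f[k - 1] - s) % p
    return [g[k] for k in range(1, len(f) + 1)]
\end{verbatim}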
Equation~\eqref{eq:Quad1} corresponds to
\begin{align}\label{eq:psi}
f_k=h_k+h_d^{-1}h_{d-k}+h_d^{-1}\sum_{j=1}^{k-1} h_jh_{d+j-k},\quad h_d=h(0)\ne 0.
\end{align}
We note that the $\ell$ leading coefficients of $f(x)$ are determined by the $\ell$ leading coefficients and the $\ell+1$ ending coefficients of $h(x)$. Also for each given vector $\vec{b}:=(h_d,h_{d-1},\ldots,h_{d-\ell})$, system \eqref{eq:psi} gives a one-to-one mapping between $(f_1,\ldots,f_{\ell})$ and $(h_1,\ldots,h_{\ell})$.
We shall use $\psi_{\vec{b}}$ to denote this mapping, that is, $\psi_{(b_0,b_1,\ldots,b_{\ell})}$ is a bijection from ${\mathbb F}_{q}^{\ell}$ to itself such that $\psi_{(b_0,b_1,\ldots,b_{\ell})}(\vec{a})=\vec{c}$, where
\begin{align}
c_k=a_k+b_0^{-1}b_k+b_0^{-1}\sum_{j=1}^{k-1} a_jb_{k-j},~~1\le k\le \ell.\label{eq:psi1}
\end{align}
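Both $\psi_{\vec{b}}$ and its inverse are triangular in the $a_k$ and hence can be evaluated by forward substitution; a sketch for prime $q=p$, using Python's modular inverse \texttt{pow(b0, -1, p)}:
\begin{verbatim}
def psi(b, a, p):
    # c_k = a_k + b_0^{-1} b_k + b_0^{-1} sum_{j=1}^{k-1} a_j b_{k-j};
    # b = [b_0,...,b_l] with b_0 != 0, a = [a_1,...,a_l].
    b0inv = pow(b[0], -1, p)
    c = []
    for k in range(1, len(a) + 1):
        s = sum(a[j - 1] * b[k - j] for j in range(1, k))
        c.append((a[k - 1] + b0inv * (b[k] + s)) % p)
    return c

def psi_inverse(b, c, p):
    # Recover a from c = psi_b(a): the k-th equation involves
    # a_1,...,a_k only, so solve for a_k recursively.
    b0inv = pow(b[0], -1, p)
    a = []
    for k in range(1, len(c) + 1):
        s = sum(a[j - 1] * b[k - j] for j in range(1, k))
        a.append((c[k - 1] - b0inv * (b[k] + s)) % p)
    return a
\end{verbatim}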
In the rest of the paper, we shall use the Iverson bracket $\llbracket P \rrbracket$ which has value 1 if the predicate $P$ is true and 0 otherwise.
We now prove the following
\begin{thm} \label{thm:main} Suppose $n>1$. Let $\vec{c}=(c_1,\ldots,c_{\ell})\in {\mathbb F}_{q}^{\ell}$ and $\vec{b}=(b_0,b_1,\ldots,b_{\ell})\in {\mathbb F}_{q}^{\ell+1}$ with $b_0\ne 0$. Let $\phi_n$ and $\psi_{\vec{b}}$ be defined as above. Then
\begin{align}
S_{q}(n;\vec{c} )
&=\frac{1}{2}\sum_{\vec{a}\in {\mathbb F}_{q}^{\ell} }\llbracket \psi_{(1,\vec{a})}(\vec{a})=\vec{c}\rrbracket S_q(n/2;\vec{a}) \nonumber\\
&~~~+I_q(n;\phi_n^{-1}(\vec{c}))-\frac{1}{2}\sum_{\vec{b}\in {\mathbb F}_{q}^{\ell+1}}
I_q(n;\psi^{-1}_{\vec{b}}(\vec{c}),\vec{b}).\label{eq:main}
\end{align}
\end{thm}
\noindent {\bf Proof } Proposition~\ref{prop:facts}(b) implies
\begin{align*}
S_{q}(n;\vec{c} )
&=I_q(n;\phi_n^{-1}(\vec{c}))\nonumber\\
&~~-\frac{1}{2}\left(\sum_{\vec{b}}
I_q(n;\psi^{-1}_{\vec{b}}(\vec{c}),\vec{b})-\sum_{\vec{a}\in {\mathbb F}_{q}^{\ell} }\llbracket \psi_{(1,\vec{a})}(\vec{a})=\vec{c}\rrbracket S_q(n/2;\vec{a})\right),
\end{align*}
where $I_q(n;\phi_n^{-1}(\vec{c}))$ counts all the polynomials arising via \eqref{eq:Quad} and the second line corresponds to all the polynomials arising via \eqref{eq:Quad1}. ~~\vrule height8pt width4pt depth0pt
Equation~\eqref{eq:main} immediately implies the following bounds
\begin{align}
I_q(n;\phi_n^{-1}(\vec{c}))-\frac{1}{2}\sum_{\vec{b}}
I_q(n;\psi^{-1}_{\vec{b}}(\vec{c}),\vec{b})\le S_{q}(n;\vec{c})\le I_q(n;\phi_n^{-1}(\vec{c})). \label{eq:Sbound0}
\end{align}
Theorem~\ref{thm:main} enables us to obtain expressions for $S_{q}(n;\vec{c})$ using known results about
$I_q(d;\vec{a},\vec{b})$.
For instance, if we take $\vec{c}$ to be the empty vector, we obtain the following formula for the total number of self-reciprocal irreducible monic polynomials of degree $2n$. This formula was first obtained by Carlitz \cite{Car67}; see also \cite{Coh69,Mey90}.
\begin{cor} We have
\begin{align}\label{eq:Sq2}
S_q(n)&=\left\{\begin{array}{ll}
\frac{1}{2n}\left(q^n+\llbracket 2\mid q\rrbracket-1\right),&\hbox{$n$ is a power of 2}, \\
\frac{1}{2n}\sum_{j\mid n}\llbracket2\nmid j\rrbracket\mu(j)q^{n/j},&\hbox{otherwise}.
\end{array}\right.
\end{align}
\end{cor}
\noindent {\bf Proof } For completeness and illustration purposes, we include a short proof here. A polynomial $f(x)=x^2+ax+1$ is reducible iff
$f(x)=(x+{\alpha})(x+1/{\alpha})$, that is, $a={\alpha}+1/{\alpha}$ for some ${\alpha}\in {\mathbb F}_{q}^{*}$. This implies
$S_q(1)=(q-1)/2$ when $q$ is odd, and $S_q(1)=q/2$ when $q$ is even. This gives \eqref{eq:Sq2} for $n=1$.
Now we assume $n\ge 2$ and write $n=2^ks$ with $k\ge 0$ and $s$ being an odd integer.
Taking $\vec{c}$ to be the empty vector in \eqref{eq:main} (that is, $\ell=0$), we obtain
\begin{align}\label{eq:L0}
S_{q}(n)&=\frac{1}{2}S_q(n/2)+\frac{1}{2}I_q(n),~~~n\ge 2,
\end{align}
which is \cite[(6.1)]{Coh69}.
Consequently,
\begin{align}\label{eq:SI1}
S_{q}(2^ks)&=2^{-k}S_q(s)+ \sum_{j=1}^{k} 2^{-j}I_q(n2^{1-j}).
\end{align}
Substituting the well-known formula (see, e.g., \cite{LidNie97}; or apply Theorem~\ref{thm:thm2} below by taking $\ell=t=0$)
\begin{align}
I_q(n)&=\frac{1}{n}\sum_{d\mid n}\mu(d)q^{n/d} \label{eq:Id}
\end{align}
into \eqref{eq:SI1}, and using
the property $\mu(2d)=-\mu(d)$ for any odd integer $d$, we obtain
\begin{align}
S_{q}(n)&=2^{-k}S_q(s)+ \frac{1}{2n}\sum_{j=1}^{k}\sum_{d\mid s2^{k+1-j}} \mu(d)q^{n2^{1-j}/d}\nonumber\\
&=\frac{s}{n}S_q(s)+\frac{1}{2n}\sum_{d\mid s}\sum_{j=1}^k\left(\mu(d)q^{n2^{1-j}/d}+\mu(2d)q^{n2^{-j}/d}\right)\nonumber\\
&=\frac{s}{n}S_q(s)+\frac{1}{2n}\sum_{d\mid s}\mu(d)\sum_{j=1}^k\left(q^{n2^{1-j}/d}-q^{n2^{-j}/d}\right)\nonumber\\
&=\frac{s}{n}S_q(s)+\frac{1}{2n}\sum_{d\mid s}\mu(d)\left(q^{n/d}-q^{s/d}\right).\label{eq:SI2}
\end{align}
When $s=1$, we have
\begin{align*}
S_{q}(n)&=\frac{1}{2n}\left(q+\llbracket 2\mid q\rrbracket-1\right)+\frac{1}{2n}\left(q^{n}-q\right),
\end{align*}
which gives the first line in \eqref{eq:Sq2}.
When $s>1$, we obtain from \eqref{eq:L0} and \eqref{eq:SI2}
\begin{align*}
S_q(n)&=\frac{s}{2n}I_q(s)+\frac{1}{2n}\sum_{d\mid s}\mu(d)\left(q^{n/d}-q^{s/d}\right).
\end{align*}
The second line in \eqref{eq:Sq2} follows immediately from \eqref{eq:Id}. ~~\vrule height8pt width4pt depth0pt
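The closed form \eqref{eq:Sq2} is easy to evaluate numerically; a short Python sketch with a hand-rolled M\"obius function (for instance, it gives $S_2(2)=1$, corresponding to $x^4+x^3+x^2+x+1$):
\begin{verbatim}
def mobius(n):
    r, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0            # squared prime factor
            r = -r
        d += 1
    return -r if n > 1 else r

def S(q, n):
    # Number of self-reciprocal irreducible monic polynomials
    # of degree 2n over F_q, via eq:Sq2.
    if n & (n - 1) == 0:            # n is a power of 2
        return (q**n + (q % 2 == 0) - 1) // (2 * n)
    return sum(mobius(j) * q**(n // j)
               for j in range(1, n + 1)
               if n % j == 0 and j % 2 == 1) // (2 * n)
\end{verbatim}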
We need a few more notations before stating the formula from \cite{GKW21b} for $I_q(n;\vec{a},\vec{b})$.
We shall call two polynomials $f,g\in {\cal M}_q$ {\em equivalent} with respect to $\ell,t$ if
\begin{align*}
\left[x^{\deg(f)-j}\right]f(x)&=\left[x^{\deg(g)-j}\right]g(x), 1\le j\le \ell, \\
\left[x^j\right]f(x)&=\left[x^j\right]g(x), 0\le j\le t-1.
\end{align*}
Thus polynomials are partitioned into equivalence classes. Let $\langle f\rangle$ denote the equivalence class represented by $f$. It is known \cite{GKW21b,Hay65,Hsu96} that the set ${\cal E}^{\ell,t}$ of all equivalence classes forms an abelian group under the multiplication
\[\langle f\rangle \langle g\rangle=\langle fg\rangle.
\]
(When $t>0$, it is assumed that the constant term is nonzero.) It is also easy to see \cite{GKW21b,Hsu96} that
\[
\left|{\cal E}^{\ell,t}\right|=(q-\llbracket t>0\rrbracket)q^{\ell+t-1}.
\]
Since ${\cal E}^{\ell,t}$ is abelian, it is isomorphic to a direct product of cyclic groups. Let $\xi_1,\ldots,\xi_f$ be the generators of these cyclic groups and denote their orders by $r_1,\ldots, r_f$, respectively. Thus each ${\varepsilon}\in {\cal E}^{\ell,t}$ can be written uniquely as
\begin{align}
{\varepsilon}=\prod_{j=1}^f\xi_j^{e_j({\varepsilon})}. \label{eq:exp}
\end{align}
Let ${\omega}_r=\exp(2\pi i/r)$ and ${\varepsilon}\in {\cal E}^{\ell,t}$. Define
\begin{align}
{\cal E}^{\ell,t}(d)&=\{\langle f \rangle: f\in {\cal M}_q(d)\},\nonumber\\
c(d;{\varepsilon}) &= \sum_{{\varepsilon}'\in {\cal E}^{\ell,t}(d)}\prod_{j=1}^{f} {\omega}_{r_j}^{e_j({\varepsilon})e_j({\varepsilon}')},\label{eq:cdj}\\
P(z;{\varepsilon})&=1+\sum_{d=1}^{\ell+t-1}c(d;{\varepsilon})z^d,\label{eq:Pj}\\
P(z)&=\prod_{{\varepsilon}'\in {\cal E}^{\ell,t} \setminus \{\langle 1\rangle\}} P(z;{\varepsilon}'). \label{eq:Pz}
\end{align}
Following Granger's notation \cite{Gra19}, we set
\[
\rho_n(g):=\sum_{\rho} \rho^{-n},
\]
where the sum is over all the nonzero roots (with multiplicity) of the polynomial $g\in {\mathbb C}[z]$.
Under the above notations, we can restate \cite[Theorem~3]{GKW21b} as follows.
\begin{thm} \label{thm:thm2} Let ${\cal E}$ denote ${\cal E}^{\ell,t}$ and ${\varepsilon}\in {\cal E}$. We have
\begin{align}
I_q\left(n;{\varepsilon}\right)&=\frac{1}{n}\sum_{k|n}\mu(k)\sum_{\delta\in {\cal E}}\llbracket \delta^k={\varepsilon}\rrbracket
F_q(n/k;\delta),\label{eq:IF}
\end{align}
where
\begin{align}
F_q(n;{\varepsilon})&=\frac{q^n-\llbracket t>0 \rrbracket}{|{\cal E}|}+\frac{n}{|{\cal E}|}\sum_{{\varepsilon}'\in {\cal E} \setminus \{\langle 1\rangle\}}\prod_{j=1}^{f} {\omega}_{r_j}^{-e_j({\varepsilon})e_j({\varepsilon}')}
[z^n]\ln P(z;{\varepsilon}')\nonumber\\
&=\frac{q^n-\llbracket t>0 \rrbracket}{|{\cal E}|}-\frac{1}{|{\cal E}|}\sum_{{\varepsilon}'\in {\cal E} \setminus \{\langle 1\rangle\}}
\prod_{j=1}^{f}{\omega}_{r_j}^{-e_j({\varepsilon})e_j({\varepsilon}')}\rho_n(P(z;{\varepsilon}')).\label{eq:Froot}
\end{align}
In particular, we have
\begin{align}
F_q(n;\langle 1\rangle)
&= \frac{1}{|{\cal E}|}\left(q^n-\llbracket t>0\rrbracket\right)-\frac{1}{|{\cal E}|}
\rho_n(P(z)). \label{eq:trivial}
\end{align}
\end{thm}
\section{Asymptotic results}
In this section we use Theorem~\ref{thm:main} to prove the following bounds.
\begin{thm}\label{thm:thm3} Let $\vec{c}\in {\mathbb F}_{q}^{\ell}$ and assume $1\le \ell\le n/2$. We have
\begin{align}
-\frac{\ell}{n}q^{\ell+1}q^{n/2}< S_q(n;\vec{c})-\frac{1}{2n}q^{n-\ell}
< \frac{\ell+1}{n}q^{\ell+1}q^{n/2}.\label{eq:Sbound}
\end{align}
Consequently, $S_q(n;\vec{c})>0$ whenever
\begin{align}\label{eq:L}
\ell\le \frac{n}{4}-\frac{\log_q (qn/2)}{2}.
\end{align}
\end{thm}
\noindent {\bf Proof } Let $\vec{a}\in {\mathbb F}_{q}^{\ell}$ and $\vec{b}\in {\mathbb F}_{q}^{\ell+1}$ with $b_0\ne 0$. The following bounds follow immediately from \cite[Theorem~2.1]{Coh05}:
\begin{align}
-\frac{\ell+1}{n} q^{n/2}&\le I_q(n;\vec{a})-\frac{q^{n-\ell}}{n}
\le \frac{\ell-1}{n} q^{n/2},\label{eq:Ibound0}\\
-\frac{2\ell+2}{n} q^{n/2}&\le I_q(n;\vec{a},\vec{b})-\frac{q^{-2\ell}}{n(q-1)}
\left(q^n-1\right)
\le \frac{2\ell}{n} q^{n/2}.\label{eq:Ibound1}
\end{align}
Using \eqref{eq:main}, \eqref{eq:Ibound0} and \eqref{eq:Ibound1}, we obtain
\begin{align*}
S_q(n;\vec{c})&\le \frac{1}{n}\sum_{\vec{a}\in {\mathbb F}_{q}^{\ell}}
\left(q^{n/2-\ell}+(\ell-1)q^{n/4}\right)+\frac{1}{n}q^{n-\ell}+\frac{\ell-1}{n}q^{n/2} \nonumber \\
&~~ -\frac{1}{2n}\sum_{\vec{b}\in {\mathbb F}_{q}^{\ell+1},b_0\ne 0}
\left(\frac{q^{-2\ell}}{q-1}\left(q^n-1\right)-(2\ell+2)q^{n/2} \right)\nonumber\\
&\le \frac{1}{2n}q^{n-\ell}+\frac{1}{n}\left(\ell-1+(\ell+1)(q-1)q^{\ell}\right)q^{n/2}
+\frac{\ell-1}{n}q^{\ell}q^{n/4}+\frac{q^{-\ell}}{2n}\nonumber\\
&< \frac{1}{2n}q^{n-\ell}+\frac{\ell+1}{n}q^{\ell+1}q^{n/2},
\end{align*}
which gives the upper bound in \eqref{eq:Sbound}.
For the lower bound, we use \eqref{eq:main}, \eqref{eq:Ibound0} and \eqref{eq:Ibound1} to obtain
\begin{align*}
S_q(n;\vec{c})&\ge \frac{1}{n}q^{n-\ell}-\frac{\ell+1}{n}q^{n/2} \nonumber \\
&~~ -\frac{1}{2n}\sum_{\vec{b}\in {\mathbb F}_{q}^{\ell+1},b_0\ne 0}
\left(\frac{q^{-2\ell}}{q-1}\left(q^n-1\right)+2\ell q^{n/2} \right)\nonumber\\
&> \frac{1}{2n}q^{n-\ell}-\frac{1}{n}\left(\ell+1+\ell(q-1)q^{\ell}\right)q^{n/2}\nonumber\\
&\ge \frac{1}{2n}q^{n-\ell}-\frac{\ell}{n}q^{\ell+1}q^{n/2}.
\end{align*}
It follows that $S_q(n;\vec{c})>0$ when
\[
q^{n/2}\ge 2\ell q^{2\ell+1}.
\]
So we may assume $2\ell\le n/2$. Taking $\log_q$ on both sides, we complete the proof.
~~\vrule height8pt width4pt depth0pt
\medskip
We note that Garefalakis and Kapetanakis \cite{GarKap12} used character sums and Weil bound to derive estimate for the number of self-reciprocal irreducible monic polynomials with one coefficient prescribed at a general position.
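For concreteness, the guarantee \eqref{eq:L} is straightforward to tabulate; a one-line Python sketch:
\begin{verbatim}
from math import floor, log

def max_prescribed(q, n):
    # Largest l for which eq:L guarantees S_q(n; c) > 0.
    return floor(n / 4 - log(q * n / 2, q) / 2)

# e.g. max_prescribed(2, 100) == 21
\end{verbatim}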
\section{Exact results for $S_2(n;a)$ and $S_3(n;a)$}
In this section, we carry out more detailed calculations to obtain some explicit formulas for $S_2(n;a)$
and $S_3(n;a)$.
Substituting $\ell=1$ into \eqref{eq:phi1} and \eqref{eq:psi1}, we obtain
\begin{align*}
\phi_d(a)&=a,\\
\psi_{(b_0,b_1)}(a)&= a+b_0^{-1}b_1.
\end{align*}
It follows from Theorem~\ref{thm:main} that, for $n>1$,
\begin{align}
S_q(n;c)&=I_q(n;c)-\frac{1}{2}\sum_{b_1\in {\mathbb F}_{q},b_0\in {\mathbb F}_{q}^{*}} I_q(n;c-b_0^{-1}b_1,(b_0,b_1))\nonumber\\
&~~+\frac{1}{2}\sum_{a\in {\mathbb F}_{q}}\llbracket 2a=c\rrbracket S_q(n/2;a)\nonumber\\
&=I_q(n;c)-\frac{1}{2}\sum_{b_1\in {\mathbb F}_{q},b_0\in {\mathbb F}_{q}^{*}} I_q(n;c-b_0^{-1}b_1,(b_0,b_1))\nonumber\\
&~~+\frac{1}{2}\llbracket 2\nmid q\rrbracket S_q(n/2;c/2)+\frac{1}{2}\llbracket 2\mid q, c=0\rrbracket S_q(n/2).\label{eq:S1}
\end{align}
The expression for $I_q(n;c)$ was first obtained by Carlitz \cite{Car52}. For completeness, we derive the result using Theorem~\ref{thm:thm2} again. In this case we note that $\ell=1$, $t=0$, each polynomial $P(z;{\varepsilon})$ is equal to 1, and hence the sum in \eqref{eq:Froot} is equal to 0. Thus
\[
F_q(n;\langle x+a\rangle)=q^{n-1}, ~~a\in {\mathbb F}_{q}.
\]
Substituting this into \eqref{eq:IF}, we obtain
\begin{align}
I_q(n;c)&=\frac{1}{n}\sum_{j\mid n}\mu(j) q^{n/j-1}\sum_{a\in {\mathbb F}_{q}}\llbracket \langle x+a\rangle^j=\langle x+c\rangle\rrbracket\nonumber\\
&=\frac{1}{qn}\sum_{j\mid n}\mu(j)q^{n/j}\sum_{a\in {\mathbb F}_{q}}\llbracket ja=c\rrbracket.
\end{align}
Noting
\begin{align*}
\sum_{a\in {\mathbb F}_{q}}\llbracket ja=0\rrbracket&=\llbracket p\nmid j\rrbracket+q\llbracket p\mid j\rrbracket
=1+(q-1)\llbracket p\mid j\rrbracket ,\\
\sum_{a\in {\mathbb F}_{q}}\llbracket ja=c\rrbracket&=\llbracket p\nmid j\rrbracket,~~c\ne 0,
\end{align*}
we obtain
\begin{align}
I_q(n;c)&=\frac{\llbracket c\ne 0\rrbracket}{qn}\sum_{j\mid n}\llbracket p\nmid j \rrbracket\mu(j)q^{n/j}\nonumber\\
&~~~+\frac{\llbracket c=0 \rrbracket}{qn}\sum_{j\mid n}(1+(q-1)\llbracket p\mid j \rrbracket)\mu(j)q^{n/j}.\label{eq:Iq1}
\end{align}
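As a quick check of \eqref{eq:Iq1}, take $q=2$ and $n=4$ (so $p=2$). For $c=1$, only $j=1$ survives and $I_2(4;1)=\frac{1}{8}\cdot 2^4=2$; for $c=0$, the divisors $j=1,2,4$ contribute $16$, $-8$ and $0$, giving $I_2(4;0)=1$. Indeed, the irreducible quartics over ${\mathbb F}_2$ are $x^4+x+1$, $x^4+x^3+1$ and $x^4+x^3+x^2+x+1$, with traces $0,1,1$, respectively.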
Next we derive a simple explicit expression for $S_2(n;1)$ and $S_2(n;0)$.
\begin{thm}\label{thm:S21} Let $\theta=\cos^{-1}\left(\frac{1}{2\sqrt{2}}\right)$. For $n\ge 1$, we have
\begin{align}
S_2(n;1)&=\frac{1}{4n}\sum_{j\mid n}\llbracket 2\nmid j\rrbracket\mu(j)
\left(2^{n/j}+1-(-1)^{n/j}2^{(n/2j)+1}\cos(n\theta/j)\right),\label{eq:S21s}\\
S_2(n;0)&=\frac{1}{4n}\sum_{j\mid n}\llbracket 2\nmid j\rrbracket\mu(j)
\left(2^{n/j}-1+(-1)^{n/j}2^{(n/2j)+1}\cos(n\theta/j)\right).\label{eq:S20s}
\end{align}
\end{thm}
\noindent {\bf Proof } Since $x^2+x+1$ is irreducible over ${\mathbb F}_2$, we have $S_2(1;1)=1$ which agrees with the value obtained from \eqref{eq:S21s}.
For $n>1$, setting $q=2$ and $c=1$ in \eqref{eq:S1}, we obtain
\begin{align}
S_2(n;1)&=I_2(n;1)-\frac{1}{2}\left(I_2(n;1,(1,0))+I_2(n;0,(1,1)) \right).\label{eq:S21}
\end{align}
To find $I_2(n;a,(1,b))$, we first note that the corresponding group ${\cal E}^{1,2}$ is generated by $\xi_1=\langle x+1\rangle$ and $\xi_2=\langle x^3+x+1\rangle$, and both $\xi_1$ and $\xi_2$ have order 2. We also have
\begin{align*}
{\cal E}^{1,2}(1)=\{\xi_1\},\quad
{\cal E}^{1,2}(2)=\{\langle x^2+1\rangle,\langle x^2+x+1\rangle\}=\{\langle 1\rangle,\xi_1\}.
\end{align*}
Using \eqref{eq:cdj} and \eqref{eq:Pj}, we obtain
\begin{align*}
c(1;\xi_1^{j_1}\xi_2^{j_2})&=(-1)^{j_1},\\
c(2;\xi_1^{j_1}\xi_2^{j_2})&=1+(-1)^{j_1},\\
P(z;\xi_1\xi_2^{j_2})&=1-z,\\
P(z;\xi_1^2\xi_2)&=1+z+2z^2\\
&=\left(1+\sqrt{2}e^{i\theta}z\right)\left(1+\sqrt{2}e^{-i\theta}z\right),
~~\theta=\cos^{-1}\left(\frac{1}{2\sqrt{2}}\right).
\end{align*}
It follows from \eqref{eq:Froot} that
\begin{align*}
F_2\left(n;\xi_1^{s_1}\xi_2^{s_2}\right)&=\frac{2^n-1}{4}-\frac{1}{4}\left((-1)^{s_1+s_2}+(-1)^{s_1}
+(-1)^{s_2+n}2^{n/2}\left(e^{in\theta}+e^{-in\theta}\right) \right)\nonumber\\
&=\frac{2^n-1}{4}-\frac{1}{4}\left((-1)^{s_1+s_2}+(-1)^{s_1}
+(-1)^{s_2+n}2^{(n/2)+1}\cos(n\theta) \right).
\end{align*}
More explicitly, we have
\begin{align}
F_2(n;\langle 1\rangle)&=\frac{2^n-3}{4}-\frac{(-1)^n}{2}2^{n/2}\cos(n\theta),\label{eq:F200}\\
F_2(n;\xi_1)&=\frac{2^n+1}{4}-\frac{(-1)^n}{2}2^{n/2}\cos(n\theta),\label{eq:F210} \\
F_2(n;\xi_2)&= \frac{2^n-1}{4}+\frac{(-1)^n}{2}2^{n/2}\cos(n\theta),\label{eq:F201} \\
F_2(n;\xi_1\xi_2)&= \frac{2^n-1}{4}+\frac{(-1)^n}{2}2^{n/2}\cos(n\theta).\label{eq:F211}
\end{align}
It follows from \eqref{eq:IF} that
\begin{align}
I_2(n;\langle 1\rangle)&=\frac{1}{n}\sum_{j\mid n}\llbracket 2\nmid j\rrbracket\mu(j)F_2(n/j;\langle 1\rangle)\nonumber\\
&~~~+\frac{1}{n}\sum_{j\mid n}\llbracket 2\mid j\rrbracket\mu(j)\sum_{j_1,j_2\in \{0,1\}}F_2\left(n/j;\xi_1^{j_1}\xi_2^{j_2}\right)\nonumber\\
&=\frac{1}{4n}\sum_{j\mid n}\llbracket 2\nmid j\rrbracket\mu(j)\left(2^{n/j}-3-(-1)^{n/j}2^{(n/2j)+1}\cos(n\theta/j)\right)\nonumber\\
&~~~+\frac{1}{n}\sum_{j\mid n}\llbracket 2\mid j\rrbracket\mu(j)\left(2^{n/j}-1\right),\label{eq:I200}\\
I_2(n;\xi_1)&=\frac{1}{n}\sum_{j\mid n}\llbracket 2\nmid j\rrbracket\mu(j)F_2(n/j;\xi_1) \nonumber\\
&=\frac{1}{4n}\sum_{j\mid n}\llbracket 2\nmid j\rrbracket\mu(j)\left(2^{n/j}+1-(-1)^{n/j}2^{(n/2j)+1}\cos(n\theta/j)\right),\label{eq:I210}\\
I_2(n;\xi_2)&=I_2(n;\xi_1\xi_2)=\frac{1}{n}\sum_{j\mid n}\llbracket 2\nmid j\rrbracket\mu(j)F_2(n/j;\xi_2) \nonumber\\
&=\frac{1}{4n}\sum_{j\mid n}\llbracket 2\nmid j\rrbracket\mu(j)\left(2^{n/j}-1+(-1)^{n/j}2^{(n/2j)+1}\cos(n\theta/j)\right).\label{eq:I211}
\end{align}
Noting
\[
\xi_2=\langle x^3+x+1\rangle ,\quad \xi_1\xi_2=\langle x^3+x^2+1\rangle,
\]
and substituting \eqref{eq:Iq1} and \eqref{eq:I211} into \eqref{eq:S21}, we obtain \eqref{eq:S21s}.
Using \eqref{eq:Sq2} and
\begin{align}
S_2(n;0)&=S_2(n)-S_2(n;1),
\end{align}
we obtain \eqref{eq:S20s}. ~~\vrule height8pt width4pt depth0pt
\medskip
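The closed forms \eqref{eq:S21s} and \eqref{eq:S20s} are easy to test by machine. The following Python sketch (an informal check of ours; the bitmask encoding and all function names are not from the paper) exhaustively counts the self-reciprocal irreducible monic polynomials of degree $2n$ over ${\mathbb F}_2$ with prescribed coefficient of $x^{2n-1}$ and compares the counts with the theorem for small $n$.
\begin{verbatim}
# GF(2)[x] polynomials encoded as Python ints: bit i = coefficient of x^i.
from math import acos, cos, sqrt

def pdeg(f):                          # degree (pdeg(1) == 0, pdeg(0) == -1)
    return f.bit_length() - 1

def pmod(f, g):                       # remainder of f modulo g over GF(2)
    while pdeg(f) >= pdeg(g):
        f ^= g << (pdeg(f) - pdeg(g))
    return f

def irreducible(f):                   # trial division by lower-degree monics
    d = pdeg(f)
    return d >= 1 and all(pmod(f, g) != 0
                          for g in range(2, 1 << (d // 2 + 1)))

def mobius(j):                        # Moebius function
    result, p = 1, 2
    while p * p <= j:
        if j % p == 0:
            j //= p
            if j % p == 0:
                return 0
            result = -result
        p += 1
    return -result if j > 1 else result

def S2_bruteforce(n, a):
    """Exhaustive count of self-reciprocal irreducible monic f over GF(2)
    of degree 2n whose coefficient of x^(2n-1) equals a."""
    d, total = 2 * n, 0
    for bits in range(1 << n):        # free coefficients c_1, ..., c_n
        f = (1 << d) | 1              # monic, constant term 1
        for k in range(1, n + 1):
            if (bits >> (k - 1)) & 1:
                f |= (1 << (d - k)) | (1 << k)   # palindromic pair
        if ((f >> (d - 1)) & 1) == a and irreducible(f):
            total += 1
    return total

def S2_formula(n, a):                 # evaluates (eq:S21s) / (eq:S20s)
    theta, sign = acos(1 / (2 * sqrt(2))), 1 if a == 1 else -1
    s = sum(mobius(j) * (2 ** (n // j) + sign *
            (1 - (-1) ** (n // j) * 2 ** (n / (2 * j) + 1)
             * cos(n * theta / j)))
            for j in range(1, n + 1) if n % j == 0 and j % 2 == 1)
    return round(s / (4 * n))

for n in range(1, 9):
    assert S2_formula(n, 1) == S2_bruteforce(n, 1)
    assert S2_formula(n, 0) == S2_bruteforce(n, 0)
\end{verbatim}
For instance, the sketch confirms $S_2(3;1)=0$ and $S_2(3;0)=1$: the unique self-reciprocal irreducible monic polynomial of degree $6$ over ${\mathbb F}_2$ is $x^6+x^3+1$.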
The expression for $S_3(n;a)$ is quite messy, so we will instead work out an expression for $I_3(n;a,(b_0,b_1))$, which can then be used to compute the numerical values of $S_3(n;a)$.
Substituting $q=3$ into \eqref{eq:S1} (noting $2^{-1}=2$ in ${\mathbb F}_3$), we obtain, for $n>1$,
\begin{align}
S_3(n;0)&=\frac{1}{2}S_3(n/2;0)+I_3(n;0)\nonumber\\
&~~~ -\frac{1}{2}\left(I_3(n;0,(1,0))+I_3(n;0,(2,0)) \right)\nonumber\\
&~~~ -\frac{1}{2}\left(I_3(n;2,(1,1))+I_3(n;1,(2,1)) \right)\label{eq:S30}\\
&~~~ -\frac{1}{2}\left(I_3(n;1,(1,2))+I_3(n;2,(2,2)) \right).\nonumber
\end{align}
To obtain $I_3(n;a,(b_0,b_1))$, we first note that the corresponding group ${\cal E}^{1,2}$ is generated by $\xi_1=\langle x+1\rangle$ and $\xi_2=\langle x^3+x+2\rangle$, which have orders 3 and 6, respectively. We also find
\begin{align*}
{\cal E}^{1,2}(1)&=\{\langle x+1\rangle,\langle x+2\rangle\}=\{\xi_1,\xi_1^2\xi_2^3\},\\
{\cal E}^{1,2}(2)&=\{\langle x^2+ax+b\rangle, 0\le a\le 2,1\le b\le 2\}=\{\langle 1\rangle,\xi_2^3,\xi_1,\xi_1\xi_2^5,\xi_1^2,\xi_1^2\xi_2\}.
\end{align*}
It follows from \eqref{eq:cdj} and \eqref{eq:Pj} that
\begin{align*}
c\left(1;\xi_1^{j_1}\xi_2^{j_2}\right)&={\omega}_3^{j_1}+{\omega}_3^{2j_1}(-1)^{j_2},\\
c\left(2;\xi_1^{j_1}\xi_2^{j_2}\right)&=1+(-1)^{j_2}+{\omega}_3^{j_1}+{\omega}_3^{j_1}{\omega}_6^{5j_2}+{\omega}_3^{2j_1}+{\omega}_3^{2j_1}{\omega}_6^{j_2},\\
P\left(z;\xi_1^{j_1}\xi_2^{j_2}\right)&=1+c\left(1;\xi_1^{j_1}\xi_2^{j_2}\right)z+c\left(2;\xi_1^{j_1}\xi_2^{j_2}\right)z^2.
\end{align*}
If we use ${\bar g}(z)$ to denote the polynomial obtained from $g(z)\in {\mathbb C}[z]$ by taking the complex conjugate of the coefficients of $g(z)$, then we have
\begin{align*}
P\left(z;\xi_1^2\xi_2^{j_2}\right)={\bar P}\left(z;\xi_1\xi_2^{6-j_2}\right),\quad
P\left(z;\xi_1^3\xi_2^{j_2}\right)={\bar P}\left(z;\xi_1^3\xi_2^{6-j_2}\right).
\end{align*}
We also have
\begin{align*}
P\left(z;\xi_1\xi_2\right)&=1+i\sqrt{3}z,\\
P\left(z;\xi_1\xi_2^{2}\right)&=1-z+3z^2=\left(1-\frac{1+i\sqrt{11}}{2}z\right)\left(1-\frac{1-i\sqrt{11}}{2}z\right)\\
&=\left(1-\sqrt{3}e^{i\theta_1}z\right)\left(1-\sqrt{3}e^{-i\theta_1}z\right),
~~\theta_1=\cos^{-1}\left(\frac{1}{2\sqrt{3}}\right),\\
P\left(z;\xi_1\xi_2^3\right)&=1+i\sqrt{3}z,\\
P\left(z;\xi_1\xi_2^4\right)&=1-z,\\
P\left(z;\xi_1\xi_2^5\right)&=1+i\sqrt{3}z-3z^2=(1-i\sqrt{3}{\omega}_3z)(1-i\sqrt{3}{\omega}_3^2z),\\
P\left(z;\xi_1\xi_2^6\right)&=1-z ,\\
P\left(z;\xi_1^3\xi_2\right)&=1+3z^2=(1+i\sqrt{3}z)(1-i\sqrt{3}z),\\
P\left(z;\xi_1^3\xi_2^2\right)&=1+ 2z+3z^2=\left(1+(1+i\sqrt{2})z\right)\left(1+(1-i\sqrt{2})z\right),\\
&=\left(1+\sqrt{3}e^{i\theta_2}z\right)\left(1+\sqrt{3}e^{-i\theta_2}z\right),
~~\theta_2=\cos^{-1}(1/\sqrt{3}),\\
P\left(z;\xi_1^3\xi_2^3\right)&=1,\\
P(z)&=\prod_{(j_1,j_2)\ne (3,6)}P\left(z;\xi_1^{j_1}\xi_2^{j_2}\right)\\
&=(1-z)^4(1+3z^2)^4(1-z+3z^2)^2(1+2z+3z^2)^2\\
&~~~\times(1-3z+3z^2)(1+3z+3z^2).
\end{align*}
Using \eqref{eq:Froot} and combining the conjugate pairs, we obtain
\begin{align}
F_3\left(n;\xi_1^{t_1}\xi_2^{t_2}\right)&=\frac{3^n-1}{18}
-\frac{1}{9}\left(\cos(2\pi(t_1+2t_2)/3)+\cos(2\pi t_1/3)\right)\nonumber\\
&-\frac{1}{9}3^{n/2}\cos((2t_1+t_2)\pi/3+n\pi/2)\nonumber\\
&-\frac{2}{9}3^{n/2}\cos(2(t_1+t_2)\pi/3)\cos(n\theta_1) \label{eq:F312f}\\
&-\frac{1}{9}3^{n/2}(-1)^{t_2}\cos(2t_1\pi/3+n\pi/2)\nonumber\\
&-\frac{1}{9}3^{n/2}\cos((2n-2t_1+t_2)\pi/3+n\pi/2)\nonumber\\
&-\frac{1}{9}3^{n/2}\cos((-2n-2t_1+t_2)\pi/3+n\pi/2)\nonumber\\
&-\frac{2}{9}3^{n/2}(-1)^{n/2}\llbracket 2\mid n\rrbracket\cos(t_2\pi/3)\nonumber\\
&-\frac{2}{9}3^{n/2}(-1)^n\cos(2t_2\pi/3)\cos(n\theta_2).\nonumber
\end{align}
Substituting $q=3$, $r_1=3$ and $r_2=6$ into \eqref{eq:IF}, and separating the six residue classes of $k$ (modulo 6), we obtain
\begin{align}
&~~I_3\left(n;\xi_1^{e_1}\xi_2^{e_2}\right)\nonumber\\
&=\frac{\llbracket e_1=e_2=0\rrbracket}{n} \sum_{k\mid n}\llbracket 6\mid k\rrbracket \mu(k)\left(3^{n/k}-1\right)\nonumber\\
&+\frac{1}{n}\sum_{k\mid n}\llbracket 6\mid k-1\rrbracket \mu(k)F_3\left(n/k;\xi_1^{e_1}\xi_2^{e_2}\right)\nonumber\\
&+\frac{1}{n}\sum_{k\mid n}\llbracket 6\mid k+1\rrbracket \mu(k)F_3\left(n/k;\xi_1^{-e_1}\xi_2^{-e_2}\right)\label{eq:I312f}\\
&+\frac{\llbracket 2\mid e_2\rrbracket}{n} \sum_{k\mid n}\llbracket 6\mid k-2\rrbracket \mu(k) \sum_{s_2\in \{0,3\}}F_3\left(n/k;\xi_1^{-e_1}\xi_2^{e_2/2+s_2}\right)\nonumber\\
&+\frac{\llbracket 2\mid e_2\rrbracket}{n} \sum_{k\mid n}\llbracket 6\mid k+2\rrbracket \mu(k) \sum_{s_2\in \{0,3\}}F_3\left(n/k;\xi_1^{e_1}\xi_2^{e_2/2+s_2}\right)\nonumber\\
&+\frac{\llbracket e_1=0,3\mid e_2\rrbracket}{n} \sum_{k\mid n}\llbracket 6\mid k+3\rrbracket \mu(k)\sum_{s_1,s_2\in \{0,1,2\}}F_3\left(n/k;\xi_1^{s_1}\xi_2^{e_2/3+2s_2}\right).\nonumber
\end{align}
We note that some cancellations in \eqref{eq:F312f} can be used to simplify the following sums:
\begin{align*}
&~~~\sum_{s_2\in \{0,3\}}F_3\left(n;\xi_1^{e_1}\xi_2^{e_2/2+s_2}\right)\\
&=\frac{3^{n}-1}{9}-\frac{2}{9}\left(\cos(2\pi (e_1+e_2)/3)+\cos(2\pi e_1/3) \right)\\
&~~~-\frac{4}{9}3^{n/2}\cos((2e_1+e_2)\pi /3)\cos(n\theta_1) \\
&~~~-\frac{4}{9}3^{n/2}(-1)^{n}\cos(e_2\pi/3)\cos(n\theta_2), \\
&~~~\sum_{s_1,s_2\in \{0,1,2\}}F_3\left(n;\xi_1^{s_1}\xi_2^{e_2/3+2s_2}\right)\\
&=\frac{3^{n}-1}{2}.
\end{align*}
In terms of the generators, we may rewrite \eqref{eq:S30} as
\begin{align}
S_3(n;0)&=\frac{1}{2}S_3(n/2;0)+I_3(n;0)-\frac{1}{2}\left(I_3(n;\langle 1\rangle)+I_3(n;\xi_2^3) \right)\nonumber\\
&~~~ -\frac{1}{2}\left(I_3(n;\xi_1^2\xi_2^4)+I_3(n;\xi_1\xi_2^5)+I_3(n;\xi_1\xi_2^2)+I_3(n;\xi_1^2\xi_2) \right)\label{eq:S30g},
\end{align}
which, together with \eqref{eq:F312f} and \eqref{eq:I312f}, gives a recursive way of computing $S_3(n;0)$.
Over ${\mathbb F}_3$, we have $2=-1$, and hence $x\mapsto -x$ gives a bijection between self-reciprocal irreducible polynomials with trace 1 and those with trace $2$. Consequently
\begin{align}
S_3(n;1)&=S_3(n;2)=\frac{1}{2}(S_3(n)-S_3(n;0)). \label{eq:S31f}
\end{align}
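As a consistency check of \eqref{eq:S31f}: for $n=2$, Table~\ref{table4} gives $S_3(2;0)=0$ and $S_3(2;1)=S_3(2;2)=1$, so $S_3(2)=2$, and the two self-reciprocal irreducible monic quartics over ${\mathbb F}_3$ split evenly between traces $1$ and $2$, as the bijection $x\mapsto -x$ predicts.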
Some numerical values of $I_3(n;a,(b_0,b_1))$ and $S_3(n;a)$, computed using \eqref{eq:F312f}--\eqref{eq:S31f}, are given in Tables~\ref{table1}--\ref{table4}.
\section{Exact result for $S_2(n;a_1,a_2)$}
In this section, we derive some explicit expressions for computing $S_2(n;a_1,a_2)$.
Substituting $\ell=2$ and $q=2$ into \eqref{eq:phi1} and \eqref{eq:psi}, we obtain
\begin{align*}
\phi_d(a_1,a_2)&=(a_1,a_2+d),\\
\psi_{(1,b_1,b_2)}(a_1,a_2)&=\left(a_1+b_1, a_2+b_2+b_1a_1\right).
\end{align*}
It follows from Theorem~\ref{thm:main} that, for $n>1$,
\begin{align}
S_2(n;c_1,c_2)&=\frac{1}{2}\sum_{a_1,a_2\in \{0,1\}}\llbracket c_1=0,a_1^2=c_2\rrbracket S_2(n/2;a_1,a_2)+I_2(n;c_1,n+c_2)\nonumber\\
&~~-\frac{1}{2}\sum_{b_1,b_2\in \{0,1\}} I_2(n;(c_1+b_1,c_2+b_2+b_1c_1+b_1),(1,b_1,b_2))\nonumber\\
&=\frac{\llbracket c_1=0\rrbracket }{2} S_2(n/2;c_2)+I_2(n;c_1,n+c_2)\label{eq:S22} \\
&~~-\frac{1}{2}\sum_{b_1,b_2\in \{0,1\}} I_2(n;(c_1+b_1,c_2+b_2+b_1c_1+b_1),(1,b_1,b_2)). \nonumber
\end{align}
An explicit expression for $I_2(n;a_1,a_2)$ was first obtained by Kuzmin \cite{Kuz90}. For completeness, we apply Theorem~\ref{thm:thm2} to obtain the following different-looking expressions.
\begin{prop}\label{ex:Q2L1} Let $\xi=\langle x+1\rangle$ be the generator of the group ${\cal E}^{2,0}$. We have, for $0\le t\le 3$,
\begin{align}
&~~I_2\left(n;\xi^t\right)\nonumber\\
&=\frac{\llbracket t=0\rrbracket}{n} \sum_{k\mid n}\llbracket 4\mid k\rrbracket \mu(k)2^{n/k}
+\frac{\llbracket 2\mid t\rrbracket}{n} \sum_{k\mid n}\llbracket 4\mid k-2\rrbracket \mu(k)2^{(n/k)-1}\label{eq:I220f} \\
&+\frac{1}{n}\sum_{k\mid n}\llbracket 4\mid k-1\rrbracket \mu(k)\left(2^{(n/k)-2}-(-1)^{n/k}2^{(n/2k)-1}\cos\left(\frac{(n/k)-2t}{4}\pi\right)\right)\nonumber\\
&+\frac{1}{n}\sum_{k\mid n}\llbracket 4\mid k+1\rrbracket \mu(k)\left(2^{(n/k)-2}-(-1)^{n/k}2^{(n/2k)-1}\cos\left(\frac{(n/k)+2t}{4}\pi\right)\right).\nonumber
\end{align}
\end{prop}
\noindent {\bf Proof } The group ${\cal E}^{2,0}$ here is generated by $\xi=\langle x+1\rangle$ which has order 4. We have
\begin{align*}
{\cal E}^{2,0}(1)&=\{\langle 1\rangle,\xi\},\\
c(1;\xi^j)&=1+i^j,\\
P(z;\xi^j)&=1+(1+i^j)z.
\end{align*}
Applying Theorem~\ref{thm:thm2}, we obtain
\begin{align}
F_2(n;\xi^t)&=2^{n-2}-\frac{1}{4}\sum_{j=1}^3i^{-jt}(-(1+i^j))^{n}\nonumber\\
&=2^{n-2}-\frac{1}{4}(-1)^n\left(i^{-t}(1+i)^n+i^{-3t}(1-i)^n \right)\nonumber \\
&=2^{n-2}-\frac{1}{2}(-1)^n\Re\left(i^{-t}(1+i)^n\right)\nonumber\\
&=2^{n-2}-(-1)^{n}2^{(n/2)-1}\cos\left(\frac{(n-2t)\pi}{4}\right).
\end{align}
Substituting $q=2$, $r=4$ into \eqref{eq:IF}, and separating the four residue classes of $k$ (modulo 4), we complete the proof. ~~\vrule height8pt width4pt depth0pt
\medskip
To compute $I_2(n;(a_1,a_2),(1,b_1,b_2))$, we need to find the generators of ${\cal E}^{2,3}$. Using the computer algebra system {\em Maple}, we find that
${\cal E}^{2,3}$ is generated by $\xi_1=\langle x+1\rangle$ and $\xi_2=\langle x^5+x+1\rangle$. Both $\xi_1$ and $\xi_2$ have order 4.
We have
\begin{align*}
{\cal E}^{2,3}(1)&=\{\xi_1\},\\
{\cal E}^{2,3}(2)&=\{\langle x^2+ax+1\rangle: a\in {\mathbb F}_2 \}=\{\xi_1^2,\xi_1^3\},\\
{\cal E}^{2,3}(3)&=\{\langle x^3+ax^2+bx+1\rangle:a,b\in {\mathbb F}_2\}=\{\langle 1\rangle,\xi_1\xi_2,\xi_1^2\xi_2^3,\xi_1^3\},\\
{\cal E}^{2,3}(4)&=\{\langle x^4+ax^3+bx^2+cx+1\rangle: a,b,c\in {\mathbb F}_2\}\\
&=\{\langle 1\rangle,\xi_1\xi_2^3,\xi_1,\xi_1^2,\xi_1^2\xi_2,\xi_1^3\xi_2^3,\xi_1^3,\xi_2\},
\end{align*}
and consequently,
\begin{align*}
c\left(1;\xi_1^{j_1}\xi_2^{j_2}\right)&=i^{j_1},\\
c\left(2;\xi_1^{j_1}\xi_2^{j_2}\right)&=(-1)^{j_1}+i^{3j_1},\\
c\left(3;\xi_1^{j_1}\xi_2^{j_2}\right)&=1+i^{j_1+j_2}+(-1)^{j_1}i^{3j_2}+i^{3j_1},\\
c\left(4;\xi_1^{j_1}\xi_2^{j_2}\right)&=1+i^{j_1+3j_2}+i^{j_1}+(-1)^{j_1}+(-1)^{j_1}i^{j_2}+i^{3j_1+3j_2}+i^{3j_1}+i^{j_2},\\
P\left(z;\xi_1^{j_1}\xi_2^{j_2}\right)&=1+c(1;\xi_1^{j_1}\xi_2^{j_2})z+c(2;\xi_1^{j_1}\xi_2^{j_2})z^2
+c(3;\xi_1^{j_1}\xi_2^{j_2})z^3+c(4;\xi_1^{j_1}\xi_2^{j_2})z^4,\\
P\left(z;\xi_1^{j_1}\xi_2^{j_2}\right)&={\bar P}\left(z;\xi_1^{4-j_1}\xi_2^{4-j_2}\right).
\end{align*}
Thus
\begin{align*}
P_{1,1}(z)&=P_{1,4}(z)=1+iz-(1+i)z^2,\\
P_{1,2}(z)&=P_{1,3}(z)=1+iz-(1+i)z^2+2(1-i)z^3 ,\\
P_{2,1}(z)&=1-z-2iz^3+4iz^4 ,\\
P_{2,2}(z)&=P_{2,4}(z)=1-z ,\\
P_{4,1}(z)&=1+z+2z^2+2z^3+4z^4,\\
P_{4,2}(z)&=1+z+2z^2,\\
P(z)&=\prod_{(j_1,j_2)\ne (4,4)}P_{j_1,j_2}(z)\\
&=(1-z)^2(1-z^2-2z^3+2z^4)^2(1-z^2+2z^3-2z^4+8z^6)^2\\
&~~~\times(1+z+2z^2+2z^3+4z^4)^2\\
&~~~\times(1-2z+4z^6-16z^7+16z^8)(1+z+2z^2).
\end{align*}
Applying Theorem~\ref{thm:thm2} and combining conjugate pairs, we obtain
\begin{align}
F_2\left(n;\xi_1^{s_1}\xi_2^{s_2}\right)&=\frac{1}{16}\left(2^n-1\right)
-\frac{1}{16}\left((-1)^{s_1+s_2}+(-1)^{s_1}\right) \nonumber \\
&~~~+\frac{n}{8}\Re\left(\left(i^{-s_1-s_2}+i^{-s_1}\right)[z^n]\ln(1+iz-(1+i)z^2)\right)\nonumber \\
&~~~+\frac{n}{8}\Re\left(\left(i^{-s_1-2s_2}+i^{-s_1-3s_2}\right)[z^n]\ln(1+iz-(1+i)z^2+2(1-i)z^3)\right)
\nonumber \\
&~~~+\frac{n}{8}\Re\left((-1)^{-s_1}i^{-s_2}[z^n]\ln(1-z-2iz^3+4iz^4)\right)\nonumber \\
&~~~+\frac{n}{8}\cos(\pi s_2/2)[z^n]\ln(1+z+2z^2+2z^3+4z^4) \nonumber\\
&~~~+\frac{n}{16}(-1)^{s_2}[z^n]\ln(1+z+2z^2).\label{eq:F2n}
\end{align}
Substituting $q=2$, $r_1=r_2=4$ into \eqref{eq:IF}, and separating the four residue classes of $k$ (modulo 4), we obtain
\begin{align}
&~~I_2\left(n;\xi_1^{e_1}\xi_2^{e_2}\right)\nonumber\\
&=\frac{\llbracket e_1=e_2=0\rrbracket}{n} \sum_{k\mid n}\llbracket 4\mid k\rrbracket \mu(k)\left(2^{n/k}-1\right)\nonumber\\
&+\frac{1}{n}\sum_{k\mid n}\llbracket 4\mid k-1\rrbracket \mu(k)F_2\left(n/k;\xi_1^{e_1}\xi_2^{e_2}\right)\nonumber\\
&+\frac{1}{n}\sum_{k\mid n}\llbracket 4\mid k+1\rrbracket \mu(k)F_2\left(n/k;\xi_1^{-e_1}\xi_2^{-e_2}\right)\label{eq:I223f}\\
&+\frac{\llbracket 2\mid e_1,2\mid e_2\rrbracket}{n} \sum_{k\mid n}\llbracket 4\mid k-2\rrbracket \mu(k) \sum_{s_1,s_2\in \{0,2\}}F_2\left(n/k;\xi_1^{e_1/2+s_1}\xi_2^{e_2/2+s_2}\right).\nonumber
\end{align}
As before, we may use some cancellations in \eqref{eq:F2n} to simplify the following sum:
\begin{align}
&~~\sum_{s_1,s_2\in \{0,2\}}F_2\left(n;\xi_1^{e_1/2+s_1}\xi_2^{e_2/2+s_2}\right)\nonumber\\
&=\frac{1}{4}\left(2^n-1
-(-1)^{(e_1+e_2)/2}-(-1)^{e_1/2}\right)\nonumber\\
&~~~+\frac{n}{2}\cos(\pi e_2/4)[z^n]\ln(1+z+2z^2+2z^3+4z^4) \nonumber\\
&~~~+\frac{n}{4}(-1)^{e_2/2}[z^n]\ln(1+z+2z^2) .
\end{align}
In terms of the generators $\xi$, $\xi_1$ and $\xi_2$, we may rewrite \eqref{eq:S22} as
\begin{align}
S_2(n;0,0)
&=\frac{1}{2} S_2(n/2;0)+I_2(n;0,n)\nonumber\\
&~~-\frac{1}{2}\sum_{b_1,b_2\in \{0,1\}} I_2(n;(b_1,b_1+b_2),(1,b_1,b_2))\nonumber\\
&=\frac{\llbracket 2\mid n\rrbracket}{2} S_2(n/2;0)+\llbracket 2\mid n\rrbracket I_2(n;\xi^0)+\llbracket 2\nmid n\rrbracket I_2(n;\xi^2)\nonumber\\
&~~-\frac{1}{2}\left(I_2(n;\xi_1^0)+I_2(n;\xi_1^2)+I_2(n;\xi_1^3\xi_2^2)+I_2(n;\xi_1\xi_2^2)\right), \label{eq:S200}\\
S_2(n;0,1)&=\frac{1}{2} S_2(n/2;1)+I_2(n;0,n+1)\nonumber\\
&~~-\frac{1}{2}\sum_{b_1,b_2\in \{0,1\}} I_2(n;(b_1,1+b_1+b_2),(1,b_1,b_2))\nonumber\\
&=\frac{\llbracket 2\mid n\rrbracket}{2} S_2(n/2;1)+\llbracket 2\mid n\rrbracket I_2(n;\xi^2)+\llbracket 2\nmid n\rrbracket I_2(n;\xi^0)\nonumber\\
&~~-\frac{1}{2}\left(I_2(n;\xi_2^2)+I_2(n;\xi_1^2\xi_2^2)+I_2(n;\xi_1)+I_2(n;\xi_1^3)\right), \label{eq:S201}\\
S_2(n;1,0)
&= I_2(n;1,n) \nonumber\\
&~~-\frac{1}{2}\sum_{b_1,b_2\in \{0,1\}} I_2(n;(1+b_1,b_2),(1,b_1,b_2))\nonumber\\
&=\llbracket 2\mid n\rrbracket I_2(n;\xi)+\llbracket 2\nmid n\rrbracket I_2(n;\xi^3)\nonumber\\
&~~-\frac{1}{2}\left(I_2(n;\xi_1\xi_2^3)+I_2(n;\xi_1^3\xi_2^3)+I_2(n;\xi_2)+I_2(n;\xi_1^2\xi_2)\right), \label{eq:S210} \\
S_2(n;1,1)
&=I_2(n;1,n+1)\nonumber\\
&~~-\frac{1}{2}\sum_{b_1,b_2\in \{0,1\}} I_2(n;(1+b_1,1+b_2),(1,b_1,b_2))\nonumber\\
&=\llbracket 2\mid n\rrbracket I_2(n;\xi^3) +\llbracket 2\nmid n\rrbracket I_2(n;\xi) \nonumber\\
&~~-\frac{1}{2}\left(I_2(n;\xi_1^3\xi_2)+I_2(n;\xi_1\xi_2)+I_2(n;\xi_1^2\xi_2^3)+I_2(n;\xi_2^3)\right).\label{eq:S211}
\end{align}
Some numerical values of $I_2(n;\xi_1^{e_1}\xi_2^{e_2})$ and $S_2(n;a_1,a_2)$, computed using \eqref{eq:F2n}--\eqref{eq:S211}, are given in Tables~\ref{table5}--\ref{table7}.
For typographical convenience, the column headed by $\xi_1^{e_1}\xi_2^{e_2}$ in Tables~\ref{table5} and \ref{table6} gives the values of $I_2(n;\xi_1^{e_1}\xi_2^{e_2})$.
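Table~\ref{table7} can likewise be checked by brute force. The sketch below (ours and purely illustrative; it reuses \texttt{irreducible} and the palindromic bitmask encoding from the earlier sketch) counts degree-$2n$ self-reciprocal irreducible monic polynomials over ${\mathbb F}_2$ by their first two coefficients.
\begin{verbatim}
def S2_bruteforce2(n, a1, a2):
    """Count degree-2n self-reciprocal irreducible monic f over GF(2)
    whose coefficients of x^(2n-1) and x^(2n-2) are a1 and a2."""
    d, total = 2 * n, 0
    for bits in range(1 << n):        # free coefficients c_1, ..., c_n
        f = (1 << d) | 1              # monic, constant term 1
        for k in range(1, n + 1):
            if (bits >> (k - 1)) & 1:
                f |= (1 << (d - k)) | (1 << k)
        if ((f >> (d - 1)) & 1, (f >> (d - 2)) & 1) == (a1, a2) \
                and irreducible(f):
            total += 1
    return total

# S2_bruteforce2(3, 0, 0) == 1: only x^6 + x^3 + 1, matching the
# n = 3 row of Table 7.
\end{verbatim}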
\section{Conclusion} \label{conclusion}
We derived a general formula for the number $S_q(n;a_1,\ldots,a_{\ell})$ of self-reciprocal irreducible monic polynomials of degree $2n$ over ${\mathbb F}_{q}$ with prescribed leading coefficients $a_1,\ldots,a_{\ell}$. The general formula is expressed in terms of the number of irreducible monic polynomials with prescribed leading and ending coefficients. Using the general formula, we derived
exact expressions for $S_2(n;a)$, $S_3(n;a)$ and $S_2(n;a_1,a_2)$. Explicit error bounds were also obtained for $S_q(n;a_1,\ldots,a_{\ell})$, which imply that $S_q(n;a_1,\ldots,a_{\ell})>0$ when $\ell$ is slightly less than $n/4$.
In another paper \cite{Gao21}, we used Theorem~2 to improve the bounds for $I_q(n;{\varepsilon})$ given in \cite{Coh05,Hsu96}. Using the improved bounds for $I_q(n;{\varepsilon})$ and cancellations in the sums appearing in \eqref{eq:main}, we were able to significantly improve the bounds for $S_q(n;{\varepsilon})$.
\newpage
\section{Tables of numerical values}
\begin{table}[h]
\begin{center}
\begin{tabular}{|r|r|r|r|r|r|r|}
\hline
$n$&$I_3(n;\xi_2^0)$&$I_3(n;\xi_2)$&$I_3(n;\xi_2^2)$&$I_3(n;\xi_2^3)$&$I_3(n;\xi_2^4)$&$I_3(n;\xi_2^5)$\\
\hline
1 & 0& 0&0 &0&0 &0 \\
\hline
2 &1 &0 & 0& 0& 0& 0\\
\hline
3 & 0& 0 & 0& 0&1&1\\
\hline
4 & 0& 1 & 1& 2&1&1\\
\hline
5 &2& 3 & 3& 2&3&3\\
\hline
6&8& 8 & 6& 2&6&8\\
\hline
7 & 20&16 &16 & 20&16&16\\
\hline
8 &42 & 45& 44& 50&44&45 \\
\hline
9 &116&128 &128&116&119&119\\
\hline
10 & 334&328 &325 &320&325&328\\
\hline
11 &890&897&897&890&897&897\\
\hline
12 &2418&2447&2460&2504&2460&2447\\
\hline
13 &6848&6796&6796&6848&6796&6796\\
\hline
14 &18968&19002&18968&18884&18968&19002\\
\hline
15 &53072& 53103& 53103& 53072& 53249& 53249\\
\hline
16 &149370&149425&149380&149690&149380&149425\\
\hline
17 &422042&422019&422019&422042&422019&422019\\
\hline
18&1196484& 1195850& 1195362& 1195142& 1195362& 1195850\\
\hline
19 & 3398468& 3398404& 3398404& 3398468& 3398404& 3398404\\
\hline
20 &9682968& 9686128& 9685800& 9685264&9685800& 9686128\\
\hline
\end{tabular}
\end{center}
\caption{Values of $I_3(n;\xi_2^{s})$.}
\label{table1}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|r|r|r|r|r|r|r|}
\hline
$n$&$I_3(n;\xi_1)$&$I_3(n;\xi_1\xi_2)$&$I_3(n;\xi_1\xi_2^2)$&$I_3(n;\xi_1\xi_2^3)$&$I_3(n;\xi_1\xi_2^4)$&$I_3(n;\xi_1\xi_2^5)$\\
\hline
1 & 1& 0&0 &0&0 &0 \\
\hline
2 &0 &0 & 0& 0& 0&1\\
\hline
3 & 0& 1&1& 0&0&1\\
\hline
4 & 1& 1 & 1& 2&1&0\\
\hline
5 &4& 3 & 2& 2&3&2\\
\hline
6&8& 8 & 6&6&6&7\\
\hline
7 &16&16 &18& 20&16&18\\
\hline
8 &41&45&50&46&44&44 \\
\hline
9 &120&119&121&120&128&121\\
\hline
10 & 337&328&310&332&325&328\\
\hline
11 &880&897&896&902&897&896\\
\hline
12 &2442&2447&2469&2448&2460&2476\\
\hline
13 &6856& 6796& 6816& 6800& 6796& 6816\\
\hline
14 &18974& 19002& 18902& 18940& 18986&19024\\
\hline
15 &53160& 53249& 53096& 53160& 53103& 53096\\
\hline
16 &149437& 149425& 149518& 149618& 149380& 149292\\
\hline
17 &422020& 422019& 422234& 421634& 422019& 422234\\
\hline
18&1195578& 1195850& 1195740& 1195698& 1195362& 1195861\\
\hline
19 &3398344& 3398404& 3398010& 3399380& 3398404& 3398010\\
\hline
20 &9684348& 9686128& 9685896& 9685668& 9685800& 9684248\\
\hline
\end{tabular}
\end{center}
\caption{Values of $I_3(n;\xi_1\xi_2^{s})$.}
\label{table2}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|r|r|r|r|r|r|r|}
\hline
$n$&$I_3(n;\xi_1^2)$&$I_3(n;\xi_1^2\xi_2)$&$I_3(n;\xi_1^2\xi_2^2)$&$I_3(n;\xi_1^2\xi_2^3)$&$I_3(n;\xi_1^2\xi_2^4)$&$I_3(n;\xi_1^2\xi_2^5)$\\
\hline
1 & 0& 0&0 &1&0 &0 \\
\hline
2 &0 &1 & 0& 0& 0& 0\\
\hline
3 & 0& 1 &1& 0&1&0\\
\hline
4 & 1& 0 & 1& 2&1&1\\
\hline
5 &2&2 & 3&4&2&3\\
\hline
6&6& 7 & 6& 6&6&8\\
\hline
7 & 20&18 &16 & 16&18&16\\
\hline
8 &41 & 44& 44& 46&50&45 \\
\hline
9 &120& 121& 119& 120& 121& 128\\
\hline
10 & 337&328 &325 &332&310&328\\
\hline
11 &902&896&897&880&896&897\\
\hline
12 &2442&2476&2460&2448&2469&2447\\
\hline
13 &6800&6816&6796&6856&6816&6796\\
\hline
14 &18974&19024&18986&18940&18902&19002\\
\hline
15 &53160& 53096& 53249& 53160& 53096& 53103\\
\hline
16 &149437& 149292& 149380& 149618& 149518& 149425\\
\hline
17 &421634& 422234& 422019& 422020& 422234& 422019\\
\hline
18&1195578& 1195861& 1195362& 1195698& 1195740& 1195850\\
\hline
19 & 3399380& 3398010& 3398404& 3398344& 3398010& 3398404\\
\hline
20 &9684348& 9684248& 9685800& 9685668& 9685896& 9686128\\
\hline
\end{tabular}
\end{center}
\caption{Values of $I_3(n;\xi_1^2\xi_2^{s})$.}
\label{table3}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|r|r|r|}
\hline
$n$&$S_3(n;0)$&$S_3(n;1)$\\
\hline
1 & 1& 0\\
\hline
2 &0 &1\\
\hline
3 & 0& 2\\
\hline
4 & 4& 3\\
\hline
5 &10&7\\
\hline
6&20&20\\
\hline
7 &48&54\\
\hline
8 &132&139 \\
\hline
9 &368&362\\
\hline
10 &1000&976\\
\hline
11 &2686&2683\\
\hline
12 &7340&7400\\
\hline
13 &20400& 20460\\
\hline
14 &57000& 56910\\
\hline
15 &159584& 159352\\
\hline
16 &448396& 448407\\
\hline
17 &1265650& 1266295\\
\hline
18&3586820& 3587420\\
\hline
19 &10196064& 10194882\\
\hline
20 &29058328& 29055640\\
\hline
\end{tabular}
\end{center}
\caption{Values of $S_3(n;0)$ and $S_3(n;1)=S_3(n;2)$.}
\label{table4}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|r|r|r|r|r|r|r|r|r|}
\hline
$n$&$\xi_2^0$&$\xi_2$&$\xi_2^2$&$\xi_2^3$&$\xi_1$
&$\xi_1\xi_2$&$\xi_1\xi_2^2$&$\xi_1\xi_2^3$\\
\hline
1 & 0& 0& 0& 0& 1& 0& 0& 0\\
\hline
2 &0 &0& 0& 0& 0& 0&0& 0\\
\hline
3 &0 &0& 0& 0& 0& 1&0& 0\\
\hline
4 &0 &1& 0& 0& 0& 0&0& 1\\
\hline
5 &0 &0& 1& 0& 0& 0&1& 0\\
\hline
6&1 &1& 0& 0& 0& 1&1& 1\\
\hline
7 &2 &1& 1& 1& 2& 1&0& 1\\
\hline
8 &1 &2&2& 1& 2& 2&2& 2\\
\hline
9 &4 &4& 3& 4& 6& 2&3& 4\\
\hline
10 &5 &5&6& 8& 5& 7&7& 5\\
\hline
11 &10 &11& 13& 11& 12& 11&14& 11\\
\hline
12 &23 &24& 18& 20& 21& 20&20&24\\
\hline
13 &36 &42&35&42&36&36&41&42\\
\hline
14 &73&75&70&70&76&73&73&75\\
\hline
15 &138&137&138&137&134&133&137&137\\
\hline
16 &243&262&258&245&238&262&262&262\\
\hline
17 &484&488&475&488&482&486&479&488\\
\hline
18&930&889&894&913&913&912&912&889\\
\hline
19 &1722&1719&1725&1719&1728&1743&1722&1719\\
\hline
20 &3327 &3271&3234&3275&3260&3288&3288&3271\\
\hline
\end{tabular}
\end{center}
\caption{Values of $I_2(n;\xi_2^s)$ and $I_2(n;\xi_1\xi_2^s)$, $0\le s\le 3$.}
\label{table5}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|r|r|r|r|r|r|r|r|r|}
\hline
$n$&$\xi_1^2$&$\xi_1^2\xi_2$&$\xi_1^2\xi_2^2$&$\xi_1^2\xi_2^3$&$\xi_1^3$
&$\xi_1^3\xi_2$&$\xi_1^3\xi_2^2$&$\xi_1^3\xi_2^3$\\
\hline
1 & 0& 0& 0& 0& 0& 0& 0& 0\\
\hline
2 &0 &0& 0& 0& 1& 0&0& 0\\
\hline
3 &0 &0& 0& 1& 0& 0&0& 0\\
\hline
4 &0 &0& 0& 0&1& 0&0& 0\\
\hline
5 &0 &1& 1& 0& 0& 0&1& 1\\
\hline
6&0 &1& 0&1& 0&0&1& 1\\
\hline
7 &0 &2& 1& 1& 2& 1&0&2\\
\hline
8 &2 &2&2&2&3&1&2& 2\\
\hline
9 &4 &4& 3&2&2&4&3& 4\\
\hline
10 &4 &7&6&7& 5&8&7&7\\
\hline
11 &12 &12& 13& 11& 8& 11&14& 12\\
\hline
12 &22 &20& 18& 20& 25& 20&20&20\\
\hline
13&48&41&35&36&36&42&41&41\\
\hline
14 &72&73&70&73&72&70&73&73\\
\hline
15 &136&134&138&133&142&137&137&134\\
\hline
16 &242&262&258&262&255&245&262&262\\
\hline
17 &492&467&475&486&486&488&479&467\\
\hline
18&908&912&894&912&917&913&912&912\\
\hline
19 &1716&1728&1725&1743&1716&1719&1722&1728\\
\hline
20 &3246 &3288&3234&3288&3256&3275&3288&3288\\
\hline
\end{tabular}
\end{center}
\caption{Values of $I_2(n;\xi_1^2\xi_2^s)$ and $I_2(n;\xi_1^3\xi_2^s)$, $0\le s\le 3$.}
\label{table6}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|r|r|r|r|r|}
\hline
$n$&$S_2(n;0,0)$&$S_2(n;0,1)$&$S_2(n;1,0)$&$S_2(n;1,1)$\\
\hline
1 & 0& 0 &0 & 1\\
\hline
2 & 0& 0 & 0& 1\\
\hline
3 & 1& 0 & 0& 0\\
\hline
4 & 1& 0 & 0& 1\\
\hline
5 & 1& 0 & 1& 1\\
\hline
6& 1& 2 & 1& 1\\
\hline
7 & 3& 2 & 2& 2\\
\hline
8 &3&4 &4&5\\
\hline
9 & 6&8 &5&9\\
\hline
10 & 13&14 & 12&12\\
\hline
11 &23&22&22&26\\
\hline
12 &44&40 &41&45\\
\hline
13 &77&84 &77&77\\
\hline
14 & 145&146 & 149&145\\
\hline
15 &267&274 &279&271\\
\hline
16 &507&524 &500&517\\
\hline
17 &953&976 &965&961\\
\hline
18& 1802&1824& 1825&1829\\
\hline
19 &3471&3438 &3438& 3450\\
\hline
20 &6546&6576&6548&6544\\
\hline
\end{tabular}
\end{center}
\caption{Values of $S_2(n;0,0),S_2(n;0,1),S_2(n;1,0),S_2(n;1,1)$.}
\label{table7}
\end{table}
\newpage
| {
"timestamp": "2021-10-14T02:05:33",
"yymm": "2109",
"arxiv_id": "2109.09006",
"language": "en",
"url": "https://arxiv.org/abs/2109.09006",
"abstract": "A polynomial is called self-reciprocal (or palindromic) if the sequence of its coefficients is palindromic. In this paper we enumerate self-reciprocal irreducible monic polynomials over a finite field with prescribed leading coefficients. Asymptotic expression with explicit error bound is derived, which is used to show that such polynomials with degree $2n$ always exist provided that the number of prescribed leading coefficients is slightly less than $n/4$. Exact expressions are also obtained for fields with two or three elements and up to two prescribed leading coefficients.",
"subjects": "Combinatorics (math.CO)",
"title": "Enumeration of self-reciprocal irreducible monic polynomials with prescribed leading coefficients over a finite field",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.984810951192016,
"lm_q2_score": 0.803173791645582,
"lm_q1q2_score": 0.7909743457229836
} |
https://arxiv.org/abs/1703.08834 | On connectedness of power graphs of finite groups | The power graph of a group $G$ is the graph whose vertex set is $G$ and two distinct vertices are adjacent if one is a power of the other. This paper investigates the minimal separating sets of power graphs of finite groups. For power graphs of finite cyclic groups, certain minimal separating sets are obtained. Consequently, a sharp upper bound for their connectivity is supplied. Further, the components of proper power graphs of $p$-groups are studied. In particular, the number of components of that of abelian $p$-groups are determined. | \section*{Introduction}
The study of graphs associated with algebraic structures has a long history. There are various graphs constructed from groups and semigroups, e.g., Cayley graphs \cite{cayley1878desiderata, budden1985cayley}, intersection graphs \cite{MR3323326, zelinka1975intersection}, and commuting graphs \cite{bates2003commuting}.
Kelarev and Quinn \cite{kelarev2000combinatorial, kelarevDirectedSemigr} introduced the notion of the \emph{directed power graph} of a semigroup $S$ as the directed graph $\overrightarrow{\mathcal{G}}(S)$ with vertex set $S$ in which there is an arc from a vertex $u$ to another vertex $v$ if $v=u^\alpha$ for some $\alpha \in \mathbb{N}$. Following this, Chakrabarty et al. \cite{GhoshSensemigroups} defined the (\emph{undirected}) \emph{power graph} $\mathcal{G}(S)$ of a semigroup $S$ as the (undirected) graph with vertex set $S$ in which distinct vertices $u$ and $v$ are adjacent if $v=u^\alpha$ or $u=v^\beta$ for some $\alpha, \beta \in \mathbb{N}$.
Several authors studied power graphs and proved many interesting results. Some of them even exhibited properties of groups from the viewpoint of power graphs. Chakrabarty et al. \cite{GhoshSensemigroups} proved that the power graph of a finite group is always connected. They also showed that the power graph of a finite group $G$ is complete if and only if $G$ is a cyclic group of order 1 or $p^k$, for some prime $p$ and $k \in \mathbb{N}$. Cameron and Ghosh observed isomorphism properties of groups based on power graphs. In \cite{Ghosh}, they showed that two finite abelian groups with isomorphic power graphs are isomorphic. Further, if two finite groups have isomorphic directed power graphs, then they have the same number of elements of each order. Cameron \cite{Cameron} proved that if two finite groups have isomorphic power graphs, then their directed power graphs are also isomorphic. It was shown by Curtin and Pourgholi that among all finite groups of a given order, the cyclic group of that order has the maximum number of edges and the largest clique in its power graph \cite{curtin2014edge,curtin2016euler}. It was observed in \cite{doostabadi2013some} and \cite{MR3266285} that the power graph of a group is perfect, i.e., the chromatic number and the clique number coincide for each of its induced subgraphs. Shitov \cite{MR3612206} showed that for any group $G$, the chromatic number of $\mathcal{G}(G)$ is at most countable. A \emph{proper power graph}, denoted by $\mathcal{G}^*(G)$, is obtained by removing the identity element from the power graph $\mathcal{G}(G)$ of a given group $G$. In \cite{MR3200118}, Moghaddamfar et al. obtained necessary and sufficient conditions for a proper power graph to be a strongly regular graph, a bipartite graph or a planar graph.
Connectedness of power graphs of various groups was also considered in the literature. Doostabadi et al. \cite{doostabadi2015connectivity} focused on proper power graphs. They obtained the number of connected components of proper power graphs of nilpotent groups, groups with a nontrivial partition, symmetric groups and alternating groups. They also showed that the proper power graph of a nilpotent group, symmetric group, or an alternating group has diameter at most $4$, $26$, or $22$, respectively. Chattopadhyay and Panigrahi explored the connectivity $\kappa(\mathcal{G}(G))$ for a cyclic group $G$ of order $n$. In \cite{ChattopadhyayConnectivity}, they showed that $\kappa(\mathcal{G}(G)) = n-1$ when $n$ is a prime power; otherwise, $\kappa(\mathcal{G}(G))$ is bounded below by $\phi(n)+1$, where $\phi$ is Euler's phi function. Further, in \cite{chattopadhyay2015laplacian}, Chattopadhyay and Panigrahi supplied upper bounds for the following cases. If $n = p^\alpha q^\beta$, where $p$ and $q$ are distinct primes and $\alpha, \beta \in \mathbb{N}$, then $\kappa(\mathcal{G}(G))$ is bounded above by $\phi(n)+p^{\alpha-1}q^{\beta-1}$. If $n = pqr$, where $p,q$ and $r$ are distinct primes with $p< q <r$, then $\kappa(\mathcal{G}(G))$ is bounded above by $\phi(n)+p+q-1$.
This paper investigates connectedness of power graphs. In \Cref{sec-cyclic}, we characterize the minimal separating sets of power graphs of finite groups in terms of certain equivalence classes. Further, we obtain different minimal separating sets of power graphs of finite cyclic groups; using which we provide two upper bounds for their connectivity. While one of them is a sharp upper bound, we shall characterize the parameters where the second upper bound is an improvement over the first one. We also give actual values for the connectivity of power graphs of cyclic groups of order $n$, when $n$ has two prime factors or $n$ is a product of three primes; this improves above-cited results from \cite{chattopadhyay2015laplacian}. Followed by this, in \Cref{sec-ablpgrp}, we study some properties of components of proper power graphs of $p$-groups, and find the number of components of proper power graph of an abelian $p$-group.
We now present some basic definitions, mainly from graph theory, and fix the notation in \Cref{sec-prelim}. We will also include the results from literature which are required in the paper.
\section{Preliminaries and related works}
\label{sec-prelim}
The set of vertices and the set of edges of a graph $\Gamma$ are always denoted by $V(\Gamma)$ and $E(\Gamma)$, respectively. A graph with no loops or parallel edges is called a \emph{simple graph}. The graph with no vertices (and hence no edges) is called the \emph{null graph}. On the other hand, a graph with at least one vertex is called a \emph{non-null graph}. A graph with one vertex and no edges is called a \emph{trivial graph}. If $\Gamma_1$ and $\Gamma_2$ are two graphs such that $V(\Gamma_1) \subseteq V(\Gamma_2)$ and $E(\Gamma_1) \subseteq E(\Gamma_2)$, then we say $\Gamma_1$ is a subgraph of $\Gamma_2$. A \emph{path} in a graph is a sequence of distinct vertices in which consecutive vertices are adjacent. If $P$ is a path with the sequence $v_0 ,v_1 ,\ldots ,v_n$ of vertices, then $P$ is called a $v_0,v_n$-path and it is of length $n$. If $\Gamma$ is a finite graph with vertices $u$ and $v$, then the \emph{distance} from $u$ to $v$, denoted by $d_\Gamma(u, v)$ or simply $d(u, v)$, is the least length of a $u,v$-path. If there is no $u,v$-path in $\Gamma$, we take $d(u, v) =\infty$. The \emph{diameter} of $\Gamma$, denoted by diam$(\Gamma)$, is $\displaystyle\max_{u,v \in V(\Gamma)}d(u,v)$.
A graph is said to be \emph{connected} if there is a path between every pair of vertices; otherwise, we say it is \emph{disconnected}. A \emph{component} of a graph $\Gamma$ is a maximal connected subgraph of $\Gamma$.
If $U \subseteq V(\Gamma)$, then the subgraph obtained by deleting $U$ from the graph $\Gamma$ will be denoted by $\Gamma-U$. For singleton sets, $\Gamma - \{u\}$ is simply written as $\Gamma - u$. A \emph{separating set} of $\Gamma$ is a set of vertices whose removal increases the number of components of $\Gamma$. A separating set is \emph{minimal} if none of its non-empty proper subsets disconnects $\Gamma$. A separating set of $\Gamma$ with least cardinality is called a \emph{minimum separating set} of $\Gamma$.
The \emph{vertex connectivity} (or simply \emph{connectivity}) of a graph $\Gamma$, denoted by $\kappa({\Gamma})$, is the minimum number of vertices whose removal results in a disconnected or trivial graph. So, the connectivity of a disconnected graph or of the trivial graph is always $0$.
For a positive integer $n$, the number of positive integers that do not exceed $n$ and are relatively prime to $n$ is denoted by $\phi(n)$. The function $\phi$ is known as \emph{Euler's phi function}. If an integer $n>1$ has the prime factorization $p_1^{\alpha_1} p_2^{\alpha_2}\ldots p_r^{\alpha_r}$, then $\phi(n)=\displaystyle\prod_{i=1}^r (p_i^{\alpha_i}-p_i^{\alpha_i-1})=n \prod_{i=1}^r \left(1- \dfrac{1}{p_i}\right)$ (cf. \cite[Theorem 7.3]{burton2006elementary}).
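For instance, $\phi(12)=12\left(1-\frac{1}{2}\right)\left(1-\frac{1}{3}\right)=4$, counting $1, 5, 7$ and $11$.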
We now recall some results on power graphs of finite groups and their connectivity.
\begin{theorem}[\cite{GhoshSensemigroups}]\label{CompleteCond}
Let $G$ be a finite group.
\begin{enumerate}[\rm(i)]
\item The power graph $\mathcal{G}(G)$ is always connected.
\item The power graph $\mathcal{G}(G)$ is complete if and only if $G$ is a
cyclic group of order $1$ or $p^m$, for some prime number $p$ and for some $m \in \mathbb{N}$.
\end{enumerate}
\end{theorem}
\begin{theorem}[\cite{chattopadhyay2015laplacian}]\label{Chattopadhyay2015Connectivity1}
Let $G$ be a finite cyclic group of order $n$. If $n = p_1^{\alpha_1} p_2^{\alpha_2}$ for some primes $p_1, p_2$, and $\alpha_1, \alpha_2 \in \mathbb{N}$, then $\kappa(\mathcal{G}(G))\leq \phi(n)+p_1^{\alpha_1-1}p_2^{\alpha_2-1}$.
\end{theorem}
\begin{theorem}[\cite{chattopadhyay2015laplacian}]\label{Chattopadhyay2015Connectivity2}
Let $G$ be a finite cyclic group of order $n$. If $n = p_1 p_2 p_3$ for some primes $p_1 < p_2 < p_3$, then $\kappa(\mathcal{G}(G))\leq \phi(n)+p_1+p_2-1$.
\end{theorem}
If two finite groups are isomorphic, their corresponding power graphs are isomorphic and hence share the same properties. Since a cyclic group of order $n$ is isomorphic to the additive group of integers modulo $n$, written $\mathbb{Z}_n = \{\overline{0},\overline{1}, \ldots, \overline{n-1}\}$, we prove the results for $\mathbb{Z}_n$ in this paper.
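Since all later computations take place in $\mathcal{G}(\mathbb{Z}_n)$, the following small Python sketch may help the reader experiment (it is ours and purely illustrative; the helper names \texttt{power\_graph} and \texttt{components} are not from the literature). It builds the adjacency structure of $\mathcal{G}(\mathbb{Z}_n)$ from the observation that $\overline{u}$ and $\overline{v}$ are adjacent exactly when one of $\gcd(u,n)$, $\gcd(v,n)$ divides the other, and lists the components remaining after a given set of vertices is deleted.
\begin{verbatim}
from math import gcd

def power_graph(n):
    """Adjacency sets of G(Z_n): u ~ v iff <u> contains <v> or vice
    versa, i.e. iff gcd(u, n) and gcd(v, n) divide one another."""
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            du, dv = gcd(u, n), gcd(v, n)
            if du % dv == 0 or dv % du == 0:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def components(adj, removed=()):
    """Connected components after deleting the vertices in `removed`."""
    remaining, comps = set(adj) - set(removed), []
    while remaining:
        stack, comp = [next(iter(remaining))], set()
        while stack:                  # depth-first search
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            stack += [y for y in adj[x] if y in remaining and y not in comp]
        remaining -= comp
        comps.append(comp)
    return comps
\end{verbatim}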
\section{Minimal separating sets of $\mathcal{G}(\mathbb{Z}_n)$}
\label{sec-cyclic}
In this section, we study minimal separating sets of power graphs of finite cyclic groups and utilize them to explore their connectivity.
Let $G$ be a group with identity element $e$. For any $A \subseteq G$, we write $A^*=A-\{e\}$. If $H$ is a subgroup of $G$, then $\mathcal{G}(H)$ is an induced subgraph of $\mathcal{G}(G)$ \cite{GhoshSensemigroups}. More generally, for $A \subseteq G$, we denote the subgraph of $\mathcal{G}(G)$ induced by $A$ as $\mathcal{G}_G(A)$. Also, if the underlying group $G$ is clear from the context, we simply write $\mathcal{G}(A)$ instead of $\mathcal{G}_G(A)$. For $x \in G$, the cyclic subgroup generated by $x$ in $G$ is denoted by $\langle x \rangle$.
\begin{remark}\label{remark2}
Let $G$ be a finite group with identity element $e$ and $\Gamma$ a connected subgraph of $\mathcal{G}(G)$ with $e \in V(\Gamma)$. Since $e$ is adjacent to all other vertices in $\Gamma$, we have $\kappa(\Gamma-e)=\kappa(\Gamma)-1$.
\end{remark}
For $n \in \mathbb{N}$, let $\mathcal{S}(\mathbb{Z}_n)$ consist of the identity element and generators of $\mathbb{Z}_n$, i.e., $\mathcal{S}(\mathbb{Z}_n)=\left \{\overline{a}\in \mathbb{Z}_n :1 \leq a<n, \gcd(a, n)=1 \right \} \cup \{\overline{0} \}$. We further write $\widetilde{\mathbb{Z}}_n=\mathbb{Z}_n-\mathcal{S}(\mathbb{Z}_n)$ and $\mathcal{\widetilde{G}}(\mathbb{Z}_n)=\mathcal{G}(\mathbb{Z}_n)-\mathcal{S}(\mathbb{Z}_n)$ so that $V(\widetilde{\mathcal{G}}(\mathbb{Z}_n))=\widetilde{\mathbb{Z}}_n$.
\begin{remark}\label{szn-adj}
For $n \in \mathbb{N}$, each element of $\mathcal{S}(\mathbb{Z}_n)$ is adjacent to every other element of $\mathcal{G}(\mathbb{Z}_n)$.
\end{remark}
From \Cref{szn-adj}, every (minimal) separating set of $\mathcal{G}(\mathbb{Z}_n)$ can be written as the union of $\mathcal{S}(\mathbb{Z}_n)$ and a (minimal) separating set of $\mathcal{\widetilde{G}}(\mathbb{Z}_n)$. Thus, in what follows, we focus on separating sets of the latter.
\begin{lemma}\label{ImpLemma}
If $n > 1$ is not a prime number, then the following statements hold:
\begin{enumerate}[\rm(i)]
\item If $n$ is not a prime power, then every separating set of $\mathcal{G}(\mathbb{Z}_n)$ contains $\mathcal{S}(\mathbb{Z}_n)$.
\item $\kappa(\mathcal{G}(\mathbb{Z}_n))=\phi(n)+1+\kappa(\mathcal{\widetilde{G}}(\mathbb{Z}_n))$.
\item If $p_1 < p_2 < \cdots < p_r$ are the prime factors of $n$, then $\widetilde{\mathbb{Z}}_n=\displaystyle\bigcup _{i=1}^{r}\langle \overline{p_i}\rangle^*$.
\end{enumerate}
\end{lemma}
\begin{proof}
When $n$ is not a prime power, $\mathcal{G}(\mathbb{Z}_n)$ is not complete (cf. \Cref{CompleteCond}). So (i) follows from \Cref{szn-adj}. Since $n$ is not a prime number, $\widetilde{\mathcal{G}}(\mathbb{Z}_n)$ is a non-null graph. So (ii) follows from \Cref{szn-adj} and the fact that $|\mathcal{S}(\mathbb{Z}_n)|=\phi(n)+1$. Moreover, if $\overline{a} \in \widetilde{\mathbb{Z}}_n$, then $a$ is divisible by at least one prime factor of $n$. Hence (iii) follows.
\end{proof}
If $n$ is a product of two distinct primes, then $\kappa(\mathcal{G}(\mathbb{Z}_n))=\phi(n)+1$ (see \cite[Theorem 3]{ChattopadhyayConnectivity}). We now show that the converse holds as well.
\begin{proposition}\label{minsepsetPhi}
For $n \in \mathbb{N}$, the following statements are equivalent.
\begin{enumerate}[\rm(i)]
\item $n$ is a product of two distinct primes.
\item $\mathcal{S}(\mathbb{Z}_n)$ is a separating set of $\mathcal{G}(\mathbb{Z}_n)$.
\item $\kappa(\mathcal{G}(\mathbb{Z}_n))=\phi(n)+1$.
\end{enumerate}
\end{proposition}
\begin{proof}
As stated above, by \cite[Theorem 3]{ChattopadhyayConnectivity}, (i) implies (iii). Now suppose (iii) holds. Since $|\mathcal{S}(\mathbb{Z}_n)|=\phi(n)+1$, (ii) follows from \Cref{ImpLemma}(i).
Now we prove that (ii) implies (i). Suppose $\mathcal{S}(\mathbb{Z}_n)$ is a separating set of $\mathcal{G}(\mathbb{Z}_n)$. Then $\mathcal{G}(\mathbb{Z}_n)$ is not a complete graph, and hence by \Cref{CompleteCond}(ii), $n$ has at least two distinct prime factors. Since $\widetilde{\mathcal{G}}(\mathbb{Z}_n)$ is disconnected, there exist $\overline{a},\overline{b} \in V(\widetilde{\mathcal{G}}(\mathbb{Z}_n))$ such that there is no path from $\overline{a}$ to $\overline{b}$ in $\widetilde{\mathcal{G}}(\mathbb{Z}_n)$. In view of \Cref{ImpLemma}(iii), every element of $V(\widetilde{\mathcal{G}}(\mathbb{Z}_n))$ is in $\cy{\overline{p}}^*$ for some prime factor $p$ of $n$. If $\overline{a},\overline{b} \in \cy{\overline{p}}$ for some prime factor $p$ of $n$, then $\overline{a}, \overline{p},\overline{b}$ is an $\overline{a},\overline{b}$-path in $\widetilde{\mathcal{G}}(\mathbb{Z}_n)$, which is not possible. Hence $\overline{a} \in \cy{\overline{p}}$ and $\overline{b} \in \cy{\overline{q}}$ for some distinct prime factors $p$ and $q$ of $n$. If possible, let $pq < n$, so that $\overline{pq} \in V(\widetilde{\mathcal{G}}(\mathbb{Z}_n))$. Consequently, $\overline{a}$ and $\overline{b}$ are connected by the path $\overline{a}, \overline{p},\overline{pq},\overline{q},\overline{b}$, which is a contradiction. Hence $n = pq$.
\end{proof}
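To illustrate \Cref{minsepsetPhi} with the \texttt{power\_graph} sketch from \Cref{sec-prelim}: for $n=6$, deleting $\mathcal{S}(\mathbb{Z}_6)=\{\overline{0},\overline{1},\overline{5}\}$ (in code, \texttt{components(power\_graph(6), removed=[0, 1, 5])}) leaves the two components $\{\overline{2},\overline{4}\}$ and $\{\overline{3}\}$, while for $n=12$ the analogous deletion leaves a single component, as $\overline{2}$ and $\overline{3}$ remain joined through $\overline{6}$.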
Let $G$ be a group. Define a relation $\approx$ on $G$ by $x \approx y$ if $\langle x \rangle = \langle y \rangle$. Observe that $\approx$ is an equivalence relation on $G$. We denote the equivalence class of an element $x \in G$ under $\approx$ by $[x]$ and for any $A \subseteq G$, $[A] = \{[x] : x \in A\}$. If $C$ is an equivalence class under $\approx$, we say that $C$ is a $\approx$-class. If $x, y \in G$ are not related by $\approx$, we write $x \not \approx y$.
\begin{remark}
Given a group $G$ and $x \in G$, $[x]$ is a clique in the power graph $\mathcal{G}(G)$.
\end{remark}
\begin{lemma}\label{ClassLemma}
For $n \in \mathbb{N}$, we have the following with respect to $\approx$-classes of $\mathbb{Z}_n$.
\begin{enumerate}[\rm(i)]
\item For each $\overline{a} \in \mathbb{Z}_n^*$, there exists a positive divisor $d$ of $n$ such that $\overline{a} \approx \overline{d}$.
\item For $\overline{a} \in \mathbb{Z}_n^*$, $\left| [\overline{a}] \right|=\phi \left( \dfrac{n}{\gcd(n,a)} \right)$.
\item If $a|n$, $b|n$ and $a \neq b$, then $\overline{a} \not\approx \overline{b}$.
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}[\rm(i)]
\item Consider $d = \gcd(a, n)$. Then $\cz{a}=\cz{d}$ so that $\overline{a} \approx \overline{d}$.
\item Note that $o(\overline{a})= \displaystyle \frac{n}{\gcd(n,a)}$. Moreover, for $c \ne a$, $\cy{\overline{c}}=\cy{\overline{a}}$ if and only if $\overline{c}= \alpha\overline{a}$ for some $1 < \alpha < o(\overline{a})$ and $\gcd(\alpha, o(\overline{a}))=1$. Hence, (ii) follows.
\item Suppose $\cy{\overline{a}}=\cy{\overline{b}}$. Then $o(\overline{a})=o(\overline{b})$, that is, $\displaystyle \frac{n}{\gcd(n,a)} = \frac{n}{\gcd(n,b)}$. Since $a|n$ and $b|n$, this gives $a = b$; a contradiction. Hence, $\overline{a} \not\approx \overline{b}$.
\end{enumerate}
\end{proof}
Since each divisor of $n \in \mathbb{N}$ forms a distinct $\approx$-class of $\mathbb{Z}_n$, we have the following corollary of \Cref{ClassLemma}.
\begin{corollary}\label{sizeofclassZn}
For $n \in \mathbb{N}$, $[\mathbb{Z}_n]= \left\{\big [\overline{c}\big ] : c|n, 1 \leq c < n\right\} \cup \{[\overline{0}]\}$. In fact, $|[\mathbb{Z}_n]|$ is the number of divisors of $n$.
\end{corollary}
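For example, for $n=12$, the $\approx$-classes are $[\overline{1}]$, $[\overline{2}]$, $[\overline{3}]$, $[\overline{4}]$, $[\overline{6}]$ and $[\overline{0}]$, of sizes $\phi(12)=4$, $\phi(6)=2$, $\phi(4)=2$, $\phi(3)=2$, $\phi(2)=1$ and $1$; there are $6$ classes, the number of divisors of $12$, and the sizes sum to $12$.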
\begin{lemma}\label{adjall1}
Let $G$ be a group, $x, y \in G$ and $x \not \approx y$. Then in $\mathcal{G}(G)$, $x$ is adjacent to every element of $[y]$ if and only if $x$ is adjacent to $y$.
\end{lemma}
\begin{proof}
Let $x$ be adjacent to $y$. Then there exists $\alpha \in \mathbb{N}$ such that $y=x^\alpha$ or $x=y^\alpha$. Suppose $y=x^\alpha$. Let $z \in [y]$. Then there exists $\beta \in \mathbb{N}$ such that $z=y^\beta$, so that $z=x^{\alpha \beta}$ and $x$ is adjacent to $z$. If $x=y^\alpha$, we can similarly show that $x$ is adjacent to every element of $[y]$. The converse is obvious.
\end{proof}
\begin{corollary}\label{AdjAllNone}
If $C_1$ and $C_2$ are two $\approx$-classes of a group $G$, then either each element of $C_1$ is adjacent to every element of $C_2$ or no element of $C_1$ is adjacent to any element of $C_2$.
\end{corollary}
\begin{theorem}\label{minsepunion}
Let $G$ be a group and $T$ be a minimal separating set of $\mathcal{G}(G)$. Then for any $x \in G$, either $[x] \subseteq T$ or $[x] \cap T = \emptyset$. Hence, $T$ is a union of some $\approx$-classes.
\end{theorem}
\begin{proof}
Let $x \in G$ with $[x] \not \subseteq T$. We show that $[x] \cap T = \emptyset$. Suppose, on the contrary, that there exists $z \in [x] \cap T$. As $[x] \not \subseteq T$, there exists $y \in (G-T) \cap [x]$.
Since $\mathcal{G}(G)-T$ is disconnected, there exist $a,b \in \mathcal{G}(G)-T$ such that there is no $a,b$-path in $\mathcal{G}(G)-T$. But, since $T$ is a minimal separating set, $\mathcal{G}(G)-(T-\{ z\})$ is connected. Therefore there exists an $a,b$-path $P$ in $\mathcal{G}(G)-(T-\{ z\})$. Note that $z \in V(P)$; otherwise $P$ would also be a subgraph of $\mathcal{G}(G)-T$, which is not the case. To reach a contradiction, we shall now construct an $a, b$-path in $\mathcal{G}(G)-T$.
Let $z_1$ and $z_2$ be the vertices adjacent to $z$ in $P$, i.e., $P = a, \ldots, z_1, z, z_2, \ldots, b$. Let $P_1$ be the $a, z_1$-path in $P$ and $P_2$ be the $z_2, b$-path in $P$. Since $y,z \in [x]$, by \Cref{adjall1}, $y$ is adjacent to $z_1, z_2$. Clearly, the path traversing through $P_1$, $y$ and $P_2$ is an $a,b$-path in $\mathcal{G}(G)-T$. This gives us a contradiction as there is no $a,b$-path in $\mathcal{G}(G)-T$. Hence $[x] \cap T = \emptyset$.
\end{proof}
\begin{theorem}\label{MinSepSetZn}
Suppose $n \in \mathbb{N}$ is not a product of two primes and has prime factors $p_1 < p_2 < \cdots < p_r$ with $r\geq 2$. Then, for any $1 \leq k \leq r$, $\displaystyle\bigcup_{\substack{i=1\\ i \neq k}}^{r}\langle \overline{p_ip_k}\rangle^*$ is a minimal separating set of $\widetilde{\mathcal{G}}(\mathbb{Z}_n)$.
\end{theorem}
\begin{proof}
Write $\Gamma = \widetilde{\mathcal{G}}(\mathbb{Z}_n)$. Then by \Cref{minsepsetPhi}, $\Gamma$ is connected and by \Cref{ImpLemma}(iii), $V(\Gamma)=\displaystyle\bigcup _{i=1}^{r}\langle \overline{p_i}\rangle^*$.
Let $T= \displaystyle\bigcup_{\substack{i=1\\ i \neq k}}^{r}\langle \overline{p_ip_k}\rangle^*$. For $i = 1, 2,\ldots, r$, let $T_i=\langle \overline{p_i} \rangle^* - T$ and $T' = \displaystyle\bigcup_{\substack{i=1\\ i \neq k}}^{r}T_i$. Then $V(\Gamma-T)=\displaystyle\bigcup_{i=1}^{r}T_i= T_k \cup T'$. We prove that $\Gamma - T$ is disconnected by showing that no element of $T_k$ is adjacent to any element of $T'$.
Suppose, on the contrary, that there exist adjacent vertices $x\in T_k$ and $y \in T'$. Then $y \in T_l$ for some $1 \leq l \leq r$, $l \neq k$, and there exist non-zero integers $a$ and $b$ such that $x=a \overline{p_k}$ and $y=b\overline{p_l}$. Since $x$ is adjacent to $y$, one of them is a multiple of the other. Let $a\overline{p_k} = cb\overline{p_l}$ for some non-zero integer $c$. Then $ap_k = cbp_l+ c'n$ for some integer $c'$. This implies that $p_l|a p_k$. Since $p_l \centernot| p_k$, we have $p_l|a$. Consequently $a\overline{p_k} \in \langle \overline{p_k p_l}\rangle^* \subseteq T$. This is a contradiction, as $T_k \cap T= \emptyset$. Similarly, if $b \overline{p_l}$ is a multiple of $a \overline{p_k}$, then also we get a contradiction. Hence $\Gamma - T$ is disconnected. Consequently, $T$ is a separating set of $\Gamma$.
We now show the minimality of $T$. First of all, $\mathcal{G}_{\mathbb{Z}_n}(T_k)$ is connected because all of its vertices are adjacent to $\overline{p_k}$. Also note that $\mathcal{G}_{\mathbb{Z}_n}(T')$ is connected. Indeed, let $u, v \in T'$. Then $u \in T_{i}$ and $v \in T_{j}$ for some $1 \leq i,j \leq r$ with $i, j \neq k$. If $i = j$, then $u$ and $v$ are connected by the path $u, \overline{p_{i}}, v$, and if $i \neq j$, then $u$ and $v$ are connected by the path $u, \overline{p_{i}}, \overline{p_{i}p_{j}}, \overline{p_{j}}, v$.
Now let $z \in T$. So $z = d\overline{p_kp_l}$ for some non-zero integer $d$ and $1 \leq l \leq r$, $l \neq k$. Since both $\overline{p_k} \in T_k$ and $\overline{p_l} \in T'$ are adjacent to $z$, $\Gamma -(T-\{z\})$ is connected. Consequently, $T$ is a minimal separating set of $\widetilde{\mathcal{G}}(\mathbb{Z}_n)$.
\end{proof}
\begin{lemma}\label{MinSepSetCard}
Suppose $n$ is not a product of two primes and $n=p_1^{\alpha_1}p_2^{\alpha_2}\ldots p_r^{\alpha_r}$, where $r \geq 2$, $p_1 < p_2 < \cdots < p_r$ are primes and $\alpha_i \in \mathbb{N}$ for all $1 \leq i \leq r$. Then for any $1 \leq k \leq r$,
$$\left| \bigcup_{\substack{i=1\\ i \neq k}}^{r} \langle \overline{p_ip_k} \rangle \right| = \displaystyle\dfrac{n}{p_k} - p_k^{\alpha_k-1} \phi\left(\dfrac{n}{p_k^{\alpha_k}}\right) $$
and for $1 \leq j < k \leq r$,
\begin{equation}\label{MinSepSetCardEq}
\left | \bigcup_{\substack{i=1 \\ i \neq k}}^{r}\langle \overline{p_ip_k}\rangle \right | \leq \left | \bigcup_{\substack{i=1 \\ i \neq j}}^{r}\langle \overline{p_ip_j}\rangle \right |
\end{equation}
\end{lemma}
\begin{proof}
Observe that $\displaystyle\bigcup _{\substack{i=1\\ i \neq k}}^{r} \langle \overline{p_ip_k} \rangle$ consists of those elements $a\overline{p_k}$ of $\langle \overline{p_k}\rangle$ for which $a$ is divisible by some $p_i$, $1 \leq i \leq r, i \neq k$. Therefore we obtain $\displaystyle\bigcup _{\substack{i=1\\ i \neq k}}^{r} \langle \overline{p_ip_k} \rangle$ by deleting from $\langle \overline{p_k}\rangle$ those elements $a\overline{p_k}$ for which $a$ is relatively prime to each such $p_i$. Hence
$\displaystyle\bigcup _{\substack{i=1\\ i \neq k}}^{r} \langle \overline{p_ip_k} \rangle =A-B$, where $A=\left \{a\overline{p_k} : a \in \mathbb{N}, 0 \leq a < \dfrac{n}{p_k}\right \}$, \\ and $B=\left \{a\overline{p_k} : a \in \mathbb{N}, 0 \leq a < \dfrac{n}{p_k}, (a,p_i)=1 \hspace{3pt} \forall \hspace{3pt} 1 \leq i \leq r, i \neq k \right \}$.
Take $n_1= \prod_{j=1,j \neq k}^r p_j$ and $n_2= \dfrac{n}{p_k n_1}=\dfrac{n}{p_1 \ldots p_r}$. For $0 \leq m \leq n_2-1$, let $P_m=\left \{ a \overline{p_k} :a \in \mathbb{N}, m n_1 \leq a < (m+1) n_1, (a,n_1)=1 \right \}$. Trivially $P_l \cap P_m= \emptyset$ for $l \neq m$ and
\begin{equation}\label{Lem1Eq1}
B = \bigcup _{m=0}^{n_2-1} P_m
\end{equation}
Observe that $a \overline{p_k} \in P_m$ if and only if $(a - m n_1) \overline{p_k} \in P_0$. Thus $\left| P_m \right|=|P_0|$ for all $0 \leq m \leq n_2-1$. Further, $|P_0|=\left|\left \{ a \overline{p_k} :a \in \mathbb{N}, 0 \leq a < n_1, (a,n_1)=1 \right \} \right|=\phi(n_1)$. So for all $0 \leq m \leq n_2-1$,
\begin{equation}\label{Lem1Eq2}
\left| P_m \right|=\phi(n_1).
\end{equation}
From \eqref{Lem1Eq1} and \eqref{Lem1Eq2}, we have
$$|B|= \sum \limits_{m=0}^{n_2-1} |P_m| =n_2 \phi(n_1)=\dfrac{n}{p_k}\enspace \prod \limits_{\substack{i=1\\ i \neq k}}^r \left(1- \dfrac{1}{p_i}\right) = p_k^{\alpha_k-1} \phi\left(\dfrac{n}{p_k^{\alpha_k}}\right)$$
As $|A|=\dfrac{n}{p_k}$ and $B \subseteq A$, we finally have\\
$$\left| \displaystyle\bigcup _{\substack{i=1\\ i \neq k}}^{r} \langle \overline{p_ip_k} \rangle \right| = |A|-|B| = \displaystyle\dfrac{n}{p_k} - p_k^{\alpha_k-1} \phi\left(\dfrac{n}{p_k^{\alpha_k}}\right) .$$\\
Now we prove \eqref{MinSepSetCardEq}.
\begin{align*}
\left | \bigcup_{\substack{i=1\\ i \neq j}}^{r}\langle \overline{p_ip_j}\rangle \right | - \left | \bigcup_{\substack{i=1\\ i \neq k}}^{r}\langle \overline{p_ip_k}\rangle \right |
& = \dfrac{n}{p_j} - p_j^{\alpha_j-1} \phi\left(\dfrac{n}{p_j^{\alpha_j}}\right) - \left \{ \dfrac{n}{p_k} - p_k^{\alpha_k-1} \phi\left(\dfrac{n}{p_k^{\alpha_k}}\right) \right \}\\
& = \dfrac{n}{p_j} - \dfrac{n}{p_j} \prod \limits_{\substack{i=1\\ i \neq j}}^{r} {\left(1 - \dfrac{1}{p_i} \right)} - \left \{ \dfrac{n}{p_k} - \dfrac{n}{p_k} \prod \limits_{\substack{i=1\\ i \neq k}}^{r} {\left(1 - \dfrac{1}{p_i} \right)} \right \} \\
& = \dfrac{n}{p_jp_k} \left [ p_k - p_k \prod \limits_{\substack{i=1\\ i \neq j}}^{r} {\left(1 - \dfrac{1}{p_i} \right)} - \left \{ p_j - p_j \prod \limits_{\substack{i=1\\ i \neq k}}^{r} {\left(1 - \dfrac{1}{p_i} \right)} \right \} \right ]\\
& = \dfrac{n}{p_jp_k} \left \{ p_k - p_j -\{p_k-1 - (p_j-1)\} \prod \limits_{\substack{i=1\\ i \neq j,k}}^{r} {\left(1 - \dfrac{1}{p_i} \right)} \right \}\\
& = \dfrac{n(p_k - p_j)}{p_jp_k} \left \{ 1 - \prod \limits_{\substack{i=1\\ i \neq j,k}}^{r} {\left(1 - \dfrac{1}{p_i} \right)} \right \} \geq 0\\
\end{align*}
\end{proof}
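To illustrate \Cref{MinSepSetCard}, take $n=60=2^2\cdot 3\cdot 5$ and $k=3$, so $p_k=5$. The lemma gives $\left|\langle \overline{10}\rangle \cup \langle \overline{15}\rangle\right| = \frac{60}{5}-5^{0}\,\phi(12)=12-4=8$, which agrees with the direct count $|\langle \overline{10}\rangle|+|\langle \overline{15}\rangle|-|\langle \overline{30}\rangle|=6+4-2=8$.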
In the following theorem we shall give an upper bound for the connectivity of power graphs of cyclic groups. This generalizes \Cref{Chattopadhyay2015Connectivity1} and \Cref{Chattopadhyay2015Connectivity2}, and covers all other cases.
\begin{theorem}\label{Conbound1}
If $n=p_1^{\alpha_1}p_2^{\alpha_2}\ldots p_r^{\alpha_r}$, where $r \geq 2$, $p_1 < p_2 < \cdots < p_r$ are primes and $\alpha_j \in \mathbb{N}$ for $1 \leq j \leq r$, then
\begin{equation}\label{VerConUB1}
\kappa(\mathcal{G}(\mathbb{Z}_n)) \leq \phi(n)+\displaystyle\dfrac{n}{p_r} - p_r^{\alpha_r-1} \phi\left(\dfrac{n}{p_r^{\alpha_r}}\right).
\end{equation}
\end{theorem}
\begin{proof}
If $n$ is a product of two primes, the inequality follows from \Cref{minsepsetPhi}. Now suppose $n$ is not a product of two primes. By \Cref{MinSepSetZn} and \Cref{MinSepSetCard}, we have
\begin{equation*}
\kappa(\widetilde{\mathcal{G}}(\mathbb{Z}_n)) \leq \left | \bigcup_{i=1}^{r-1}\langle \overline{p_ip_r}\rangle^* \right | =\displaystyle\dfrac{n}{p_r} - p_r^{\alpha_r-1} \phi\left(\dfrac{n}{p_r^{\alpha_r}}\right) -1.
\end{equation*}
Hence, the result follows from \Cref{ImpLemma}(ii).
\end{proof}
\begin{remark}
The upper bound given in \Cref{Conbound1} is tight. In fact, through \Cref{ConnValue2} and \Cref{VerConEq2} we will prove that equality holds in \eqref{VerConUB1} if $n$ has exactly two prime factors or $n$ is a product of three distinct primes.
\end{remark}
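For example, for $n=30=2\cdot 3\cdot 5$, the bound \eqref{VerConUB1} evaluates to $\phi(30)+\frac{30}{5}-\phi(6)=8+6-2=12$, which coincides with the bound $\phi(n)+p_1+p_2-1=8+2+3-1=12$ of \Cref{Chattopadhyay2015Connectivity2}.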
For $n \in \mathbb{N}$, we now investigate some other minimal separating sets of $\mathcal{G}(\mathbb{Z}_n)$ and obtain an alternative upper bound for $\kappa(\mathcal{G}(\mathbb{Z}_n))$. We will also ascertain the conditions on $n$ for which the alternative bound improves upon that of \Cref{Conbound1}.
Let $\Gamma$ be a simple graph and $x \in V(\Gamma)$. Then the \emph{neighbourhood} $N(x)$ of $x$ is the set of all vertices which are adjacent to $x$. More generally, the \emph{neighbourhood} $N(A)$ of a set $A \subset V(\Gamma)$ is the set of all vertices which are adjacent to some element of $A$, but do not belong to $A$, i.e., $N(A)= \bigcup_{x \in A}N(x)-A$.
\begin{remark}\label{NbdClassElement}
For any group $G$ and $x \in G$, $N(x) = N([x]) \cup \left([x] - \{x\}\right)$ in $\mathcal{G}(G)$.
\end{remark}
Notice that for any $\overline{a} \in \widetilde{\mathbb{Z}}_n$, we have $\mathcal{S}(\mathbb{Z}_n) \subseteq N(\overline{a})$ and $\mathcal{S}(\mathbb{Z}_n) \subseteq N([\overline{a}])$. We denote $\widetilde{N}(\overline{a})=N(\overline{a})-\mathcal{S}(\mathbb{Z}_n)$ and $\widetilde{N}([\overline{a}])=N([\overline{a}])-\mathcal{S}(\mathbb{Z}_n)$.
\begin{lemma}\label{SepSetNx}
Let $G$ be a finite group and $x \in G$. Then the following are equivalent:
\begin{enumerate}[\rm(i)]
\item $N(x)$ is a separating set of $\mathcal{G}(G)$.
\item $N([x])$ is a separating set of $\mathcal{G}(G)$.
\item There exists some $y \in G$ such that $x$ is not adjacent to $y$.
\end{enumerate}
\end{lemma}
\begin{proof}
Observe that $\mathcal{G}(G)-N(x)$ is disconnected if and only if there exists $y \in G$, such that $x$ is not adjacent to $y$. Hence (i) and (iii) are equivalent.
We now prove that (ii) and (iii) are equivalent. Let $N([x])$ be a separating set of $\mathcal{G}(G)$. Then $\mathcal{G}(G)-N([x])$ has at least two components, and since $[x]$ is a clique, it lies within a single component. Thus in $\mathcal{G}(G)-N([x])$ there exists $y \notin [x]$ such that there is no path from $x$ to $y$; in particular, $x$ is not adjacent to $y$.
Conversely, suppose $x$ is not adjacent to some $y$ in $\mathcal{G}(G)$. Then $y \notin [x]$ and by \Cref{adjall1}, $y$ is not adjacent to any element of $[x]$, so that $y \notin N([x])$. Thus $y \in V(\mathcal{G}(G)-N([x]))$ and there is no path from any element of $[x]$ to $y$ in $\mathcal{G}(G)-N([x])$. Hence (ii) follows.
\end{proof}
\begin{remark}
If $\overline{a}$ is a generator or the identity element of $\mathbb{Z}_n$, then $N(\overline{a})$ and $N([\overline{a}])$ are not separating sets of $\mathcal{G}(\mathbb{Z}_n)$.
\end{remark}
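For instance, \Cref{SepSetNx} and the above remark can be observed directly in $\mathbb{Z}_{12}$ with a small sketch reusing \texttt{power\_graph\_Zn} from the earlier listing (\texttt{networkx} assumed): $N(x)$ separates exactly when $x$ is neither the identity nor a generator.
\begin{verbatim}
import networkx as nx

G = power_graph_Zn(12)                   # from the earlier sketch
for x in range(12):
    H = G.subgraph(set(G.nodes) - set(G.neighbors(x)))
    # N(x) separates iff G - N(x) is disconnected; expect False exactly
    # for the identity 0 and the generators 1, 5, 7, 11
    print(x, not nx.is_connected(H))
\end{verbatim}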
\begin{lemma}\label{o3}
Let $G$ be a finite group and $x \in G$ with $o(x)\geq 3$. Then $N(x)$ is not a minimal separating set of $\mathcal{G}(G)$.
\end{lemma}
\begin{proof}
Since $o(x)\geq 3$, we have $|[x]|=\phi(o(x))\geq 2$. So there exists $y \in [x]-\{x\}$, and hence $y \in N(x) \cap [x]$. Further $[x] \not \subset N(x)$, thus by \Cref{minsepunion}, $N(x)$ is not a minimal separating set of $\mathcal{G}(G)$.
\end{proof}
\begin{remark}\label{o12}
If $G$ is a finite group with $x \in G$ and $o(x) = 1$, then $N(x) = N(e) = G-\{e\}$; which is not a separating set of $\mathcal{G}(G)$. In case $o(x)=2$, note that $N(x)=N([x])$.
\end{remark}
In view of \Cref{o3} and \Cref{o12}, we shall now focus on neighbourhoods of $\approx$-classes and study the connectivity of power graphs.
\begin{lemma}\label{SepSetNbg}
Let $n \in \mathbb{N}$ be neither a prime power nor a product of two distinct primes. Then for every $\overline{a} \in \widetilde{\mathbb{Z}}_n$, $\widetilde{N}([\overline{a}])$ is a separating set of $\widetilde{\mathcal{G}}(\mathbb{Z}_n)$.
\end{lemma}
\begin{proof}
From \Cref{SepSetNx}, $N([\overline{a}])$ is a separating set of $\mathcal{G}(\mathbb{Z}_n)$. Thus, as each element of $\mathcal{S}(\mathbb{Z}_n)$ is adjacent to all other elements of $\mathcal{G}(\mathbb{Z}_n)$, the proof follows.
\end{proof}
The following observation will be useful in \Cref{MinSepNbd}.
\begin{remark}
For $n \in \mathbb{N}$, $\overline{a} \in \widetilde{\mathbb{Z}}_n$ and $b=(a,n)$, the following holds in $\mathcal{G}(\mathbb{Z}_n)$:
\begin{equation}\label{NbdUnion}
\widetilde{N}\left(\left[\overline{a}\right]\right)=\bigcup_{\substack{c|b \\ 1 < c < b}}[\overline{c}] \hspace{5pt}\cup \bigcup_{\substack{b | d, d | n \\ b < d < n}} [\overline{d}]
\end{equation}
\end{remark}
\begin{theorem}\label{MinSepNbd}
Let $n \in \mathbb{N}$ be not a product of two primes, with $n=p_1^{\alpha_1}p_2^{\alpha_2}\ldots p_r^{\alpha_r}$, where $r \geq 2$, $p_1 < p_2 < \cdots < p_r$ are primes and $\alpha_i \in \mathbb{N}$ for all $1 \leq i \leq r$. Then for any $1 \leq k \leq r$, the following statements hold:
\begin{enumerate}[\rm(i)]
\item $\widetilde{N}\left(\left[\overline{p_k^{\alpha_k}}\right]\right)$ is a minimal separating set of $\widetilde{\mathcal{G}}(\mathbb{Z}_n)$.
\item If $\alpha_k > 1$ and $1 \leq \beta_k < \alpha_k$, $\widetilde{N}\left(\bigg[\overline{p_k^{\beta_k}}\bigg]\right)$ is not a minimal separating set of $\widetilde{\mathcal{G}}(\mathbb{Z}_n)$.
\end{enumerate}
\end{theorem}
\begin{proof}
(i) We denote $S=\widetilde{N}\left(\left[\overline{p_k^{\alpha_k}}\right]\right)$ and $\Gamma = \widetilde{\mathcal{G}}(\mathbb{Z}_n)-S$. By \Cref{SepSetNbg}, $S$ is a separating set of $\widetilde{\mathcal{G}}(\mathbb{Z}_n)$. Then $\Gamma$ is disconnected and $V(\Gamma)=\left[\overline{p_k^{\alpha_k}}\right] \cup \displaystyle \bigcup_{\substack{a|n, 1< a < n \\ p_k^{\alpha_k} \centernot | a, \hspace{1pt} a \centernot | p_k^{\alpha_k}}} [\overline{a}]$. Let $C_1=\left[\overline{p_k^{\alpha_k}}\right]$ and $C_2=\displaystyle \bigcup_{\substack{a|n, 1< a < n,\\ p_k^{\alpha_k} \centernot | a, \hspace{1pt} a \centernot | p_k^{\alpha_k}}} [\overline{a}]$. Then the subgraph induced by $C_1$ is complete and hence connected in $\Gamma$. Notice that $\overline{p_i} \in C_2$ for all $1 \leq i \leq r, i \neq k$ and every other $\overline{b} \in C_2$ is adjacent to some $\overline{p_j}$ for $1 \leq j \leq r, j \neq k$ in $\Gamma$. Moreover, if $\overline{p_i}, \overline{p_j} \in C_2$ and $i \neq j$, then both are adjacent to $\overline{p_ip_j} \in C_2$ in $\Gamma$. Thus the subgraph of $\Gamma$ induced by $C_2$ is also connected. So $\Gamma$ consists of exactly two components: the subgraphs induced by $C_1$ and $C_2$. Thus to show that $S$ is a minimal separating set of $\widetilde{\mathcal{G}}(\mathbb{Z}_n)$, it is enough to show that every element of $S$ is adjacent to some element of $C_1$ and some element of $C_2$.
Let $\overline{c} \in S$. Notice that every element of $C_1$ is adjacent to $\overline{c}$. We next show that $\overline{c}$ is adjacent to some element of $C_2$. Let $d=(c,n)$, so that $\s{\overline{d}}=\s{\overline{c}}$ and $\overline{d} \in S$. So by \Cref{adjall1}, it is enough to show that $\overline{d}$ is adjacent to some element of $C_2$.
Since $\overline{d}$ is adjacent to $\overline{p_k^{\alpha_k}}$, either $\overline{p_k^{\alpha_k}} \bigm| \overline{d}$ or $\overline{d} \bigm|\overline{p_k^{\alpha_k}}$. Then, because both $d$ and $p_k^{\alpha_k}$ are factors of $n$, we have $p_k^{\alpha_k}|d$ or $d|p_k^{\alpha_k}$. First let $p_k^{\alpha_k}|d$, so that $d=ap_k^{\alpha_k}$ for some integer $a$. If $\gcd\p{a,\dfrac{n}{p_k^{\alpha_k}}}=1$, then $\s{\overline{d}}=\s{\overline{p_k^{\alpha_k}}}$; which is a contradiction. Thus, as $\dfrac{n}{p_k^{\alpha_k}}=\prod \limits_{i=1,i \neq k}^{r} p_i^{\alpha_i}$, there exists $1 \leq l \leq r, l \neq k$ such that $p_l|a$. Then $\overline{p_l}|\overline{a}$ and hence $\overline{p_l}|\overline{d}$. So $\overline{d}$ is adjacent to $\overline{p_l}$, and $\overline{p_l} \in C_2$. Now let $d|p_k^{\alpha_k}$. Then $d=p_k^{\beta}$ for some $1 \leq \beta < \alpha_k$. So $\overline{d}$ is adjacent to $\overline{p_k^{\beta}p_m}$ for any $1 \leq m \leq r, m \neq k$, and $\overline{p_k^{\beta}p_m} \in C_2$. This completes the proof.
(ii) Observe that $\left[\overline{p_k^{\alpha_k}}\right] \subseteq \widetilde{N}\left( \left[\overline{p_k^{\beta_k}}\right]\right)$, and by \eqref{NbdUnion}, we can write
\begin{equation}\label{NotMinSep}
\widetilde{N}\left(\left[\overline{p_k^{\alpha_k}}\right]\right) \subseteq \widetilde{N}\left(\cb{p_k^{\beta_k}}\right) \cup \left [\overline{p_k^{\beta_k}} \right ].
\end{equation}
Then $\widetilde{N}\left(\left[\overline{p_k^{\beta_k}}\right]\right)-\left [\overline{p_k^{\alpha_k}} \right ]$ is a separating set of $\widetilde{\mathcal{G}}(\mathbb{Z}_n)$, because each element of $\left[\overline{p_k^{\beta_k}}\right]$ is adjacent to every element of $\left[\overline{p_k^{\alpha_k}}\right]$, and by \eqref{NotMinSep}, no element of $\left[\overline{p_k^{\alpha_k}}\right]\cup \left[\overline{p_k^{\beta_k}}\right]$ is adjacent to any other element of $\widetilde{\mathcal{G}}(\mathbb{Z}_n)- \left \{\widetilde{N}\left(\left[\overline{p_k^{\beta_k}}\right]\right)-\left [\overline{p_k^{\alpha_k}} \right ] \right \}$. Hence $\widetilde{N}\left(\left[\overline{p_k^{\beta_k}}\right]\right)$ is not a minimal separating set of $\widetilde{\mathcal{G}}(\mathbb{Z}_n)$.
\end{proof}
The next result shows that the minimal separating sets of $\mathcal{G}(\mathbb{Z}_n)$ obtained in \Cref{MinSepSetZn} and \Cref{MinSepNbd} are the same when the largest prime dividing $n$ appears with exponent one.
\begin{corollary}\label{EqualSepSet}
Let $n \in \mathbb{N}$ be not a product of two primes, with $n=p_1^{\alpha_1}\ldots p_{r-1}^{\alpha_{r-1}} p_r$, where $r \geq 2$, $p_1 < p_2 < \cdots < p_r$ are primes and $\alpha_i \in \mathbb{N}$ for $1 \leq i \leq r-1$. Then $\widetilde{N}([\overline{p_r}])=\bigcup_{i=1}^{r-1}\langle \overline{p_ip_r}\rangle^*$.
\end{corollary}
\begin{proof}
\begin{align*}
\widetilde{N}\left(\cb{p_r}\right) & =\cz{p_r}^*-\cb{p_r}\\
&=\cz{p_r}^*-\left \{ ap_r \mid 1 \leq a < p_1^{\alpha_1}\ldots p_{r-1}^{\alpha_{r-1}}, \gcd(a,p_1^{\alpha_1}\ldots p_{r-1}^{\alpha_{r-1}})=1 \right \}\\
& =\bigcup_{i=1}^{r-1}\langle \overline{p_ip_r}\rangle^*
\end{align*}
\end{proof}
We now provide an upper bound for $\kappa(\mathcal{G}(\mathbb{Z}_n))$ in the following theorem.
\begin{theorem}\label{bound1}
If $n$ is not a product of two primes and $n=p_1^{\alpha_1}p_2^{\alpha_2}\ldots p_r^{\alpha_r}$, where $r \geq 2$, $p_1 < p_2 < \cdots < p_r$ are primes and $\alpha_i \in \mathbb{N}$ for $1 \leq i \leq r$, then
$$\kappa(\mathcal{G}(\mathbb{Z}_n)) \leq \xi_2(n) := \phi(n) + \dfrac{n}{p_r^{\alpha_r}} + \phi\left(\dfrac{n}{p_r^{\alpha_r}}\right) (p_r^{\alpha_r-1}-2).$$
\end{theorem}
\begin{proof}
From \Cref{MinSepNbd}(i), $\kappa(\widetilde{\mathcal{G}}(\mathbb{Z}_n)) \leq \left|\widetilde{N}\left(\cb{p_r^{\alpha_r}}\right)\right|$.
\begin{align*}
\left|\widetilde{N}\left(\cb{p_r^{\alpha_r}}\right)\right| & = \left|\left \langle \overline{p_r^{\alpha_r}}\right \rangle ^* \right| - \left|\cb{p_r^{\alpha_r}}\right| + \sum \limits_{j=1}^{\alpha_r}\left|\cb{p_r^{j}}\right| - \left|\cb{p_r^{\alpha_r}}\right| \\
& =\dfrac{n}{p_r^{\alpha_r}} - 1 - \phi\left(\dfrac{n}{p_r^{\alpha_r}}\right) + \sum \limits_{j=1}^{\alpha_r} \phi\left(\dfrac{n}{p_r^{j}}\right) - \phi\left(\dfrac{n}{p_r^{\alpha_r}}\right)\\
& =\dfrac{n}{p_r^{\alpha_r}} - 1+ \phi\left(\dfrac{n}{p_r^{\alpha_r}}\right) \sum \limits_{j=1}^{\alpha_r} \phi\left( p_r^{\alpha_r-j}\right) -2 \phi\left(\dfrac{n}{p_r^{\alpha_r}}\right) \\
& =\dfrac{n}{p_r^{\alpha_r}} +(p_r^{\alpha_r-1}-2) \phi\left(\dfrac{n}{p_r^{\alpha_r}}\right) - 1
\end{align*}
Thus by \Cref{ImpLemma}(ii), $\kappa(\mathcal{G}(\mathbb{Z}_n)) \leq \xi_2(n)$.
\end{proof}
We denote the upper bound obtained in \Cref{Conbound1} by
\begin{equation}
\xi_1(n) = \phi(n)+\displaystyle\dfrac{n}{p_r} - p_r^{\alpha_r-1} \phi\left(\dfrac{n}{p_r^{\alpha_r}}\right)
\end{equation}
and compare $\xi_1(n)$ with $\xi_2(n)$ in the following theorem.
\begin{theorem}
Suppose $n$ is not a product of two primes and $n=p_1^{\alpha_1}p_2^{\alpha_2}\ldots p_r^{\alpha_r}$, where $r \geq 2$, $p_1 < p_2 < \cdots < p_r$ are primes and $\alpha_i \in \mathbb{N}$.
\begin{enumerate}[\rm(i)]
\item $\xi_2(n) = \xi_1(n)$ if and only if $\alpha_r=1$, or $r=2$ and $p_1=2$.
\item $\xi_2(n) < \xi_1(n)$ if and only if $\alpha_r \geq 2$ and $\prod \limits_{i=1}^{r-1} {\left(1- \dfrac{1}{p_i}\right)} < \dfrac{1}{2}$.
\item $\xi_2(n) > \xi_1(n)$ if and only if $\alpha_r \geq 2$ and $\prod \limits_{i=1}^{r-1} {\left(1- \dfrac{1}{p_i}\right)} > \dfrac{1}{2}$.
\end{enumerate}
\end{theorem}
\begin{proof}
Note that
\begin{align*}
\xi_2(n) - \xi_1(n) &= \dfrac{n}{p_r^{\alpha_r}} + (p_r^{\alpha_r-1}-2) \phi\left(\dfrac{n}{p_r^{\alpha_r}}\right) - \left \{ \displaystyle\dfrac{n}{p_r} - p_r^{\alpha_r-1} \phi\left(\dfrac{n}{p_r^{\alpha_r}}\right) \right \}\\
& = \left (1-p_r^{\alpha_r-1} \right ) \dfrac{n}{p_r^{\alpha_r}} + 2(p_r^{\alpha_r-1}-1) \phi\left(\dfrac{n}{p_r^{\alpha_r}}\right) \\
& = (p_r^{\alpha_r-1}-1) \left \{2 \phi\left(\dfrac{n}{p_r^{\alpha_r}}\right) - \dfrac{n}{p_r^{\alpha_r}} \right \} \\
& = (p_r^{\alpha_r-1}-1) \dfrac{n}{p_r^{\alpha_r}} \left \{2 \prod \limits_{i=1}^{r-1} {\left(1- \dfrac{1}{p_i}\right)} - 1 \right \} \numberthis \label{Xi1Xi2Compare}
\end{align*}
The right hand side of \eqref{Xi1Xi2Compare} equals $0$ if and only if $\alpha_r=1$, or
\begin{equation}\label{Xi1Xi2equal}
2 \prod \limits_{i=1}^{r-1} {\left(\dfrac{p_i-1}{p_i}\right)} = 1
\end{equation}
We show that \eqref{Xi1Xi2equal} holds if and only if $r=2$ and $p_1=2$.
If $p_1 >2$, then \eqref{Xi1Xi2equal} reads $2\prod \limits_{i=1}^{r-1} (p_i-1) = \prod \limits_{i=1}^{r-1} p_i$, whose left-hand side is even while the right-hand side is odd. So \eqref{Xi1Xi2equal} does not hold. Now, let $p_1 =2$. Then, if $r >2$, $2 \prod \limits_{i=1}^{r-1} {\left(\dfrac{p_i-1}{p_i}\right)}=\prod \limits_{i=2}^{r-1} {\left(\dfrac{p_i-1}{p_i}\right)} \neq 1$, since the numerator is even and the denominator is odd. So we must have $r=2$. Conversely, if $r=2$ and $p_1=2$, then \eqref{Xi1Xi2equal} holds. Therefore, (i) holds.
Since $p_r^{\alpha_r-1}-1>0$ if and only if $\alpha_r \geq 2$, (ii) and (iii) follow from \eqref{Xi1Xi2Compare}.
\end{proof}
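The trichotomy is also easy to observe numerically; the following sketch (assuming \texttt{sympy}; \texttt{xi1} and \texttt{xi2} are our names for the two bounds) prints both bounds together with the deciding product $\prod_{i=1}^{r-1}(1-1/p_i)$.
\begin{verbatim}
from fractions import Fraction
from sympy import totient, factorint

def xi1(n):
    f = factorint(n); p = max(f); a = f[p]
    return int(totient(n)) + n // p - p**(a - 1) * int(totient(n // p**a))

def xi2(n):
    f = factorint(n); p = max(f); a = f[p]
    m = n // p**a
    return int(totient(n)) + m + int(totient(m)) * (p**(a - 1) - 2)

for n in [24, 90, 18, 150, 245]:         # none is a product of two primes
    f = factorint(n); p = max(f)
    crit = Fraction(1)
    for q in f:
        if q != p:
            crit *= Fraction(q - 1, q)
    print(n, xi1(n), xi2(n), crit)       # xi2 < xi1 iff a_r >= 2, crit < 1/2
\end{verbatim}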
We now state the \emph{principle of well-founded induction} (cf. \cite[Theorem 6.10]{jech}) which we use in \Cref{ConnValue2}. An irreflexive and transitive binary relation $\prec$ over a set $A$ is called \emph{well-founded} if it satisfies the property that for every non-empty subset $B \subseteq A$, there exists $x_0 \in B$ such that there is no $x \in B$ with $x \prec x_0$. \\
\noindent\textbf{Principle of well-founded induction.} Let $\prec$ be a well-founded relation on a set $A$ and let $P$ be a property defined on elements of $A$. Then $P$ holds for all elements of $A$ if and only if the following holds: given any $a \in A$, if $P$ holds for all $b \in A$ with $b \prec a$, then $P$ holds for $a$.
\begin{remark}
The \emph{lexicographic order} $\prec$ on $\mathbb{N} \times \mathbb{N}$, defined by $(a_1,b_1) \prec (a_2,b_2)$ if \[a_1 < a_2, \mbox{ or } a_1 = a_2 \mbox{ and } b_1 < b_2,\] is a well-founded relation.
\end{remark}
\begin{theorem}\label{ConnValue2}
If $n=p^\alpha q^\beta$, where $p,q$ are distinct primes and $\alpha,\beta \in \mathbb{N}$, then
\begin{equation}\label{eqnConnValue2}
\kappa(\mathcal{G}(\mathbb{Z}_n)) = \phi(n)+p^{\alpha-1}q^{\beta-1}.
\end{equation}
In fact, for $n \ne pq$, $\langle \overline{pq}\rangle^*$ is a minimum separating set of $\widetilde{\mathcal{G}}(\mathbb{Z}_n)$.
\end{theorem}
\begin{proof}
We consider the lexicographic order $\prec$ on $\mathbb{N} \times \mathbb{N}$, and prove by applying the principle of well-founded induction that \eqref{eqnConnValue2} holds for all $(\alpha,\beta) \in \mathbb{N} \times \mathbb{N}$. Note that, as $n$ is not a prime power, $\mathcal{G}(\mathbb{Z}_n)$ and hence $\widetilde{\mathcal{G}}(\mathbb{Z}_n)$ are not complete graphs.
By \cite[Theorem 3]{ChattopadhyayConnectivity}, $\kappa(\mathcal{G}(\mathbb{Z}_{pq})) = \phi(n)+1$, hence the statement holds true for $(\alpha,\beta)=(1,1)$.
Now take $(\alpha,\beta) \in \mathbb{N} \times \mathbb{N}$ such that $(1,1) \prec (\alpha,\beta)$. Suppose that \eqref{eqnConnValue2} holds for all $(a,b)\prec(\alpha,\beta)$. Then $n \neq pq$ and hence by \Cref{minsepsetPhi}, $\Gamma:=\widetilde{\mathcal{G}}(\mathbb{Z}_n)$ is connected. Further, by \Cref{MinSepSetZn}, $\langle \overline{pq}\rangle^*$ is a minimal separating set of $\Gamma$. We show that $\langle \overline{pq}\rangle^*$ is a minimum separating set of $\Gamma$.
Let $T$ be a minimal separating set of $\Gamma$. We show that $|{\langle \overline{pq}\rangle}^*| \leq |T|$. If ${\langle \overline{pq} \rangle}^* \subseteq T$, we are done. So let ${\langle \overline{pq} \rangle}^* \not\subseteq T$. Then there exists an element $\overline{a} \in {\langle \overline{pq} \rangle}^*$ such that $\overline{a} \notin T$. Let $\Gamma_1=\mathcal{G}_{\mathbb{Z}_n}(\langle \overline{p} \rangle^*)$ and $\Gamma_2=\mathcal{G}_{\mathbb{Z}_n}(\langle \overline{q} \rangle^*)$. Then observe that $\Gamma=\Gamma_1 \cup \Gamma_2$, and hence $\Gamma-T=(\Gamma_1-T) \cup (\Gamma_2-T)$. Further, $\overline{a} \in V(\Gamma_1-T) \cap V(\Gamma_2-T)$. Hence, as $\Gamma-T$ is disconnected, at least one of $\Gamma_1-T$ or $\Gamma_2-T$ is disconnected.
\noindent\emph{Case 1:} Let $\Gamma_1-T$ be disconnected. If $\alpha=1$, then $|\langle \overline{p} \rangle|=q^{\beta}$ and hence $\Gamma_1$ is a complete graph. So $\Gamma_1-T$ cannot be disconnected. So $\alpha \geq 2$. Then,
\begin{align*}
|T| - |{\langle \overline{pq}\rangle}^*| & \geq \kappa(\Gamma_1)-|{\langle \overline{pq}\rangle}^*|\\
& =\kappa(\mathcal{G}_{\mathbb{Z}_n}(\langle \overline{p} \rangle^*))-|{\langle \overline{pq}\rangle}^*|\\
& =\kappa(\mathcal{G}(\langle \overline{p} \rangle)-\overline{0})-|{\langle \overline{pq}\rangle}^*|\\
& =\kappa(\mathcal{G}(\langle \overline{p} \rangle))-1-\left(|{\langle \overline{pq}\rangle}|-1\right) \hspace{3pt} (\text{ by }\Cref{remark2})\\
& = \kappa(\mathcal{G}(\mathbb{Z}_{p^{\alpha-1} q^\beta}))-|{\langle \overline{pq}\rangle}|\\
& = \phi(p^{\alpha-1} q^\beta)+ p^{\alpha-2} q^{\beta-1} -\left(p^{\alpha-1} q^{\beta-1} \right) \hspace{1pt} \text{ (by induction hypothesis)}\\
& = p^{\alpha-2} q^{\beta-1}(p-1)(q-1)+ p^{\alpha-2} q^{\beta-1}-p^{\alpha-1} q^{\beta-1}\\
& = p^{\alpha-2} q^{\beta-1}\{(p-1)(q-1)+1-p \}\\
& = p^{\alpha-2} q^{\beta-1}(p-1)(q-2) \geq 0. \numberthis \label{ineqConn21}
\end{align*}
\noindent\emph{Case 2:} Let $\Gamma_2-T$ be disconnected. Proceeding as in Case 1, we have $\beta \geq 2$, and
\begin{align*}
|T| - |{\langle \overline{pq}\rangle}^*| & \geq \kappa(\Gamma_2)-|{\langle \overline{pq}\rangle}^*|\\
& =\kappa(\mathcal{G}_{\mathbb{Z}_n}(\langle \overline{q} \rangle^*))-|{\langle \overline{pq}\rangle}^*|\\
& = p^{\alpha-1} q^{\beta-2}\{(p-1)(q-1)+1-q \} \\
& \geq p^{\alpha-1} q^{\beta-2}\{(q-1)+1-q \} = 0. \numberthis \label{ineqConn22}
\end{align*}
So for $(1, 1) \prec (\alpha,\beta)$, $\langle \overline{pq}\rangle^*$ is a minimum separating set of $\widetilde{\mathcal{G}}(\mathbb{Z}_n)$ and hence $\kappa(\mathcal{G}(\mathbb{Z}_n)) = \phi(n)+p^{\alpha-1}q^{\beta-1}$. Therefore by the principle of well-founded induction, \eqref{eqnConnValue2} holds for all $(\alpha,\beta) \in \mathbb{N} \times \mathbb{N}$.
\end{proof}
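For small $n$ the formula \eqref{eqnConnValue2} can be confirmed by brute force, reusing \texttt{power\_graph\_Zn} from the earlier sketch (\texttt{networkx} and \texttt{sympy} assumed):
\begin{verbatim}
import networkx as nx
from sympy import totient, factorint

for n in [12, 18, 20, 36, 50]:           # n = p^alpha q^beta
    (p, a), (q, b) = sorted(factorint(n).items())
    predicted = int(totient(n)) + p**(a - 1) * q**(b - 1)
    print(n, nx.node_connectivity(power_graph_Zn(n)), predicted)
\end{verbatim}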
The following is a simple consequence of \Cref{ConnValue2}.
\begin{corollary}\label{conn2}
If $G$ is a cyclic group of order $n=2^\alpha p^\beta$, where $p$ is an odd prime and $\alpha,\beta \in \mathbb{N}$, then $\kappa(\mathcal{G}(G))=\dfrac{n}{2}$.
\end{corollary}
\begin{proof}
\begin{align*}
\kappa(\mathcal{G}(G)) &= \phi(n)+2^{\alpha-1}p^{\beta-1}\\
&= 2^{\alpha-1}p^{\beta-1}(2-1)(p-1)+2^{\alpha-1}p^{\beta-1} = 2^{\alpha-1}p^{\beta} = \dfrac{n}{2}.
\end{align*}
\end{proof}
We now obtain the connectivity of $\mathcal{G}(\mathbb{Z}_{pqr})$ in the following result.
\begin{theorem}\label{VerConEq2}
If $n=pqr$, where $p < q < r$ are primes, then $\cb{pr} \cup \cb{qr}$ is a minimum separating set of $\widetilde{\mathcal{G}}(\mathbb{Z}_n)$. Consequently, $$\kappa(\mathcal{G}(\mathbb{Z}_n)) = \phi(n)+p+q-1.$$
\end{theorem}
\begin{proof}
Notice that the equivalence classes of $\widetilde{\mathbb{Z}}_n$ with respect to $\approx$ are precisely $\cb{p}$, $\cb{q}$, $\cb{r}$, $\cb{pq}$, $\cb{pr}$ and $\cb{qr}$. Construct a graph whose vertices are these equivalence classes, with two classes $\cb{a}$ and $\cb{b}$ adjacent if each element of $\cb{a}$ is adjacent to every element of $\cb{b}$ in $\widetilde{\mathcal{G}}(\mathbb{Z}_n)$. The resulting graph is shown in \Cref{Graphpqr}.
\begin{figure}[h]
\centering
\begin{tikzpicture}
\path(2,1) edge node[anchor=east] {} (2,-1);
\path(-2,1) edge node[anchor=east] {} (-2,-1);
\path(0,-2) edge node[anchor=east] {} (2,-1);
\path(0,-2) edge node[anchor=east] {} (-2,-1);
\path(0,2) edge node[anchor=east] {} (-2,1);
\path(0,2) edge node[anchor=east] {} (2,1);
\draw[fill](0,2) circle (2pt) node[anchor=south] {$\cb{p}$};
\draw[fill](2,1) circle (2pt) node[anchor=west] {$\cb{pr}$};
\draw[fill](-2,1) circle (2pt) node[anchor=east] {$\cb{pq}$};
\draw[fill](0,-2) circle (2pt) node[anchor=north] {$\cb{qr}$};
\draw[fill](2,-1) circle (2pt) node[anchor=west] {$\cb{r}$};
\draw[fill](-2,-1) circle (2pt) node[anchor=east] {$\cb{q}$};
\end{tikzpicture}
\caption{\label{Graphpqr}$\widetilde{\mathcal{G}}(\mathbb{Z}_{pqr})$}
\end{figure}
It is evident from \Cref{Graphpqr} that the deletion of any one $\approx$-class does not disconnect $\widetilde{\mathcal{G}}(\mathbb{Z}_n)$, whereas the deletion of any two $\approx$-classes that are non-adjacent in \Cref{Graphpqr} does. Hence by \Cref{minsepunion}, the minimal separating sets of $\widetilde{\mathcal{G}}(\mathbb{Z}_n)$ are precisely the unions of two non-adjacent $\approx$-classes in \Cref{Graphpqr}. Now by \Cref{ClassLemma}(ii), note that $|\cb{p}|=(q-1)(r-1)$, $|\cb{q}|=(p-1)(r-1)$, $|\cb{r}|=(p-1)(q-1)$, $|\cb{pq}|=r-1$, $|\cb{pr}|=q-1$ and $|\cb{qr}|=p-1$, and hence we have the following inequalities:
$$|\cb{p}|>|\cb{q}|>|\cb{r}|, \quad |\cb{pq}|>|\cb{pr}|>|\cb{qr}|, \quad |\cb{r}|\geq|\cb{pr}|.$$
Consequently, $\cb{pr} \cup \cb{qr}$ is of minimum cardinality among the non-adjacent pairs of classes so that $\cb{pr} \cup \cb{qr}$ is a minimum separating set of $\widetilde{\mathcal{G}}(\mathbb{Z}_n)$.
Since $\left|\cb{pr} \cup \cb{qr}\right|=p+q-2$, by \Cref{ImpLemma}(ii), $\kappa(\mathcal{G}(\mathbb{Z}_n)) = \phi(n)+p+q-1$.
\end{proof}
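Again this can be confirmed by brute force for small $n=pqr$, reusing \texttt{power\_graph\_Zn} (\texttt{networkx} and \texttt{sympy} assumed):
\begin{verbatim}
import networkx as nx
from sympy import totient, factorint

for n in [30, 42, 66]:                   # n = pqr with p < q < r
    p, q, r = sorted(factorint(n))
    predicted = int(totient(n)) + p + q - 1
    print(n, nx.node_connectivity(power_graph_Zn(n)), predicted)
\end{verbatim}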
\section{Components of proper power graphs of $p$-groups}
\label{sec-ablpgrp}
Throughout this section, $p$ denotes a prime number. A $p$-\emph{group} is a finite group whose order is some power of $p$. In this section, we study the components of proper power graphs of $p$-groups.
\begin{proposition}\label{Orderp}
Let $G$ be a $p$-group and let $x \in G^*$ have order $p$. Then $x$ is adjacent to every other vertex of the component of the proper power graph $\mathcal{G}^*(G)$ that contains $x$.
\end{proposition}
\begin{proof}
Let $C$ be the component of $\mathcal{G}^*(G)$ that contains $x$. Consider $y \in V(C)$, $y \neq x$. We show that $x$ is adjacent to $y$. Note that there exists at least one $x,y$-path in $\mathcal{G}^*(G)$; say $x=x_0,x_1,\ldots,x_m=y$. We claim that for all $1 \leq i \leq m$,
\begin{equation}\label{pathp}
x \in \langle x_i \rangle.
\end{equation}
As $x$ and $x_1$ are adjacent, either $x \in \langle x_1 \rangle$ or $x_1 \in \langle x\rangle$. If $x \in \langle x_1 \rangle$, then \eqref{pathp} holds for $i=1$. Now let $x_1 \in \langle x \rangle$. Since $o(x)=p$, we have $\langle x \rangle=\langle x_1 \rangle$. So, again \eqref{pathp} holds for $i=1$.
Suppose $x \in \langle x_k \rangle$ for some $1 \leq k \leq m-1$. We show that $x \in \langle x_{k+1} \rangle$. Since $x_k$ and $x_{k+1}$ are adjacent, we have $x_k \in \langle x_{k+1} \rangle$ or $x_{k+1} \in \langle x_k \rangle$. If $x_k \in \langle x_{k+1} \rangle$, then $x \in \langle x_{k+1}\rangle$ by the induction hypothesis, and \eqref{pathp} holds for $i=k+1$.
Now take $x_{k+1} \in \langle x_k \rangle$. Then $x_{k+1} = x_k^{c_{k+1}p^{\alpha_{k+1}}}$ for some $c_{k+1} \in \mathbb{N}$, $(c_{k+1},p)=1$ and $\alpha_{k+1} \in \mathbb{N}\cup \{0\}$. Hence
\begin{equation}\label{AbelianEqn}
\left \langle x_{k+1} \right \rangle =\left \langle x_k^{p^{\alpha_{k+1}}} \right \rangle
\end{equation}
If $\alpha_{k+1}=0$, then $\langle x_{k+1}\rangle =\langle x_k \rangle$. So $x \in \langle x_{k+1}\rangle$ follows from induction hypothesis.
Now let $\alpha_{k+1}>0$. As $x \in \langle x_k \rangle$, $x = x_k^{c_{k}p^{\alpha_{k}}}$ for some $c_{k} \in \mathbb{N}$ with $(c_{k},p)=1$ and $\alpha_k \in \mathbb{N}\cup \{0\}$. Hence $\langle x \rangle = \langle x_{k}^{p^{\alpha_k}} \rangle$. If $\alpha_{k}=0$, then $\langle x \rangle = \langle x_{k} \rangle$. This, along with \eqref{AbelianEqn}, implies that $\langle x_{k+1}\rangle =\langle x^{p^{\alpha_{k+1}}} \rangle$. Because $o(x)=p$ and $\alpha_{k+1}>0$, we get $\langle x_{k+1}\rangle =\langle e \rangle$; which is a contradiction. Thus $\alpha_k>0$. Since $o(x)=p$ and $\langle x \rangle = \langle x_{k}^{p^{\alpha_k}} \rangle$, we get $o(x_k)=p^{\alpha_k+1}$. Moreover, if $o(x_{k+1})=p^\beta$, then from \eqref{AbelianEqn} we get $o(x_k)=p^{\alpha_{k+1}+\beta}$. Hence, using the fact that $\beta \geq 1$, we get $\alpha_k \geq \alpha_{k+1}$. Then $\langle x \rangle = \langle x_{k}^{p^{\alpha_k}} \rangle \subseteq \langle x_k^{p^{\alpha_{k+1}}} \rangle =\langle x_{k+1}\rangle$, so that $x \in \langle x_{k+1} \rangle$. Therefore \eqref{pathp} holds for $i=k+1$.
So we conclude that $x \in \langle x_i \rangle$ for all $1 \leq i \leq m$, and in particular, $x \in \langle y \rangle$. Consequently, $x$ is adjacent to $y$.
\end{proof}
\begin{proposition}\label{Componentp}
If $G$ is a $p$-group, then each component of $\mathcal{G}^*(G)$ has exactly $p-1$ elements of order $p$.
\end{proposition}
\begin{proof}
Let $C$ be a component of $\mathcal{G}^*(G)$. Take $x \in V(C)$. Then $o(x)=p^{\gamma}$ for some $\gamma \in \mathbb{N}$, and $w=x^{p^{\gamma-1}}$ is an element of order $p$ in $V(C)$. So $C$ has at least one vertex of order $p$. Now let $y$ be a vertex of order $p$ in $C$. If $z$ ($\neq y$) is another vertex of order $p$ in $C$, then by \Cref{Orderp}, $y$ and $z$ are adjacent. As $o(y)=o(z)=p$, we get $\langle y\rangle=\langle z \rangle$. Since $\langle y \rangle$ has exactly $p-1$ elements of order $p$, the proof follows.
\end{proof}
Using the fact that every finite abelian group is isomorphic to a direct product of cyclic groups of prime-power order (cf. \cite[Theorem 11.1]{Gallian}) together with \Cref{Componentp}, we have the following theorem.
\begin{theorem}\label{AbelianCompo}
Let $G$ be an abelian $p$-group isomorphic to a direct product of $r$ cyclic groups. Then the number of components of $\mathcal{G}^*(G)$ is $p^{r-1}+p^{r-2}+\ldots+1$.
\end{theorem}
\begin{proof}
Let $G$ be isomorphic to $H := H_1 \times H_2 \times \ldots \times H_r$, where $H_1, H_2, \ldots ,H_r$ are cyclic $p$-groups. Then it is enough to prove the above statement for $\mathcal{G}^*(H)$.
For any $1\leq i \leq r$, $H_i$ has $p-1$ elements of order $p$ (cf. \cite[Theorem 4.4]{Gallian}), and if $(x_1,x_2,\ldots, x_r) \in H$, then $o((x_1,x_2,\ldots, x_r)) = \textnormal{lcm}(o(x_1),o(x_2),\ldots,o(x_r))$ (cf. \cite[Theorem 8.1]{Gallian}). So $H$ has $p^r-1$ elements of order $p$. Hence by \Cref{Componentp}, the number of components of $\mathcal{G}^*(H)$ is $\frac{p^r-1}{p-1}=p^{r-1}+p^{r-2}+\ldots+1$.
\end{proof}
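In the elementary abelian case $G \cong (\mathbb{Z}_p)^r$ the count can be checked directly; a small Python sketch (assuming \texttt{networkx}; the helper name is ours):
\begin{verbatim}
import itertools
import networkx as nx

def proper_power_graph(p, r):
    # proper power graph of (Z_p)^r: nonidentity tuples, x ~ y iff
    # y is a nonzero scalar multiple of x mod p (all elements have order p)
    elems = [e for e in itertools.product(range(p), repeat=r) if any(e)]
    G = nx.Graph()
    G.add_nodes_from(elems)
    for x in elems:
        for k in range(2, p):
            G.add_edge(x, tuple((k * c) % p for c in x))
    return G

for p, r in [(2, 2), (3, 2), (2, 3), (5, 2)]:
    c = nx.number_connected_components(proper_power_graph(p, r))
    print(p, r, c, sum(p**i for i in range(r)))   # expect p^{r-1}+...+1
\end{verbatim}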
Since every finite abelian group is isomorphic to a direct product of cyclic groups of prime-power order, by \Cref{AbelianCompo}, the proper power graph of a non-cyclic abelian $p$-group has more than one component. Thus we have the following corollary.
\begin{corollary}
If $G$ is a non-cyclic abelian $p$-group, then $\kappa(\mathcal{G}(G))=1$.
\end{corollary}
| {
"timestamp": "2017-03-28T02:07:42",
"yymm": "1703",
"arxiv_id": "1703.08834",
"language": "en",
"url": "https://arxiv.org/abs/1703.08834",
"abstract": "The power graph of a group $G$ is the graph whose vertex set is $G$ and two distinct vertices are adjacent if one is a power of the other. This paper investigates the minimal separating sets of power graphs of finite groups. For power graphs of finite cyclic groups, certain minimal separating sets are obtained. Consequently, a sharp upper bound for their connectivity is supplied. Further, the components of proper power graphs of $p$-groups are studied. In particular, the number of components of that of abelian $p$-groups are determined.",
"subjects": "Combinatorics (math.CO)",
"title": "On connectedness of power graphs of finite groups",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109511920161,
"lm_q2_score": 0.8031737869342623,
"lm_q1q2_score": 0.7909743410832246
} |
https://arxiv.org/abs/0808.2664 | Communication-optimal parallel and sequential QR and LU factorizations | We present parallel and sequential dense QR factorization algorithms that are both optimal (up to polylogarithmic factors) in the amount of communication they perform, and just as stable as Householder QR. We prove optimality by extending known lower bounds on communication bandwidth for sequential and parallel matrix multiplication to provide latency lower bounds, and show these bounds apply to the LU and QR decompositions. We not only show that our QR algorithms attain these lower bounds (up to polylogarithmic factors), but that existing LAPACK and ScaLAPACK algorithms perform asymptotically more communication. We also point out recent LU algorithms in the literature that attain at least some of these lower bounds. |
\section{Lower Bounds for CAQR}
\label{sec:LowerBounds_CAQR}
In this section, we review known lower bounds on communication
bandwidth for parallel and sequential $\Theta (n^3)$
matrix-matrix multiplication
of matrices stored in
2-D layouts,
extend some of them to the rectangular case, and then extend them to LU
and QR, showing that our sequential and parallel CAQR
algorithms have optimal communication complexity with respect
to both bandwidth (in a Big-Oh sense, and sometimes modulo
polylogarithmic factors).
We will also use the simple fact that if $B$ is a lower bound on the number
of words that must be communicated to implement an algorithm,
and if $W$ is the size of the local memory (in the parallel case)
or fast memory (in the sequential case),
so that $W$ is the largest possible size of a message,
then $B/W$ is a lower bound on the latency, i.e. the number
of messages needed to move $B$ words into or out of the memory.
We use this to derive lower bounds on latency, which are
also attained by our algorithms (again in a Big-Oh sense,
and sometimes modulo polylogarithmic factors).
We begin in section~\ref{SS:MMlowerbounds} by reviewing
known communication complexity bounds for
$\Theta (n^3)$ matrix multiplication,
due first to Hong and Kung \cite{hong1981io} in the sequential case,
and later proved more simply and extended to the parallel case
by Irony, Toledo and Tiskin \cite{irony2004communication}.
It is easy to extend lower bounds for matrix multiplication
to lower bounds for LU decomposition via the following
reduction of matrix multiplication to LU:
\begin{equation}\label{eq:GEMM-to-LU}
\begin{pmatrix}
I & 0 & -B \\
A & I & 0 \\
0 & 0 & I \\
\end{pmatrix}
=
\begin{pmatrix}
I & & \\
A & I & \\
0 & 0 & I \\
\end{pmatrix}
\begin{pmatrix}
I & 0 & -B \\
& I & A \cdot B \\
& & I \\
\end{pmatrix}.
\end{equation}
See \cite{grigori2008calu} for an implementation of
parallel LU that attains these bounds.
See \cite{toledo1997locality} for an implementation of sequential
LU and a proof that it attains the bandwidth lower bound
(whether the latency lower bound is attained is an open problem).
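As a sanity check, the reduction \eqref{eq:GEMM-to-LU} is easy to verify numerically; the following sketch (in Python, assuming \texttt{numpy}) builds the two block factors explicitly and reads the product $A \cdot B$ off the upper triangular factor.
\begin{verbatim}
import numpy as np

n = 4
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
I, Z = np.eye(n), np.zeros((n, n))

M = np.block([[I, Z, -B], [A, I, Z], [Z, Z, I]])
L = np.block([[I, Z, Z], [A, I, Z], [Z, Z, I]])
U = np.block([[I, Z, -B], [Z, I, A @ B], [Z, Z, I]])

assert np.allclose(L @ U, M)               # the block identity holds
assert np.allclose(U[n:2*n, 2*n:], A @ B)  # so an LU of M exposes A*B
\end{verbatim}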
It is reasonable to expect that lower bounds for matrix multiplication
will also apply (at least in a Big-Oh sense) to other
one-sided factorizations, such as QR.
As we will see, QR is not as simple as LU.
All this assumes commutative and associative
reorderings of conventional $\Theta(n^3)$
matrix multiplication,
and so excludes faster algorithms using distributivity
or special constants, such as those of
Strassen \cite{strassen1969gaussian} or Coppersmith and Winograd \cite{coppersmith1982asymptotic},
and their use in asymptotically fast versions of
LU and QR \cite{FastLinearAlgebraIsStable}.
Extending communication lower bounds to these asymptotically
faster algorithms is an open problem.
\subsection{Matrix Multiplication Lower Bounds}
\label{SS:MMlowerbounds}
We review lower bounds in \cite{hong1981io,irony2004communication}
for multiplication of two $n$-by-$n$ matrices $C = A \cdot B$
using commutative and associative (but not distributive)
reorderings of the usual $\Theta(n^3)$ algorithm.
In the sequential case, they assume that $A$ and $B$ initially reside
in slow memory, that there is a fast memory of size $W < n^2$,
and that the product $C = A \cdot B$ must be computed and eventually
reside in slow memory.
They bound from below the number of
words that need to be moved between slow memory and fast memory
to perform this task:
\begin{equation}\label{eqn_MatMul_seq_bw_lowerbound}
{\rm \#\ words\ moved} \geq \frac{n^3}{2 \sqrt{2} W^{1/2}} - W \approx
\frac{n^3}{2 \sqrt{2} W^{1/2}} \; \; .
\end{equation}
Since only $W$ words can be moved in one message, this also provides
a lower bound on the number of messages:
\begin{equation}\label{eqn_MatMul_seq_lat_lowerbound}
{\rm \#\ messages} \geq \frac{n^3}{2 \sqrt{2} W^{3/2}} - 1 \approx
\frac{n^3}{2 \sqrt{2} W^{3/2}} \; \; .
\end{equation}
In the rectangular case, where $A$ is $n$-by-$r$, $B$ is $r$-by-$m$,
and $C$ is $n$-by-$m$, so that the number of arithmetic operations in
the standard algorithm is $2mnr$, the above two results still apply,
but with $n^3$ replaced by $mnr$.
The parallel case is considered in \cite{irony2004communication}.
There is actually a spectrum of algorithms, from the so-called 2D case,
which uses little extra memory beyond that needed to store equal fractions
of the matrices $A$, $B$ and $C$
(about $3n^2/P$ words for each of $P$ processors, in the square case),
to the 3D case, where each input matrix is replicated up to $P^{1/3}$ times,
so that each processor needs memory of size $n^2/P^{2/3}$ in the square case.
We only consider the 2D case, which is the conventional, memory-scalable
approach.
In the 2D case, with square matrices, Irony et al.\ show that
if each processor has $\mu n^2 /P$ words of local
memory, and $P \geq 32 \mu^3$, then at least one of the processors
must send or receive at least the following number of words:
\begin{equation}\label{eqn_MatMul_par_bw_lowerbound}
{\rm \#\ words\ sent\ or\ received} \geq \frac{n^2}{4 \sqrt{2} (\mu P)^{1/2}}
\end{equation}
and so using at least the following number of messages
(assuming a maximum message size of $\mu n^2/P$):
\begin{equation}\label{eqn_MatMul_par_lat_lowerbound}
{\rm \#\ messages} \geq \frac{P^{1/2}}{4 \sqrt{2} (\mu)^{3/2}} \; \; .
\end{equation}
We now extend this to rectangular matrices, in preparation for
analyzing CAQR in the rectangular case.
The proof is a simple extension of Thm.~4.1 in
\cite{irony2004communication}.
\theorem{
Consider the conventional matrix multiplication algorithm applied to
$C = A \cdot B$ where
$A$ is $n$-by-$r$, $B$ is $r$-by-$m$, and $C$ is $n$-by-$m$,
implemented on a $P$ processor distributed memory parallel computer.
Let $\bar{n}$, $\bar{m}$ and $\bar{r}$ be the sorted values of
$n$, $m$, and $r$, i.e. $\bar{n} \geq \bar{m} \geq \bar{r}$.
Suppose each processor has $3\bar{n}\bar{m}/P$ words of local memory,
so that it can fit 3 times as much as $1/P$-th of the largest of the
three matrices. Then as long as
\begin{equation}\label{eqn:RectMM}
\bar{r} \geq \sqrt{\frac{864 \bar{n}\bar{m}}{P}}
\end{equation}
(i.e. none of the matrices is ``too rectangular'')
then the number of words at least one processor must send or
receive is
\begin{equation}\label{eqn:RectMM_bw}
{\rm \#\ words\ moved} \geq
\frac{\sqrt{\bar{n}\bar{m}} \cdot \bar{r}}
{\sqrt{96 P}}
\end{equation}
and the number of messages is
\begin{equation}\label{eqn:RectMM_lat}
{\rm \#\ messages} \geq
\frac{ \sqrt{P} \cdot \bar{r}}
{ \sqrt{864 \bar{n}\bar{m}} }
\end{equation}
}
\rm
\begin{proof}
We use (\ref{eqn_MatMul_seq_bw_lowerbound})
with $\bar{m} \bar{n} \bar{r}/P$ substituted for $n^3$,
since at least one processor does this much arithmetic,
and $W = 3\bar{n}\bar{m}/P$ words of local memory.
The constants in inequality (\ref{eqn:RectMM})
are chosen so that the first term in
(\ref{eqn_MatMul_seq_bw_lowerbound}) is at least $2W$,
and half the first term is a lower bound.
\end{proof}
It is well-known that the communication lower bound for
sequential matrix multiplication is attained by
``tiling'' or ``blocking'' the matrices into square
blocks of dimension $\sqrt{W/3}$, and for parallel
matrix multiplication by Cannon's algorithm \cite{cannon1969cellular}.
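The following sketch of the sequential tiled algorithm (in Python with \texttt{numpy}; the word counter reflects an idealized model in which each $b \times b$ tile transfer moves exactly $b^2$ words) illustrates how a tile size of $b = \sqrt{W/3}$, chosen so that one tile each of $A$, $B$ and $C$ fits in fast memory, yields $\Theta(n^3/W^{1/2})$ words moved, matching \eqref{eqn_MatMul_seq_bw_lowerbound} up to a constant factor.
\begin{verbatim}
import numpy as np

def tiled_matmul(A, B, b):
    # Blocked C = A @ B with square b-by-b tiles; counts words moved
    # under the idealized model that each tile transfer moves b*b words.
    n = A.shape[0]
    C = np.zeros((n, n))
    words = 0
    for i in range(0, n, b):
        for j in range(0, n, b):
            Cij = np.zeros((b, b))          # C tile held in fast memory
            for k in range(0, n, b):
                words += 2 * b * b          # load a tile of A and of B
                Cij += A[i:i+b, k:k+b] @ B[k:k+b, j:j+b]
            C[i:i+b, j:j+b] = Cij
            words += b * b                  # write the C tile back
    return C, words

n, W = 256, 3 * 32 * 32                     # fast memory holds three tiles
b = int((W // 3) ** 0.5)                    # tile size sqrt(W/3) = 32
A = np.random.randn(n, n)
B = np.random.randn(n, n)
C, words = tiled_matmul(A, B, b)
assert np.allclose(C, A @ B)
print(words, 2 * n**3 // b + n * n)         # counted == predicted here
\end{verbatim}
The counted traffic $2n^3/b + n^2 = 2\sqrt{3}\,n^3/W^{1/2} + n^2$ matches the lower bound's $n^3/W^{1/2}$ scaling.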
\subsection{Lower Bounds for CAQR}\label{SS:lowerbounds:2d}
Now we need to extend our analysis of matrix multiplication.
We assume all variables are real; extensions to the complex
case are straightforward.
Suppose $A = QR$ is $m$-by-$n$, $n$ even,
so that
\[
\bar{Q}^T \cdot \bar{A}
\equiv
\left(
Q( 1:m, 1:\frac{n}{2} )
\right)^T
\cdot
A( 1:m, \frac{n}{2}+1:n )
=
R( 1:\frac{n}{2}, \frac{n}{2}+1:n )
\equiv
\bar{R} \; \; .
\]
It is easy to see that $\bar{Q}$
depends only on the first $\frac{n}{2}$ columns of $A$, and
so is independent of $\bar{A}$. The obstacle to directly
applying existing lower bounds for matrix multiplication
of course is that $\bar{Q}$ is not represented as an explicit
matrix, and $\bar{Q}^T \cdot \bar{A}$ is not implemented by
straightforward matrix multiplication.
Nevertheless, we argue that the same data dependencies
as in matrix multiplication can be found inside many
implementations
of $\bar{Q}^T \cdot \bar{A}$,
and that therefore
the geometric ideas underlying the analysis in
\cite{irony2004communication} still apply.
Namely, there are two data structures $\tilde{Q}$
and $\tilde{A}$ indexed with pairs of subscripts
$(j,i)$ and $(j,k)$ respectively with the following properties.
\begin{itemize}
\item $\tilde{A}$ stores $\bar{A}$ as well as all intermediate
results which may overwrite $\bar{A}$.
\item $\tilde{Q}$ represents $\bar{Q}$, i.e., an $m$-by-$\frac{n}{2}$
orthogonal matrix. Such a matrix is a member of the Stiefel manifold
of orthogonal matrices, and is known to require
$\frac{mn}{2} - \frac{n}{4}(\frac{n}{2}+1)$ independent parameters
to represent, with column $i$ requiring $m-i$ parameters,
although a particular algorithm may represent
$\bar{Q}$ using more data.
\item The algorithm operates mathematically independently on each column
of $\bar{A}$, i.e., methods like that of Strassen are excluded.
This means that the algorithm performs at least
$\frac{mn}{2} - \frac{n}{4}(\frac{n}{2}+1)$ multiplications
on each $m$-dimensional column vector of $\bar{A}$
(see subsection~\ref{SS:lowerbounds:2d:flops} for a proof),
and does the same operations on each column of $\bar{A}$.
\item For each $(i,k)$ indexing $\bar{R}_{i,k}$, which is the
component of the $k$-th column $\bar{A}_{:,k}$ of $\bar{A}$ in
the direction of the $i$-th column $\bar{Q}_{:,i}$ of $\bar{Q}$,
it is possible to identify at least $m-i$ common components of $\tilde{A}_{:,k}$
and of $\tilde{Q}_{:,i}$ such that a parameter associated with
$\tilde{Q}_{j,i}$ is multiplied by a value stored in $\tilde{A}_{j,k}$.
\end{itemize}
The last point, which says that $\bar{Q}^T \cdot \bar{A}$
has at least the same dependencies as matrix multiplication,
requires illustration.
\begin{itemize}
\item Suppose $\bar{Q}$ is represented as a product of $\frac{n}{2}$
Householder reflections with a projection $\hat{Q}$ onto the
first $\frac{n}{2}$ coordinates,
$\bar{Q} =
(I - \tau_1 u_1u_1^T)
\cdots
(I - \tau_{n/2} u_{n/2} u_{n/2}^T)
\hat{Q}$,
normalized in the
conventional way where the topmost nonzero entry of each $u_j$ is one,
and $\hat{Q}$ consists of the first $n/2$ columns of the $n$-by-$n$
identity matrix.
Then $\tilde{Q}_{j,i} = u_i(j)$ is multiplied by some intermediate value of
$\bar{A}_{j,k}$, i.e. $\tilde{A}_{j,k}$.
\item Suppose $\bar{Q}$ is represented as a product of block
Householder transformations $(I-Z_1U_1^T) \cdots (I-Z_f U_f^T) \hat{Q}$
where $U_g$ and $Z_g$ are $m$-by-$b_g$ matrices, $U_g$ consisting
of $b_g$ Householder vectors side-by-side.
Again associate $\tilde{Q}_{j,i}$ with
the $j$-th entry of the $i$-th Householder vector $u_i(j)$.
\item Recursive versions of QR \cite{elmroth1998new} apply
blocked Householder transformations organized so as to better
use BLAS3, but still let us use the approach of the last bullet.
\item Suppose $\bar{Q}$ is represented as a product of
$\frac{mn}{2} - \frac{n}{4}(\frac{n}{2}+1)$ Givens rotations,
each one creating a unique subdiagonal zero entry in $A$ which is
never filled in. There are
many orders in which these zeros can be created, and possibly
many choices of row that each Givens rotation may rotate with to zero
out its desired entry.
If the desired zero entry in $A_{j,i}$ is created by
the rotation in rows $j'$ and $j$, $j'<j$,
then associate $\tilde{Q}_{j,i}$ with the value
of the cosine in the Givens rotation, since this will be multiplied
by $\bar{A}_{j,k}$.
\item Suppose, finally, that we use CAQR to perform the
QR decomposition, so that $\bar{Q} = Q_1 \cdots Q_f \hat{Q}$, where
each $Q_g$ is the result of TSQR on $b_g$ columns.
Consider without loss of generality $Q_1$, which operates
on the first $b_1$ columns of $A$.
We argue that TSQR still produces $m-i$ parameters associated
with column $i$ as the above methods. Suppose there are $P$
row blocks, each of dimension $\frac{m}{P}$-by-$b_1$.
Parallel TSQR initially does QR independently on each block, using
any of the above methods; we associate multipliers as
above with the subdiagonal entries in each block. Now
consider the reduction tree that combines $q$ different $b_1$-by-$b_1$
triangular blocks at any particular node.
This generates
$(q-1)b_1(b_1+1)/2$ parameters that multiply an equal number of entries
of the $q-1$ triangles being zeroed out, and so can be associated with
appropriate entries of $\tilde{Q}$. Following the reduction tree, we
see that parallel TSQR produces exactly
as many parameters as Householder reduction,
and that these may be associated one-for-one with all subdiagonal
entries of $\tilde{Q}(:,1:b_1)$ and $\tilde{A}(:,1:b_1)$ as above.
Sequential TSQR reduction is analogous.
\end{itemize}
We see that we have only tried to capture the dependencies
of a fraction of the arithmetic operations performed by various
QR implementations; this is all we need for a lower bound.
Now we resort to the geometric approach of
\cite{irony2004communication}: Consider a three dimensional
block of lattice points, indexed by $(i,j,k)$.
Each point on the $(i,0,k)$ face is associated with $\bar{R}_{i,k}$,
for $1 \leq i,k \leq \frac{n}{2}$.
Each point on the $(0,j,k)$ face is associated with $\tilde{A}_{j,k}$,
for $1 \leq k \leq \frac{n}{2}$ and $1 \leq j \leq m$.
Each point on the $(i,j,0)$ face is associated with $\tilde{Q}_{j,i}$,
for $1 \leq i \leq \frac{n}{2}$ and $1 \leq j \leq m$.
Finally, each interior point $(i,j,k)$ for
$1 \leq i,k \leq \frac{n}{2}$ and $1 \leq j \leq m$ represents the
multiplication $\tilde{Q}_{j,i} \cdot \tilde{A}_{j,k}$.
The point is that the multiplication at $(i,j,k)$ cannot occur
unless $\tilde{Q}_{j,i}$ and $\tilde{A}_{j,k}$ are together in memory.
Finally, we need the Loomis-Whitney inequality \cite{loomis1949inequality}:
Suppose $V$ is a set of lattice points in 3D,
$V_i$ is projection of $V$ along $i$ onto the $(j,k)$ plane,
and similarly for $V_j$ and $V_k$. Let $|V|$ denote the cardinality of $V$,
i.e. counting lattice points. Then
$|V|^2 \leq |V_i| \cdot |V_j| \cdot |V_k|$.
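For intuition, the inequality is easy to test numerically on random sets of lattice points (a small Python sketch):
\begin{verbatim}
import random

random.seed(0)
V = {(random.randrange(8), random.randrange(8), random.randrange(8))
     for _ in range(100)}
Vi = {(j, k) for (i, j, k) in V}          # projection along i
Vj = {(i, k) for (i, j, k) in V}          # projection along j
Vk = {(i, j) for (i, j, k) in V}          # projection along k
assert len(V)**2 <= len(Vi) * len(Vj) * len(Vk)
print(len(V)**2, len(Vi) * len(Vj) * len(Vk))
\end{verbatim}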
We can now state
\lemma{
Suppose a processor with local (fast) memory of size $W$ is participating
in the QR decomposition of an $m$-by-$n$ matrix, $m \geq n$, using an
algorithm of the sort discussed above.
There may or may not be other processors participating (i.e. this lemma covers
the sequential and parallel cases). Suppose the processor performs $F$
multiplications. Then the processor must move
the following number of words into or out of its memory:
\begin{equation}\label{Thm:1_bw}
{\rm \#\ of\ words\ moved} \geq \frac{F}{(8W)^{1/2}} - W
\end{equation}
using at least the following number of messages:
\begin{equation}\label{Thm:1_lat}
{\rm \#\ of\ messages} \geq \frac{F}{(8W^3)^{1/2}} - 1
\end{equation}
}
\label{lemma:LB}
\rm
\begin{proof}
The proof closely follows that of Lemma~3.1 in \cite{irony2004communication}.
We decompose the computation into phases. Phase $l$ begins when the
total number of words moved into and out of memory is exactly $lW$.
Thus in each phase, except perhaps the last, the memory loads and stores
exactly $W$ words.
The number of words $n_A$ from different $\tilde{A}_{jk}$ that the processor
can access in its memory during a phase is at most $2W$, since each word was
either present at the beginning of the phase or read during the phase.
Similarly the number of coefficients $n_Q$ from different $\tilde{Q}_{ji}$
also satisfies $n_Q \leq 2W$. Similarly, the number $n_R$ of locations
into which intermediate results like $\tilde{Q}_{ji} \cdot \tilde{A}_{jk}$
can be accumulated or stored is at most $2W$. Note that these intermediate
results could conceivably be stored or accumulated in $\tilde{A}$ because
of overwriting; this does not affect the upper bound on $n_R$.
By the Loomis-Whitney inequality, the maximum number of useful multiplications
that can be done during a phase (i.e. assuming intermediate results are
not just thrown away) is bounded by
$\sqrt{n_A \cdot n_Q \cdot n_R} \leq \sqrt{8W^3}$. Since the processor does
$F$ multiplications, the number of full phases required is at least
\[
\left\lfloor \frac{F}{\sqrt{8W^3}} \right\rfloor \geq \frac{F}{\sqrt{8W^3}} -1
\]
so the total number of words moved is $W$ times larger, i.e. at least
\[
{\rm \#\ of\ words\ moved} \geq
\frac{F}{\sqrt{8W}} -W \; \; .
\]
The number of messages follows by dividing by $W$, the maximum
message size.
\end{proof}
\rm
The following is our main result for sequential CAQR:
\corollary{
Consider a single processor computing the QR decomposition of
an $m$-by-$n$ matrix with $m \geq n$, using an algorithm of the
sort discussed above. Then the number of words moved between
fast and slow memory is at least
\begin{equation}\label{Thm:2_bw}
{\rm \#\ of\ words\ moved} \geq \frac
{\frac{mn^2}{4}-\frac{n^2}{8}(\frac{n}{2}+1)}
{(8W)^{1/2}} - W
\geq \frac{3n^2(m - \frac{4}{3})}{16(8W)^{1/2}} - W
\end{equation}
using at least the following number of messages:
\begin{equation}\label{Thm:2_lat}
{\rm \#\ of\ messages} \geq \frac
{\frac{mn^2}{4}-\frac{n^2}{8}(\frac{n}{2}+1)}
{(8W^3)^{1/2}} - 1
\geq \frac{3n^2(m - \frac{4}{3})}{16(8W^3)^{1/2}} - 1
\end{equation}
}
\label{corollary:SeqCAQR}
\rm
\begin{proof}
The proof follows easily from Lemma~\ref{lemma:LB} by
using the lower bound
$F \geq {\frac{mn^2}{4}-\frac{n^2}{8}(\frac{n}{2}+1)}$ on the number
of multiplications by any algorithm in the class discussed above
(see Lemma~\ref{lemma:F_lowerbound} in
subsection~\ref{SS:lowerbounds:2d:flops} for a proof).
\end{proof}
\rm
The lower bound could be increased by a constant factor by using
a specific number of multiplications (say $mn^2 - n^3 / 3$
using Householder reductions), instead of arguing more generally based on
the number of parameters needed to represent orthogonal matrices.
Comparing to the performance model in
Section~\ref{sec:CAQR_sequential}, especially
Table~\ref{tbl:CAQR:seq:model:opt},
we see that sequential CAQR attains these bounds to within a
constant factor.
The following is our main result for parallel CAQR:
\corollary{
Consider a parallel computer with $P$ processors
and $W$ words of memory per processor
computing the QR decomposition of
an $m$-by-$n$ matrix with $m \geq n$, using an algorithm of the
sort discussed above.
Then the number of words sent and received
by at least one processor
is at least
\begin{equation}\label{Thm:3a_bw}
{\rm \#\ of\ words\ moved} \geq \frac
{\frac{mn^2}{4}-\frac{n^2}{8}(\frac{n}{2}+1)}
{P(8W)^{1/2}} - W
\geq \frac{3n^2(m - \frac{4}{3})}{16P(8W)^{1/2}} - W
\end{equation}
using at least the following number of messages:
\begin{equation}\label{Thm:3a_lat}
{\rm \#\ of\ messages} \geq \frac
{\frac{mn^2}{4}-\frac{n^2}{8}(\frac{n}{2}+1)}
{P(8W^3)^{1/2}} - 1
\geq \frac{3n^2(m - \frac{4}{3})}{16P(8W^3)^{1/2}} - 1
\end{equation}
In particular, when each processor has $W = mn/P$ words of memory
and the matrix is not too rectangular ($n \geq \frac{2^{11}m}{P}$),
the number of words sent and received
by at least one processor is at least
\begin{equation}\label{Thm:3b_bw}
{\rm \#\ of\ words\ moved} \geq
\sqrt{\frac{m n^3}{2^{11}P}}
\end{equation}
using at least the following number of messages:
\begin{equation}\label{Thm:3b_lat}
{\rm \#\ of\ messages} \geq
\sqrt{\frac{nP}{2^{11}m}} \; \; .
\end{equation}
In particular, in the square case $m=n$, as long as $P \geq 2^{11}$,
the number of words sent and received
by at least one processor is at least
\begin{equation}\label{Thm:4b_bw}
{\rm \#\ of\ words\ moved} \geq
{\frac{n^2}{2^{11/2}P^{1/2}}}
\end{equation}
using at least the following number of messages:
\begin{equation}\label{Thm:4b_lat}
{\rm \#\ of\ messages} \geq
\sqrt{\frac{P}{2^{11}}} \; \; .
\end{equation}
}
\rm
\begin{proof}
The result follows from the previous Corollary, since
at least one processor must perform at least $1/P$-th of the multiplications.
\end{proof}
Comparing to the performance model in Section~\ref{sec:CAQR_parallel},
especially Table~\ref{tbl:CAQR:par:model:opt}, we see that
parallel CAQR attains these bounds to within a constant factor.
\subsection{Lower Bounds on Flop Counts for QR}
\label{SS:lowerbounds:2d:flops}
This section proves lower bounds on arithmetic for {\em any} ``columnwise''
implementation of QR, by which we mean one whose operations can be reordered
so as to be left looking, i.e. the operations that compute column $i$
of $Q$ and $R$ depend on data only in columns 1 through $i$ of $A$.
The mathematical dependencies are such that column $i$ of $Q$ and $R$
depends only on columns 1 through $i$ of $A$, but requiring that the
operations depend only on these columns eliminates algorithms like Strassen.
(It is known that QR can be done asymptotically as fast as any fast
matrix multiplication algorithm like Strassen, and stably
\cite{FastLinearAlgebraIsStable}.)
This section explains where the lower bound on $F$ used in the proof of
Corollary~\ref{corollary:SeqCAQR} above comes from.
The intuition is as follows.
Suppose $A = QR$ is $m$-by-$(j+1)$,
so that
\[
\bar{Q}^T \cdot \bar{A}
\equiv (Q(1:m,1:j))^T \cdot A(1:m,j+1) =
R(1:j, j+1)
\equiv \bar{R} \; \; .
\]
where $\bar{Q}$ only depends on the first $j$ columns of $A$, and
is independent of $\bar{A}$. As an arbitrary $m$-by-$j$ orthogonal
matrix, a member of the Stiefel manifold of dimension
$mj-j(j+1)/2$, $\bar{Q}$ requires $mj-j(j+1)/2$ independent
parameters to represent. We will argue that, no matter how $\bar{Q}$
is represented, i.e. without appealing to the special structure of
Givens rotations or Householder transformations, unless
$mj-j(j+1)/2$ multiplications are performed, $\bar{R}$
cannot be computed correctly, because it cannot depend on
enough parameters.
Assuming for a moment that this is true, we get a lower bound
on the number of multiplications needed for QR on an $m$-by-$n$ matrix
by summing
$\sum_{j=1}^{n-1} [mj-j(j+1)/2] = \frac{mn^2}{2} - \frac{n^3}{6} + O(mn)$.
The two leading terms are half the multiplication count for Householder
QR (and one fourth of the total operation count, including additions).
So the lower bound is rather tight.
Again assuming this is true, we get a lower bound on the
value $F$ in Corollary~\ref{corollary:SeqCAQR}: the per-column count
$mj-j(j+1)/2$ is increasing in $j$ for $j < m$, so each of columns
$\frac{n}{2}+1$ through $n$ requires at least the $j=\frac{n}{2}$
count, giving
$F \geq \frac{n}{2} \cdot (m\frac{n}{2} - \frac{n}{2}(\frac{n}{2}+1)/2)
= \frac{mn^2}{4} - \frac{n^2}{8}(\frac{n}{2}+1)$.
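The following throwaway check (Python, exact rational arithmetic)
confirms these closed forms for one sample size; the choice of $m$ and
$n$ is arbitrary, with $m \geq n$ and $n$ even.
\begin{lstlisting}[language=Python]
from fractions import Fraction as Fr

m, n = 1000, 400

# Exact sum of the per-column lower bounds m*j - j*(j+1)/2, j = 1..n-1,
# versus the two leading terms m*n^2/2 - n^3/6; the gap is O(mn).
total = sum(Fr(m * j) - Fr(j * (j + 1), 2) for j in range(1, n))
assert abs(total - (Fr(m * n**2, 2) - Fr(n**3, 6))) <= m * n

# Columns n/2+1..n (i.e., j = n/2..n-1): each needs at least the
# j = n/2 count, since m*j - j*(j+1)/2 is increasing for j < m.
# This reproduces the bound on F quoted above.
per_col = Fr(m * n, 2) - Fr(n, 2) * (Fr(n, 2) + 1) / 2
assert (n // 2) * per_col == Fr(m * n**2, 4) - Fr(n**2, 8) * (Fr(n, 2) + 1)
\end{lstlisting}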
Now we prove the main assertion, that $mj-j(j+1)/2$ multiplications are needed
to compute the single column $\bar{R} = \bar{Q}^T \cdot \bar{A}$, no matter how
$\bar{Q}$ is represented. We model the computation as a DAG (directed
acyclic graph) of operations with the following properties, which we
justify as we state them.
\begin{enumerate}
\item There are $m$ input nodes labeled by the $m$ entries of $\bar{A}$,
$a_{1,j+1}$ through $a_{m,j+1}$.
We call these $\bar{A}$-input nodes for short.
\item There are at least $mj-j(j+1)/2$ input nodes labeled by parameters
representing $\bar{Q}$, since this many parameters are needed to
represent a member of the Stiefel manifold.
We call these $\bar{Q}$-input nodes for short.
\item There are two types of computation nodes, addition and multiplication.
In other words, we assume that we do not do divisions, square roots, etc.
Since we are only doing matrix multiplication, this is reasonable.
We note that any divisions or square roots in the overall algorithm
may be done in order to compute the parameters representing $\bar{Q}$.
Omitting these from consideration only lowers our lower bound
(though not by much).
\item There are no branches in the algorithm. In other words, the
way an entry of $\bar{R}$ is computed does not depend on the numerical
values. This assumption reflects current algorithms, but could in fact
be eliminated as explained later.
\item Since the computation nodes only do multiplication and addition,
we may view the output of each node as a polynomial in entries of $\bar{A}$
and parameters representing $\bar{Q}$.
\item We further restrict the operations performed so that the output of
any node must be a homogeneous linear polynomial in the entries of $\bar{A}$.
In other words, we never multiply two quantities depending on entries
of $\bar{A}$ to get a quadratic or higher order polynomial,
or add a constant or parameter depending on $\bar{Q}$ to an entry of
$\bar{A}$. This is
natural, since the ultimate output is linear and homogeneous in $\bar{A}$,
and any higher degree polynomial terms or constant terms would have to
be canceled away. No current or foreseeable algorithm (even Strassen based)
would do this, and numerical stability would likely be lost.
\item There are $j$ output nodes labeled by the entries of $\bar{R}$,
$r_{1,j+1}$ through $r_{j,j+1}$.
\end{enumerate}
The final requirement means that multiplication nodes are only allowed
to multiply $\bar{Q}$-input nodes and homogeneous linear functions
of $\bar{A}$, including $\bar{A}$-input nodes.
Addition nodes may add homogeneous linear functions of $\bar{A}$
(again including $\bar{A}$-input nodes), but not add $\bar{Q}$-input nodes
to homogeneous linear functions of $\bar{A}$.
We exclude the possibility of adding or multiplying $\bar{Q}$-input nodes,
since the results of these could just be represented as additional
$\bar{Q}$-input nodes.
Thus we see that the algorithm represented by the DAG just described
outputs $j$ polynomials that are homogeneous and linear in $\bar{A}$.
Let $M$ be the total number of multiplication nodes in the DAG.
We now want to argue that unless $M \geq mj-j(j+1)/2$,
these output polynomials cannot possibly compute the right answer.
We will do this by arguing that the dimension of
a certain algebraic variety the outputs define is bounded above by $M$,
yet must be at least $mj-j(j+1)/2$ to get the right answer.
Number the output nodes from $1$ to $j$.
The output polynomial representing node $i$ can be written as
$\sum_{k=1}^m p_{k,i} (\bar{Q}) a_{k,j+1}$, where $p_{k,i}(\bar{Q})$ is
a polynomial in the values of the $\bar{Q}$-input nodes. According
to our rules for DAGs above, only multiplication nodes can introduce
a dependence on a previously unused $\bar{Q}$-input node, so
all the $p_{k,i}(\bar{Q})$
can only depend on $M$ independent parameters.
Finally, viewing each output node as a vector
of $m$ coefficient polynomials
\linebreak
$(p_{1,i} (\bar{Q}),...,p_{m,i} (\bar{Q}))$,
we can view the entire output as a vector of $mj$ coefficient polynomials
$V(\bar{Q}) = (p_{1,1}(\bar{Q}),...,p_{m,j}(\bar{Q}))$,
depending on $M$ independent parameters.
This vector of length $mj$ needs to represent the set of
all $m$-by-$j$ orthogonal matrices. But the Stiefel manifold of
such orthogonal matrices has dimension $mj-j(j+1)/2$, so the surface
defined by $V$ has to have at least this dimension, i.e. $M \geq mj-j(j+1)/2$.
As an extension, we could add branches to our algorithm by noting that the
output of our algorithm would be piecewise polynomials, on regions
whose boundaries are themselves defined by varieties in the
same homogeneous linear polynomials. We can apply the above argument
to all the regions with nonempty interiors to argue that the same
number of multiplications is needed.
In summary, we have proven
\lemma{
Suppose we are doing the QR factorization of an $m$-by-$n$
matrix using any ``columnwise'' algorithm in the sense described
above. Then at least $mj - j(j+1)/2$ multiplications are required
to compute column $j+1$ of $R$, and at least
$\frac{mn^2}{4} - \frac{n^2}{8}(\frac{n}{2} + 1)$
multiplications to compute columns $\frac{n}{2}+1$ through $n$
of $R$.
}
\label{lemma:F_lowerbound}
\rm
\endinput
\begin{comment}
\subsection{Lower Bounds on Communication for CAQR (version 2)}
\label{SS:lowerbounds:2d:V2}
I tried to extend the argument of the last section to a communication
lower bound more general than the one in Section~\ref{SS:lowerbounds:2d}.
This would require showing that the dependencies in any columnwise
implementation of QR
could be organized so as to permit the Loomis-Whitney-style analysis
of that section. But it is a challenge to do this in a way that
sheds any more light than before, without making
so many assumptions about the algorithm that it would be hard to see
how it would apply to anything beyond the list of algorithms in
Section~\ref{SS:lowerbounds:2d}
(Givens, Householder, block Householder, recursive Householder, CAQR,
and mixtures thereof).
In this case, I think these algorithms are easier to understand
case-by-case than by finding a rather abstract common structure.
So I propose to leave
Section~\ref{SS:lowerbounds:2d}
as is, unless someone else has a better idea.
\end{comment}
\endinput
\section{Communication-Avoiding QR - CAQR}
\label{sec:CAQR_optimal}
We present the CAQR algorithm for computing the
QR factorization of an $m$-by-$n$ matrix $A$, with $m \geq n$.
In the parallel case $A$ is stored on a two-dimensional grid of
processors $P = P_r \times P_c$ in a 2-D block-cyclic layout,
with blocks of dimension $b \times b$.
We assume that all the blocks have the same size;
we can always pad the input matrix with zero rows and columns to
ensure this is possible.
In the sequential case we also assume $A$ is stored in a
$P_r \times P_c$ 2-D blocked layout, with individual
$\frac{m}{P_r}$-by-$\frac{n}{P_c}$
blocks stored contiguously in memory.
For a detailed description of the 2-D
block cyclic layout, see \cite{scalapackusersguide}.
Stated most simply, parallel (resp. sequential) CAQR simply
implements the right-looking QR factorization using parallel
(resp. sequential) TSQR as the panel factorization.
The rest is bookkeeping.
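To make that structure concrete, here is a short serial skeleton (our
own sketch for exposition, not the parallel implementation), with
NumPy's QR standing in for TSQR in the panel factorization; the panel
width and test matrix are arbitrary choices.
\begin{lstlisting}[language=Python]
import numpy as np

def blocked_qr_rightlooking(A, b):
    """Right-looking blocked QR: factor a panel, then apply Q^T to the
    trailing matrix. In CAQR the panel step is (parallel) TSQR."""
    m, n = A.shape
    R = A.copy()
    for j in range(0, n, b):
        # Panel factorization (TSQR's role in CAQR).
        Q, _ = np.linalg.qr(R[j:, j:j+b], mode='complete')
        # Apply Q^T to the panel and the trailing matrix.
        R[j:, j:] = Q.T @ R[j:, j:]
    return np.triu(R[:n, :])

A = np.random.default_rng(0).standard_normal((500, 200))
R = blocked_qr_rightlooking(A, b=32)
# R matches NumPy's R up to the usual row-sign ambiguity:
assert np.allclose(np.abs(R), np.abs(np.linalg.qr(A, mode='r')))
\end{lstlisting}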
Section~\ref{sec:CAQR_parallel} discusses parallel CAQR in
more detail and compares its performance to ScaLAPACK's.
We also show, given $m$, $n$ and $P$, how to
choose $P_r$, $P_c$ and $b$ to minimize the running times
of both algorithms; our proof of CAQR's optimality depends
on these choices. Section~\ref{sec:CAQR_sequential} does the
same for sequential CAQR and for an out-of-DRAM algorithm from
ScaLAPACK, whose floating-point operations we count as though
executed sequentially.
Subsection~\ref{sec:seq_qr_other} discusses other sequential
QR algorithms, including showing that recursive QR routines of Elmroth
and Gustavson \cite{elmroth2000applying} also minimize
bandwidth, though possibly not latency.
\subsection{Parallel CAQR}
\label{sec:CAQR_parallel}
We describe a few details
most relevant to the complexity but refer the reader to
\cite[Section 13]{TSQR_technical_report} for details.
At the $j$-th step of the algorithm, parallel TSQR
is used to factor the panel of dimension $(m-(j-1)b)$-by-$b$
whose top left corner is at matrix diagonal entry $(j-1)b+1$.
We assume for simplicity that the $m_j = m-(j-1)b$ rows are
distributed across all $P_r$ processors in the processor column.
When we do parallel TSQR on the panel, all the at most
$\frac{m}{P_r}$ local rows of the panel stored on a processor are factored together
in the first step of TSQR. After the panel factorization,
we multiply the transpose of the $Q$ factor times the trailing
submatrix as follows. First, the Householder vectors representing the $Q$ factor
of the $\frac{m}{P_r}$ local rows of the panel are broadcast to all the processes in the
same processor row, and applied to their submatrices in an
embarrassingly parallel fashion.
Second, the Householder vectors $Y$ of the smaller $Q$ factors in TSQR's
binary reduction tree are independently broadcast along their processor rows,
and the updates to the $b$ rows in each pair of processors are performed
in parallel, with the triangular $T$ factor of the block Householder
transformation $I - YTY^T$ being computed by one of the two processors,
and with the two processors exchanging only $b$ rows of data.
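For concreteness, here is a minimal serial sketch of the two
ingredients in this update (our own illustration, using normalized
reflector vectors with $\tau_j = 2$ rather than LAPACK's scaling
convention): a panel factorization producing Householder vectors $Y$,
the triangular factor $T$ of the representation $Q = I - YTY^T$, and
the trailing-matrix update $C \leftarrow Q^T C$.
\begin{lstlisting}[language=Python]
import numpy as np

def householder_qr_panel(B):
    """QR of a panel B (m >= b): returns unit-norm reflector vectors Y
    and multipliers tau, with H_b ... H_1 B = [R; 0] and
    H_j = I - tau_j v_j v_j^T."""
    m, b = B.shape
    R, Y, tau = B.copy(), np.zeros((m, b)), np.full(b, 2.0)
    for j in range(b):
        x = R[j:, j]
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        v /= np.linalg.norm(v)
        Y[j:, j] = v
        R[j:, j:] -= 2.0 * np.outer(v, v @ R[j:, j:])
    return Y, tau, np.triu(R[:b, :])

def build_T(Y, tau):
    """Triangular T with H_1 ... H_b = I - Y T Y^T (forward accumulation)."""
    b = Y.shape[1]
    T = np.zeros((b, b))
    for j in range(b):
        T[j, j] = tau[j]
        T[:j, j] = -tau[j] * (T[:j, :j] @ (Y[:, :j].T @ Y[:, j]))
    return T

rng = np.random.default_rng(0)
B, C = rng.standard_normal((12, 3)), rng.standard_normal((12, 5))
Y, tau, R = householder_qr_panel(B)
T = build_T(Y, tau)
C -= Y @ (T.T @ (Y.T @ C))   # trailing-matrix update C := Q^T C
# Self-check: Q^T B = [R; 0] for the explicit Q = I - Y T Y^T.
Q = np.eye(12) - Y @ T @ Y.T
assert np.allclose(Q.T @ B, np.vstack([R, np.zeros((9, 3))]))
\end{lstlisting}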
Table~\ref{tbl:CAQR:par:model} summarizes the operation counts,
including divisions counted separately, as well as a similar
model for ScaLAPACK's PDGEQRF for comparison.
We make the following observations.
Parallel CAQR does slightly more
flops than ScaLAPACK (but only in lower-order terms), and sends
nearly the same number of words (actually very slightly fewer).
But CAQR reduces the $3n \log P_r$ term in ScaLAPACK's message
count by a factor of $b$, and so can reduce the overall
message count by as much as a factor of $b$ (depending on $P_r$ and $P_c$).
Thus by increasing the block size $b$, we can lower the number of messages
by a large factor. But we cannot raise $b$ arbitrarily without
increasing the flop count; next we show how to
choose the parameters $b$, $P_r$ and $P_c$ to minimize the runtime.
\begin{table}[h]
\small
\centering
\begin{tabular}{l | l}
& Parallel CAQR \\ \hline
\# messages & $\frac{3n}{b} \log P_r + \frac{2n}{b} \log P_c$ \\ \hline
\# words & $\left(
\frac{n^2}{P_c}
+ \frac{bn}{2}
\right) \log P_r
+ \left(
\frac{mn - n^2/2}{P_r} + 2n
\right) \log P_c$ \\ \hline
\# flops & $\frac{2n^2(3m-n)}{3P}
+ \frac{bn^2}{2P_c}
+ \frac{3bn(2m - n)}{2P_r}
+ \left( \frac{4 b^2 n}{3}
+ \frac{n^2 (3b+5)}{2 P_c}
\right) \log P_r
- b^2 n$ \\ \hline
\# divisions & $\frac{mn - n^2/2}{P_r}
+ \frac{bn}{2} \left( \log P_r - 1 \right)$
\\ \hline \hline
& ScaLAPACK's \texttt{PDGEQRF} \\ \hline
\# messages & $3n \log P_r + \frac{2n}{b} \log P_c$ \\ \hline
\# words & $\left(
\frac{n^2}{P_c}
+ bn
\right) \log P_r
+ \left(
\frac{mn - n^2/2}{P_r}
+ \frac{bn}{2}
\right) \log P_c$ \\ \hline
\# flops & $\frac{2n^2(3m-n)}{3P}
+ \frac{bn^2}{2P_c}
+ \frac{3bn(2m - n)}{2P_r}
- \frac{b^2 n}{3 P_r}$
\\ \hline
\# divisions & $\frac{mn - n^2/2}{P_r}$ \\
\end{tabular}
\caption{Performance models of parallel CAQR and ScaLAPACK's
\lstinline!PDGEQRF! when factoring an $m \times n$ matrix, $m \geq n$,
distributed in a 2-D block cyclic layout on a $P_r \times P_c$
grid of processors with square $b \times b$ blocks. All terms are
counted along the critical path. In this table exclusively, ``flops''
only includes floating-point additions and multiplications, not
floating-point divisions, which are shown separately.
Some lower-order terms are omitted.}
\label{tbl:CAQR:par:model}
\end{table}
When choosing $b$, $P_r$, and $P_c$ to minimize the runtime,
we require that they satisfy the following conditions:
\begin{equation}\label{eq:CAQR:par:opt:ansatz:constraints}
1 \leq P_r, P_c \leq P
\; \; , \;
P_r \cdot P_c = P
\; \; , \;
1 \leq b \leq \frac{m}{P_r}
\; \; {\rm and} \;
1 \leq b \leq \frac{n}{P_c}
\end{equation}
For simplicity we will assume that $P_r$ evenly divides $m$ and that $P_c$
evenly divides $n$.
Example values of $b$, $P_r$, and $P_c$ which satisfy the
constraints in Equation \eqref{eq:CAQR:par:opt:ansatz:constraints} are
\[
P_r = \sqrt{\frac{m P}{n}}
\; \; , \;
P_c = \sqrt{\frac{n P}{m}}
\; \; {\rm and} \;
b = \sqrt{\frac{m n}{P}}
\]
These values are chosen simultaneously to minimize the approximate
number of words sent, $n^2/P_c + mn/P_r$, and the approximate number
of messages, $5n/b$, where for simplicity we temporarily ignore
logarithmic factors and lower-order terms in Table
\ref{tbl:CAQR:par:model}. This suggests using the following ansatz:
\begin{equation}\label{eq:CAQR:par:opt:ansatz}
P_r = K \cdot \sqrt{\frac{m P}{n}}
\; \; , \;
P_c = \frac{1}{K} \cdot \sqrt{\frac{n P}{m}}
\; \; \text{and} \;
b = B \cdot \sqrt{\frac{m n}{P}},
\end{equation}
for general values of $K$ and $B \leq \min\{ K, 1/K \}$, since we can
thereby explore all possible values of $b$, $P_r$ and $P_c$ satisfying
\eqref{eq:CAQR:par:opt:ansatz:constraints}.
Using the substitutions in Equation \eqref{eq:CAQR:par:opt:ansatz},
the flop count (neglecting lower-order terms, including the division
counts) becomes
\begin{multline}\label{eq:CAQR:par:opt:ansatz:flops}
\frac{mn^2}{P} \left(
2
- B^2
+ \frac{3B}{K}
+ \frac{B K}{2}
\right)
-
\frac{n^3}{P} \left(
\frac{2}{3}
+ \frac{3B}{2K}
\right)
+ \\
\frac{mn^2 \log\left( K \cdot \sqrt{\frac{mP}{n}} \right)}{P} \left(
\frac{4B^2}{3}
+ \frac{3 B K}{2}
\right).
\end{multline}
We wish to choose $B$ and $K$ so as to minimize the flop count. We
know at least that we need to eliminate the dominant $mn^2
\log(\dots)$ term, so that parallel CAQR has the same asymptotic flop
count as ScaLAPACK's \lstinline!PDGEQRF!. This is because we know
that CAQR performs at least as many floating-point operations
(asymptotically) as \lstinline!PDGEQRF!, so matching the highest-order
terms will help minimize CAQR's flop count.
To make the high-order terms of \eqref{eq:CAQR:par:opt:ansatz:flops}
match the $2mn^2/P - 2n^3/(3P)$ flop count of ScaLAPACK's parallel QR
routine, while minimizing communication as well, we can pick $K=1$ and
\[
B = o\left(
\log^{-1}\left(
\sqrt{\frac{ m P }{ n }}
\right)
\right);
\]
for simplicity we will use
\begin{equation}\label{eq:CAQR:par:opt:flops:B}
B = \log^{-2} \left(
\sqrt{\frac{ m P }{ n }}
\right)
\end{equation}
although $B$ could be multiplied by
some positive constant.
The above choices of $B$ and $K$ make the flop count as follows, with
some lower-order terms omitted:
\begin{equation}\label{eq:CAQR:par:opt:flops}
\frac{2mn^2}{P}
- \frac{2n^3}{3P}
+ \frac{3 m n^2}{P \log\left( \frac{m P}{n} \right)}
\end{equation}
Thus, we can choose the block size $b$ so as to match the higher-order
terms of the flop count of ScaLAPACK's parallel QR factorization
\lstinline!PDGEQRF!.
Using the substitutions in Equations \eqref{eq:CAQR:par:opt:ansatz}
and \eqref{eq:CAQR:par:opt:flops:B} with $K = 1$, the number of
messages becomes
\begin{equation}
\label{eq:CAQR:par:opt:lat}
\sqrt{\frac{n P}{m}}
\cdot \log^2\left( \sqrt{\frac{m P}{n}} \right)
\cdot \log\left( P \sqrt{\frac{m P}{n}} \right).
\end{equation}
\begin{comment}
The best we can do with the latency is to make $C$ as large as
possible, which makes the block size $b$ as large as possible. The
value $C$ must be a constant, however; specifically, the flop counts
require
\[
\begin{aligned}
C &= \Omega\left( \log^{-2} \left( K \sqrt{\frac{m P}{n}} \right)
\right)\,\text{and} \\
C &= O(1).
\end{aligned}
\]
We leave $C$ as a tuning parameter in the number of messages Equation
\eqref{eq:CAQR:par:opt:lat}.
\end{comment}
Using the substitutions in Equation \eqref{eq:CAQR:par:opt:ansatz}
and \eqref{eq:CAQR:par:opt:flops:B}, the number of words transferred
between processors on the critical path, neglecting lower-order terms,
becomes
\begin{multline}\label{eq:CAQR:par:opt:bw}
\sqrt{\frac{m n^3}{P}} \log P
- \frac{1}{4} \sqrt{\frac{n^5}{m P}} \log \left( \frac{n P}{m} \right)
+ \frac{1}{4} \sqrt{\frac{m n}{P}} \log^3\left( \frac{m P}{n} \right)
\approx \\
\sqrt{\frac{m n^3}{P}} \log P
- \frac{1}{4} \sqrt{\frac{n^5}{m P}} \log \left( \frac{n P}{m} \right).
\end{multline}
\begin{comment}
In the second step above, we eliminated the $C$ term, as it is a
lower-order term (since $m \geq n$). Thus, $C$ only has a significant
effect on the number of messages and not the number of words
transferred.
\end{comment}
The results of these computations are shown in
Table \ref{tbl:CAQR:par:model:opt}, which also shows
the results for ScaLAPACK, whose analogous
analysis appears in
\cite[Section 15]{TSQR_technical_report},
and the communication lower bounds, which are discussed
in Section~\ref{sec:LowerBounds_CAQR}.
\begin{comment}
the number of messages and
number of words used by parallel CAQR and ScaLAPACK when $P_r$, $P_c$,
and $b$ are independently chosen so as to minimize the runtime models,
as well as the optimal choices of these parameters. In summary, if we
choose $b$, $P_r$, and $P_c$ independently and optimally for both
algorithms, the two algorithms match in the number of flops and words
transferred, but CAQR sends a factor of $\Theta(\sqrt{mn/P})$ messages
fewer than ScaLAPACK QR. This factor is the local memory requirement
on each processor, up to a small constant.
\end{comment}
\begin{table}[h!]
\centering
\begin{tabular}{l | l}
& Parallel CAQR w/ optimal $b$, $P_r$, $P_c$ \\ \hline
\# flops & $\frac{2mn^2}{P} - \frac{2n^3}{3P}$ \\
\# messages & $\frac{1}{4}
\sqrt{\frac{n P}{m}}
\log^2\left(
\frac{m P}{n}
\right)
\cdot \log\left(
P \sqrt{\frac{m P}{n}}
\right)$ \\
\# words & $\sqrt{\frac{m n^3}{P}} \log P
- \frac{1}{4} \sqrt{\frac{n^5}{m P}}
\log\left( \frac{n P}{m} \right)$ \\
Optimal $b$ & $ \sqrt{\frac{m n}{P}}
\log^{-2} \left( \frac{m P}{n} \right)$ \\
Optimal $P_r$ & $\sqrt{\frac{m P}{n}}$ \\
Optimal $P_c$ & $\sqrt{\frac{n P}{m}}$
\\ \hline \hline
& \texttt{PDGEQRF} w/ optimal $b$, $P_r$, $P_c$ \\ \hline
\# flops & $\frac{2mn^2}{P} - \frac{2n^3}{3P}$ \\
\# messages & $\frac{n}{4}
\log\left( \frac{m P^5}{n} \right)
\log\left( \frac{m P}{n} \right)
+ \frac{3n}{2} \log\left( \frac{m P}{n} \right)$ \\
\# words & $\sqrt{\frac{m n^3}{P}} \log P
- \frac{1}{4} \sqrt{\frac{n^5}{m P}}
\log\left( \frac{n P}{m} \right)$ \\
Optimal $b$ & $ \sqrt{\frac{mn}{P}}
\log^{-1}\left( \frac{m P}{n} \right)$ \\
Optimal $P_r$ & $\sqrt{\frac{m P}{n}}$ \\
Optimal $P_c$ & $\sqrt{\frac{n P}{m}}$
\\ \hline \hline
& Theoretical lower bound \\ \hline
\# messages & $\sqrt{\frac{n P}{2^{11} m}}$ \\
\# words & $\sqrt{\frac{mn^3}{2^{11} P}}$ \\
\end{tabular}
\caption{Highest-order terms in the performance models of parallel
CAQR, ScaLAPACK's \lstinline!PDGEQRF!, and theoretical lower bounds
for each, when factoring an $m \times n$ matrix, distributed in a
2-D block cyclic layout on a $P_r \times P_c$ grid of processors
with square $b \times b$ blocks. All terms are counted along the
critical path. The theoretical lower bounds assume that $n \geq
2^{11} m / P$, i.e., that the matrix is not too tall and skinny.
In summary, if we choose $b$, $P_r$, and $P_c$ independently and
optimally for both algorithms, the two algorithms match in the number of flops and words
transferred, but CAQR sends a factor of $\Theta(\sqrt{mn/P})$ messages
fewer than ScaLAPACK QR.
This factor is the local memory requirement on each processor, up to a
small constant.}
\label{tbl:CAQR:par:model:opt}
\end{table}
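The optimal choices in Table~\ref{tbl:CAQR:par:model:opt} are easy to
evaluate programmatically. The helper below is a hypothetical utility
(ours, not library code) that computes $b$, $P_r$, $P_c$ and the
leading terms of the modeled costs for parallel CAQR; rounding $P_r$
and $P_c$ to integer factors of $P$ is ignored here.
\begin{lstlisting}[language=Python]
import math

def caqr_parallel_choices(m, n, P):
    """Grid, block size, and leading-order costs for parallel CAQR,
    following Table tbl:CAQR:par:model:opt (m >= n assumed)."""
    L = math.log(m * P / n)
    P_r = math.sqrt(m * P / n)
    P_c = math.sqrt(n * P / m)
    b = math.sqrt(m * n / P) / L**2
    flops = 2 * m * n**2 / P - 2 * n**3 / (3 * P)
    words = (math.sqrt(m * n**3 / P) * math.log(P)
             - 0.25 * math.sqrt(n**5 / (m * P)) * math.log(n * P / m))
    msgs = (0.25 * math.sqrt(n * P / m) * L**2
            * math.log(P * math.sqrt(m * P / n)))
    return dict(P_r=P_r, P_c=P_c, b=b,
                flops=flops, words=words, messages=msgs)

print(caqr_parallel_choices(m=10**5, n=10**4, P=1024))
\end{lstlisting}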
\subsection{Sequential CAQR}
\label{sec:CAQR_sequential}
As stated above, sequential CAQR is just right-looking QR factorization
with TSQR used for the panel factorization. (In fact left-looking
QR with TSQR has the same costs \cite[Appendix C]{TSQR_technical_report},
but we stick with the right-looking algorithm for simplicity.)
We also assume the $m$-by-$n$ matrix $A$ is stored in a
$P_r \times P_c$ 2-D blocked layout, with individual
$\frac{m}{P_r}$-by-$\frac{n}{P_c}$
blocks stored contiguously in memory, with $m \geq n$ and
$\frac{m}{P_r} \geq \frac{n}{P_c}$.
For TSQR to work as analyzed we need to choose
$P_r$ and $P_c$ large enough for one such
$\frac{m}{P_r}$-by-$\frac{n}{P_c}$
block to fit in fast memory, plus a bit more.
For CAQR we will need to choose $P_r$ and $P_c$ a bit
larger, so that a bit more than 3 such blocks fit in fast memory;
this is in order to perform an update on two such blocks
in the trailing matrix given Householder vectors from TSQR
occupying $\frac{mn}{P_r P_c} + \frac{n^2}{2P_c^2}$ words,
or at most $\frac{4mn}{P}$ altogether. In other words,
we need $\frac{4mn}{P}\leq W$ or $P \geq \frac{4mn}{W}$.
Leaving details to \cite[Appendix C]{TSQR_technical_report},
we summarize the complexity analysis by
\begin{eqnarray}
\label{eq:CAQR:seq:modeltime:P}
T_{\text{seq.\ CAQR}} (m,n,P_c,P_r)
& \leq & \left( \frac{3}{2} P(P_c-1) \right) \alpha + \nonumber \\
& & \left( \frac{3}{2} mn \left( P_c + \frac{4}{3} \right)
- \frac{1}{2} n^2 P_c
\right) \beta \\
& & + \left( 2n^2m - \frac{2}{3}n^3 \right) \gamma \nonumber
\end{eqnarray}
where we have ignored lower-order terms
and used $P_r$ as an upper
bound on the number of blocks in each panel,
since this only increases the run time slightly
and is simpler to evaluate than the true number of
blocks, $P_r - \lfloor (J-1) \frac{n P_r}{m P_c} \rfloor$.
Now we choose $P$, $P_r$ and $P_c$ to minimize the runtime.
From the above formula for
$T_{\text{seq.\ CAQR}} (m,n,P_c,P_r)$, we see that the runtime is
an increasing function of $P_r$ and $P_c$, so that we would like
to choose them as small as possible, within the limits imposed by
the fast memory size $P \geq \frac{4mn}{W}$. So we choose
$P = \frac{4mn}{W}$ (assuming here and elsewhere that the
denominator evenly divides the numerator). But we still need to
choose $P_r$ and $P_c$ subject to $P_r \cdot P_c = P$.
Examining $T_{\text{seq.\ CAQR}} (m,n,P_c,P_r)$ again, we see
that if $P$ is fixed, the runtime is also an increasing function
of $P_c$, which we therefore want to minimize. But we are assuming
$\frac{m}{P_r} \geq \frac{n}{P_c}$, or $P_c \geq \frac{nP_r}{m}$.
The optimal choice is therefore $P_c = \frac{nP_r}{m}$
or $P_c = \sqrt{\frac{nP}{m}}$, which also means
$\frac{m}{P_r} = \frac{n}{P_c}$, i.e., the blocks in the algorithm
are square. This choice of $P_r = \frac{2m}{\sqrt{W}}$ and
$P_c = \frac{2n}{\sqrt{W}}$ therefore minimizes the runtime,
yielding
\begin{eqnarray}
\label{eq:CAQR:seq:modeltime:P:opt}
T_{\text{Seq.\ CAQR}} (m,n,W) & \leq &
\left( 12 \frac{mn^2}{W^{3/2}} \right) \alpha +
\left( 3 \frac{mn^2}{\sqrt{W}} \right) \beta +
\nonumber \\
& & \left( 2mn^2 - \frac{2}{3}n^3 \right) \gamma.
\end{eqnarray}
We note that the bandwidth term is proportional to $\frac{mn^2}{\sqrt{W}}$,
and the latency term is $W$ times smaller,
both of which match (to within constant factors) the lower bounds
on bandwidth and latency to be described in
Section~\ref{sec:LowerBounds_CAQR}.
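A corresponding helper for the sequential case (again a hypothetical
sketch of ours, with divisibility issues ignored) picks the blocking
derived above and evaluates the leading terms of
\eqref{eq:CAQR:seq:modeltime:P:opt}.
\begin{lstlisting}[language=Python]
import math

def caqr_sequential_choices(m, n, W):
    """Blocking and leading-order costs for sequential CAQR with a
    fast memory of W words, m >= n."""
    P = 4 * m * n / W          # smallest P with 4mn/P <= W
    P_r = 2 * m / math.sqrt(W)
    P_c = 2 * n / math.sqrt(W)
    messages = 12 * m * n**2 / W**1.5
    words = 3 * m * n**2 / math.sqrt(W)
    flops = 2 * m * n**2 - 2 * n**3 / 3
    return dict(P=P, P_r=P_r, P_c=P_c,
                messages=messages, words=words, flops=flops)

# Example: a 10^5-by-10^4 matrix with 10^8 words of fast memory.
print(caqr_sequential_choices(m=10**5, n=10**4, W=10**8))
\end{lstlisting}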
The results of this analysis are shown in
Table~\ref{tbl:CAQR:seq:model:opt}, which also
shows the results for an out-of-DRAM algorithm
PFDGEQRF from ScaLAPACK, whose internal block
sizes $b$ and $c$ have been chosen to minimize disk traffic,
and where we count the floating point operations sequentially
(see \cite[Appendix F]{TSQR_technical_report});
it can also be thought of as a hypothetical model
for an optimized left-looking version of LAPACK's DGEQRF.
\begin{table}[h]
\centering
\begin{tabular}{l | l}
& Sequential CAQR w/ optimal $P_r$, $P_c$ \\ \hline
\# flops & $2mn^2 - \frac{2}{3}n^3$ \\
\# messages & $12 \frac{mn^2}{W^{3/2}}$ \\
\# words & $3 \frac{mn^2}{\sqrt{W}}$ \\
Opt.\ $P$ & $4mn/W$ \\
Opt.\ $P_r$ & $2m / \sqrt{W}$ \\
Opt.\ $P_c$ & $2n / \sqrt{W}$
\\ \hline \hline
& ScaLAPACK's \texttt{PFDGEQRF} w/ optimal $b$, $c$\\ \hline
\# flops & $2mn^2 - \frac{2}{3}n^3$ \\
\# messages & $\frac{mn^2}{2W} + \frac{2mn}{W}$ \\
\# words & $\frac{m^2 n^2}{2W} - \frac{m n^3}{6W}
+ \frac{3mn}{2} - \frac{3n^2}{4}$ \\
Opt.\ $b$ & $1$ \\
Opt.\ $c$ & $\approx \frac{W}{m}$
\\ \hline \hline
& Theoretical lower bound \\ \hline
\# messages & $\frac{3n^2(m - \frac{4}{3})}{16(8W^3)^{1/2}} - 1$ \\
\# words & $\frac{3n^2(m - \frac{4}{3})}{16(8W)^{1/2}} - W$ \\
\end{tabular}
\caption{Highest-order terms in the performance models of sequential
CAQR, ScaLAPACK's out-of-DRAM QR factorization \texttt{PFDGEQRF}
running on one processor, and theoretical lower bounds for each, when
factoring an $m \times n$ matrix with a fast memory capacity of $W$ words.}
\label{tbl:CAQR:seq:model:opt}
\end{table}
\input{seq_qr_other}
\endinput
\begin{comment}
Since $m > n$, we must have
\[
W \geq \frac{3n^2}{P} + \frac{n^2}{P_c^2} + \frac{n}{P_c}
\]
in order to solve the problem at all, no matter what values of $P_r$
and $P_c$ we use.
\end{comment}
\begin{comment}
If this condition is satisfied, we can then pick
$P_r$ and $P_c$ so as to maximize the block size (and therefore
minimize the number of transfers between slow and fast memory) in our
algorithm. A good heuristic is to maximize the fast memory usage,
which occurs when we pick $P_r$ and $P_c$ so that
\[
W = \frac{3mn}{P} + \frac{n^2}{P_c^2} + \frac{n}{P_c}.
\]
There could be many values of $P_r$ and $P_c$ that satisfy this
expression, so we leave this as an implicit constraint when seeking to
minimize the communication costs. If we assume $m = n$ and $P_c = P_r
= \sqrt{P}$, then a reasonable approximation of the optimal value of
$P$ is
\[
P = \frac{4n^2}{W}.
\]
\end{comment}
\begin{comment}
\subsection{Total arithmetic operations}
Sequential CAQR is not merely a reordering of standard Householder QR,
but actually performs different operations. Thus, we cannot assume in
advance that the floating-point operation count is the same as that of
standard Householder QR. Both left-looking and right-looking
sequential CAQR perform the same floating-point operations, just in a
different order, so it suffices to count flops for the right-looking
factorization. Here is an outline of the algorithm, annotated with
flop counts for each step:
\begin{algorithmic}[1]
\For{$J=1$ to $\min\{ P_c, P_r \}$}
\State{Factor the current panel (blocks $J$ to $P_r$)}\Comment{$(P_r
- J + 1) \left( \frac{2mn^2}{P \cdot P_c} \right) -
\frac{2n^3}{3 P_c^3}$ flops}
\For{$K = J + 1$ to $P_c$}
\For{$L = P_r$ down to $J+1$}
\State{Perform two-block update on blocks $L$ and $L-1$ of right
panel, using current panel $J$}\Comment{$\frac{4mn^2}{P \cdot
P_c}$ flops}
\EndFor
\State{Perform one-block update on block $J$ of right panel $K$,
using current panel $J$}\Comment{$\frac{2mn^2}{P \cdot P_c} -
\frac{2n^3}{3 P_c^3}$ flops}
\EndFor
\EndFor
\end{algorithmic}
All flop counts assume $m/P_r \geq n/P_c$. If we additionally assume
$P_r \geq P_c$, we obtain a flop count of
\begin{equation}\label{eq:CAQR:seq:flops:general:Pc}
\begin{split}
\text{Flops}_{\text{Seq.\ CAQR, factor}}(m,n,P_r,P_c) = \\
2mn^2 \left(
1
- \frac{P_c^2}{3P}
+ \frac{1}{3P}
+ \frac{1}{P_c}
- \frac{P_r}{P}
\right)
- \frac{2n^3 P_r}{3 P_c^2}
\end{split}
\end{equation}
If we instead assume $P_c \geq P_r$, we obtain a flop count of
\begin{equation}\label{eq:CAQR:seq:flops:general:Pr}
\begin{split}
\text{Flops}_{\text{Seq.\ CAQR, factor}}(m,n,P_r,P_c) = \\
2mn^2 \left(
\frac{P_r^2}{P}
- \frac{P_r^3}{3 P \cdot P_c}
+ \frac{1}{P_c}
- \frac{P_r}{P}
+ \frac{P_r}{3 P \cdot P_c}
\right)
- \frac{2n^3 P_r}{3 P_c^2}
\end{split}
\end{equation}
In the case that $m = n$ (i.e., the matrix is square) and $P_r = P_c =
\sqrt{P}$, both of these formulas reduce to
\[
\frac{4n^3}{3} + \frac{2n^3}{3P} - \frac{2n^3}{3\sqrt{P}}\,\text{flops.}
\]
The standard Householder QR factorization requires $\frac{4}{3}n^3$
flops, which is about the same as sequential CAQR, for a sufficiently
large number of blocks $P$. If we choose $P$ such that $P = 4n^2 /
W$, then the flop count in this case becomes
\[
\frac{4}{3} n^3 + \frac{n W}{6} - \frac{n^2 \sqrt{W}}{3}.
\]
In the limiting case of $W = 4n^2$, we recover the flop count of
standard Householder QR.
\subsection{Communication requirements}
Here, we count the total volume and number of block read and write
operations in sequential CAQR. Since both the left- and right-looking
versions transfer essentially the same number of words between slow
and fast memory, using essentially the same number of messages, we
need only analyze the right-looking variant. Note that in this
version, one can choose to sweep either in row order or in column
order over the trailing matrix. The bandwidth and latency differences
between the two are not significant, so we analyze the column-order
algorithm. Algorithm \ref{Alg:CAQR:seq:RL:detail} shows the
column-order right-looking factorization in more detail.
\begin{algorithm}[h]
\caption{Column-order right-looking sequential CAQR factorization}
\label{Alg:CAQR:seq:RL:detail}
\begin{algorithmic}[1]
\For{$J = 1$ to $\min\{P_c, P_r\}$}
\State{Factor the current panel: $P_r-J+1$ reads of each $m/P_r \times
n/P_c$ block of $A$. One write of the first block's $Q$ factor (size
$\frac{mn}{P} - \left( \frac{n}{2P_c} \right) \left( \frac{n}{P_c} - 1
\right)$) and $R$ factor (size $\left( \frac{n}{2P_c} \right)
\left( \frac{n}{P_c} + 1 \right)$). $P_r - J$ writes of the
remaining $Q$ factors (each of size $mn/P + n/P_c$).}
\For{$K = J+1 : P_c$}\Comment{Update trailing matrix}
\State{Load first $Q$ block (size $mn/P - \left( \frac{n}{2P_c}
\right) \left( \frac{n}{P_c} - 1 \right)$) of panel $J$}
\State{Load block $A_{JK}$ (size $mn/P$)}
\State{Apply first $Q$ block to $A_{JK}$}
\For{$L = J+1 : P_r$}
\State{Load current $Q$ block (size $mn/P + n/P_c$) of panel $J$}
\State{Load block $A_{LK}$ (size $mn/P$)}
\State{Apply current $Q$ block to $[A_{L-1,K}; A_{LK}]$}
\State{Store block $A_{L-1,K}$ (size $mn/P$)}
\EndFor
\State{Store block $A_{P_r,K}$ (size $mn/P$)}
\EndFor
\EndFor
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[h]
\caption{Left-looking sequential CAQR factorization}
\label{Alg:CAQR:seq:LL:detail}
\begin{algorithmic}[1]
\For{$J = 1$ to $\min\{P_c, P_r\}$}
\For{$K = 1$ to $J-1$}\Comment{Loop across previously factored panels}
\State{Load first $Q$ block (size $mn/P - (n/P_c)((n/P_c)-1)/2$) of panel $K$}
\State{Load block $A_{KJ}$ (size $mn/P$)}\Comment{Current panel's
index is $J$}
\State{Apply first $Q$ block to $A_{KJ}$}
\For{$L = K+1$ to $P_r$}\Comment{Loop over blocks $K+1 : P_r$ of
current panel}
\State{Load current $Q$ block (size $mn/P + n/P_c$) of panel $K$}
\State{Load block $A_{LJ}$ (size $mn/P$)}
\State{Apply current $Q$ block to $[A_{L-1,J}; A_{LJ}]$}
\State{Store block $A_{L-1,J}$ (size $mn/P$)}
\EndFor
\State{Store block $A_{P_r,J}$ (size $mn/P$)}
\EndFor
\State{Factor blocks $J : P_r$ of the current panel: $P_r-J+1$ reads
of each $m/P_r \times n/P_c$ block of $A$. One write of the first block's
$Q$ factor (size $mn/P - (n/P_c)((n/P_c)-1)/2$) and $R$ factor
(size $(n/P_c)((n/P_c) + 1)/2$). $P_r-J$ writes of the
remaining $Q$ factors (each of size $mn/P + n/P_c$).}
\EndFor
\end{algorithmic}
\end{algorithm}
\begin{figure}[h]
\centering
\includegraphics[scale=0.8]{../TechReport2007/FIGURES/caqr-seq-ll}
\caption{A possible reordering of the current panel updates in the
left-looking sequential QR factorization. A number $K$ on the left
represents a block of a panel's $Q$ factor, and a number on the
right (attached to the ``current panel'') represents an update of a
current panel block by the correspondingly numbered $Q$ factor
block. The ordering of the numbers is one possible ordering of
updates. Note that the $Q$ panels must be applied to the current
panel in a way that respects left-to-right dependencies on a
horizontal level. For example, the update represented by block 2 on
the left must precede updates by blocks 3, 4, and 6. Blocks with no
components on the same horizontal level are independent and can be
reordered freely with respect to one another.}
\label{fig:CAQR:seq:LL:reuse}
\end{figure}
For comparison, we show the left-looking version in detail as
Algorithm \ref{Alg:CAQR:seq:LL:detail}. Note that this algorithm
uses all but the last block of the current panel twice during updates.
By expanding the number of current panel blocks held in fast memory
and reordering the updates, one can increase reuse of current panel
blocks. Figure \ref{fig:CAQR:seq:LL:reuse} illustrates one possible
reordering. We retain the usual ordering, however, so as to restrict
the number of blocks in fast memory to three at once.
For standard Householder QR, a left-looking factorization saves
bandwidth in the out-of-DRAM regime. D'Azevedo et al.\ took the
left-looking approach in their out-of-DRAM ScaLAPACK QR code, for
example \cite{dazevedo1997design}. Sequential CAQR follows a
different communication pattern, however, so we need not expect that a
left-looking factorization saves communication. In fact, both our
right-looking and left-looking approaches have about the same
bandwidth requirements.
\subsubsection{Communication volume}
If we assume that $P_r \geq P_c$, the column-order right-looking
algorithm transfers
\[
\begin{split}
\frac{mn}{2} \left(
1 +
3 P_c
- \frac{P_c^2}{P_r}
+ \frac{P_c}{P_r}
\right) + \\
n^2 \left(
-\frac{1}{4}
+ \frac{1}{4 P_c}
\right) + \\
n \left(
\frac{11}{12}
+ \frac{P}{2}
- \frac{P_c^2}{6}
+ \frac{P_r}{2}
- \frac{3 P_c}{4}
\right).
\end{split}
\]
words between slow and fast memory. (We grouped the terms in this
expression to clarify their asymptotic significance.) The algorithm
does not require $P_r \geq P_c$, but we chose this constraint to avoid
awkward expressions involving $\min\{P_c, P_r\}$. Later we will
reverse the constraint. If we let $m = n$ and $P_r = P_c = \sqrt{P}$,
we obtain a word transfer count of
\[
\begin{split}
n^2 \left(
\sqrt{P}
+ \frac{3}{4}
+ \frac{1}{4 \sqrt{P}}
\right) + \\
n \left(
\frac{P}{3}
- \frac{\sqrt{P}}{4}
+ \frac{11}{12}
\right)\, \text{words.}
\end{split}
\]
In this case, if we choose $P$ such that $P = 4n^2 / W$, we obtain a
word transfer count of
\[
\frac{2 n^3}{\sqrt{W}} + O\left( \frac{n^3}{W} \right)\, \text{words.}
\]
Thus, sequential CAQR of a square matrix on a square 2-D block layout
achieves the lower bound of $\Omega(n^3 / \sqrt{W})$ conjectured in
Section \ref{S:lowerbounds}. In general, we may choose $P_r$ and
$P_c$ so as to minimize the communication requirements, given the fast
memory size $W$ and the input matrix dimensions $m$ and $n$.
If we assume that $P_r \leq P_c$ rather than $P_r \geq P_c$, then
column-order right-looking sequential CAQR transfers
\[
\begin{split}
mn \left(
\frac{3}{2}
- \frac{P_r}{2 P_c}
+ \frac{3 P_r}{2}
- \frac{P_r^2}{2 P_c}
\right) + \\
n^2 \left(
\frac{P_r^2}{4 P_c^2}
- \frac{P_r}{2 P_c}
+ \frac{P_r}{4 P_c^2}
\right) + \\
n \left(
\frac{5 P_r}{12 P_c}
+ \frac{P_r^2}{4 P_c}
+ \frac{P_r^2}{2}
- \frac{P_r^3}{6 P_c}
\right)
\end{split}
\]
words between slow and fast memory. (We grouped the terms in this
expression to clarify their asymptotic significance.) As a sanity
check, it is easy to verify that when $m = n$ and $P_c = P_r =
\sqrt{P}$ and $P = 4n^2 / W$, we obtain a communication volume of
\[
\frac{2n^3}{\sqrt{W}} + O\left( \frac{n^3}{W} \right),
\]
just as in the previous paragraph.
\subsubsection{Number of block transfers}
The column-oriented right-looking algorithm reads and writes a total of
\[
\sum_{J=1}^{\min\{P_c, P_r\}} \left(
2(P_r - J + 1) +
\sum_{K = J+1}^{P_c} \left(
3 +
\sum_{L = J+1}^{P_r} 3
\right)
\right)
\]
blocks between fast and slow memory. If we assume that $P_r \geq
P_c$, then this sums to
\[
P_r \left(
\frac{3 P_c^2}{2}
+ \frac{P_c}{2}
\right)
- P_c^2 \left(
P_c - 1
\right)
\]
transfers between slow and fast memory. This is always minimized by
taking $P_c$ as large as possible. Since $P_r \geq P_c$, this means
$P_c = P_r = \sqrt{P}$. If instead we assume that $P_r \leq P_c$, we
then obtain
\[
\frac{3 P_c}{2} \left(
P_r
+ P_r^2
\right)
- P_r^2 \left(
1 + P_r
\right)
\]
This is always minimized by taking $P_r$ as large as possible, and
since we assume $P_r \leq P_c$ here, this means $P_r = P_c =
\sqrt{P}$. Consequently, for any choice of $P_r$ and $P_c$ such that
$P_r \cdot P_c = P$, the least number of memory transfers is
\[
P + P^{3/2},
\]
obtained when $P_r = P_c = \sqrt{P}$. Note, however, that this may
not be possible due to constraints on the fast memory size, nor may it
be desirable for optimizing bandwidth requirements. If we
nevertheless take $P_r = P_c = \sqrt{P}$ and choose $P$ such that $P =
4 n^2 / W$ (assuming $W$ is sufficiently large), then the algorithm
requires about
\[
\frac{8 n^3}{W^{3/2}} + O\left( \frac{n^2}{W} \right)
\]
transfers between slow and fast memory. This shows that sequential
CAQR satisfies the lower bound of $\Omega(n^3/W^{3/2})$ transfers
conjectured in Section \ref{S:lowerbounds}.
\subsection{Applying $Q$ or $Q^T$}
For the sake of brevity, we omit the straightforward derivation of
this model.
\end{comment}
\endinput
The parallel CAQR (``Communication-Avoiding QR'') algorithm uses
parallel TSQR to perform a right-looking QR factorization of a dense
matrix $A$ on a two-dimensional grid of processors $P = P_r \times
P_c$. The $m \times n$ matrix (with $m \geq n$) is distributed using
a 2-D block cyclic layout over the processor grid, with blocks of
dimension $b \times b$. We assume that all the blocks have the same
size; we can always pad the input matrix with zero rows and columns to
ensure this is possible. For a detailed description of the 2-D block
cyclic layout of a dense matrix, please refer to
\cite{scalapackusersguide}, in particular to the section entitled
``Details of Example Program \#1.'' There is also an analogous
sequential version of CAQR, which we summarize in Section
\ref{S:CAQR-seq} and describe in detail in Appendix
\ref{S:CAQR-seq-detailed}. In summary, Table \ref{tbl:CAQR:par:model}
says that the number of arithmetic operations and words transferred is
roughly the same between parallel CAQR and ScaLAPACK's parallel QR
factorization, but the number of messages is a factor $b$ times lower
for CAQR. For related work on parallel CAQR, see the second paragraph
of Section \ref{S:CAQR-seq}.
CAQR is based on TSQR in order to minimize communication. At each
step of the factorization, TSQR is used to factor a panel of columns,
and the resulting Householder vectors are applied to the rest of the
matrix. As we will show, the block column QR factorization as
performed in \lstinline!PDGEQRF! is the latency bottleneck of the
current ScaLAPACK QR algorithm. Replacing this block column
factorization with TSQR, and adapting the rest of the algorithm to
work with TSQR's representation of the panel $Q$ factors, removes the
bottleneck. We use the reduction-to-one-processor variant of TSQR, as
the panel's $R$ factor need only be stored on one processor (the pivot
block's processor).
CAQR is defined inductively. We assume that the first $j-1$
iterations of the CAQR algorithm have been performed. That is, $j-1$
panels of width $b$ have been factored and the trailing matrix has
been updated. The active matrix at step $j$ (that is, the part of the
matrix which needs to be worked on) is of dimension
\[
(m - (j-1) b) \times (n - (j-1) b) = m_j \times n_j.
\]
\begin{figure}[htbp]
\begin{center}
\leavevmode \includegraphics[scale=0.6]{../TechReport2007/FIGURES/caqr}
\caption{Step $j$ of the QR factorization algorithm. First, the
current panel of width $b$, consisting of the blocks $B_0$,
$B_1$, $\dots$, $B_{p-1}$, is factorized using TSQR. Here, $p$
is the number of blocks in the current panel. Second, the
trailing matrix, consisting of the blocks $C_0$, $C_1$, $\dots$,
$C_{p-1}$, is updated. The matrix elements above the current
panel and the trailing matrix belong to the $R$ factor and will
not be modified further by the QR factorization.}
\label{fig:qr2d}
\end{center}
\end{figure}
Figure \ref{fig:qr2d} shows the execution of the QR factorization.
For the sake of simplicity, we suppose that processors $0$, $\dots$,
$P_r - 1$ lie in the column of processors that holds the current panel $j$.
The $m_j \times b$ matrix $B$ represents the current panel $j$. The
$m_j \times (n_j - b)$ matrix $C$ is the trailing matrix that needs to
be updated after the TSQR factorization of $B$. For each processor
$p$, we refer to the first $b$ rows of its first block row of $B$ and
$C$ as $B_p$ and $C_p$ respectively.
We first introduce some notation to help us refer to different parts
of a binary TSQR reduction tree; a short sketch after the list
illustrates these index maps.
\begin{itemize}
\item $level(i,k) = \left\lfloor \frac{i}{2^k} \right\rfloor$ denotes
the node at level $k$ of the reduction tree which is assigned to a
set of processors that includes processor $i$. The initial stage of
the reduction, with no communication, is $k = 0$.
\item $first\_proc(i,k) = 2^k level(i,k)$ is the index of the
``first'' processor associated with the node $level(i,k)$ at stage
$k$ of the reduction tree. In a reduction (not an all-reduction),
it receives the messages from its neighbors and performs the local
computation.
\item $target(i,k) = first\_proc(i,k) + (i + 2^{k-1}) \mod 2^k$ is the
index of the processor with which processor $i$ exchanges data at
level $k$ of the butterfly all-reduction algorithm.
\item $target\_first\_proc(i,k) = target(first\_proc(i,k)) =
first\_proc(i,k) + 2^{k-1}$ is the index of the processor with which
$first\_proc(i,k)$ exchanges data in an all-reduction at level $k$,
or the index of the processor which sends its data to
$first\_proc(i,k)$ in a reduction at level $k$.
\end{itemize}
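As a concrete illustration (a standalone sketch, not the report's
code), these index maps are easily written down and checked; note that
$target(i,k)$ is just processor index $i$ with bit $k-1$ flipped.
\begin{lstlisting}[language=Python]
def level(i, k):                 # node at level k whose group contains i
    return i >> k

def first_proc(i, k):            # "first" processor of that node
    return (i >> k) << k

def target(i, k):                # butterfly partner of i at level k
    return first_proc(i, k) + (i + 2**(k - 1)) % 2**k

def target_first_proc(i, k):     # partner of first_proc(i, k)
    return first_proc(i, k) + 2**(k - 1)

# On 8 processors, the butterfly partner equals i with bit k-1 flipped:
for k in (1, 2, 3):
    assert all(target(i, k) == i ^ (1 << (k - 1)) for i in range(8))
    print(k, [(i, target(i, k)) for i in range(8)])
\end{lstlisting}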
Algorithm \ref{Alg:CAQR:j} outlines the right-looking parallel QR
decomposition. At iteration $j$, block column $j$ is first
factored using TSQR. We assume for ease of exposition that TSQR is
performed using a binary tree. After the block column factorization
is complete, the matrices $C_p$ are updated as follows. The update
corresponding to the QR factorization at the leaves of the TSQR tree
is performed locally on every processor. The updates corresponding to
the upper levels of the TSQR tree are performed between groups of
neighboring trailing matrix processors as described in Section
\ref{SS:TSQR:localQR:trailing}. Note that only one of the trailing
matrix processors in each neighbor group continues to be involved in
successive trailing matrix updates. This allows overlap of
computation and communication, as the uninvolved processors can finish
their computations in parallel with successive reduction stages.
\begin{algorithm}[h!]
\caption{Right-looking parallel CAQR factorization}
\label{Alg:CAQR:j}
\begin{algorithmic}[1]
\For{$j = 1$ to $n/b$}
\State{The column of processors that holds panel $j$ computes a TSQR
factorization of this panel. The Householder vectors are stored
in a tree-like structure as described in Section
\ref{S:TSQR:impl}.}\label{Alg:CAQR:j:local-factor}
\State{Each processor $p$ that belongs to the column of processes
holding panel $j$ broadcasts along its row of processors the $m_j
/ P_r \times b$ rectangular matrix that holds the two sets of
Householder vectors. Processor $p$ also broadcasts two arrays of
size $b$ each, containing the Householder multipliers $\tau_p$.}
\State{Each processor in the same process row as processor $p$, $0
\leq p < P_r$, forms $T_{p0}$ and updates its local trailing
matrix $C$ using $T_{p0}$ and $Y_{p0}$. (This computation involves
all processors.)}\label{Alg:CAQR:j:local-update}
\For{$k = 1$ to $\log P_r$ (the steps below involve the rows of
processors containing $first\_proc(p,k)$ and
$target\_first\_proc(p,k)$, for $0 \leq p < P_r$)}
\State{Processors in the same process row as
$target\_first\_proc(p,k)$ form $T_{level(p,k),k}$ locally.
They also compute local pieces of
$W = Y_{level(p,k),k}^T C_{target\_first\_proc(p,k)}$,
leaving the results distributed. This computation is
overlapped with the communication in Line \ref{step_comm1}.}\label{Alg:CAQR:j:overlap1}
\State{Each processor in the same process row as
$first\_proc(p,k)$ sends to the processor in the
same column and belonging to the row of processors of
$target\_first\_proc(p,k)$ the local pieces of
$C_{first\_proc(p,k)}$.}\label{step_comm1}
\State{Processors in the same process row as
$target\_first\_proc(p,k)$ compute local pieces of
\[
W = T_{level(p,k),k}^T \left( C_{first\_proc(p,k)} + W \right).
\]}
\State{Each processor in the same process row as
$target\_first\_proc(p,k)$ sends to the processor
in the same column and belonging to the process row
of $first\_proc(p,k)$ the local pieces of $W$.}\label{step_comm2}
\State{Processors in the same process row as
$first\_proc(p,k)$ and
$target\_first\_proc(p,k)$ each complete the rank-$b$
updates $C_{first\_proc(p,k)} := C_{first\_proc(p,k)} - W$ and
$C_{target\_first\_proc(p,k)} := C_{target\_first\_proc(p,k)} -
Y_{level(p,k),k} \cdot W$ locally. The latter computation
is overlapped with the communication in Line \ref{step_comm2}.}\label{Alg:CAQR:j:overlap2}
\EndFor
\EndFor
\end{algorithmic}
\end{algorithm}
We see that CAQR consists of $\frac{n}{b}$ TSQR factorizations
involving $P_r$ processors each, and $n/b - 1$ applications of the
resulting Householder vectors. Table \ref{tbl:CAQR:par:model}
expresses the performance model over a rectangular $P_r \times P_c$
grid of processors. A detailed derivation of the model is given in
Appendix \ref{S:CAQR-par-detailed}. According to the table, the
number of arithmetic operations and words transferred is roughly the
same between parallel CAQR and ScaLAPACK's parallel QR factorization,
but the number of messages is a factor $b$ times lower for CAQR.
\input{Tables/par-caqr}
The parallelization of the computation is represented by the number of
multiplies and adds and by the number of divides in Table
\ref{tbl:CAQR:par:model}. We first discuss the parallelization of
multiplies and adds. The first term for CAQR represents mainly the
parallelization of the local Householder update corresponding to the
leaves of the TSQR tree (the matrix-matrix multiplication in line
\ref{Alg:CAQR:j:local-update} of Algorithm \ref{Alg:CAQR:j}), and
matches the first term for \lstinline!PDGEQRF!. The second term for
CAQR corresponds to forming the $T_{p0}$ matrices for the local
Householder update in line \ref{Alg:CAQR:j:local-update} of the
algorithm, and also has a matching term for \lstinline!PDGEQRF!. The
third term for CAQR represents the QR factorization of a panel of
width $b$ that corresponds to the leaves of the TSQR tree (part of
line \ref{Alg:CAQR:j:local-factor}) and part of the local rank-$b$
update (triangular matrix-matrix multiplication) in line
\ref{Alg:CAQR:j:local-update} of the algorithm, and also has a
matching term for \lstinline!PDGEQRF!.
The fourth term in the number of multiplies and adds for CAQR
represents the redundant computation introduced by the TSQR
formulation. In this term, the number of flops performed for computing
the QR factorization of two upper triangular matrices at each node of
the TSQR tree is $(2/3) nb^2 \log(P_r)$. The number of flops
performed during the Householder updates issued by each QR
factorization of two upper triangular matrices is $n^2 (3b+5)/(2 P_c)
\log(P_r)$.
The runtime estimation in Table \ref{tbl:CAQR:par:model} does not take
into account the overlap of computation and communication in lines
\ref{Alg:CAQR:j:overlap1} and \ref{step_comm1} of Algorithm
\ref{Alg:CAQR:j} or the overlap in steps \ref{step_comm2} and
\ref{Alg:CAQR:j:overlap2} of the algorithm. Suppose that at each step
of the QR factorization, the condition
\[
\alpha + \beta \frac{b (n_j - b)}{P_c}
>
\gamma b (b + 1) \frac{n_j - b}{P_c}
\]
is fulfilled. This is the case, for example, when $\beta / \gamma >
b+1$. Then the fourth non-division flops term, which accounts for the
redundant computation, is decreased by $n^2 (b+1) \log(P_r) / P_c$,
i.e., by about a factor of $3$.
\input{Tables/par-caqr-opt}
The execution time for a square matrix ($m=n$), on a square grid of
processors ($P_r = P_c = \sqrt{P}$) and with more lower order terms
ignored, simplifies to:
\begin{equation}\label{Eq:CAQR:time_sq}
\begin{split}
T_{Par.\ CAQR}(n, n, \sqrt{P}, \sqrt{P}) =
\gamma \left( \frac{4n^3}{3P} + \frac{3 n^2 b}{4 \sqrt{P}} \log P \right) \\
+ \beta \frac{3n^2}{4\sqrt{P}} \log P
+ \alpha \frac{5n}{2b} \log P.
\end{split}
\end{equation}
\input{par-caqr-opt}
\subsection{Look-ahead approach}
Our models assume that the QR factorization does not use a look-ahead
technique during the right-looking factorization. With the look-ahead
right-looking approach, the communications are pipelined from left to
right. At each step of factorization, we would model the latency cost
of the broadcast within rows of processors as $2$ instead of
$\log P_c$.
In the next section, we will describe the sequential CAQR algorithm.
\input{seq-caqr}
\section{Comparison of ScaLAPACK's parallel QR and CAQR}
\label{S:CAQR:counts}
\label{S:CAQR-counts}
Here, we compare ScaLAPACK's QR factorization routine
\lstinline!PDGEQRF! with parallel CAQR. Table
\ref{tbl:CAQR:par:model:opt} summarizes the results of this
comparison: if we choose the $b$, $P_r$, and $P_c$ parameters
independently and optimally for both algorithms, the two algorithms
match in the number of flops and words transferred, but CAQR sends a
factor of $\Theta(\sqrt{mn/P})$ messages fewer than ScaLAPACK QR.
This factor is the local memory requirement on each processor, up to a
small constant.
\subsection{\lstinline!PDGEQRF! performance model}
We suppose that we decompose an $m \times n$ matrix with $m
\geq n$ which is distributed block-cyclically over a $P_r$ by $P_c$
grid of processors, where $P_r \times P_c = P$. The two-dimensional
block cyclic distribution uses square blocks of dimension $b \times
b$. Equation \eqref{Eq:ScaLAPACK:time} represents the runtime
estimation of ScaLAPACK's QR, in which we assume that there is no
attempt to pipeline communications from left to right and some lower
order terms are omitted.
\begin{multline}
\label{Eq:ScaLAPACK:time}
T_{SC}( m, n, P_r, P_c ) = \\
\left[
\frac{2 n^2}{3 P} \left( 3m - n \right)
+ \frac{ 3(b+1) n \left(m - \frac{n}{2}\right) }{P_r}
+ \frac{b n^2}{2 P_c}
- b n \left( \frac{b}{3} + \frac{3}{2} \right)
\right] \gamma + \\
\left[
\frac{ m n - \frac{n^2}{2} }{P_r}
\right] \gamma_d + \\
\left[
3 n \left(
1 + \frac{1}{b}
\right) \log P_r
+ \frac{2n}{b} \log P_c
\right] \alpha + \\
\left[
\left(
\frac{n^2}{P_c} + n (b+2)
\right) \log P_r
+ \left(
\frac{1}{P_r} \left( mn - \frac{n^2}{2} \right) +
\frac{nb}{2}
\right) \log P_c
\right] \beta \\
\end{multline}
Compare with a less detailed but similar performance estimation in
\cite{scalapackusersguide}, in particular Tables 5.1 and 5.8 (routine
\lstinline!PxGELS!, whose main cost is invoking \lstinline!PDGEQRF!)
and Equation (5.1).
When $P_r = P_c = \sqrt{P}$ and $m=n$, and ignoring more lower-order
terms, Equation \eqref{Eq:ScaLAPACK:time} simplifies to
\begin{equation}
\label{Eq:ScaLAPACK:time:square}
T_{SC}(n,n,\sqrt{P}, \sqrt{P}) = \gamma \frac{4}{3} \frac{n^3}{P}
+ \beta \frac{3}{4} \log{P} \frac{n^2}{\sqrt{P}}
+ \alpha \left( \frac{3}{2} + \frac{5}{2b} \right) n \log P
\end{equation}
\input{par-scalapack-opt}
\begin{comment}
Consider Equation \eqref{Eq:ScaLAPACK:time}. The latency term we want
to eliminate is $\alpha \left( 3 n \log P_r \right)$, which comes from
the QR factorization of a panel of width $b$ (ScaLAPACK's
\lstinline!PDGEQR2! routine). This involves first the computation of
a Householder vector $v$ spread over $P_r$ processors
(\lstinline!DGEBS2D! and \lstinline!PDNRM2!, which use a tree to
broadcast and compute a vector norm). Second, a Householder update is
performed (ScaLAPACK's \lstinline!PDLARF!) by applying $I - \tau v
v^T$ to the rest of the columns in the panel. The update calls
\lstinline!DGSUM2D! in particular, which uses a tree to combine
partial sums from a processor column. In other words, the potential
ScaLAPACK latency bottleneck is entirely in factoring a block column.
\end{comment}
\section{Parallel CAQR performance estimation}\label{S:CAQR:perfest}
We use the performance model developed in the previous section to
estimate the performance of parallel CAQR on three computational
systems, IBM POWER5, Peta, and Grid, and compare it to ScaLAPACK's
parallel QR factorization routine \lstinline!PDGEQRF!. Peta is a
model of a petascale machine with $8100$ processors, and Grid is a
model of $128$ machines connected over the Internet. Each processor in
Peta and Grid can itself be a parallel machine, but our models
consider only the parallelism between these machines.
We expect CAQR to outperform ScaLAPACK, in part because it uses a
faster algorithm for performing most of the computation of each panel
factorization (\lstinline!DGEQR3! vs.\ \lstinline!DGEQRF!), and in
part because it reduces the latency cost. Our performance model uses
the same time per floating-point operation for both CAQR and
\lstinline!PDGEQRF!. Hence our model evaluates the improvement due
only to reducing the latency cost.
We evaluate the performance using matrices of size $n \times n$,
distributed over a $P_r \times P_c$ grid of $P$ processors using a 2D
block cyclic distribution, with square blocks of size $b \times b$.
For each machine we estimate the best performance of CAQR and
\lstinline!PDGEQRF! for a given problem size $n$ and a given number of
processors $P$, by finding the optimal values for the block size $b$
and the shape of the grid $P_r \times P_c$ in the allowed ranges. The
matrix size $n$ is varied in the range $10^3$, $10^{3.5}$, $10^4$,
$\dots$, $10^{7.5}$. The block size $b$ is varied in the range $1$,
$5$, $10$, $\dots$, $50$, $60$, $\dots$, $\min(200, m/P_r, n/P_c)$.
The number of processors is varied from $1$ to the largest power of
$2$ smaller than $p_{max}$, in which $p_{max}$ is the maximum number
of processors available in the system. The values for $P_r$ and $P_c$
are also chosen to be powers of two.
We describe now the parameters used for the three parallel machines.
The available memory on each processor is given in units of 8-byte
(IEEE 754 double-precision floating-point) words. When we evaluate
the model, we set the $\gamma$ value in the model so that the modeled
floating-point rate is 80\% of the machine's peak rate, so as to
capture realistic performance on the local QR factorizations. This
estimate favors ScaLAPACK rather than CAQR, as ScaLAPACK requires more
communication and CAQR more floating-point operations. The inverse
network bandwidth $\beta$ has units of seconds per word. The
bandwidth for Grid is estimated to be the Teragrid backbone bandwidth
of $40$ GB/sec divided by $p_{max}$.
\begin{itemize}
\item \textbf{IBM POWER5:} $p_{max} = 888$, peak flop rate is $7.6$
Gflop/s, $mem = 5 \cdot 10^8$ words, $\alpha = 5 \cdot 10^{-6}$ s,
$\beta = 2.5 \cdot 10^{-9}$ s/word ($1 / \beta = 400$ Mword/s $ =
3.2$ GB/s).
\item \textbf{Peta:} $p_{max} = 8192$, peak flop rate is $500$
Gflop/s, $mem = 62.5 \cdot 10^9$ words, $\alpha = 10^{-5}$ s, $\beta
= 2 \cdot 10^{-9}$ s/word ($1 / \beta = 500$ Mword/s $= 4$ GB/s).
\item \textbf{Grid:} $p_{max} = 128$, peak flop rate is 10 Tflop/s,
$mem = 10^{14}$ words, $\alpha = 10^{-1}$ s, $\beta = 25 \cdot
10^{-9}$ s/word ($1 / \beta = 40$ Mword/s $= .32$ GB/s).
\end{itemize}
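To illustrate how these parameters feed the models, the sketch below
evaluates the simplified square-matrix runtimes \eqref{Eq:CAQR:time_sq}
and \eqref{Eq:ScaLAPACK:time:square} at the Peta parameters listed
above ($\gamma$ set to 80\% of peak, as in the text); the sample $n$,
$P$, and $b$ and the base-2 logarithm are our choices for illustration,
not the optimized values used in the plots.
\begin{lstlisting}[language=Python]
import math

# Peta parameters from the list above; gamma models 80% of peak.
alpha, beta = 1e-5, 2e-9            # s per message, s per word
gamma = 1 / (0.8 * 500e9)           # s per flop

def t_caqr(n, P, b):                # Eq:CAQR:time_sq
    lg, sP = math.log2(P), math.sqrt(P)
    return (gamma * (4*n**3/(3*P) + 3*n**2*b*lg/(4*sP))
            + beta * 3*n**2*lg/(4*sP)
            + alpha * 5*n*lg/(2*b))

def t_pdgeqrf(n, P, b):             # Eq:ScaLAPACK:time:square
    lg, sP = math.log2(P), math.sqrt(P)
    return (gamma * 4*n**3/(3*P)
            + beta * 3*n**2*lg/(4*sP)
            + alpha * (1.5 + 2.5/b)*n*lg)

n, P, b = 10**4, 8192, 64           # sample point
print(t_pdgeqrf(n, P, b) / t_caqr(n, P, b))  # modeled speedup of CAQR
\end{lstlisting}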
There are $13$ plots shown for each parallel machine. The first three
plots display for specific $n$ and $P$ values our models of
\begin{itemize}
\item the best speedup obtained by CAQR, with respect to the runtime
using the smallest number of processors with enough memory to hold
the matrix (which may be more than one processor),
\item the best speedup obtained by \lstinline!PDGEQRF!, computed
similarly, and
\item the ratio of \lstinline!PDGEQRF! runtime to CAQR runtime.
\end{itemize}
The next ten plots are divided into two groups of five. The first group
presents performance results for CAQR and the second group presents
performance results for \lstinline!PDGEQRF!. The first two plots of
each group of five display the corresponding optimal values of $b$ and
$P_r$ obtained for each combination of $n$ and $P$. (Since $P_c = P /
P_r$, we need not specify $P_c$ explicitly.) The last $3$ plots of
each group of $5$ give the computation time to total time ratio, the
latency time to total time ratio, and the bandwidth time to total time
ratio.
The white regions in the plots signify that the problem required more
memory than was available on the machine. Note
that in our performance models, the block size $b$ has no meaning on
one processor, because there is no communication, and the term $4 n^3
/ (3 P)$ dominates the computation. Thus, for one processor, we set
the optimal value of $b$ to 1 as a default.
CAQR leads to significant improvements with respect to
\lstinline!PDGEQRF! when the latency represents an important fraction
of the total time, as happens, for example, when a small matrix is
factored on a large number of processors. On IBM POWER5, the best improvement is
predicted for the smallest matrix in our test set ($n = 10^3$), when
CAQR will outperform \lstinline!PDGEQRF! by a factor of $9.7$ on $512$
processors. On Peta, the best improvement is a factor of $22.9$,
obtained for $n = 10^4$ and $P = 8192$. On Grid, the best improvement
is obtained for one of the largest matrices in our test set
($m=n=10^{6.5}$), where CAQR outperforms \lstinline!PDGEQRF! by a factor
of $5.3$ on $128$ processors.
\subsection{Performance prediction on IBM POWER5}
Figures \ref{fig:PerfComp_ibmp5}, \ref{fig:PerfCAQR_ibmp5}, and
\ref{fig:PerfPDGEQRF_ibmp5} depict modeled performance on the IBM
POWER 5 system. CAQR has the same estimated performance as
\lstinline!PDGEQRF! when the computation dominates the total time.
But it outperforms \lstinline!PDGEQRF! when the fraction of time spent
in communication due to latency becomes significant. The best
improvements are obtained for smaller $n$ and larger $P$, as displayed
in Figure \ref{ibmp5_CMP}, the bottom right corner. For the smallest
matrix in our test set ($n = 10^3$), we predict that CAQR will
outperform \lstinline!PDGEQRF! by a factor of $9.7$ on $512$
processors. As shown in Figure \ref{ibmp5_QRFlatR}, for this matrix,
the communication dominates the runtime of \lstinline!PDGEQRF!, with a
fraction of $0.9$ spent in latency. For CAQR, the fraction of the
total time spent in latency is reduced from $0.9$ to $0.5$, while
computation accounts for a fraction of $0.3$ of the total time.
This is illustrated in Figures
\ref{ibmp5_CAQRcompR} and \ref{ibmp5_CAQRlatR}.
Another performance comparison determines the improvement
obtained by taking the best performance independently for CAQR and
\lstinline!PDGEQRF!, when varying the number of processors from $1$ to
$512$. For $n=10^3$, the best performance for CAQR is obtained when
using $P=512$ and the best performance for \lstinline!PDGEQRF! is
obtained for $P = 64$. This leads to a speedup of more than $3$ for
CAQR compared to \lstinline!PDGEQRF!. For any fixed $n$, we can take
the number of processors $P$ for which \lstinline!PDGEQRF! would
perform the best, and measure the speedup of CAQR over
\lstinline!PDGEQRF! using that number of processors. We do this in
Table \ref{tbl:CAQR:par:POWER5:best}, which shows that CAQR is always
at least as fast as \lstinline!PDGEQRF!, and often significantly
faster (up to $3\times$ in some cases).
Figure \ref{fig:PerfComp_ibmp5} shows that CAQR should scale well,
with a speedup of $351$ on $512$ processors when $m = n = 10^4$. A
speedup of $116$ with respect to the parallel time on $4$ processors
(the smallest number of processors with enough memory to hold the
matrix) is predicted for $m=n=10^{4.5}$ on $512$ processors. In these
cases, CAQR is estimated to outperform \lstinline!PDGEQRF! by factors
of $2.1$ and $1.2$, respectively.
Figures \ref{ibmp5_CAQRPr} and \ref{ibmp5_QRFPr} show that
\lstinline!PDGEQRF! has a smaller value for optimal $P_r$ than CAQR.
This trend is more significant in the bottom left corner of Figure
\ref{ibmp5_QRFPr}, where the optimal value of $P_r$ for
\lstinline!PDGEQRF! is $1$. This corresponds to a 1D block column
cyclic layout. In other words, \lstinline!PDGEQRF! runs faster by
reducing the $3 n \log{P_r}$ term of the latency cost of Equation
\eqref{Eq:ScaLAPACK:time} by choosing a small $P_r$.
\lstinline!PDGEQRF! also tends to have a better performance for a
smaller block size than CAQR, as displayed in Figures
\ref{ibmp5_CAQRb} and \ref{ibmp5_QRFb}. The optimal block size $b$
varies from $1$ to $15$ for \lstinline!PDGEQRF!, and from $1$ to $30$
for CAQR.
\begin{figure}
\begin{center}
\mbox{
\subfigure[Speedup CAQR]{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/IBM_POWER5_speedup_QR_TSQR_perf}\label{ibmp5_spdCAQR}}
\subfigure[Speedup PDGEQRF]{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/IBM_POWER5_speedup_QR_ScaLA_perf}}
}
\subfigure[Comparison]
{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/IBM_POWER5_compTSQRScaLA_perf}\label{ibmp5_CMP}}
\end{center}
\caption{\label{fig:PerfComp_ibmp5}Performance prediction comparing
CAQR and \lstinline!PDGEQRF! on IBM POWER5.}
\end{figure}
\begin{table}
\begin{center}
\begin{tabular}{r|c|c}
$\log_{10} n$ & Best $\log_2 P$ for \lstinline!PDGEQRF! & CAQR speedup
\\ \hline
3.0 & 6 & 2.1 \\
3.5 & 8 & 3.0 \\
4.0 & 9 & 2.1 \\
4.5 & 9 & 1.2 \\
5.0 & 9 & 1.0 \\
5.5 & 9 & 1.0 \\
\end{tabular}
\end{center}
\caption{Estimated runtime of \lstinline!PDGEQRF! divided by estimated
runtime of CAQR on a square $n \times n$ matrix, on the IBM POWER5
platform, for those values of $P$ (number of processors) for which
\lstinline!PDGEQRF! performs the best for that problem size.}
\label{tbl:CAQR:par:POWER5:best}
\end{table}
\begin{figure}
\begin{center}
\mbox{
\subfigure[Optimal $b$]{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/IBM_POWER5_QR_TSQR_optb_perf}\label{ibmp5_CAQRb}}
\subfigure[Optimal $P_r$]{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/IBM_POWER5_QR_TSQR_optPr_perf}\label{ibmp5_CAQRPr}}
}
\mbox{
\subfigure[Fraction of time in computation]{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/IBM_POWER5_QR_TSQR_compR_perf}\label{ibmp5_CAQRcompR}}
\subfigure[Fraction of time in latency]
{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/IBM_POWER5_QR_TSQR_latR_perf}\label{ibmp5_CAQRlatR}}
}
\mbox{
\subfigure[Fraction of time in bandwidth]{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/IBM_POWER5_QR_TSQR_bwR_perf}\label{ibmp5_CAQRbwR}}
}
\end{center}
\caption{\label{fig:PerfCAQR_ibmp5}Performance prediction for
CAQR on IBM POWER5.}
\end{figure}
\begin{figure}
\begin{center}
\mbox{
\subfigure[Optimal $b$]{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/IBM_POWER5_QR_ScaLA_optb_perf}\label{ibmp5_QRFb}}
\subfigure[Optimal
$P_r$]{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/IBM_POWER5_QR_ScaLA_optPr_perf}\label{ibmp5_QRFPr}}
}
\mbox{
\subfigure[Fraction of time in computation]{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/IBM_POWER5_QR_ScaLA_compR_perf}\label{ibmp5_QRFcompR}}
\subfigure[Fraction of time in latency]
{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/IBM_POWER5_QR_ScaLA_latR_perf}\label{ibmp5_QRFlatR}}
}
\mbox{
\subfigure[Fraction of time in bandwidth]{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/IBM_POWER5_QR_ScaLA_bwR_perf}\label{ibmp5_QRFbwR}}
}
\end{center}
\caption{\label{fig:PerfPDGEQRF_ibmp5}Performance prediction for
\lstinline!PDGEQRF! on IBM POWER5.}
\end{figure}
\subsection{Performance prediction on Peta}
Figures \ref{fig:PerfComp_Peta}, \ref{fig:PerfCAQR_Peta}, and
\ref{fig:PerfPDGEQRF_Peta} show our performance estimates of CAQR and
\lstinline!PDGEQRF! on the Petascale machine. The estimated division
of time between computation, latency, and bandwidth for
\lstinline!PDGEQRF! is illustrated in Figures \ref{peta_QRFcompR},
\ref{peta_QRFlatR}, and \ref{peta_QRFbwR}. In the upper left corner
of these figures, the computation dominates the total time, while in
the right bottom corner the latency dominates the total time. In the
narrow band between these two regions, which goes from the left bottom
corner to the right upper corner, the bandwidth dominates the time.
CAQR decreases the latency cost, as can be seen in Figures
\ref{peta_CAQRcompR}, \ref{peta_CAQRlatR}, and \ref{peta_CAQRbwR}.
There are fewer test cases for which the latency dominates the time
(the bottom right corner of Figure \ref{peta_CAQRlatR}). This shows
that CAQR is expected to be effective in decreasing the latency cost.
The upper left region where the computation dominates the time is
about the same for both algorithms. Hence for CAQR there are more
test cases for which the bandwidth term is an important fraction of
the total time.
Note also in Figures \ref{peta_QRFoptPr} and \ref{peta_CAQRoptPr} that
optimal $P_r$ has smaller values for \lstinline!PDGEQRF! than for
CAQR. There is an interesting regularity in the optimal value of
$P_r$ for CAQR: it is expected to perform best on
(almost) square grids.
As can be seen in Figure \ref{peta_spdCAQR}, CAQR is expected to show
good scalability for large matrices. For example, for $n = 10^{5.5}$,
a speedup of $1431$, measured with respect to the time on $2$
processors, is obtained on $8192$ processors. For $n=10^{6.5}$ a
speedup of $166$, measured with respect to the time on $32$
processors, is obtained on $8192$ processors.
CAQR leads to more significant improvements when the latency
represents an important fraction of the total time. This corresponds
to the bottom right corner of Figure \ref{peta_cmp}. The best
improvement is a factor of $22.9$, obtained for $n = 10^4$ and $P =
8192$. The speedup of the best CAQR compared to the best
\lstinline!PDGEQRF! for $n=10^4$ when using at most $P=8192$
processors is larger than $8$, which is still a significant
improvement. The best performance of CAQR is obtained for $P=4096$
processors and the best performance of \lstinline!PDGEQRF! is obtained
for $P=16$ processors.
Useful improvements are also obtained for larger matrices. For $n =
10^6$, CAQR outperforms \lstinline!PDGEQRF! by a factor of $1.4$.
When the computation dominates the parallel time, there is no benefit
from using CAQR. However, CAQR is never slower. For any fixed $n$,
we can take the number of processors $P$ for which \lstinline!PDGEQRF!
would perform the best, and measure the speedup of CAQR over
\lstinline!PDGEQRF! using that number of processors. We do this in
Table \ref{tbl:CAQR:par:Peta:best}, which shows that CAQR is always at
least as fast as \lstinline!PDGEQRF!, and often significantly faster
(up to $7.4\times$ in some cases).
\begin{figure}
\begin{center}
\mbox{ \subfigure[Speedup
CAQR]{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/Peta_speedup_QR_TSQR_perf}\label{peta_spdCAQR}}
\subfigure[Speedup
PDGEQRF]{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/Peta_speedup_QR_ScaLA_perf}\label{peta_spdQRF}}
} \subfigure[Comparison]
{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/Peta_compTSQRScaLA_perf}\label{peta_cmp}}
\end{center}
\caption{\label{fig:PerfComp_Peta}Performance prediction comparing
CAQR and PDGEQRF on Peta.}
\end{figure}
\begin{table}
\begin{center}
\begin{tabular}{r|c|c}
$\log_{10} n$ & Best $\log_2 P$ for \lstinline!PDGEQRF! & CAQR speedup
\\ \hline
3.0 & 1 & 1 \\
3.5 & 2--3 & 1.1--1.5 \\
4.0 & 4--5 & 1.7--2.5 \\
4.5 & 7--10 & 2.7--6.6 \\
5.0 & 11--13 & 4.1--7.4 \\
5.5 & 13 & 3.0 \\
6.0 & 13 & 1.4 \\
\end{tabular}
\end{center}
\caption{Estimated runtime of \lstinline!PDGEQRF! divided by estimated
runtime of CAQR on a square $n \times n$ matrix, on the Peta platform,
for those values of $P$ (number of processors) for which
\lstinline!PDGEQRF! performs the best for that problem size.}
\label{tbl:CAQR:par:Peta:best}
\end{table}
\begin{figure}
\begin{center}
\mbox{ \subfigure[Optimal $b$]{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/Peta_QR_TSQR_optb_perf}\label{peta_CAQRoptb}}
\subfigure[Optimal $P_r$]{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/Peta_QR_TSQR_optPr_perf}\label{peta_CAQRoptPr}}
}
\mbox{ \subfigure[Fraction of time in
computation]{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/Peta_QR_TSQR_compR_perf}\label{peta_CAQRcompR}}
\subfigure[Fraction of time in latency]
{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/Peta_QR_TSQR_latR_perf}\label{peta_CAQRlatR}}
}
\mbox{
\subfigure[Fraction of time in bandwidth]{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/Peta_QR_TSQR_bwR_perf}\label{peta_CAQRbwR}}
}
\end{center}
\caption{\label{fig:PerfCAQR_Peta}Performance prediction for CAQR on
Peta.}
\end{figure}
\begin{figure}
\begin{center}
\mbox{ \subfigure[Optimal $b$]{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/Peta_QR_ScaLA_optb_perf}\label{peta_QRFoptb}}
\subfigure[Optimal $P_r$]{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/Peta_QR_ScaLA_optPr_perf}\label{peta_QRFoptPr}}
}
\mbox{
\subfigure[Fraction of time in computation]{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/Peta_QR_ScaLA_compR_perf}\label{peta_QRFcompR}}
\subfigure[Fraction of time in latency]{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/Peta_QR_ScaLA_latR_perf}\label{peta_QRFlatR}}
}
\mbox{
\subfigure[Fraction of time in bandwidth]{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/Peta_QR_ScaLA_bwR_perf}\label{peta_QRFbwR}}
}
\end{center}
\caption{\label{fig:PerfPDGEQRF_Peta}Performance prediction for
PDGEQRF on Peta.}
\end{figure}
\subsection{Performance prediction on Grid}
The performance estimation obtained by CAQR and \lstinline!PDGEQRF! on
the Grid is displayed in Figures \ref{fig:PerfComp_Grid},
\ref{fig:PerfCAQR_Grid}, and \ref{fig:PerfPDGEQRF_Grid}. For small
values of $n$, neither algorithm obtains any speedup, even on a small
number of processors. Hence we discuss performance results only for
values of $n$ larger than $10^5$.
As displayed in Figures \ref{grid_CAQRoptb} and \ref{grid_QRFoptb},
the optimal block size for both algorithms is very often $200$, the
largest value in the allowed range. The optimal value of $P_r$ for
\lstinline!PDGEQRF! is equal to $1$ for most of the test cases (Figure
\ref{grid_QRFoptPr}), while CAQR tends to prefer a square grid (Figure
\ref{grid_CAQRoptPr}). This suggests that CAQR can successfully
exploit parallelism within block columns, unlike \lstinline!PDGEQRF!.
As can be seen in Figures \ref{grid_QRFcompR}, \ref{grid_QRFlatR}, and
\ref{grid_QRFbwR}, for small matrices, communication latency dominates
the total runtime of \lstinline!PDGEQRF!. For large matrices and
smaller numbers of processors, computation dominates the runtime. For
the test cases situated in the band going from the bottom left corner
to the upper right corner, bandwidth costs dominate the runtime. The
model of \lstinline!PDGEQRF! suggests that the best way to decrease
the latency cost with this algorithm is to use, in most test cases, a
block column cyclic distribution (the layout obtained when $P_r = 1$).
In this case the bandwidth cost becomes significant.
The division of time between computation, latency, and bandwidth has a
similar pattern for CAQR, as shown in Figures \ref{grid_CAQRcompR},
\ref{grid_CAQRlatR}, and \ref{grid_CAQRbwR}. However, unlike
\lstinline!PDGEQRF!, CAQR has a square or almost square processor grid
as its optimal grid shape, which suggests that CAQR is more scalable.
The best improvement is obtained for one of the largest matrices in our
test set ($m=n=10^{6.5}$), where CAQR outperforms \lstinline!PDGEQRF! by
a factor of $5.3$ on $128$ processors. The speedup obtained by the
best CAQR compared to the best \lstinline!PDGEQRF! is larger than $4$,
and the best performance is obtained by CAQR on $128$ processors,
while the best performance of \lstinline!PDGEQRF! is obtained on $32$
processors.
CAQR is predicted to obtain reasonable speedups for large problems on
the Grid, as displayed in Figure \ref{grid_CAQRspdup}. For example,
for $n = 10^7$ we note a speedup of $33.4$ on $128$ processors
measured with respect to $2$ processors. This represents an
improvement of $1.6$ over \lstinline!PDGEQRF!. For the largest matrix
in the test set, $n=10^{7.5}$, we note a speedup of $6.6$ on $128$
processors, measured with respect to $16$ processors. This is an
improvement of $3.8$ with respect to \lstinline!PDGEQRF!.
As with the previous machine models, for any fixed $n$, we can take the number of
processors $P$ for which \lstinline!PDGEQRF! would perform the best,
and measure the speedup of CAQR over \lstinline!PDGEQRF! using that
number of processors. We do this in Table
\ref{tbl:CAQR:par:Grid:best}, which shows that CAQR is always at
least as fast as \lstinline!PDGEQRF!, and often significantly faster
(up to $3.8\times$ in some cases).
\begin{figure}
\begin{center}
\mbox{
\subfigure[Speedup CAQR]{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/Grid_speedup_QR_TSQR_perf}\label{grid_CAQRspdup}}
\subfigure[Speedup PDGEQRF]{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/Grid_speedup_QR_ScaLA_perf}}
}
\subfigure[Comparison]{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/Grid_compTSQRScaLA_perf}}
\end{center}
\caption{\label{fig:PerfComp_Grid}Performance prediction comparing
CAQR and PDGEQRF on Grid.}
\end{figure}
\begin{table}
\begin{center}
\begin{tabular}{r|c|c}
$\log_{10} n$ & Best $\log_2 P$ for \lstinline!PDGEQRF! & CAQR speedup
\\ \hline
6.0 & 3 & 1.4 \\
6.5 & 5 & 2.4 \\
7.0 & 7 & 3.8 \\
7.5 & 7 & 1.6 \\
\end{tabular}
\end{center}
\caption{Estimated runtime of \lstinline!PDGEQRF! divided by estimated
runtime of CAQR on a square $n \times n$ matrix, on the Grid
platform, for those values of $P$ (number of processors) for which
\lstinline!PDGEQRF! performs the best for that problem size.}
\label{tbl:CAQR:par:Grid:best}
\end{table}
\begin{figure}
\begin{center}%
\mbox{
\subfigure[Optimal $b$]{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/Grid_QR_TSQR_optb_perf}\label{grid_CAQRoptb}}
\subfigure[Optimal $P_r$]{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/Grid_QR_TSQR_optPr_perf}\label{grid_CAQRoptPr}}
}
\mbox{
\subfigure[Fraction of time in computation]{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/Grid_QR_TSQR_compR_perf}\label{grid_CAQRcompR}}
\subfigure[Fraction of time in latency]{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/Grid_QR_TSQR_latR_perf}\label{grid_CAQRlatR}}
}
\mbox{
\subfigure[Fraction of time in bandwidth]{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/Grid_QR_TSQR_bwR_perf}\label{grid_CAQRbwR}}
}
\end{center}
\caption{\label{fig:PerfCAQR_Grid}Performance prediction for CAQR on
Grid.}
\end{figure}
\begin{figure}
\begin{center}
\mbox{
\subfigure[Optimal $b$]{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/Grid_QR_ScaLA_optb_perf}\label{grid_QRFoptb}}
\subfigure[Optimal $P_r$]{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/Grid_QR_ScaLA_optPr_perf}\label{grid_QRFoptPr}}
}
\mbox{
\subfigure[Fraction of time in computation]{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/Grid_QR_ScaLA_compR_perf}\label{grid_QRFcompR}}
\subfigure[Fraction of time in latency]{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/Grid_QR_ScaLA_latR_perf}\label{grid_QRFlatR}}
}
\mbox{
\subfigure[Fraction of time in bandwidth]{\includegraphics[scale=0.35]{../TechReport2007/PERF_PLOTS/Grid_QR_ScaLA_bwR_perf}\label{grid_QRFbwR}}
}
\end{center}
\caption{\label{fig:PerfPDGEQRF_Grid}Performance prediction for
PDGEQRF on Grid.}
\end{figure}
\endinput
\section{Conclusions and Open Problems}
\label{sec:Conclusions_optimal}
We have shown that known bandwidth lower bounds
for parallel and sequential $\Theta(n^3)$ matrix multiplication
imply latency lower bounds,
shown such bounds apply to both LU and QR algorithms,
presented some new and some old QR algorithms
that attain these bounds, and referred to LU
algorithms in the literature that attain at least
some of these bounds. Whether a sequential LU
algorithm exists attaining the latency lower
bound is an open question.
There are numerous ways in which one could hope to
extend these results. One natural conjecture is that
the bounds apply to other $\Theta(n^3)$ dense linear algebra
routines, such as eigenvalue problems, and if they do,
we would want to find algorithms that attain them. Another question
is finding analogous communication lower bounds for
asymptotically faster dense linear algebra algorithms
like those based on Strassen's algorithm,
or indeed on any matrix multiplication algorithm,
using Raz's theorem, which converts any matrix
multiplication algorithm into a ``Strassen-like''
(bilinear noncommutative) one \cite{raz2003complexity}.
But the following question is of more practical importance.
Our TSQR and CAQR algorithms have been described and analyzed
in most detail for simple machine models: either
sequential with two levels of memory hierarchy (fast and slow),
or a homogeneous parallel machine, where each processor is
itself sequential. Real computers are more complicated, with
many levels of memory hierarchy and many levels of parallelism
(multicore, multisocket, multinode, multirack, \dots) all with
different bandwidths and latencies. So it is natural to ask
whether our algorithms and optimality proofs can be extended
to these more general situations. We hinted at
how TSQR could be extended to general
reduction trees in Section~\ref{sec:TSQR_optimal}, which
could in turn be chosen depending on the architecture.
But we have not discussed CAQR, which we do here.
We again look at the simpler case of matrix multiplication
for inspiration. Consider the sequential case, with
$k$ levels of memory hierarchy instead of 2, where
level 1 is fastest and smallest with $W_1$ words of memory,
level 2 is slower and larger with $W_2$ words of memory,
and so on, with level $k$ being slowest and large enough
to hold all the data. By dividing this hierarchy into
two pieces, levels $k$ through $i+1$ (``slow'')
and $i$ through $1$ (``fast''), we can apply the theory in
Section~\ref{SS:MMlowerbounds} to get lower bounds
on bandwidth and latency for moving data between levels $i$
and $i+1$ of memory. So our goal expands to finding
a matrix multiplication algorithm that attains not just
1 set of lower bounds, but $k-1$ sets of lower bounds,
one for each level of the hierarchy.
Fortunately, as is well known, the standard approach to tiling
matrix multiplication achieves all these lower bounds simultaneously,
by simply applying it recursively: level $i+1$ holds submatrices
of dimension $\Theta(\sqrt{W_{i+1}})$, and multiplies them by tiling
them into submatrices of dimension $\Theta(\sqrt{W_i})$, and so on.
The analogous observation is true of parallel matrix multiplication
on a hierarchical parallel processor where each node in the parallel
processor is itself a parallel processor (multicore, multisocket,
multirack, \dots).
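To illustrate the sequential version of this recursion, here is a
minimal Python sketch; it is our illustration, and the tile sizes
standing in for $\sqrt{W_i}$ are assumptions, not measured cache
parameters.
\begin{lstlisting}[language=Python]
import numpy as np

def tiled_matmul(C, A, B, tile_sizes):
    """Accumulate C += A @ B with one level of tiling per memory level.
    tile_sizes holds block dimensions ~ sqrt(W_i), sorted in decreasing
    order; each recursion level tiles for the next smaller memory."""
    sizes = [t for t in tile_sizes if t < A.shape[0]]
    if not sizes:                 # operands fit at the fastest level
        C += A @ B
        return
    t, n = sizes[0], A.shape[0]
    for i in range(0, n, t):
        for j in range(0, n, t):
            for k in range(0, n, t):
                tiled_matmul(C[i:i+t, j:j+t], A[i:i+t, k:k+t],
                             B[k:k+t, j:j+t], sizes)

# Example: a three-level hierarchy with illustrative tile sizes.
n = 256
A, B, C = np.random.rand(n, n), np.random.rand(n, n), np.zeros((n, n))
tiled_matmul(C, A, B, [128, 32])
assert np.allclose(C, A @ B)
\end{lstlisting}
Each recursion level tiles for the next smaller memory level, so the
block sizes can be matched to the $W_i$ of a particular machine.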
We believe that this same recursive hierarchical approach applies
to CAQR (and indeed much of linear algebra) but there is a catch:
Simple recursion does not work, because the subtasks are not all
simply smaller QR decompositions. Rather they are a mixture of
tasks, including smaller QR decompositions and operations like
matrix multiplication. Therefore we still expect that the same
hierarchical approach will work: if a subtask is matrix multiplication
then it will be broken into smaller matrix multiplications as
described above, and if it is QR decomposition, it will be broken into
smaller QR decompositions and matrix multiplications.
There are various obstacles to this simple approach.
First, the small QR decompositions generally have structure,
e.g., a pair of triangles. To exploit this structure fully
would complicate the recursive decomposition. (Or we could
ignore this structure, perhaps only on the smaller
subproblems, where the overhead of exploiting it would dominate.)
Second, it suggests that the data structure with which the
matrix is stored should be hierarchical as well, with
matrices stored as subblocks of subblocks \cite{elmroth2004recursive}.
This is certainly possible, but it differs significantly
from the usual data structures to which users are accustomed.
It also suggests that recent approaches based on decomposing
dense linear algebra operations into DAGs of subtasks \cite{buttari2007class,boboulin2008issues,kurzak2008qr,quintana-orti2008scheduling,quintana-orti2008design}
may need to be hierarchical, rather than have a single layer
of tasks. A single layer is a good match for the single socket
multicore architectures that motivate these systems, but may
not scale well to, e.g., petascale architectures.
Third, it is not clear whether this approach best
accommodates machines that mix hierarchies of parallelism
and memory. For example, a multicore / multisocket / multirack
computer will also have disk, DRAM, and various caches,
and it remains to be seen whether straightforward recursion
will minimize bandwidth and latency everywhere that
communication takes place within such an architecture.
Fourth and finally, all our analysis has assumed homogeneous machines,
with the same flop rate, bandwidth and latency in all components. This
assumption can be violated in many ways, for example, by asymmetric
read and write bandwidths, by having different bandwidth and latency
between racks, sockets, and cores on a single chip, or by having some
specialized floating point units like GPUs.
It is most likely that an adaptive, ``autotuning'' approach
will be needed to deal with some of these issues, just
as it has been used for the simpler case of matrix
multiplication. Addressing all these issues is future work.
\endinput
\section{Introduction}\label{S:introduction}
The large and increasing costs of communication motivate redesigning
algorithms to avoid it whenever possible.
In the parallel case, communication refers to messages between processors,
which may be sent over a network or via a shared memory.
In the sequential case, communication refers
to data movement between different levels of the memory hierarchy.
In both the parallel and sequential cases we model the
time to communicate a message of $n$ words as
$\alpha + \beta n$, where $\alpha$ is the latency and
$\beta$ is the reciprocal bandwidth.
Many authors have pointed out technology trends that cause
floating-point rates to improve exponentially faster than bandwidth,
and bandwidth exponentially faster than latency
(see, e.g., Graham et al.\ \cite{graham2005getting}).
We present parallel and sequential dense QR factorization algorithms
that are both \emph{optimal} (sometimes only up to polylogarithmic factors)
in the amount of communication (latency and bandwidth) they require,
and just as \emph{numerically stable} as conventional Householder QR.
Some of the algorithms are novel, and some extend earlier work.
The first set of algorithms, ``Tall Skinny QR'' (TSQR),
are for matrices with many more rows than columns, and the second set,
``Communication-Avoiding QR'' (CAQR), are for general rectangular
matrices. The algorithms have significantly lower latency cost in the
parallel case, and significantly lower latency and bandwidth costs in
the sequential case, than existing algorithms in LAPACK and ScaLAPACK.
It will be easy to see that our parallel and sequential TSQR
implementations communicate as little as possible.
To prove optimality of CAQR,
we extend known lower bounds on
communication bandwidth for sequential and
parallel versions of conventional $\Theta(n^3)$ matrix multiplication
(see Hong and Kung \cite{hong1981io}
and Irony, Toledo, and Tiskin \cite{irony2004communication})
to also provide latency lower bounds, and show that these
bounds also apply to $\Theta(n^3)$ implementations of dense LU and QR
decompositions. Showing that the bounds apply to LU is easy,
but QR is more subtle.
We show that CAQR attains these lower bounds
(sometimes only up to polylogarithmic factors).
Implementations of TSQR and CAQR demonstrating significant speedups
over LAPACK and ScaLAPACK will be presented in other work
\cite{PRACTICE}; here we concentrate on proving optimality.
\begin{comment}
In practice, we have implemented parallel TSQR on several machines,
with significant speedups:
\begin{itemize}
\item up to $6.7\times$ on 16 processors of a Pentium III cluster, for
a $100,000 \times 200$ matrix; and
\item up to $4\times$ on 32 processors of a BlueGene/L, for a
$1,000,000 \times 50$ matrix.
\end{itemize}
Some of this speedup is enabled by TSQR being able to use a much
better local QR decomposition than ScaLAPACK can use, such as the
recursive variant by Elmroth and Gustavson (see
\cite{elmroth2000applying} and the performance results in Section
\ref{S:TSQR:perfres}). We have also implemented sequential TSQR on a
laptop for matrices that do not fit in DRAM, so that slow memory is
disk. This requires a special implementation in order to run at all,
since virtual memory does not accommodate matrices of the sizes we
tried. By extrapolating runtime from matrices that do fit in DRAM, we
can say that our out-of-DRAM implementation was as little as $2\times$
slower than the predicted runtime as though DRAM were infinite.
We have also modeled the performance of our parallel CAQR algorithm
(whose actual implementation and measurement is future work), yielding
predicted speedups over ScaLAPACK's \lstinline!PDGEQRF! of up to
$9.7\times$ on an IBM Power5, up to $22.9\times$ on a model Petascale
machine, and up to $5.3\times$ on a model of the Grid. The best
speedups occur for the largest number of processors used, and for
matrices that do not fill all of memory, since in this case latency
costs dominate. In general, when the largest possible matrices are
used, computation costs dominate the communication costs and improved
communication does not help.
\end{comment}
Tables~\ref{tbl:1-par-tsqr}--\ref{tbl:6-seq-caqr-square} summarize our
performance models and lower bounds for TSQR, CAQR, and
LAPACK's sequential and ScaLAPACK's parallel QR factorizations.
Our model of computation is the same for the
parallel and sequential cases:
\begin{equation*}
\text{running time} = \#\text{flops} \times \text{time\_per\_flop}
+ \#\text{words\_moved} \times (1/\text{bandwidth})
+ \#\text{messages} \times \text{latency},
\end{equation*}
where the last two terms constitute the communication.
We do not model overlap of communication and computation,
which while important in practice can at most improve the
running time by a factor of 2, whereas we are looking for
asymptotic improvements. In the tables we give the
\#flops, \#words moved and \#messages as functions of
the number of rows $m$ and columns $n$ (assuming $m \geq n$),
the number of
processors $P$ in the parallel case, and the size of
fast memory $W$ in the sequential case.
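Stated as code, the model is a single expression; the following Python
sketch (the helper name \lstinline!modeled_runtime! is ours) makes the
three terms explicit.
\begin{lstlisting}[language=Python]
def modeled_runtime(flops, words_moved, messages, gamma, beta, alpha):
    """Three-term cost model: computation + bandwidth + latency.
    gamma = time per flop (s), beta = reciprocal bandwidth (s/word),
    alpha = latency (s/message)."""
    return flops * gamma + words_moved * beta + messages * alpha

# For example, with alpha = 1e-5 s and 1/beta = 500 Mword/s
# (beta = 2e-9 s/word), a single message of 10^6 words costs
# modeled_runtime(0, 1e6, 1, 0, 2e-9, 1e-5) ~= 2.01e-3 seconds.
\end{lstlisting}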
To make these tables easier to read, we omit most lower order terms,
make boldface the terms where the new algorithms differ significantly
from Sca/LAPACK,
and make the optimal choice of matrix layout for each parallel algorithm:
This means optimally choosing the block size $b$ as well as the
processor grid dimensions $P_r \times P_c$ in the 2-D block cyclic layout.
(See Section~\ref{sec:CAQR_optimal}
for discussion of these parameters, and detailed performance models for
general layouts.)
\begin{comment}
1. Parallel case for TSQR
3 columns: TSQR PDGETRF Lower Bound
3 rows: #flops, #words, #messages (dominant terms only)
2. Parallel case for CAQR (general rectangular case)
(optimal b, P_r, P_c etc chosen independently for CAQR and PDGETRF)
3 columns: CAQR PDGETRF Lower Bound
3 rows: #flops, #words, #messages (dominant terms only)
3. Parallel case for CAQR (square case)
(optimal b, P_r, P_c etc chosen independently for CAQR and PDGETRF)
3 columns: CAQR PDGETRF Lower Bound
3 rows: #flops, #words, #messages (dominant terms only)
4. Sequential case for TSQR
3 columns: TSQR DGEQRF Lower Bound
3 rows: #flops, #words, #messages (dominant terms only)
5. Sequential case for CAQR (general rectangular case)
(optimal b, etc chosen independently for CAQR and DGETRF)
3 columns: CAQR DGETRF Lower Bound
3 rows: #flops, #words, #messages (dominant terms only)
6. Sequential case for CAQR (square case)
(optimal b, etc chosen independently for CAQR and DGETRF)
3 columns: CAQR DGETRF Lower Bound
3 rows: #flops, #words, #messages (dominant terms only)
\end{comment}
Tables~\ref{tbl:1-par-tsqr}--\ref{tbl:3-par-caqr-square} present the
parallel performance models for TSQR, CAQR on general rectangular
matrices, and CAQR on square matrices, respectively. First, Table
\ref{tbl:1-par-tsqr} shows that parallel TSQR requires only $\log P$
messages, which is both optimal and a factor $2n$ fewer messages than
ScaLAPACK's parallel QR factorization \lstinline!PDGEQRF!. Table
\ref{tbl:2-par-caqr-general} shows that parallel CAQR needs only
$\Theta(\sqrt{nP/m})$ messages (ignoring polylogarithmic factors) on a
general $m \times n$ rectangular matrix, which is both optimal and a
factor $\Theta(\sqrt{mn/P})$ fewer messages than ScaLAPACK. Note that
$\sqrt{mn/P}$ is the square root of each processor's local memory
size, up to a small constant factor. Table
\ref{tbl:3-par-caqr-square} presents the same comparison for the
special case of a square $n \times n$ matrix.
Next, Tables \ref{tbl:4-seq-tsqr}--\ref{tbl:6-seq-caqr-square}
present the sequential performance models for TSQR, CAQR on general
rectangular matrices, and CAQR on square matrices, respectively.
Table \ref{tbl:4-seq-tsqr}
compares sequential TSQR with sequential blocked Householder QR. This
is LAPACK's QR factorization routine \lstinline!DGEQRF! when fast
memory is cache and slow memory is DRAM, and models ScaLAPACK's
out-of-DRAM QR factorization routine \lstinline!PFDGEQRF! when fast
memory is DRAM and slow memory is disk. Sequential TSQR transfers
fewer words between slow and fast memory: $2mn$, which is both optimal
and a factor $mn/(4W)$ fewer words than transferred by blocked
Householder QR. Note that $mn/W$ is how many times larger the matrix
is than the fast memory size $W$. Furthermore, TSQR requires fewer
messages: at most about $3mn/W$, which is close to optimal and $\Theta(n)$
times fewer than Householder QR requires.
Table~\ref{tbl:5-seq-caqr-general} compares sequential CAQR and sequential
blocked Householder QR on a general rectangular matrix. Sequential
CAQR transfers fewer words between slow and fast memory:
$\Theta(mn^2/\sqrt{W})$, which is both optimal and a factor
$\Theta(m/\sqrt{W})$ fewer words transferred than blocked Householder
QR. Note that $m/\sqrt{W} = \sqrt{m^2/W}$ is the square root
of how many times larger a square $m \times m$ matrix is than the fast
memory size $W$. Sequential CAQR also requires fewer messages: $12 mn^2 /
W^{3/2}$, which is optimal. We note that our analysis of CAQR
applies for any $W$, whereas our analysis of the algorithms in
LAPACK and ScaLAPACK assumes that at least $2$ columns fit in fast memory,
that is, $W \geq 2m$; otherwise they may communicate even more.
Finally, Table~\ref{tbl:6-seq-caqr-square} presents the same comparison for the
special case of a square $n \times n$ matrix.
\newpage
\input{Tables/table1-par-tsqr-opt}
\noindent
\rule{4.75in}{.25mm}
\input{Tables/table2-par-caqr-general-opt}
\noindent
\rule{4.75in}{.25mm}
\input{Tables/table3-par-caqr-square-opt}
\newpage
\input{Tables/table4-seq-tsqr-opt}
\noindent
\rule{4.75in}{.25mm}
\input{Tables/table5-seq-caqr-general-opt}
\noindent
\rule{4.75in}{.25mm}
\input{Tables/table6-seq-caqr-square-opt}
Finally, we note that although our new algorithms perform
slightly more floating point operations than LAPACK and
ScaLAPACK, they have the same highest order terms in
their floating point operation counts.
(For TSQR, which is intended for the case $m \gg n$,
only the term containing $m$ is highest order.)
In fact we prove a matching lower bound on the amount of arithmetic,
assuming we avoid ``Strassen-like'' algorithms in a way
made formal later.
\begin{comment}
We have concentrated on the cases of a homogeneous parallel computer
and a sequential computer with a two-level memory hierarchy. But real
computers are obviously more complicated, combining many levels of
parallelism and memory hierarchy, perhaps heterogeneously.
We partially address this more difficult problem in two ways.
First, we show that our parallel and sequential TSQR designs correspond
to the two simplest cases of reduction trees (binary and flat, respectively),
and that different choices of reduction trees will let us optimize
TSQR for more general architectures.
Second, we describe how to apply TSQR and CAQR recursively
to accommodate hierarchical architectures, analogously to
multiple levels of blocking for matrix multiplication.
\end{comment}
Now we briefly describe related work and our contributions.
The tree-based QR idea itself is not novel (see for example,
\cite{buttari2007class,buttari2007parallel,cunha2002new,golub1988parallel,gunter2005parallel,kurzak2008qr,pothen1989distributed,quintana-orti2008scheduling,rabani2001outcore}),
but we have a number of optimizations and generalizations:
\begin{itemize}
\item Our algorithm can perform almost all its floating-point
operations using any fast sequential QR factorization routine.
For example, we can use blocked Householder transformations
exploiting BLAS3, or invoke Elmroth
and Gustavson's recursive QR (see
\cite{elmroth1998new,elmroth2000applying}).
\item We use TSQR as a building block for CAQR, for both the parallel
and the sequential factorization of arbitrary rectangular matrices in
a two-dimensional block cyclic layout.
\item Most significantly, we prove optimality for both our parallel
and sequential algorithms, with a 1-D layout for TSQR and 2-D block
layout for CAQR, i.e., that they minimize bandwidth and latency costs.
This assumes $\Theta(n^3)$ (non-Strassen-like) algorithms, and is usually
shown in a Big-Oh sense, sometimes modulo polylogarithmic terms.
\item We describe special cases in which existing sequential algorithms
by Elmroth and Gustavson \cite{elmroth2000applying} and also LAPACK's DGEQRF
attain minimum bandwidth. In particular, with the correct choice of
block size, Elmroth and Gustavson's RGEQRF algorithm attains minimum
bandwidth and flop count, though not minimum latency.
\item We observe that there are alternative LU algorithms in
the literature that attain at least some of these communication
lower bounds: \cite{grigori2008calu} describes a parallel LU algorithm
attaining both bandwidth and latency lower bounds, and
\cite{toledo1997locality} describes a sequential LU algorithm that
at least attains the bandwidth lower bound.
\item
We outline how to extend both algorithms and optimality results
to certain kinds of hierarchical architectures, either with multiple
levels of memory hierarchy, or multiple levels of parallelism
(e.g., where each node in a parallel machine consists of other parallel
machines, such as multicore).
In the case of TSQR we do this by adapting it to work on general
reduction trees.
\end{itemize}
It is possible to do a stable QR factorization (or indeed most any
dense linear algebra operation) at the same asymptotic speed as
matrix multiplication (e.g., in $\Theta(n^{\log_2 7})$ operations using
Strassen) \cite{FastLinearAlgebraIsStable} and so with less
communication as well, but we do not discuss these algorithms in
this paper.
We note that the $Q$ factor will be represented as a tree of smaller $Q$
factors, which differs from the traditional layout. Many previous
authors did not explain in detail how to apply a stored TSQR $Q$
factor, quite possibly because this is not required for solving
a single least squares problem:
Adjoining the right-hand side(s) to the matrix $A$,
and taking the QR factorization of the result, requires only the $R$
factor. Previous authors discuss this optimization. However, many of
our applications require storing and working with the implicit
representation of the $Q$ factor.
Our performance models show that applying this tree-structured $Q$
has about the same cost as the traditionally represented $Q$.
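To make the tree representation concrete, here is a minimal numpy
sketch of sequential (flat-tree) TSQR; the function name and blocking
are our illustrative choices, not code from this paper. The returned
list of local $Q$ factors is exactly the implicit tree representation
discussed above.
\begin{lstlisting}[language=Python]
import numpy as np

def tsqr_flat_tree(A, row_block):
    """Sequential (flat-tree) TSQR on a tall skinny matrix A.
    Factor one row block at a time, stacking the running R factor on
    top of the next block.  Returns the list of local Q factors
    (which implicitly represent the full Q) and the final n-by-n R."""
    m, n = A.shape
    q_tree = []
    R = np.empty((0, n))
    for i in range(0, m, row_block):
        Q, R = np.linalg.qr(np.vstack([R, A[i:i + row_block]]))
        q_tree.append(Q)
    return q_tree, R

# Example: the R factor matches Householder QR up to row signs.
A = np.random.rand(10000, 50)
q_tree, R = tsqr_flat_tree(A, row_block=1000)
assert np.allclose(np.abs(R), np.abs(np.linalg.qr(A)[1]))
\end{lstlisting}
Applying the implicit $Q$ or $Q^T$ replays the same flat tree,
applying the stored local factors in sequence.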
The rest of this report is organized as follows.
Section~\ref{sec:TSQR_optimal} presents TSQR,
describing its parallel and sequential optimizations,
performance models, comparisons to LAPACK and ScaLAPACK,
and how it can be adapted to other architectures.
Section~\ref{sec:CAQR_optimal} presents CAQR analogously.
(This paper is based on the technical report
\cite{TSQR_technical_report}, to which we leave many of the detailed
derivations of the performance models.)
Section~\ref{sec:LowerBounds_TSQR} presents our lower bounds
for TSQR, and
Section~\ref{sec:LowerBounds_CAQR} for CAQR (as well as LU).
Section~\ref{S:related-work} describes related work.
Section~\ref{sec:Conclusions_optimal} summarizes
and describes open problems and future work.
\begin{comment}
Section~\ref{S:abbrev} first gives a list of terms and abbreviations.
We then begin the discussion of Tall Skinny QR by Section~\ref{S:motivation},
which motivates the algorithm, giving a variety of applications where
it is used, beyond as a building block for general QR.
Section~\ref{S:TSQR:algebra} introduces the TSQR algorithm and shows how the
parallel and sequential versions correspond to different reduction or
all-reduction trees.
After that, Section~\ref{S:reduction}
illustrates how TSQR is actually a reduction, introduces corresponding
terminology, and discusses some design choices.
Section~\ref{S:TSQR:localQR} shows how the local QR decompositions in TSQR can
be further optimized, including ways that current ScaLAPACK cannot
exploit. We also explain how to apply the $Q$ factor from TSQR
efficiently, which is needed both for general QR and other
applications.
Section~\ref{S:perfmodel} explains about our parallel and
sequential machine models, and what parameters we use to describe them.
Next, Sections~\ref{S:TSQR:perfcomp} and \ref{S:TSQR:stability}
describe other "tall skinny QR" algorithms, such as CholeskyQR and
Gram-Schmidt, and compare their cost (Section~\ref{S:TSQR:perfcomp})
and numerical stability (Section~\ref{S:TSQR:stability}) to that of
TSQR. These sections show that TSQR is the only algorithm that
simultaneously minimizes communication and is numerically stable.
Section~\ref{S:TSQR:platforms} describes the platforms used for
testing TSQR, and Section~\ref{S:TSQR:perfres} concludes the
discussion of TSQR proper by describing the TSQR performance results.
Our discussion of CAQR presents both the parallel and the sequential
CAQR algorithms for the QR factorization of general rectangular matrices.
Section~\ref{S:CAQR} describes the parallel CAQR algorithm
and constructs a performance model.
Section~\ref{S:CAQR-seq} does the same for sequential CAQR.
Next, Section~\ref{S:CAQR-counts} compares
the performance of parallel CAQR and ScaLAPACK's \lstinline!PDGEQRF!,
showing CAQR to be superior, for the same choices of block sizes and
data layout parameters, as well as when these parameters are chosen
optimally and independently for CAQR and \lstinline!PDGEQRF!.
After that, Section~\ref{S:CAQR:perfest} presents performance predictions
comparing CAQR to \lstinline!PDGEQRF!. Future work includes actual
implementation and measurements.
The next two sections in the body of the text concern theoretical
results about CAQR and other parallel and sequential QR factorizations.
Section~\ref{S:lowerbounds} describes how to extend
known lower bounds on communication for matrix multiplication to QR,
and shows that these are attained (modulo polylogarithmic factors) by
TSQR and CAQR.
Section~\ref{S:limits-to-par} reviews known lower
bounds on parallelism for QR, using a PRAM model of parallel
computation.
The final section, Section~\ref{S:hierarchies} briefly outlines how
to extend the algorithms and optimality results to hierarchical architectures,
either with several levels of memory hierarchy, or several levels
of parallelism.
The Appendices provide details of operation counts and other results
summarized in previous sections. Appendix~\ref{S:localQR-flops}
presents flop counts for optimizations of local QR decompositions
described in Section~\ref{S:localQR}.
Appendices~\ref{S:TSQR-seq-detailed}, \ref{S:CAQR-seq-detailed},
\ref{S:TSQR-par-detailed}, and \ref{S:CAQR-par-detailed} give details
of performance models for sequential TSQR, sequential CAQR, parallel
TSQR and parallel CAQR, respectively.
Appendix~\ref{S:PFDGEQRF} models sequential QR based on ScaLAPACK's out-of-DRAM
routine \lstinline!PFDGEQRF!.
Finally, Appendix~\ref{S:CommLowerBoundsFromCalculus}
proves communication lower bounds
needed in Section~\ref{S:lowerbounds}.
\end{comment}
\begin{comment}
\subsection{Future work}
Implementations of sequential and parallel TSQR and CAQR will be
discussed in separate publications.
Optimization of the TSQR reduction tree for more general,
practical architectures (such as multicore, multisocket, or GPUs) is
future work, as well as optimization of the rest of CAQR to the
most general architectures.
Elmroth and Gustavson proposed a recursive QR factorization
(see \cite{elmroth2000applying}) which can also take advantage
of memory hierarchies. It is future work to analyze whether
their algorithm satisfies the same lower bounds on communication
as does sequential CAQR.
It is natural to ask to how much of dense linear algebra one
can extend the results of this paper, that is finding algorithms that
attain communication lower bounds.
Toledo \cite{ToledoLU} presents a recursive implementation of LU
decomposition that attains the bandwidth lower bound presented
in this paper (at least when $W > n$), but not the latency
lower bound {\em need to check this}; attaining the latency lower
bound remains an open problem.
In the case of parallel LU with
pivoting, refer to the technical report by Grigori, Demmel, and Xiang
\cite{grigori2008calu} for an algorithm that attains both lower bounds,
albeit for a different kind of pivoting than partial pivoting.
More broadly, we hope to extend the
results of this paper to the rest of linear algebra, including
two-sided factorizations (such as reduction to symmetric tridiagonal,
bidiagonal, or (generalized) upper Hessenberg forms). Once
a matrix is symmetric tridiagonal (or bidiagonal) and so takes
little memory, fast algorithms for the eigenproblem (or SVD)
are available. Most challenging is likely to be find
eigenvalues of a matrix in upper Hessenberg form (or of
a matrix pencil).
\end{comment}
\endinput
\bibliographystyle{siam}
\section{Related work}\label{S:related-work}
The central idea in this paper is factoring tall skinny matrices using
a tree-based Householder QR algorithm. A number of authors previously
developed the special case of a binary reduction tree for parallel
QR. As far as we know, Golub et al.\ \cite{golub1988parallel} were the
first to suggest it, but their formulation requires $n \log P$
messages for QR of an $m \times n$ matrix on $P$ processors. Pothen
and Raghavan \cite{pothen1989distributed} were the first, as far as we
can tell, to implement parallel TSQR using only $\log P$ messages. Da
Cunha et al.\ \cite{cunha2002new} independently rediscovered parallel
TSQR.
Other authors have worked out variations of the algorithm we call
``sequential TSQR''
\cite{buttari2007class,buttari2007parallel,gunter2005parallel,kurzak2008qr,quintana-orti2008scheduling,rabani2001outcore}.
They do not use it by itself, but rather as the panel factorization
step in the QR decomposition of general matrices. The references
\cite{buttari2007class,buttari2007parallel,gunter2005parallel,kurzak2008qr,quintana-orti2008scheduling}
refer to the latter algorithm as ``tiled QR,'' which is the same as
our sequential CAQR with square blocks. However, they use it in
parallel on shared-memory platforms, especially single-socket
multicore. They do this by exploiting the parallelism implicit in the
directed acyclic graph of tasks. Often they use dynamic task
scheduling, which we could use but do not discuss in this paper.
Since the cost of communication in the single-socket multicore regime
is low, these authors are less concerned than we are about minimizing
latency; thus, they are not concerned about the latency bottleneck in
the panel factorization, which motivates our parallel CAQR algorithm.
We also model and analyze communication costs in more detail than
previous authors did.
Here are recent examples of related work on sequential CAQR.
Gunter and van de Geijn develop a parallel out-of-DRAM QR
factorization algorithm that uses a flat tree for the panel
factorizations \cite{gunter2005parallel}. Buttari et al.\ suggest
using a QR factorization of this type to improve performance of
parallel QR on commodity multicore processors \cite{buttari2007class}.
Quintana-Orti et al.\ develop two variations on block QR factorization
algorithms, and use them with a dynamic task scheduling system to
parallelize the QR factorization on shared-memory machines
\cite{quintana-orti2008scheduling}. Kurzak and Dongarra use similar
algorithms, but with static task scheduling, to parallelize the QR
factorization on Cell processors \cite{kurzak2008qr}.
As far as we know, parallel CAQR is novel. Nevertheless, there is a
body of work on theoretical bounds on exploitable parallelism in QR
factorizations. These bounds apply to both parallel TSQR and parallel
CAQR if one replaces ``matrix element'' in the authors' work with
``block'' in ours. Cosnard, Muller, and Robert proved lower bounds on
the critical path length $Opt(m,n)$ of any parallel QR algorithm of an
$m \times n$ matrix based on Givens rotations \cite{cosnard86}; it is
believed that these apply to any QR factorization based on Householder
or Givens rotations. Leoncini et al.\ show that any QR factorization
based on Householder reductions or Givens rotations is P-complete
\cite{leoncini1999parallel}. The only known QR factorization
algorithm in arithmetic NC (see \cite{csanky1976fast}) is numerically
highly unstable \cite{demmel1992trading}, and no work suggests that a
stable arithmetic NC algorithm exists.
Hong and Kung \cite{hong1981io} and Irony, Toledo, and Tiskin
\cite{irony2004communication} proved lower bounds on communication for
sequential and parallel matrix multiplication. We are, as far as we
know, the first to attempt extending these bounds to LU and QR factorization.
Elmroth and Gustavson proposed a recursive QR factorization (see
\cite{elmroth1998new,elmroth2000applying}) which can also take
advantage of memory hierarchies. It is future work to analyze whether
their algorithm satisfies the same lower bounds on communication as
does sequential CAQR. It is natural to ask how much of dense
linear algebra the results of this paper can be extended to, that is,
to find algorithms that attain communication lower bounds.
For parallel LU with pivoting, see the technical report by
Grigori, Demmel, and Xiang \cite{grigori2008calu}, and for
sequential LU, see \cite{toledo1997locality}.
Block iterative methods frequently compute the QR factorization of a
tall and skinny dense matrix. This includes algorithms for solving
linear systems $Ax = B$ with multiple right-hand sides (such as
variants of GMRES, QMR, or CG
\cite{vital:phdthesis:90,Freund:1997:BQA,oleary:80}), as well as block
iterative eigensolvers (for a summary of such methods, see
\cite{templatesEigenBai,templatesEigenLehoucq}). In practice,
modified Gram-Schmidt orthogonalization is usually used when a
(reasonably) stable
QR factorization is desired. Sometimes unstable methods (such as
CholeskyQR) are used when performance considerations outweigh
stability. Eigenvalue computation is particularly sensitive to the
accuracy of the orthogonalization; two recent papers suggest that
large-scale eigenvalue applications require a stable QR factorization
\cite{lehoucqORTH,andrewORTH}. Many block iterative methods have
widely used implementations, on which a large community of scientists
and engineers depends for their computational tasks. Examples include
TRLAN (Thick Restart Lanczos), BLZPACK (Block Lanczos), Anasazi
(various block methods), and PRIMME (block Jacobi-Davidson methods)
\cite{TRLANwebpage,BLZPACKwebpage,BLOPEXwebpage,irbleigs,TRILINOSwebpage,PRIMMEwebpage}.
\subsection{Other Bandwidth Minimizing Sequential QR Algorithms}
\label{sec:seq_qr_other}
In this section we describe special cases in which previous
sequential QR algorithms also minimize bandwidth, although
they do not minimize latency.
In particular, we discuss
two variants of Elmroth and Gustavson's recursive
QR (RGEQR3 and RGEQRF \cite{elmroth2000applying}),
as well as LAPACK's DGEQRF.
The fully recursive routine RGEQR3 is analogous to Toledo's
fully recursive LU routine \cite{toledo1997locality}: Both
routines factor the left half of the matrix (recursively),
use the resulting factorization of the left half to update
the right half, and then factor the right half (recursively again).
The base case consists of a single column.
RGEQR3 applied to an $m$-by-$n$ matrix returns the $Q$
factor in the form $I-YTY^T$, where $Y$ is the $m$-by-$n$
lower triangular matrix of Householder vectors,
and $T$ is an $n$-by-$n$ upper triangular matrix.
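The recursive structure is easy to state in code. The following Python
sketch (ours, not the actual RGEQR3 routine) follows the same
divide-and-conquer pattern, but for clarity it carries an explicit
orthogonal $Q$ rather than the compact $I-YTY^T$ representation, so it
illustrates the recursion, not the exact routine.
\begin{lstlisting}[language=Python]
import numpy as np

def recursive_qr(A):
    """Recursive QR in the style of RGEQR3 (and Toledo's recursive
    LU): factor the left half, use it to update the right half, then
    factor the trailing block.  Returns a full m-by-m Q and m-by-n R.
    Assumes m >= n; uses explicit Q for clarity, not I - Y T Y^T."""
    m, n = A.shape
    if n == 1:                            # base case: a single column
        return np.linalg.qr(A, mode="complete")
    k = n // 2
    Q1, R1 = recursive_qr(A[:, :k])       # factor the left half
    B = Q1.T @ A[:, k:]                   # update the right half
    Q2, R2 = recursive_qr(B[k:, :])       # factor the trailing block
    D = np.eye(m)
    D[k:, k:] = Q2
    return Q1 @ D, np.hstack([R1, np.vstack([B[:k, :], R2])])

A = np.random.rand(300, 64)
Q, R = recursive_qr(A)
assert np.allclose(Q @ R, A) and np.allclose(Q.T @ Q, np.eye(300))
\end{lstlisting}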
A simple recurrence for the number of memory references
of either RGEQR3 or Toledo's algorithm is
\begin{eqnarray}
\label{eqn:RGEQR3}
B(m,n) & = & \left\{ \begin{array}{ll}
B(m,\frac{n}{2}) + B(m-\frac{n}{2},\frac{n}{2}) +
O(\frac{mn^2}{\sqrt{W}})
& {\rm if} \; mn > W \; {\rm and} \; n>1 \\
mn & {\rm if} \; mn \leq W \\
m & {\rm if} \; m > W \; {\rm and} \; n=1
\end{array} \right. \nonumber \\
& \leq & \left\{ \begin{array}{ll}
2B(m,\frac{n}{2}) +
O(\frac{mn^2}{\sqrt{W}})
& {\rm if} \; mn > W \; {\rm and} \; n>1 \\
mn & {\rm if} \; mn \leq W \\
m & {\rm if} \; m > W \; {\rm and} \; n=1
\end{array} \right. \nonumber \\
& = & O(\frac{mn^2}{\sqrt{W}}) + mn
\end{eqnarray}
So RGEQR3 attains our bandwidth lower bound.
(The $mn$ term must be included to account for the case
when $n<\sqrt{W}$, since each of the $mn$ matrix entries
must be accessed at least once.)
However, RGEQR3 performs a constant factor (greater than one) more
floating-point operations than sequential Householder QR.
Now we consider RGEQRF and DGEQRF, which are both
right-looking algorithms and differ only in how
they perform the panel factorization (by RGEQR3
and DGEQR2, resp.). Let $b$ be the width of
the panel in either algorithm. It is easy to
see that a reasonable estimate of the number of
memory references just for the updates by all the panels
is the number of panels $\frac{n}{b}$ times the minimum
number of memory references for the average
size update $\Theta(\max(mn,\frac{mnb}{\sqrt{W}}))$,
or $\Theta(\max(\frac{mn^2}{b},\frac{mn^2}{\sqrt{W}}))$.
Thus we need to pick $b$ at least about as large
as $\sqrt{W}$ to attain the desired lower bound
$O(\frac{mn^2}{\sqrt{W}})$.
Concentrating now on RGEQRF, we get from
inequality~(\ref{eqn:RGEQR3})
that the $\frac{n}{b}$ panel factorizations using RGEQR3
cost at most an additional \linebreak
$O(\frac{n}{b} \cdot [\frac{mb^2}{\sqrt{W}} + mb] )
= O( \frac{mnb}{\sqrt{W}} + mn)$
memory references, or $O(mn)$ if we pick $b=\sqrt{W}$.
Thus the total number of memory references for RGEQRF
with $b= \sqrt{W}$ is $O(\frac{mn^2}{\sqrt{W}} + mn)$
which attains the desired lower bound.
Next we consider LAPACK's DGEQRF.
In the worst case, a panel factorization by DGEQR2 will incur
one slow memory access per arithmetic operation,
and so $O(\frac{n}{b} \cdot mb^2 ) = O(mnb)$ for all panel factorizations.
For the overall algorithm to be guaranteed to attain
minimal bandwidth, we need $mnb = O(\frac{mn^2}{\sqrt{W}})$,
or $b = O(\frac{n}{\sqrt{W}})$. Since $b$ must also be at
least about $\sqrt{W}$, this means $W = O(n)$; that is, the
fast memory can be at most large enough to hold a few rows
of the matrix (and may be much smaller).
RGEQR3 does not always minimize latency. For example,
consider applying RGEQR3 to a single panel
with $n=\sqrt{W}$ columns and $m>W$ rows, stored
in a block-column layout with $\sqrt{W}$-by-$\sqrt{W}$
blocks stored columnwise, as above. Then a recurrence
for the number of messages RGEQR3 requires is
\begin{eqnarray*}
\label{eqn:RGEQR3_latency}
L(m,n) & = & \left\{ \begin{array}{ll}
L(m,\frac{n}{2}) + L(m-\frac{n}{2},\frac{n}{2}) +
O(\frac{m}{\sqrt{W}})
& {\rm if} \; n>1 \\
O(\frac{m}{\sqrt{W}}) & {\rm if} \; n = 1
\end{array} \right. \nonumber \\
& = & O(\frac{mn}{\sqrt{W}}) = O(m) \; {\rm when} \; n = \sqrt{W}
\end{eqnarray*}
which is larger than the minimum $O(\frac{mn}{W}) = O(\frac{m}{\sqrt{W}})$
attained by sequential TSQR when $n = \sqrt{W}$.
In contrast to DGEQRF, RGEQRF, and RGEQR3,
CAQR minimizes flops, bandwidth and latency
for all values of $W$.
\section{Lower Bounds for TSQR}
\label{sec:LowerBounds_TSQR}
We present communication lower bounds for TSQR.
As we already mentioned for the sequential case,
it is obviously necessary to read $mn$ words
from slow to fast memory (the input),
and write $mn$ words from fast to slow memory (the output),
for a lower bound of $2mn$ words moved. Sequential
TSQR attains this trivial lower bound.
Since the size of a message is bounded by the size of
fast memory $W$, it clearly requires at least $\frac{2mn}{W}$
messages to send this much data. Since TSQR sends
$\frac{2mn}{\widetilde{W}} = \frac{2mn}{W - \frac{n(n+1)}{2}} \stackrel{<}{\approx} \frac{3mn}{W}$
messages, it attains this bound to within a constant factor, and is very close when
$W \gg n^2$.
For parallel TSQR, the lower bound on latency is obviously $\log P$,
since TSQR needs to compute a nontrivial function of data that
is spread over $P$ processors, and a binary reduction tree
of depth $\log P$ clearly minimizes latency (by using the
butterfly variant). Parallel TSQR attains this lower bound too.
Bandwidth lower bounds for parallel TSQR are more interesting.
We analyze this in a way that applies to more general situations,
starting with the following:
Suppose processor 1 and processor 2 each own some of the arguments
of a function $f$ that processor 1 wants to compute. What is the least
volume of communication required to compute the function?
We are interested in smooth functions of real or complex arguments,
and so will use techniques from calculus rather than modeling
the arguments as bit strings.
In this way, we will derive necessary conditions on the function $f$
for it to be evaluable by communicating fewer than all of its arguments
to one processor. We will apply these conditions to various linear
algebra operations to capture our intuition that it is in fact necessary
to move all the arguments to one processor for correct evaluation of $f$:
Subsection~\ref{sec:LowerBounds_TSQR_ss1} will show that
if $f$ is a bijection as a function of the $n$ arguments on processor 2,
and if processor 2 can only send one message to processor 1, then it
indeed has to send all $n$ arguments
(part 3 of Lemma~\ref{lemma:bijection}).
Subsection~\ref{sec:LowerBounds_TSQR_ss2} extends this to reduction
operations where each processors sends one message to its parent in
a reduction tree, which is the case we are considering in this paper.
Subsection~\ref{sec:LowerBounds_TSQR_ss3} goes a step further and
asks whether less data can be sent overall by allowing processors 1 and 2
to exchange multiple but smaller messages; the answer is sometimes yes, but
again not for the reduction operations we consider.
\subsection{Communication lower bounds for one-way communication between
2 processors}
\label{sec:LowerBounds_TSQR_ss1}
Suppose $x^{(m)} \in {\mathbb{R}}^m$ is owned by processor 1 (P1) and
$y^{(n)} \in {\mathbb{R}}^n$ is owned by P2; we use superscripts to
remind the reader of the dimension of each vector-valued variable or function.
Suppose P1 wants to compute
$f^{(r)}(x^{(m)},y^{(n)}): {\mathbb{R}}^{m} \times {\mathbb{R}}^{n} \rightarrow {\mathbb{R}}^r$.
We first ask how much information P2 has to send to P1, assuming
it is allowed to send one message, consisting of ${\underline{n}} \leq n$ real
numbers, which themselves could be functions of $y^{(n)}$.
In other words, we ask if functions $h^{({\underline{n}} )} (y^{(n)}): {\mathbb{R}}^{n} \rightarrow {\mathbb{R}}^{{\underline{n}}}$
and $F^{(r)} (x^{(m)}, z^{({\underline{n}})}) : {\mathbb{R}}^{m} \times {\mathbb{R}}^{{\underline{n}}} \rightarrow {\mathbb{R}}^r$,
exist such that
$f^{(r)}(x^{(m)},y^{(n)}) = F^{(r)} (x^{(m)}, h^{({\underline{n}})} (y^{(n)}))$.
When ${\underline{n}} = n$, the obvious choice is to send the original data $y^{(n)}$,
so that $h^{({\underline{n}})} (y^{(n)}) = y^{(n)}$ is the identity function and
$f^{(r)} = F^{(r)}$. The interesting question is whether we can send
less information, i.e. ${\underline{n}} < n$.
Unless we make further restrictions on the function $h$ we are allowed
to use, it is easy to see that we can always choose ${\underline{n}} =1$, i.e. send
the least possible amount of information: We do this by using a
space-filling curve \cite{sagan1994space} to represent each $y^{(n)} \in {\mathbb{R}}^{n}$ by
one of several preimages $\tilde{y} \in {\mathbb{R}}$. In other words,
$h^{(1)} (y^{(n)})$
maps $y^{(n)}$ to a scalar $\tilde{y}$ that P1 can map back to
$y^{(n)}$ by a space filling curve.
This is obviously unreasonable, since it implies we could try to
losslessly compress $n$ 64-bit floating point numbers into one 64-bit
floating point number.
However, by placing some reasonable smoothness restrictions on
the functions we use (we can only hope to evaluate piecewise smooth
functions in a practical way anyway), we will see that we can draw useful
conclusions about practical computations.
To state our results, we use the notation $J_x f(x,y)$ to denote the
$r \times m$ Jacobian matrix of $f^{(r)}$ with respect to the arguments
$x^{(m)}$. Using the above notation, we state
\lemma{
Suppose it is possible to compute
$f^{(r)}(x^{(m)},y^{(n)})$ on P1 by communicating ${\underline{n}} < n$ words
$h^{({\underline{n}} )} (y^{(n)})$ from P2 to P1, and evaluating
$f^{(r)}(x^{(m)},y^{(n)}) = F^{(r)} (x^{(m)}, h^{({\underline{n}} )} (y^{(n)}))$.
Suppose $h^{({\underline{n}} )}$ and $F^{(r)}$ are continuously differentiable
on open sets. Then necessary conditions for this to be possible
are as follows.
\begin{enumerate}
\item For any fixed $y^{(n)}$ in the open set and all
$x^{(m)}$ in the open set,
the rows of $J_y f(x,y)$ must lie
in a fixed subspace of ${\mathbb{R}}^n$
of dimension at most ${\underline{n}} < n$.
\item Given any fixed $\tilde{y}^{({\underline{n}})} \in {\mathbb{R}}^{{\underline{n}}}$ satisfying
$\tilde{y}^{({\underline{n}})} = h^{({\underline{n}})} (y^{(n)})$ for some $y^{(n)}$ in
the interior of the open set, there is
a set $C \subset {\mathbb{R}}^{n}$ containing $y^{(n)}$,
of dimension at least $n-{\underline{n}}$,
such that for each
$x$, $f(x,y)$ is constant for $y \in C$.
\item If $r=n$, and for each fixed $x$, $f^{(r)}(x,y^{(n)})$ is a bijection,
then it is necessary and sufficient to send $n$ words from P2 to P1
to evaluate $f$.
\end{enumerate}
}
\label{lemma:bijection}
\rm
\begin{proof}
Part 1 is proved simply by differentiating, using the chain rule,
and noting the dimensions of the Jacobians being multiplied:
\[
J_y^{(r \times n)} f^{(r)}(x,y) = J_h^{(r \times {\underline{n}})} F^{(r)}(x,h)
\cdot J_y^{({\underline{n}} \times n)} h^{({\underline{n}})} (y)
\]
implying that for all $x$, each row of $J_y^{(r \times n)} f^{(r)} (x,y)$
lies in the space spanned by the ${\underline{n}}$ rows of
$J_y^{({\underline{n}} \times n)} h^{({\underline{n}})} (y)$.
Part 2 is a consequence of the implicit function theorem.
Part 3 follows from part 2, since if the function is a bijection,
then there is no set $C$ along which $f$ is constant.
\end{proof} \rm
Either of the first two parts of the lemma can be used to derive
lower bounds on the volume of communication needed to compute $f(x,y)$,
for example
by choosing an ${\underline{n}}$ equal to the lower bound minus 1, and
confirming that either necessary condition in the Lemma is
violated, at least in some open set.
We illustrate this for a simple matrix factorization problem.
\corollary{
Suppose P1 owns the $r_1 \times c$ matrix $A_1$, and
P2 owns the $r_2 \times c$ matrix $A_2$, with
$r_2 \geq c$. Suppose P1 wants to compute the
$c \times c$ Cholesky factor $R$ of
$R^T \cdot R = A_1^T \cdot A_1 + A_2^T \cdot A_2$,
or equivalently the $R$ factor in the $QR$ decomposition
of $\left[ \begin{array}{c} A_1 \\ A_2 \end{array} \right]$. Then P2 has to communicate
at least $c(c+1)/2$ words to P1, and it is possible to
communicate this few, namely either the entries
on and above the diagonal of the symmetric $c \times c$ matrix $A_2^T \cdot A_2$,
or the entries of its
Cholesky factor $R$, so that $R^T \cdot R = A_2^T \cdot A_2$
(equivalently, the $R$ factor of the $QR$ factorization of $A_2$).
}
\rm
\begin{proof}
That it is sufficient to communicate the $c(c+1)/2$ entries described
above is evident. We use part 3 of Lemma~\ref{lemma:bijection} to prove
that this many words are necessary, together with the fact that the
mapping between the entries on and above the diagonal of a symmetric
positive definite matrix and its Cholesky factor is a bijection
(assuming positive diagonal entries of the Cholesky factor).
To see that for any fixed $A_1$, $f(A_1,R) = $ the Cholesky factor
of $A_1^T \cdot A_1 + R^T \cdot R$ is a bijection,
note that it is a composition of three bijections:
the mapping from $R$ to the entries on and above the
diagonal of $Y = A_2^T \cdot A_2$, the mapping between the entries on and
above the diagonal of $Y$ and those on and above the diagonal
of $X = A_1^T \cdot A_1 + Y$, and the mapping between the entries on
and above the diagonal of $X$ and its Cholesky factor $f(A_1,R)$.
\end{proof} \rm
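As a numerical sanity check on the sufficiency claim (not part of the
proof; the helper function, random sizes, and seed below are ours), the
following NumPy sketch confirms that the $R$ factor of $[A_1; A_2]$ can
be computed from $A_1$ plus only the $R$ factor of $A_2$. We normalize
the signs of the rows of each $R$, since the factorization is unique
only modulo those signs.
\begin{lstlisting}[language=Python]
import numpy as np

def r_factor(M):
    # R factor of M, normalized to a positive diagonal (unique).
    R = np.linalg.qr(M, mode='r')
    return np.sign(np.diag(R))[:, None] * R

rng = np.random.default_rng(0)
r1, r2, c = 8, 10, 5                 # block sizes; r2 >= c as in the corollary
A1 = rng.standard_normal((r1, c))
A2 = rng.standard_normal((r2, c))

R_direct  = r_factor(np.vstack([A1, A2]))   # uses all of A2: r2*c words
R2        = r_factor(A2)                    # only c(c+1)/2 nonzero words
R_from_R2 = r_factor(np.vstack([A1, R2]))   # P1's local computation

assert np.allclose(R_direct, R_from_R2)
\end{lstlisting}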
\subsection{Reduction operations}
\label{sec:LowerBounds_TSQR_ss2}
We can extend this result slightly to make it apply to the case of
more general reduction operations, where one processor P1 is trying to
compute a function of data initially stored on multiple other processors
P2 through P$s$. We suppose that there is a tree
of messages leading from these processors eventually reaching P1.
Suppose each P$i$ only sends data up the tree, so that the communication
pattern forms a DAG (directed acyclic graph) with all paths ending at P1.
Let P$i$'s data be denoted $y^{(n)}$.
Let all the variables on P1 be denoted $x^{(m)}$,
and treat all the other variables on the other processors as constants.
Then exactly the same analysis as above applies, and we can conclude that
{\em every} message along the unique path from P$i$ to P1 has the same
lower bound on its size, as determined by Lemma~1.
This means Corollary~1 extends to include reduction operations where
each operation is a bijection between one input (the other being fixed)
and the output. In particular, it applies to TSQR.
We emphasize again that using a real number model to draw conclusions about
finite precision computations must be done with care. For example,
a bijective function depending on many variables could hypothetically
round to the same floating point output for all floating point inputs,
eliminating the need for any communication or computation
for its evaluation. But this is not the
case for the functions we are interested in.
Finally, we note that the counting must be done slightly
differently for the QR decomposition of complex data,
because the diagonal entries $R_{i,i}$ are generally
taken to be real. Alternatively, there is a degree of
freedom in choosing each row of $R$, which can be
multiplied by an arbitrary complex number of absolute
value 1.
\subsection{Extensions to two-way communication}
\label{sec:LowerBounds_TSQR_ss3}
While the result of the previous subsection is adequate for the results
of this paper,
we note that it may be extended as follows. For motivation, suppose that
P1 owns the scalar $x$, and wants to evaluate the polynomial
$\sum_{i=1}^{n} y_i x^{i-1}$, where P2 owns the vector $y^{(n)}$.
The above results can be used to show that P2 needs to send $n$
words to P1 (all the coefficients of the polynomial, for example).
But there is an obvious way to communicate just 2 words:
(1) P1 sends $x$ to P2, (2) P2 evaluates the polynomial, and
(3) P2 sends the value of the polynomial back to P1.
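Here is a toy sketch of that two-message protocol (plain Python; the
function name is ours, and Horner's rule stands in for P2's local work):
\begin{lstlisting}[language=Python]
def p2_evaluate(y, x):
    # Runs on P2, which owns the coefficients y.
    # Horner's rule for sum_{i=1}^{n} y[i-1] * x**(i-1).
    acc = 0.0
    for c in reversed(y):
        acc = acc * x + c
    return acc  # the single word P2 sends back

y = [3.0, -1.0, 2.0, 0.5]   # P2's n = 4 words of data
x = 1.7                     # P1's datum

# Phase 1: P1 sends x (1 word); P2 replies with the value (1 word).
value = p2_evaluate(y, x)
assert abs(value - sum(c * x**i for i, c in enumerate(y))) < 1e-12
\end{lstlisting}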
More generally, one can imagine $k$ phases, during each of which
P1 sends one message to P2 and then P2 sends one message to P1.
The contents of each message can be any smooth functions of all
the data available to the sending processor, either originally
or from prior messages. At the end of the $k$-th phase, P1 then
computes $f(x,y)$.
More specifically, the computation and communication proceeds as
follows:
\begin{itemize}
\item In Phase 1, P1 sends $g_1^{(m_1)} (x^{(m)})$ to P2
\item In Phase 1, P2 sends $h_1^{(n_1)} (y^{(n)}, g_1^{(m_1)} (x^{(m)}))$ to P1
\item In Phase 2, P1 sends $g_2^{(m_2)} (x^{(m)}, h_1^{(n_1)} (y^{(n)}, g_1^{(m_1)} (x^{(m)})))$ to P2
\item In Phase 2, P2 sends
$h_2^{(n_2)} (y^{(n)},
g_1^{(m_1)} (x^{(m)}),
g_2^{(m_2)} (x^{(m)}, h_1^{(n_1)} ( y^{(n)}, g_1^{(m_1)} (x^{(m)}))))$ to P1
\item $\dots$
\item In Phase $k$, P1 sends
$g_k^{(m_k)} (x^{(m)}, h_1^{(n_1)} (\dots), h_2^{(n_2)} (\dots) , \dots ,
h_{k-1}^{(n_{k-1})} (\dots) )$ to P2
\item In Phase $k$, P2 sends
$h_k^{(n_k)} (y^{(n)}, g_1^{(m_1)} (\dots), g_2^{(m_2)} (\dots) , \dots ,
g_{k}^{(m_k)} (\dots) )$ to P1
\item P1 computes
\begin{eqnarray*}
f^{(r)} (x^{(m)} , y^{(n)} ) & = &
F^{(r)} ( x^{(m)}, h_1^{(n_1)} ( y^{(n)}, g_1^{(m_1)} (x^{(m)})),
\\ & &
h_2^{(n_2)} ( y^{(n)},
g_1^{(m_1)} (x^{(m)}),
g_2^{(m_2)} (x^{(m)}, h_1^{(n_1)} ( y^{(n)}, g_1^{(m_1)} (x^{(m)})))),
\\ & & \dots
\\ & & h_k^{(n_k)} (y^{(n)}, g_1^{(m_1)} (\dots), g_2^{(m_2)} (\dots) , \dots ,
g_{k}^{(m_k)} (\dots) ))
\end{eqnarray*}
\end{itemize}
\lemma{
Suppose it is possible to compute
$f^{(r)}(x^{(m)},y^{(n)})$ on P1 by the scheme described above.
Suppose all the functions involved are continuously differentiable
on open sets. Let ${\underline{n}} = \sum_{i=1}^k n_i$ and
${\underline{m}} = \sum_{i=1}^k m_i$.
Then necessary conditions for this to be possible
are as follows.
\begin{enumerate}
\item
Suppose ${\underline{n}} < n$ and ${\underline{m}} \leq m$, i.e., P2 cannot communicate
all its information to P1, but P1 can potentially send its information
to P2.
Then there is a set $C_x \subset {\mathbb{R}}^m$
of dimension at least $m-{\underline{m}}$
and a set $C_y \subset {\mathbb{R}}^n$
of dimension at least $n-{\underline{n}}$ such that
for $(x,y) \in C = C_x \times C_y$,
the value of $f(x,y)$ is independent of $y$.
\item
If $r=n=m$, and for each fixed $x$ or fixed $y$,
$f^{(r)}(x^{(m)},y^{(n)})$ is a bijection,
then it is necessary and sufficient to send $n$ words from P2 to P1
to evaluate $f$.
\end{enumerate}
}
\rm
\begin{proof}
We define the sets $C_x$ and $C_y$ by the following constraint equations,
one for each communication step in the algorithm:
\begin{itemize}
\item
$\tilde{g}_1^{(m_1)} = g_1^{(m_1)} (x^{(m)})$ is a fixed constant,
placing $m_1$ smooth constraints on $x^{(m)}$.
\item
In addition to the previous constraint,
$\tilde{h}_1^{(n_1)} = h_1^{(n_1)} (y^{(n)}, g_1^{(m_1)} (x^{(m)}))$
is a fixed constant,
placing $n_1$ smooth constraints on $y^{(n)}$.
\item
In addition to the previous constraints, \linebreak
$\tilde{g}_2^{(m_2)} = g_2^{(m_2)} (x^{(m)}, h_1^{(n_1)} (y^{(n)}, g_1^{(m_1)} (x^{(m)})))$
is a fixed constant, placing $m_2$ more smooth constraints on $x^{(m)}$.
\item
In addition to the previous constraints, \linebreak
$\tilde{h}_2^{(n_2)} = h_2^{(n_2)} (y^{(n)},
g_1^{(m_1)} (x^{(m)}),
g_2^{(m_2)} (x^{(m)}, h_1^{(n_1)} ( y^{(n)}, g_1^{(m_1)} (x^{(m)}))))$
is a fixed constant,
placing $n_2$ more smooth constraints on $y^{(n)}$.
\item
\dots
\item
In addition to the previous constraints, \linebreak
$\tilde{g}_k^{(m_k)} = g_k^{(m_k)} (x^{(m)}, h_1^{(n_1)} (\dots), h_2^{(n_2)} (\dots) , \dots ,
h_{k-1}^{(n_{k-1})} (\dots) )$ is a fixed constant,
placing $m_k$ more smooth constraints on $x^{(m)}$.
\item
In addition to the previous constraints, \linebreak
$\tilde{h}_k^{(n_k)} = h_k^{(n_k)} (y^{(n)}, g_1^{(m_1)} (\dots), g_2^{(m_2)} (\dots) , \dots ,
g_{k}^{(m_k)} (\dots) )$ is a fixed constant,
placing $n_k$ more smooth constraints on $y^{(n)}$.
\end{itemize}
Altogether, we have placed
${\underline{n}} = \sum_{i=1}^k n_i < n$ smooth constraints on $y^{(n)}$ and
${\underline{m}} = \sum_{i=1}^k m_i \leq m$ smooth constraints on $x^{(m)}$,
which by the implicit function theorem define surfaces
$C_y (\tilde{h}_1^{(n_1)} , \dots , \tilde{h}_k^{(n_k)} )$
and
$C_x (\tilde{g}_1^{(m_1)} , \dots , \tilde{g}_k^{(m_k)} )$,
of dimensions at least
$n - {\underline{n}} > 0$ and $m - {\underline{m}} \geq 0$, respectively,
and parameterized by
$\{\tilde{h}_1^{(n_1)} , \dots , \tilde{h}_k^{(n_k)} \}$ and
$\{\tilde{g}_1^{(m_1)} , \dots , \tilde{g}_k^{(m_k)} \}$,
respectively.
For $x \in C_x$ and $y \in C_y$, the values communicated by
P1 and P2 are therefore constant. Therefore, for $x \in C_x$
and $y \in C_y$, $f(x,y) = F(x,h_1, \dots, h_k)$ depends only on $x$,
not on $y$. This completes the first part of the proof.
For the second part, we know that if $f(x,y)$ is a bijection
in $y$ for each fixed $x$, then by the first part
we cannot have ${\underline{n}} < n$, because otherwise
$f(x,y)$ does not depend on $y$ for certain values of $x$,
violating bijectivity. But if we can send ${\underline{n}} = n$ words from
P2 to P1, then it is clearly possible to compute $f(x,y)$ by
simply sending every component of $y^{(n)}$ from P2 to P1 explicitly.
\end{proof}
\rm
\corollary{
Suppose P1 owns the $c$-by-$c$ upper triangular matrix $R_1$,
and P2 owns the $c$-by-$c$ upper triangular matrix $R_2$, and
P1 wants to compute the R factor in the QR decomposition of
$\left[ \begin{array}{c} R_1 \\ R_2 \end{array} \right]$. Then it is necessary and sufficient
to communicate $c(c+1)/2$ words from P2 to P1 (in particular, the
entries of $R_2$ are sufficient).
}
\rm
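This corollary is easy to check numerically; a minimal sketch under the
same sign convention as before (helper names and seed are ours):
\begin{lstlisting}[language=Python]
import numpy as np

def r_factor(M):
    R = np.linalg.qr(M, mode='r')
    return np.sign(np.diag(R))[:, None] * R   # positive-diagonal normalization

rng = np.random.default_rng(7)
c = 5
R1 = r_factor(rng.standard_normal((c + 2, c)))   # c-by-c upper triangular on P1
R2 = r_factor(rng.standard_normal((c + 2, c)))   # c-by-c upper triangular on P2

# P1 can compute the R factor of [R1; R2] from the c(c+1)/2 entries of R2.
R_stacked = r_factor(np.vstack([R1, R2]))
assert np.allclose(R_stacked.T @ R_stacked, R1.T @ R1 + R2.T @ R2)
\end{lstlisting}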
We leave extensions to general communication patterns among
multiple processors to the reader.
\section{Tall-Skinny QR - TSQR}
\label{sec:TSQR_optimal}
In this section, we present the TSQR algorithm for
computing the QR factorization of an $m \times n$ matrix $A$,
stored in a 1-D block row layout.
We assume $m \geq n$, and typically $m \gg n$.
(See \cite{scalapackusersguide} for a description of 1D and 2D layouts.)
Subsection~\ref{sec:TSQR_optimal_tree} describes
parallel TSQR on a binary tree,
sequential TSQR on a ``flat'' tree, and then TSQR
as a reduction on an arbitrary tree.
Subsection~\ref{sec:TSQR_optimal_perfmodel} describes
performance models,
and Subsection~\ref{sec:TSQR_optimal_comparison}
compares TSQR to alternative algorithms,
both stable and unstable;
we will see that TSQR does asymptotically less
communication than the stable alternatives, and is
about as fast as the fastest unstable alternative
(CholeskyQR).
\subsection{TSQR as a reduction operation}
\label{sec:TSQR_optimal_tree}
We will describe a family of algorithms that takes
an $m$-by-$n$ matrix $A = [A_0 ; A_1 ; \cdots ; A_{p-1}]$
and produces the $R$ factor of its QR decomposition.
Here we use Matlab notation, so that the $A_i$ are
stacked atop one another, and we assume $A_i$ is
$m_i$-by-$n$. In later sections we will assume
$m_i \geq n$, but that is not necessary here.
The basic operation in our examples is to take
two or more matrices stacked atop one another,
like $\hat{A} = [A_0; A_1]$, and replace them by
the $R$ factor of $\hat{A}$. As long as
more than one matrix remains in the stack, the
reduction continues until one $R$ factor is left,
which we claim is the $R$ factor of the original $A$.
The pattern of which pairs (or larger groups) of matrices
are combined in one step forms what we will call a reduction tree.
We write this out explicitly for TSQR performed
on a binary tree starting with $p=4$ blocks.
We start by replacing each $A_i$ by its own
individual $R$ factor:
\begin{equation}
\label{eqn:TSQR_binarytree_1}
A =
\begin{pmatrix}
A_0 \\
A_1 \\
A_2 \\
A_3 \\
\end{pmatrix}
=
\begin{pmatrix}
Q_{0} R_{0} \\
Q_{1} R_{1} \\
Q_{2} R_{2} \\
Q_{3} R_{3} \\
\end{pmatrix}.
\end{equation}
Proceeding with the first set of reductions, we write
\begin{equation}
\label{eqn:TSQR_binarytree_2}
\begin{pmatrix}
R_{0} \\
R_{1} \\ \hline
R_{2} \\
R_{3} \\
\end{pmatrix}
=
\begin{pmatrix}
\begin{pmatrix}
R_{0} \\
R_{1} \\
\end{pmatrix} \\ \hline
\begin{pmatrix}
R_{2} \\
R_{3} \\
\end{pmatrix}
\end{pmatrix}
=
\begin{pmatrix}
Q_{01} R_{01} \\ \hline
Q_{23} R_{23} \\
\end{pmatrix}
\end{equation}
Thus $[R_0;R_1]$ is replaced by $R_{01}$
and $[R_2;R_3]$ is replaced by $R_{23}$.
Here and later, the subscripts on a matrix like $R_{ij}$ refer to
the original $A_i$ and $A_j$ on which they depend.
The next and last reduction is
\begin{equation}
\label{eqn:TSQR_binarytree_3}
\begin{pmatrix}
R_{01} \\
R_{23} \\
\end{pmatrix}
=
Q_{0123} R_{0123}.
\end{equation}
We claim that $R_{0123}$ is the $R$ factor
of the original $A=[A_0;A_1;A_2;A_3]$.
To see this, we combine
equations~(\ref{eqn:TSQR_binarytree_1}),
(\ref{eqn:TSQR_binarytree_2}) and
(\ref{eqn:TSQR_binarytree_3}) to write
\begin{equation}
\label{eqn:TSQR_binarytree_4}
A
=
\begin{pmatrix}
A_0 \\
A_1 \\
A_2 \\
A_3 \\
\end{pmatrix}
=
\left(
\begin{array}{c | c | c | c}
Q_{0} & & & \\ \hline
& Q_{1} & & \\ \hline
& & Q_{2} & \\ \hline
& & & Q_{3} \\
\end{array}
\right)
\cdot
\left(
\begin{array}{c | c}
Q_{01} & \\ \hline
& Q_{23} \\
\end{array}
\right)
\cdot
Q_{0123} \cdot R_{0123}
\end{equation}
\begin{comment}
\begin{eqnarray*}
A
& = &
\begin{pmatrix}
A_0 \\
A_1 \\
A_2 \\
A_3 \\
\end{pmatrix}
=
\left(
\begin{array}{c | c | c | c}
Q_{0} & & & \\ \hline
& Q_{1} & & \\ \hline
& & Q_{2} & \\ \hline
& & & Q_{3} \\
\end{array}
\right)
\cdot
\begin{pmatrix}
R_0 \\
R_1 \\
R_2 \\
R_3 \\
\end{pmatrix}
\\
& = &
\left(
\begin{array}{c | c | c | c}
Q_{0} & & & \\ \hline
& Q_{1} & & \\ \hline
& & Q_{2} & \\ \hline
& & & Q_{3} \\
\end{array}
\right)
\cdot
\left(
\begin{array}{c | c}
Q_{01} & \\ \hline
& Q_{23} \\
\end{array}
\right)
\cdot
\begin{pmatrix}
R_{01} \\ \hline
R_{23} \\
\end{pmatrix} \\
& = &
\left(
\begin{array}{c | c | c | c}
Q_{0} & & & \\ \hline
& Q_{1} & & \\ \hline
& & Q_{2} & \\ \hline
& & & Q_{3} \\
\end{array}
\right)
\cdot
\left(
\begin{array}{c | c}
Q_{01} & \\ \hline
& Q_{23} \\
\end{array}
\right)
\cdot
Q_{0123} \cdot R_{0123}
\; \; .
\end{eqnarray*}
\end{comment}
For this product to make sense, we must
choose the dimensions of the $Q$ factors consistently:
They can all be square,
or when all $m_i \geq n$,
they can all have $n$ columns (in which case each $R$ factor
will be $n$-by-$n$). (The usual representation of $Q$ factors by
Householder vectors encodes both possibilities.)
In either case, we have expressed $A$ as a product of
(block diagonal) orthogonal matrices
(a product which is therefore itself orthogonal) and the triangular
matrix $R_{0123}$. By uniqueness of the
QR decomposition (modulo signs of diagonal
entries of $R_{0123}$), this is the QR decomposition
of $A$. We note that we will not multiply the various
$Q$ factors together, but leave them represented by the
``tree of $Q$ factors'' implied by
equation~(\ref{eqn:TSQR_binarytree_4}).
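To see the claim in action, here is a minimal NumPy sketch of the
$R$-only reduction on this binary tree (our own illustration, not the
paper's implementation; signs of the rows of each $R$ are normalized to
remove the usual ambiguity):
\begin{lstlisting}[language=Python]
import numpy as np

def r_factor(M):
    R = np.linalg.qr(M, mode='r')
    return np.sign(np.diag(R))[:, None] * R   # fix the sign ambiguity

rng = np.random.default_rng(1)
n = 4
blocks = [rng.standard_normal((6, n)) for _ in range(4)]   # A0, ..., A3

# Leaves: replace each A_i by its R factor.
R0, R1, R2, R3 = (r_factor(B) for B in blocks)
# Level 1 of the binary tree.
R01, R23 = r_factor(np.vstack([R0, R1])), r_factor(np.vstack([R2, R3]))
# Root.
R0123 = r_factor(np.vstack([R01, R23]))

assert np.allclose(R0123, r_factor(np.vstack(blocks)))
\end{lstlisting}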
We abbreviate this algorithm with the following simple
notation, which makes the binary tree apparent:
\begin{center}
\setlength{\unitlength}{.5cm}
\begin{picture}(7,4)
\put(0.5,0.5){$A_3$}
\put(0.5,1.5){$A_2$}
\put(0.5,2.5){$A_1$}
\put(0.5,3.5){$A_0$}
\put(1.5,0.5){$\rightarrow$}
\put(1.5,1.5){$\rightarrow$}
\put(1.5,2.5){$\rightarrow$}
\put(1.5,3.5){$\rightarrow$}
\put(2.5,0.5){$R_3$}
\put(2.5,1.5){$R_2$}
\put(2.5,2.5){$R_1$}
\put(2.5,3.5){$R_0$}
\put(3.5,0.65){$\nearrow$}
\put(3.5,1.35){$\searrow$}
\put(3.5,2.65){$\nearrow$}
\put(3.5,3.35){$\searrow$}
\put(4.5,1.0){$R_{23}$}
\put(4.5,3.0){$R_{01}$}
\put(5.6,1.5){$\nearrow$}
\put(5.6,2.5){$\searrow$}
\put(6.5,2.0){$R_{0123}$}
\end{picture}
\end{center}
The notation has the following meaning: if one or more
arrows point to the same matrix, that matrix
is the $R$ factor of the matrix obtained by stacking
all the matrices at the other ends of the arrows atop
one another.
This notation not only makes the parallelism in the algorithm
apparent (all QR decompositions at the same depth in the
tree can potentially be done in parallel), but implies that
{\em any} tree leads to a valid QR decomposition. For example, conventional
QR decomposition may be expressed as the trivial tree
\begin{center}
\setlength{\unitlength}{.5cm}
\begin{picture}(4,4)
\put(0.5,0.5){$A_3$}
\put(0.5,1.5){$A_2$}
\put(0.5,2.5){$A_1$}
\put(0.5,3.5){$A_0$}
\put(1.5,0.85){\vector(3,2){1.4}}
\put(1.5,1.70){\vector(3,1){1.4}}
\put(1.5,2.75){\vector(3,-1){1.4}}
\put(1.5,3.55){\vector(3,-2){1.4}}
\put(3.0,2.0){$R_{0123}$}
\end{picture}
\end{center}
The tree we will use for sequential TSQR with limited
fast memory $W$ is the following so-called ``flat tree'':
\begin{center}
\setlength{\unitlength}{.5cm}
\begin{picture}(7,4)
\put(0.5,0.5){$A_3$}
\put(0.5,1.5){$A_2$}
\put(0.5,2.5){$A_1$}
\put(0.5,3.5){$A_0$}
\put(1.5,1.0){\vector(3,1){7}}
\put(1.5,1.75){\vector(3,1){5}}
\put(1.5,2.75){\vector(4,1){3}}
\put(1.5,3.75){\vector(1,0){1}}
\put(2.5,3.5){$R_0$}
\put(3.5,3.75){\vector(1,0){1}}
\put(4.5,3.5){$R_{01}$}
\put(5.55,3.75){\vector(1,0){.8}}
\put(6.5,3.5){$R_{012}$}
\put(8.0,3.75){\vector(1,0){.5}}
\put(8.5,3.5){$R_{0123}$}
\end{picture}
\end{center}
The idea of sequential TSQR is that if fast memory can only hold a little more than $m/p$
of the $m$ rows of $A$ (a little more than $m/4$ rows for the above tree), then the algorithm proceeds
by reading in the first $m/p$ rows of $A$, doing its QR decomposition,
keeping $R_0$ in fast memory but writing the representation of $Q_0$
back to slow memory,
and then repeatedly reading in the next $m/p$ rows, doing the QR decomposition
of them stacked below the $R$ factor already in memory,
and writing out the representation of the new $Q$ factor.
This way the entire matrix is read into fast memory once, and
the representation of all the $Q$ factors is written out to slow memory once,
which is clearly the minimal amount of data movement possible.
For an example of yet another TSQR reduction tree more suitable for
a hybrid parallel / out-of-core factorization, see
\cite[Section 4.3]{TSQR_technical_report}.
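A minimal sketch of the flat-tree loop just described (NumPy; for
brevity the $Q$ representations are simply discarded here instead of
being written back to slow memory):
\begin{lstlisting}[language=Python]
import numpy as np

def r_factor(M):
    R = np.linalg.qr(M, mode='r')
    return np.sign(np.diag(R))[:, None] * R

rng = np.random.default_rng(2)
m, n, p = 40, 5, 4
A = rng.standard_normal((m, n))
row_blocks = np.split(A, p)        # A_0, ..., A_{p-1}, m/p rows each

R = r_factor(row_blocks[0])        # read first block, keep R in "fast memory"
for Ak in row_blocks[1:]:
    # Read the next m/p rows and fold them into the current R factor;
    # in the real algorithm the new Q representation is written back out.
    R = r_factor(np.vstack([R, Ak]))

assert np.allclose(R, r_factor(A))
\end{lstlisting}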
It is evident that all these variants of TSQR are numerically stable,
since they just involve repeated applications of orthogonal
transformations. Note also that the local QR factorizations in both
the parallel and sequential TSQR algorithms can avoid storing and
performing arithmetic with zeros in the triangular factors. This
optimization still allows the use of high-performance QR algorithms
(such as the BLAS 3 $Y T Y^T$ representation of Schreiber and Van Loan
\cite{schreiber1989storage} and the recursive QR factorization of
Elmroth and Gustavson \cite{elmroth2000applying}) for the local
computations. For details, see Demmel et al.\ \cite[Section
7]{TSQR_technical_report}.
We close this subsection by observing that the general theory of
reduction operations applied to associative operators
(e.g., optimizing the shape of the reduction tree
\cite{nishtala2008performance}, or how to compute prefix sums
of $a_1 \star a_2 \star \cdots \star a_p$
where $\star$ could be scalar addition, matrix multiplication, etc.)
applies to QR decomposition as well, because the mapping from
$[A_0;A_1]$ to its $R$ factor is associative (modulo roundoff and
the choice of the signs of the diagonal entries).
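For illustration, a quick numerical check of this associativity (modulo
the usual positive-diagonal convention; the operator name is ours):
\begin{lstlisting}[language=Python]
import numpy as np

def rstar(X, Y):
    # The reduction operator: map [X; Y] to its (sign-normalized) R factor.
    R = np.linalg.qr(np.vstack([X, Y]), mode='r')
    return np.sign(np.diag(R))[:, None] * R

rng = np.random.default_rng(5)
A0, A1, A2 = (rng.standard_normal((7, 3)) for _ in range(3))

# (A0 * A1) * A2 == A0 * (A1 * A2), up to roundoff.
assert np.allclose(rstar(rstar(A0, A1), A2), rstar(A0, rstar(A1, A2)))
\end{lstlisting}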
\begin{comment}
\subsection{Optimizing local factorizations in TSQR}
\label{sec:TSQR_optimal_localopt}
Most of the QR factorizations performed during TSQR involve
matrices consisting of one or more triangular factors
stacked atop one another. We can ignore this zero structure and
still get a correct factorization, but if we do we will do
several times as many floating point operations as necessary
(up to 5$\times$ in the parallel case and 2$\times$ in the
sequential case). Previous authors have suggested using
Givens rotations to avoid this \cite{pothen1989distributed},
but this would make it hard to achieve BLAS3 performance.
Our observation is that not only is it possible to use
blocked Householder transformations that both do minimal
arithmetic and permit BLAS3 performance, but in fact we
can organize the algorithm to get {\em better} BLAS3
performance than conventional QR decomposition.
The empirical data justifying this claim appears
elsewhere (\cite[Section 12]{TSQR_technical_report}), but we
outline the algorithm here.
We illustrate with the QR decomposition of
a pair $[R_0 ; R_1 ]$ of 5-by-5 triangular matrices.
Their sparsity pattern, and that of the Householder
vectors from their QR decomposition are shown below:
\begin{equation}\label{eq:fact_2rs}
\begin{pmatrix}
R_0 \\
R_1\\
\end{pmatrix}
=
\begin{pmatrix}
x & x & x & x & x \\
& x & x & x & x \\
& & x & x & x \\
& & & x & x \\
& & & & x \\
x & x & x & x & x \\
& x & x & x & x \\
& & x & x & x \\
& & & x & x \\
& & & & x \\
\end{pmatrix}
\Longrightarrow
{\rm Householder} =
\begin{pmatrix}
1 & & & & \\
& 1 & & & \\
& & 1 & & \\
& & & 1 & \\
& & & & 1 \\
x & x & x & x & x \\
& x & x & x & x \\
& & x & x & x \\
& & & x & x \\
& & & & x \\
\end{pmatrix}.
\end{equation}
This picture suggests that it is straightforward to adapt both the unblocked
Householder decomposition and its blocked version in \cite{schreiber1989storage},
by storing the Householder vectors on top of the zeroed-out entries as usual,
and simply by changing the lengths of the vectors involved in updates
of the trailing matrix.
For the case of two $n$-by-$n$ triangular matrices, exploiting
this structure lowers the operation count to $\frac{2}{3}n^3$
from about $\frac{10}{3}n^3$.
It is also possible to do this when $q$ triangles are
stacked atop one another, generating $(q-1)n(n+1)/2$
parameters.
See \cite[Section 6]{TSQR_technical_report}) for details.
Most importantly, we can apply Elmroth and Gustavson's
recursive QR algorithm \cite{elmroth2000applying} to the matrices
in fast memory (in the sequential case) or local processor
memory (in the parallel case). Various authors have observed
that Elmroth's and Gustavson's algorithm works well on tall-skinny
matrices, but this applies only if the data fits in fast (or local)
memory, which is what we can guarantee.
\end{comment}
\subsection{Performance models for TSQR}
\label{sec:TSQR_optimal_perfmodel}
We present performance models for parallel and sequential TSQR.
We outline their derivations, which are straightforward based
on the previous descriptions, and leave details to
\cite[Section 8]{TSQR_technical_report}.
In the next section we will compare the models for TSQR
with alternative algorithms.
The runtimes will be functions of $m$ and $n$.
In the parallel case, the runtime will also depend on
the number of processors $P$, where we assume each
processor stores $m/P$ rows of the input matrix $A$.
(It is easiest to think of the rows as contiguous,
but if they are not, we simply get the QR decomposition
of a row-permutation of $A$, which is still just the QR
decomposition). In the sequential case the runtime will
depend on $W$, the size of fast memory.
We assume fast memory is large enough to
contain at least $n$ rows of $A$, and an $R$ factor,
i.e. $W \stackrel{>}{\approx} \frac{3}{2}n^2$.
In both parallel and sequential cases, we let
$\gamma = $ time per flop, $\beta = $ reciprocal
bandwidth (time per word) and $\alpha = $ latency
(time per message). We assume no overlap of
communication and computation (as said before, this
could speed up the algorithm at most 2$\times$).
All logarithms are in base 2.
A parallel TSQR factorization on a binary reduction tree performs the
following computations along the critical path: one local QR
factorization of a fully dense $m/P \times n$ matrix, and $\log P$
factorizations, each of a $2n \times n$ matrix consisting of two $n
\times n$ upper triangular matrices. The factorization requires
$\frac{2mn^2}{P} + \frac{2n^3}{3} \log P$
flops (ignoring lower order terms here and elsewhere) and
$\log P$ messages, and transfers a total of
$\frac{1}{2} n^2 \log P$
words between processors.
Thus, the total run time is
\begin{equation}
\label{eqn:TSQR_par_runtime}
\text{Time}_{\text{Par.\ TSQR}}(m,n,P) =
\left(
\frac{2mn^2}{P} + \frac{2n^3}{3} \log P
\right) \gamma +
\left(
\frac{1}{2} n^2 \log P
\right) \beta +
\left( \log P \right) \alpha
\; \; .
\end{equation}
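For reference, a direct transcription of this model into code (the
machine parameters $\gamma$, $\beta$, $\alpha$ must be measured for a
particular machine; the function name is ours):
\begin{lstlisting}[language=Python]
from math import log2

def time_par_tsqr(m, n, P, gamma, beta, alpha):
    # Modeled runtime of parallel TSQR on a binary tree,
    # with lower-order terms omitted as in the text.
    flops = 2 * m * n**2 / P + (2 * n**3 / 3) * log2(P)
    words = 0.5 * n**2 * log2(P)
    messages = log2(P)
    return flops * gamma + words * beta + messages * alpha
\end{lstlisting}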
Now we consider sequential TSQR.
To first order, TSQR performs the same number of floating point operations
as standard Householder QR, namely $2mn^2 - 2n^3/3$.
As described before, sequential TSQR moves $2mn$ words by dividing
$A$ into submatrices that are as large as possible, i.e., $m'$ rows
each such that $m' \cdot n + \frac{n(n+1)}{2} \leq W$,
or $m' \approx (W - \frac{n(n+1)}{2})/n = \widetilde{W}/n$,
where $\widetilde{W} = W - \frac{n(n+1)}{2}$. Assuming
$A$ is stored so that groups of $m'$ rows are in contiguous
memory locations, the number of messages sequential TSQR needs
to send is $\frac{2mn}{m' n} = \frac{2mn}{\widetilde{W}}$.
Thus the runtime for sequential TSQR is
\begin{equation}
\label{eqn:TSQR_seq_runtime}
\text{Time}_{\text{Seq.\ TSQR}}(m,n,W) =
\left(
2mn^2 - \frac{2n^3}{3}
\right) \gamma +
\left( 2mn \right) \beta +
\left(
\frac{2mn}{\widetilde{W}}
\right) \alpha
\; \; .
\end{equation}
We note that $\widetilde{W} \stackrel{>}{\approx} 2 W / 3$, so that
the number of messages
$2mn / \widetilde{W} \stackrel{<}{\approx} 3mn / W$.
\subsection{Comparison of TSQR to alternative algorithms}
\label{sec:TSQR_optimal_comparison}
We compare parallel and sequential TSQR to alternative algorithms, both
stable and unstable: Classical Gram-Schmidt (CGS), Modified Gram-Schmidt (MGS),
Cholesky QR, and Householder QR, as implemented in LAPACK and ScaLAPACK;
of these, only Householder QR is numerically stable in all cases.
In summary, TSQR not only has the lowest complexity (comparing highest order terms),
but has asymptotically lower communication complexity than the only
numerically stable alternatives.
We outline our approach and leave details of counting to
\cite[Section 9]{TSQR_technical_report}.
MGS and CGS can be either right-looking or left-looking. For CGS either
alternative has the same communication complexity, but for MGS the right-looking
variant has much less latency, so we present its performance model.
Cholesky QR forms $A^TA$, computes its upper triangular Cholesky
factor $R$, and forms $Q = A R^{-1}$. It can obviously be unstable, but
is frequently used when $A$ is expected to be well-conditioned
(see section~\ref{S:related-work}).
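For concreteness, a minimal NumPy sketch of CholeskyQR (our own
illustration; a triangular solve is used instead of forming $R^{-1}$
explicitly):
\begin{lstlisting}[language=Python]
import numpy as np

def cholesky_qr(A):
    # CholeskyQR: one pass over A plus an n-by-n Cholesky factorization.
    # Fast, but unstable when A is ill-conditioned.
    W = A.T @ A                      # in parallel: local Gram matrices, then all-reduce
    L = np.linalg.cholesky(W)        # W = L L^T, so R = L^T
    Q = np.linalg.solve(L, A.T).T    # Q = A L^{-T} = A R^{-1}
    return Q, L.T

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 4))
Q, R = cholesky_qr(A)
assert np.allclose(Q @ R, A)
\end{lstlisting}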
\begin{comment}
In contrast to parallel TSQR, parallel Householder QR needs $\log P$ messages
for each of the $n$ columns in order to compute the $n$
Householder vectors. A straightforward analysis of the
right-looking ScaLAPACK algorithm PDGEQRF
with the same 1D layout as TSQR (the left-looking version is similar)
yields a runtime of
\begin{equation}
\label{eqn:1D_ScaLAPACK_par_runtime}
Time_{1D ScaLAPACK}(m,n,P) =
(\frac{2mn^2}{P} - \frac{2n^3}{3}) \gamma +
(\frac{1}{2} n^2 \log P) \beta +
(2n \log P) \alpha
\; \; .
\end{equation}
We see that while TSQR does more flops than ScaLAPACK, it is a lower order term
(since we assume $m \gg n$).
{\em (Need to fix Tables 1 and/or 10 in tech report.)}
TSQR has the same bandwidth requirement as
ScaLAPACK. Most importantly, TSQR sends $2n$ times fewer messages.
\end{comment}
We need to say a little more about sequential Householder QR.
LAPACK's right-looking DGEQRF repeatedly sweeps over the entire
matrix, potentially moving about as many words between slow and fast
memory as it performs floating point operations, a factor $\Theta (n)$
more than sequential TSQR; a left-looking version of
DGEQRF would be similar.
To make a fairer comparison, we model the performance
of a left-looking QR algorithm that was optimized to
minimize memory movement in an out-of-DRAM environment,
i.e., where fast memory is DRAM and slow memory is disk.
This routine, PFDGEQRF \cite{dazevedo1997design}, was
designed to combine ScaLAPACK's parallelism with minimal
disk accesses.
As originally formulated, it uses ScaLAPACK's
parallel QR factorization PDGEQRF to perform the current
panel factorization in DRAM,
but we assume here that it is running sequentially since we are
only interested in modeling the traffic between slow and fast memory.
PFDGEQRF is a left-looking method, as usual with out-of-DRAM algorithms
(left-looking schemes do fewer writes than right-looking schemes,
which matters because writes are often more expensive).
PFDGEQRF keeps two panels in memory: a left
panel of fixed width $b$, and the current panel being factored, whose
width $c$ can expand to fill the available memory.
Details of the algorithm and analysis may be found in
\cite{dazevedo1997design} and
\cite[Appendix F]{TSQR_technical_report}, where we choose
$b$ and $c$ to minimize disk traffic;
we summarize the performance model in Table~\ref{tbl:TSQR:perfcomp:seq}.
\begin{comment}
\begin{eqnarray}
\label{eqn:1D_PFDGEQRF_seq_runtime}
Time_{1D seq. PFDGEQRF}(m,n,W)
& = &
(2mn^2 - \frac{2n^3}{3}) \gamma
\nonumber \\
& &
+
(\frac{mn^2}{2W}(m - \frac{n}{3}) +\frac{3n}{2}(m - \frac{n}{2})) \beta
\nonumber \\
& &
+ (\frac{mn}{2W}(n+4)) \alpha
\; \; .
\end{eqnarray}
In other words, it performs the same number of floating point operations
as sequential TSQR (or DGEQRF), but moves about $\frac{mn}{4W}$ times as
many words. Note that $\frac{mn}{W}$ is how many times larger the matrix
is than fast memory. It also sends about $\frac{n}{4}$ times as many
messages as sequential TSQR.
\end{comment}
\begin{table}
\centering
\begin{tabular}{l|c|c|c}
Parallel algorithm & \# flops & \# messages & \# words \\ \hline
TSQR & $\frac{2mn^2}{P} + \frac{2n^3}{3} \log(P)$
& $\log(P)$
& $\frac{n^2}{2} \log(P)$ \\
\lstinline!PDGEQRF!
& $\frac{2mn^2}{P} - \frac{2n^3}{3P}$
& $2n \log(P)$
& $\frac{n^2}{2} \log(P)$ \\
MGS & $\frac{2mn^2}{P}$
& $2n \log(P)$
& $\frac{n^2}{2}\log(P)$ \\
CGS & $\frac{2mn^2}{P}$
& $2n \log(P)$
& $\frac{n^2}{2}\log(P)$ \\
CholeskyQR & $\frac{2mn^2}{P} + \frac{n^3}{3}$
& $\log(P)$
& $\frac{n^2}{2}\log(P)$ \\
\end{tabular}
\caption{Performance models of various parallel QR
algorithms for ``tall-skinny'' matrices, i.e., with $m \gg n$.
We show only the best-performing versions of MGS (right-looking)
and CGS (left-looking).}
\label{tbl:TSQR:perfcomp:par}
\end{table}
\begin{table}
\small
\centering
\begin{tabular}{l|c|c|c}
Sequential algorithm & \# flops & \# messages & \# words \\\hline
TSQR & $2mn^2 - \frac{2n^3}{3}$
& $\frac{2mn}{\widetilde{W}}$
& $2mn - \frac{n(n+1)}{2}
+ \frac{mn^2}{\widetilde{W}}$ \\
\lstinline!PFDGEQRF!
& $2mn^2 - \frac{2 n^3}{3}$
& $\frac{2mn}{W} + \frac{mn^2}{2W}$
& $\frac{m^2 n^2}{2W} - \frac{mn^3}{6W}
+ \frac{3mn}{2} - \frac{3n^2}{4}$ \\
MGS & $2mn^2$
& $\frac{2mn^2}{\widetilde{W}}$
& $\frac{3mn}{2} + \frac{m^2 n^2}{2 \widetilde{W}}$ \\
CholeskyQR & $2mn^2 + \frac{n^3}{3}$
& $\frac{6mn}{W}$
& $3mn$ \\
\end{tabular}
\caption{Performance models of various sequential QR
algorithms for ``tall-skinny'' matrices, i.e., with $m \gg n$.
\lstinline!PFDGEQRF! is our model of ScaLAPACK's
out-of-DRAM QR factorization; $W$ is the fast memory size, and
$\widetilde{W} = W - n(n+1)/2$. Lower-order terms omitted.}
\label{tbl:TSQR:perfcomp:seq}
\end{table}
Examining Table~\ref{tbl:TSQR:perfcomp:par}, we see that
all parallel algorithms have the same highest order term in their
flop counts, $\frac{2mn^2}{P}$, and also use the same bandwidth, $\frac{n^2}{2} \log P$,
but that parallel TSQR sends
$2n$ times fewer messages than the only stable alternative
(PDGEQRF), and is about as fast as the fastest unstable method
(Cholesky QR). In other words, only parallel TSQR is simultaneously
fastest and stable.
Examining Table~\ref{tbl:TSQR:perfcomp:seq}, we see a similar
story, with sequential TSQR sending about $\frac{mn}{4W}$ times fewer
words and $\frac{n}{4}$ times fewer messages than the only
stable alternative, PFDGEQRF. Note that $\frac{mn}{W}$ is how
many times larger the entire matrix is than fast memory.
Since we assume $W \geq n^2$, the number of words TSQR sends is
less than the number of words CholeskyQR sends.
\endinput
\section{Other ``tall skinny'' QR algorithms}\label{S:TSQR:perfcomp}
There are many other algorithms besides TSQR for computing the QR
factorization of a tall skinny matrix. They differ in terms of
performance and accuracy, and may store the $Q$ factor in different
ways that favor certain applications over others. In this section, we
model the performance of the following competitors to TSQR:
\begin{itemize}
\item Four different Gram-Schmidt variants
\item CholeskyQR (see \cite{stwu:02})
\item Householder QR, with a block row layout
\end{itemize}
Each includes parallel and sequential versions. For Householder QR,
we base our parallel model on the ScaLAPACK routine
\lstinline!PDGEQRF!, and the sequential model on left-looking blocked
Householder. Our left-looking blocked Householder implementation is
modeled on the out-of-core ScaLAPACK routine \lstinline!PFDGEQRF!,
which is left-looking instead of right-looking in order to minimize
the number of writes to slow memory (the total amount of data moved
between slow and fast memory is the same for both left-looking and
right-looking blocked Householder QR). See Appendix \ref{S:PFDGEQRF}
for details. In the subsequent Section \ref{S:TSQR:stability}, we
summarize the numerical accuracy of these QR factorization methods,
and discuss their suitability for different applications.
In the parallel case, CholeskyQR and TSQR have comparable numbers of
messages and communicate comparable numbers of words, but CholeskyQR
requires a constant factor fewer flops along the critical path.
However, the $Q$ factor computed by TSQR is always numerically
orthogonal, whereas the $Q$ factor computed by CholeskyQR loses
orthogonality proportionally to $\kappa_2(A)^2$. The variants of
Gram-Schmidt require at best a factor $n$ more messages than these two
algorithms, and lose orthogonality at best proportionally to
$\kappa_2(A)$.
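The following sketch illustrates this stability gap on a synthetic
matrix with prescribed condition number (the construction and the
choice $\kappa = 10^6$ are ours, for illustration only):
\begin{lstlisting}[language=Python]
import numpy as np

def cholesky_qr_q(A):
    L = np.linalg.cholesky(A.T @ A)
    return np.linalg.solve(L, A.T).T   # Q = A R^{-1}

rng = np.random.default_rng(1)
m, n, kappa = 100, 5, 1e6
# A = U * diag(singular values) * V^T with condition number kappa.
U = np.linalg.qr(rng.standard_normal((m, n)))[0]
V = np.linalg.qr(rng.standard_normal((n, n)))[0]
A = U @ np.diag(np.logspace(0, -np.log10(kappa), n)) @ V.T

def orth_loss(Q):
    return np.linalg.norm(Q.T @ Q - np.eye(Q.shape[1]))

print(orth_loss(np.linalg.qr(A)[0]))  # Householder QR: ~1e-15
print(orth_loss(cholesky_qr_q(A)))    # CholeskyQR: ~ machine eps * kappa^2
\end{lstlisting}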
\input{gram-schmidt}
\subsection{CholeskyQR}\label{SS:TSQR:perfcomp:CholeskyQR}
\begin{algorithm}[h]
\caption{CholeskyQR factorization}\label{Alg:CholeskyQR}
\begin{algorithmic}[1]
\Require{$A$: $m \times n$ matrix with $m \geq n$}
\State{$W := A^T A$}\Comment{(All-)reduction}
\State{Compute the Cholesky factorization $L \cdot L^T$ of $W$}
\State{$Q := A L^{-T}$}
\Ensure{$[Q, L^T]$ is the QR factorization of $A$}
\end{algorithmic}
\end{algorithm}
CholeskyQR (Algorithm \ref{Alg:CholeskyQR}) is a QR factorization that
requires only one all-reduction \cite{stwu:02}. In the parallel case, it
requires $\log_2 P$ messages, where $P$ is the number of processors.
In the sequential case, it reads the input matrix only once. Thus, it
is optimal in the same sense that TSQR is optimal. Furthermore, the
reduction operator is matrix-matrix addition rather than a QR
factorization of a matrix with comparable dimensions, so CholeskyQR
should always be faster than TSQR. Section \ref{S:TSQR:perfres}
supports this claim with performance data on a cluster. Note that in
the sequential case, $P$ is the number of blocks, and we assume
conservatively that fast memory must hold $2mn/P$ words at once (so
that $W = 2mn/P$).
\begin{table}
\centering
\begin{tabular}{l|c|c|c}
Algorithm & \# flops & \# messages & \# words \\ \hline
Parallel CholeskyQR & $\frac{2mn^2}{P} + \frac{n^3}{3}$
& $\log(P)$
& $\frac{n^2}{2}\log(P)$ \\
Sequential CholeskyQR & $2mn^2 + \frac{n^3}{3}$
& $\frac{6mn}{W}$
& $3mn$ \\
\end{tabular}
\caption{Performance model of the parallel and sequential
CholeskyQR factorization. We assume $W = 2mn/P$ in the sequential
case, where $P$ is the number of blocks and $W$ is the number of
floating-point words that fit in fast memory. Lower-order terms
omitted. All parallel terms are counted along the critical path.}
\label{tbl:CholeskyQR:counts}
\end{table}
CholeskyQR begins by computing half of the symmetric matrix $A^T A$.
In the parallel case, each processor $i$ computes half of its
component $A_i^T A_i$ locally. In the sequential case, this happens
one block at a time. Since this result is a symmetric $n \times n$
matrix, the operation takes only $mn^2/P + O(mn/P)$ flops. These local
components are then summed using a(n) (all-)reduction, which can also
exploit symmetry. The final operation, the Cholesky factorization,
requires $n^3/3 + O(n^2)$ flops. (Choosing a more stable or robust
factorization does not improve the accuracy bound, as the accuracy has
already been lost by computing $A^T A$.) Finally, the $Q := A L^{-T}$
operation costs $mn^2/P + O(mn/P)$ flops per block of $A$. Table
\ref{tbl:CholeskyQR:counts} summarizes both the parallel and
sequential performance models. In Section \ref{S:TSQR:stability}, we
compare the accuracy of CholeskyQR to that of TSQR and other ``tall
skinny'' QR factorization algorithms.
\subsection{Householder QR}\label{SS:TSQR:perfcomp:HQR}
\label{SS:TSQR:perfcomp:ScaLAPACK}
Householder QR uses orthogonal reflectors to reduce a matrix to upper
triangular form, one column at a time (see e.g., \cite{govl:96}). In
the current version of LAPACK and ScaLAPACK, the reflectors are
coalesced into block columns (see e.g., \cite{schreiber1989storage}).
This makes trailing matrix updates more efficient, but the panel
factorization is still standard Householder QR, which works one column
at a time. These panel factorizations are an asymptotic latency
bottleneck in the parallel case, especially for tall and skinny
matrices. Thus, we model parallel Householder QR without considering
block updates. In contrast, we will see that operating on blocks of
columns can offer asymptotic bandwidth savings in sequential
Householder QR, so it pays to model a block column version.
\subsubsection{Parallel Householder QR}
ScaLAPACK's parallel QR factorization routine, \lstinline!PDGEQRF!,
uses a right-looking Householder QR approach \cite{lawn80}. The cost
of \lstinline!PDGEQRF! depends on how the original matrix $A$ is
distributed across the processors. For comparison with TSQR, we
assume the same block row layout on $P$ processors.
\lstinline!PDGEQRF! computes an explicit representation of the $R$
factor, and an implicit representation of the $Q$ factor as a sequence
of Householder reflectors. The algorithm overwrites the upper
triangle of the input matrix with the $R$ factor. Thus, in our case,
the $R$ factor is stored only on processor zero, as long as $m/P \geq
n$. We assume $m/P \geq n$ in order to simplify the performance
analysis.
Section \ref{SS:TSQR:localQR:BLAS3structured} describes BLAS 3
optimizations for Householder QR. \lstinline!PDGEQRF! exploits these
techniques in general, as they accelerate the trailing matrix updates.
We do not count floating-point operations for these optimizations
here, since they do nothing to improve the latency bottleneck in the
panel factorizations.
In \lstinline!PDGEQRF!, some processors may need to perform fewer
flops than other processors, because the number of rows in the current
working column and the current trailing matrix of $A$ decrease by one
with each iteration. With the assumption that $m/P \geq n$, however,
all but the first processor must do the same amount of work at each
iteration. In the tall skinny regime, ``flops on the critical path''
(which is what we count) is a good approximation of ``flops on each
processor.'' We count floating-point operations, messages, and words
transferred by parallel Householder QR on general matrix layouts in
Section \ref{S:CAQR-counts}; in particular, Equation
\eqref{Eq:ScaLAPACK:time} in that section gives a performance model.
\input{Tables/tsqr-alt-par}
Table \ref{tbl:QR:perfcomp:par} compares the performance of all the
parallel QR factorizations discussed here. We see that 1-D TSQR and
CholeskyQR save both messages and bandwidth over MGS\_R and
ScaLAPACK's \lstinline!PDGEQRF!, but at the expense of a higher-order
$n^3$ flops term.
\subsubsection{Sequential Householder QR}
LAPACK Working Note \#118 describes a left-looking out-of-DRAM QR
factorization \lstinline!PFDGEQRF!, which is implemented as an
extension of ScaLAPACK \cite{dazevedo1997design}.
It uses ScaLAPACK's
parallel QR factorization \lstinline!PDGEQRF! to perform the current
panel factorization in DRAM. Thus, it is able to exploit parallelism.
We assume here, though, that it is running sequentially, since we are
only interested in modeling the traffic between slow and fast memory.
\lstinline!PFDGEQRF! is a left-looking method, as usual with
out-of-DRAM algorithms. The code keeps two panels in memory: a left
panel of fixed width $b$, and the current panel being factored, whose
width $c$ can expand to fill the available memory. Appendix
\ref{S:PFDGEQRF} describes the method in more detail with performance
counts, and Algorithm \ref{Alg:PFDGEQRF:outline} in the Appendix gives
an outline of the code.
See Equation \eqref{eq:PFDGEQRF:runtime:W} in Appendix
\ref{S:PFDGEQRF} for the following counts. The \lstinline!PFDGEQRF!
algorithm performs
\[
2mn^2 - \frac{2n^3}{3}
\]
floating-point arithmetic operations, just like any sequential
Householder QR factorization. (Here and elsewhere, we omit
lower-order terms.) It transfers a total of about
\[
\frac{m^2 n^2}{2W}
- \frac{m n^3}{6W}
+ \frac{3mn}{2}
- \frac{3n^2}{4}
\]
floating-point words between slow and fast memory, and accesses slow
memory (counting both reads and writes) about
\[
\frac{mn^2}{2W}
+ \frac{2mn}{W}
- \frac{n}{2}
\]
times. In contrast, sequential TSQR only requires
\[
\frac{2mn}{\widetilde{W}}
\]
slow memory accesses, where $\widetilde{W} = W - n(n+1)/2$, and only
transfers
\[
2mn
- \frac{n(n+1)}{2}
+ \frac{mn^2}{\widetilde{W}}
\]
words between slow and fast memory (see Equation
\eqref{eq:TSQR:seq:modeltimeW:factor} in Appendix
\ref{S:TSQR-seq-detailed}). We note that we expect $W$ to be a
reasonably large multiple of $n^2$, so that $\widetilde{W} \approx W$.
Table \ref{tbl:QR:perfcomp:seq} compares the performance of the
sequential QR factorizations discussed in this section, including our
modeled version of \lstinline!PFDGEQRF!.
\input{Tables/tsqr-alt-seq}
\begin{comment}
\subsection{Applying the $Q$ or $Q^T$ factor}
The various algorithms in this section compute different
representations of the $Q$ factor. CholeskyQR, CGS, and MGS each form
an explicit version of the ``thin'' $Q$ factor. ``Thin'' means that
it has dimensions $m \times n$ and does not specify a basis for
$\mathcal{N}(A^T)$ (the left nullspace of $A$). In contrast, TSQR and
Householder QR each compute the ``full'' $Q$ factor as an implicit
operator; ``full'' means that the operator has dimensions $m \times n$
(implicitly in this case) and does specify a particular basis for
$\mathcal{N}(A^T)$. TSQR stores $Q$ in a tree structure, whereas
Householder QR stores it as a collection of Householder transforms,
distributed over blocks of the matrix.
\end{comment}
\endinput
\section{TSQR implementation}\label{S:TSQR:impl}
In this section, we describe the TSQR factorization algorithm in
detail. We also build a performance model of the algorithm, based on
the machine model in Section \ref{S:perfmodel} and the operation
counts of the local QR factorizations in Section \ref{S:TSQR:localQR}.
Parallel TSQR performs $2mn^2/P + \frac{2n^3}{3}\log P$ flops,
compared to the $2mn^2/P - 2n^3/(3P)$ flops performed by ScaLAPACK's
parallel QR factorization \lstinline!PDGEQRF!, but requires $2n$ times
fewer messages. The sequential TSQR factorization performs the same
number of flops as sequential blocked Householder QR, but requires
$\Theta(n)$ times fewer transfers between slow and fast memory, and a
factor of $\Theta(m/\sqrt{W})$ times fewer words transferred, in which
$W$ is the fast memory size.
\subsection{Reductions and all-reductions}
In Section \ref{S:reduction}, we gave a detailed description of
(all-)reductions. We did so because the TSQR factorization is itself an
(all-)reduction, in which additional data (the components of the $Q$
factor) is stored at each node of the (all-)reduction tree. Applying
the $Q$ or $Q^T$ factor is also a(n) (all-)reduction.
If we implement TSQR with an all-reduction, then we get the final $R$
factor replicated over all the processors. This is especially useful
for Krylov subspace methods. If we implement TSQR with a reduction,
then the final $R$ factor is stored only on one processor. This
avoids redundant computation, and is useful both for block column
factorizations for 2-D block (cyclic) matrix layouts, and for solving
least squares problems when the $Q$ factor is not needed.
\subsection{Factorization}
We now describe the parallel and sequential TSQR factorizations for
the 1-D block row layout. (We omit the obvious generalization to a
1-D block cyclic row layout.)
Parallel TSQR computes an $R$ factor which is duplicated over all the
processors, and a $Q$ factor which is stored implicitly in a
distributed way. The algorithm overwrites the lower trapezoid of
$A_{i}$ with the set of Householder reflectors for that block, and the
$\tau$ array of scaling factors for these reflectors is stored
separately. The matrix $R_{i,k}$ is stored as an $n \times n$ upper
triangular matrix for all stages $k$. Algorithm \ref{Alg:TSQR:par}
shows an implementation of parallel TSQR, based on an all-reduction.
(Note that running Algorithm \ref{Alg:TSQR:par} on a matrix stored in
a 1-D block cyclic layout still works, though it performs an implicit
block row permutation on the $Q$ factor.)
\begin{algorithm}[h]
\caption{Parallel TSQR}
\label{Alg:TSQR:allred:blkrow}
\label{Alg:TSQR:par}
\begin{algorithmic}[1]
\Require{$\Pi$ is the set of $P$ processors}
\Require{All-reduction tree with height $L$. If $P$ is a power of two
and we want a binary all-reduction tree, then $L = \log_2 P$.}
\Require{$i \in \Pi$: my processor's index}
\Require{The $m \times n$ input matrix $A$ is distributed in a 1-D
block row layout over the processors; $A_{i}$ is the block of rows
belonging to processor $i$}.
\State{Compute $[Q_{i,0}, R_{i,0}] := qr(A_{i})$ using sequential
Householder QR}\label{Alg:TSQR:allred:blkrow:QR1}
\For{$k$ from 1 to $L$}
\If{I have any neighbors in the all-reduction tree at this level}
\State{Send (non-blocking) $R_{i,k-1}$ to each neighbor not myself}
\State{Receive (non-blocking) $R_{j,k-1}$ from each neighbor $j$ not myself}
\State{Wait until the above sends and receives
complete}\Comment{Note: \emph{not} a global barrier.}
\State{Stack the upper triangular $R_{j,k-1}$ from all neighbors
(including my own $R_{i,k-1}$), by order of processor ids, into
a $qn \times n$ array $C$, in which $q$ is the number of
neighbors.}
\State{Compute $[Q_{i,k}, R_{i,k}] := qr(C)$ using Algorithm
\ref{Alg:QR:qnxn} in Section \ref{SS:TSQR:localQR:structured}}
\Else
\State{$R_{i,k} := R_{i,k-1}$}
\State{$Q_{i,k} := I_{n \times n}$}\Comment{Stored implicitly}
\EndIf
\State{Processor $i$ has an implicit representation of its block column
of $Q_{i,k}$. The blocks in the block column are $n \times n$
each and there are as many of them as there are neighbors at stage
$k$ (including $i$ itself). We don't need to compute the blocks
explicitly here.}
\EndFor
\Ensure{$R_{i,L}$ is the $R$ factor of $A$, for all processors $i \in \Pi$.}
\Ensure{The $Q$ factor is implicitly represented by $\{Q_{i,k}\}$:
$i \in \Pi$, $k \in \{0, 1, \dots, L\}\}$.}
\end{algorithmic}
\end{algorithm}
Sequential TSQR begins with an $m \times n$ matrix $A$ stored in slow
memory. The matrix $A$ is divided into $P$ blocks $A_0$, $A_1$,
$\dots$, $A_{P-1}$, each of size $m/P \times n$. (Here, $P$ has
nothing to do with the number of processors.) Each block of $A$ is
loaded into fast memory in turn, combined with the $R$ factor from the
previous step using a QR factorization, and the resulting $Q$ factor
written back to slow memory. Thus, only one $m/P \times n$ block of
$A$ resides in fast memory at one time, along with an $n \times n$
upper triangular $R$ factor. Sequential TSQR computes an $n \times n$
$R$ factor which ends up in fast memory, and a $Q$ factor which is
stored implicitly in slow memory as a set of blocks of Householder
reflectors. Algorithm \ref{Alg:TSQR:seq} shows an implementation of
sequential TSQR.
\begin{algorithm}[h]
\caption{Sequential TSQR}\label{Alg:TSQR:seq}
\begin{algorithmic}[1]
\Require{The $m \times n$ input matrix $A$, stored in slow memory,
is divided into $P$ row blocks $A_0$, $A_1$, $\dots$, $A_{P-1}$}
\State{Load $A_0$ into fast memory}
\State{Compute $[Q_{00}, R_{00}] := qr(A_{0})$ using standard
sequential QR. Here, the $Q$ factor is represented implicitly by
an $m/P \times n$ lower triangular array of Householder reflectors
$Y_{00}$ and their $n$ associated scaling factors $\tau_{00}$}
\State{Write $Y_{00}$ and $\tau_{00}$ back to slow memory; keep
$R_{00}$ in fast memory}
\For{$k = 1$ to $P - 1$}
\State{Load $A_k$}
\State{Compute $[Q_{0k}, R_{0k}] = qr([R_{0,k-1}; A_k])$
using the structured method analyzed in Appendix
\ref{SSS:localQR-flops:seq:2blocks}. Here, the $Q$ factor
is represented implicitly by a full $m/P \times n$ array of
Householder reflectors $Y_{0k}$ and their $n$ associated
scaling factors $\tau_{0k}$.}
\State{Write $Y_{0k}$ and $\tau_{0k}$ back to slow memory;
keep $R_{0k}$ in fast memory}
\EndFor
\Ensure{$R_{0,P-1}$ is the $R$ factor in the QR factorization of $A$,
and is in fast memory}
\Ensure{The $Q$ factor is implicitly represented by $Q_{00}$,
$Q_{01}$, $\dots$, $Q_{0,P-1}$, and is in slow memory}
\end{algorithmic}
\end{algorithm}
\subsubsection{Performance model}
In Appendix \ref{S:TSQR-par-detailed}, we develop a performance model
for parallel TSQR on a binary tree. Appendix \ref{S:TSQR-seq-detailed}
does the same for sequential TSQR on a flat tree.
A parallel TSQR factorization on a binary reduction tree performs the
following computations along the critical path: One local QR
factorization of a fully dense $m/P \times n$ matrix, and $\log P$
factorizations, each of a $2n \times n$ matrix consisting of two $n
\times n$ upper triangular matrices. The factorization requires
\[
\frac{2mn^2}{P} + \frac{2n^3}{3} \log P
\]
flops and $\log P$ messages, and transfers a total of $(1/2) n^2 \log
P$ words between processors. In contrast, parallel Householder QR
requires
\[
\frac{2mn^2}{P} - \frac{2n^3}{3}
\]
flops and $2n \log P$ messages, but also transfers $(1/2) n^2 \log P$
words between processors. For details, see Table
\ref{tbl:QR:perfcomp:par} in Section \ref{S:TSQR:perfcomp}.
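To put these counts in perspective, consider (purely as an
illustration, with assumed values) $n = 40$ and $P = 1024$: both
algorithms transfer about $(1/2) n^2 \log P = 8000$ words, but
parallel TSQR sends only $\log P = 10$ messages along the critical
path, versus $2n \log P = 800$ for parallel Householder QR, an
$80\times$ reduction in latency cost.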
Sequential TSQR on a flat tree performs the same number of flops as
sequential Householder QR, namely
\[
2mn^2 - \frac{2n^3}{3}
\]
flops.
However, sequential TSQR only transfers
\[
2mn - \frac{n(n+1)}{2} + \frac{mn^2}{\widetilde{W}}
\]
words between slow and fast memory, in which $\widetilde{W} = W -
n(n+1)/2$, and only performs
\[
\frac{2mn}{\widetilde{W}}
\]
transfers between slow and fast memory. In contrast, blocked
sequential Householder QR transfers
\[
\frac{m^2 n^2}{2W}
- \frac{mn^3}{6W}
+ \frac{3mn}{2}
- \frac{3n^2}{4}
\]
words between slow and fast memory, and performs
\[
\frac{2mn}{W} + \frac{mn^2}{2W}
\]
transfers between slow and fast memory. For details, see Table
\ref{tbl:QR:perfcomp:seq} in Section \ref{S:TSQR:perfcomp}.
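Again as an illustration with assumed values, take $m = 10^6$, $n =
40$, and a fast memory of $W = 10^6$ words, so that $\widetilde{W}
\approx W$: sequential TSQR transfers roughly $2mn = 8 \times 10^7$
words (the dominant term), whereas blocked sequential Householder QR
transfers roughly $m^2 n^2 / (2W) = 8 \times 10^8$ words, an order of
magnitude more.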
\begin{comment}
The reduction version of Algorithm \ref{Alg:TSQR:allred:blkrow} is
like the above algorithm, except that child processors send only to
the one parent at each stage, and the parent does all the local
factorizations. The reduction version does no redundant work and only
stores the intermediate $Q$ factors at the parent nodes at each level.
The algorithm is performed in place. During the TSQR, in the
trapezoidal lower $m/p \times n$ matrix, processor $i$ stores the
Householder vectors $Y_{i0}$ corresponding to the local QR
factorization of its leaf node. In the upper triangular part, it
stores first the $R_{i0}$ matrix corresponding to the local QR
factorization. For each level $k$ of the tree at which processor $i$
participates, it will store the $R_{level(i,k),k}$ factor. At the
last QR factorization, it will store the Householder vectors
$Y_{level(i,k),k}$.
\subsection{1-D block cyclic row layout}
Suppose now that the $m \times n$ matrix $A$ is distributed in a 1-D
block cyclic layout over the $p$ processors. There are a total of $R$
blocks, each of size $m/R \times n$. On each processor, there are $b
= R/p$ blocks. (We could imagine a more general layout but this would
make the analysis unnecessarily complicated.) The block $A_{i,j}$
is the $j$-th such block on processor $i$.
TSQR with the block cyclic layout follows naturally from TSQR on the
block row layout. Divide each of the actual processors conceptually
into $b$ ``virtual processors,'' for a total of $R$ virtual
processors. Then, run the same TSQR algorithm as above on the
``virtual processors,'' except that messages between two physical
processors on each level of the all-reduce tree are coalesced into one
message. For example, consider $p = 4$, $R = 8$, with a binary
all-reduce tree. Physical processors 0 and 1 exchange two messages at
the first level of the all-reduce, one message for each virtual
processor on a physical processor. We coalesce those two messages,
each of size $n(n+1)/2$, into one message of size $n(n+1)$. Since $p$
divides $R$, the block cyclic layout ensures that we can always
coalesce messages between virtual processors into a single message
between physical processors. Algorithm
\ref{Alg:TSQR:allred:blkcycrow} demonstrates TSQR with a block cyclic
layout. Note that the procedure needs an additional step at the end
in order to have a single $R$ factor on each physical processor,
instead of $b$ of them (one for each virtual processor). We've chosen
to implement this step as a single $bn \times n$ QR factorization; it
could also be done in the same manner as standard TSQR (except
serialized instead of parallel).
\begin{algorithm}[h]
\caption{TSQR, block cyclic row layout}\label{Alg:TSQR:allred:blkcycrow}
\begin{algorithmic}[1]
\Require{All-reduction tree with height $L = \log_2 p$}
\Require{$i \in \Pi$: my processor's index}
\Require{The $m \times n$ matrix $A$ is distributed in
a block cyclic row layout over the processors $\Pi$.}
\For{$j = 0$ to $b-1$}
\State{Compute $[Q_{i,0,j}, R_{i,0,j}] :=
qr(A_{i,j})$}
\EndFor
\For{$k$ from 1 to $L$}
\If{I have any neighbors at this level}
\State{Send (non-blocking) $R_{i,k-1,0:b-1}$ to each neighbor not myself}
\State{Receive (non-blocking) $R_{r,k-1,0:b-1}$ from each neighbor
$r$ not myself}
\State{Wait until the above sends and receives
complete}\Comment{Note: \emph{not} a global barrier.}
\For{$j = 0$ to $b-1$}
\State{Stack the $R_{r,k-1,j}$ (including $r = i$), by order of
processor ids, into an array $C_j$ of dimension $qn \times n$}
\State{Compute $[Q_{i,k,j}, R_{i,k,j}] = qr(C_j)$}
\EndFor
\Else
\For{$j = 0$ to $b-1$}
\State{$R_{i,k,j} := R_{i,k-1,j}$}
\State{$Q_{i,k,j} := I_{n \times n}$}\Comment{Stored implicitly}
\EndFor
\EndIf
\EndFor
\If{$b > 1$}\Comment{All-reduce between virtual processors on the same
physical processor}
\State{Stack the $R_{i,L,j}$, $j = 0, 1, \dots, b-1$, into an array $C$ of dimension
$bn \times n$}
\State{Compute $[Q_{i}^{(L+1)}, R_{i}^{(L+1)}] = qr(C)$}
\Else
\State{$Q_{i}^{(L+1)} := I_{n \times n}$}\Comment{Stored implicitly}
\State{$R_{i}^{(L+1)} := R_{i,L,0}$}
\EndIf
\Ensure{$R_{i}^{(L+1)}$ is the $R$ factor of $A$, for all $i \in
\Pi$.}
\Ensure{The $Q$ factor is implicitly represented by $\{Q_{i,k,j} :
  i \in \Pi,\ k \in \{0, 1, \dots, L\},\ j \in \{0, 1, \dots,
  b-1\}\}$, together with the $Q_{i}^{(L+1)}$.}
\end{algorithmic}
\end{algorithm}
\end{comment}
\subsection{Applying $Q$ or $Q^T$ to vector(s)}\label{SS:TSQR:application}
Just like Householder QR, TSQR computes an implicit representation of
the $Q$ factor. One need not generate an explicit representation of
$Q$ in order to apply the $Q$ or $Q^T$ operators to one or more
vectors. In fact, generating an explicit $Q$ matrix requires just as
many messages as applying $Q$ or $Q^T$. (The performance model for
applying $Q$ or $Q^T$ is an obvious extension of the factorization
performance model; the parallel performance model is developed in
Appendix \ref{SS:TSQR-par-detailed:apply} and the sequential
performance model in Appendix \ref{SS:TSQR-seq-detailed:apply}.)
Furthermore, the implicit representation can be updated or downdated,
by using standard techniques (see e.g., \cite{govl:96}) on the local
QR factorizations recursively. The $s$-step Krylov methods mentioned
in Section \ref{S:motivation} employ updating and downdating
extensively.
In the case of the ``thin'' $Q$ factor (in which the vector input is
of length $n$), applying $Q$ involves a kind of broadcast operation
(which is the opposite of a reduction). If the ``full'' $Q$ factor is
desired, then applying $Q$ or $Q^T$ is a kind of all-to-all (like the
fast Fourier transform). Computing $Q \cdot x$ runs through the nodes
of the (all-)reduction tree from root to leaves, whereas computing
$Q^T \cdot y$ runs from leaves to root.
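As an illustration of these two traversal directions, the following
sketch (ours, assuming NumPy and the explicit flat-tree factors
\texttt{Q\_factors} returned by the sequential TSQR sketch after
Algorithm \ref{Alg:TSQR:seq}; a real code would instead apply the
implicit Householder representations) applies $Q^T$ from the first
block to the last, and $Q$ in the reverse order:
\begin{verbatim}
import numpy as np

def apply_Qt(Q_factors, y, P):
    # Q^T y: walk the flat tree from leaves to root.  The split of y
    # must match the row split used when factoring A.
    yblocks = np.array_split(y, P)
    t = Q_factors[0].T @ yblocks[0]
    for Qk, yk in zip(Q_factors[1:], yblocks[1:]):
        t = Qk.T @ np.concatenate([t, yk])
    return t                            # an n-vector

def apply_Q(Q_factors, x):
    # Q x (thin Q): walk the tree from root to leaves, peeling off one
    # output block per stage.
    n = len(x)
    t, out = x, []
    for Qk in reversed(Q_factors[1:]):
        z = Qk @ t
        t, out = z[:n], out + [z[n:]]   # z[n:] is this block's output
    out.append(Q_factors[0] @ t)        # rows of the first block
    return np.concatenate(out[::-1])
\end{verbatim}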
\endinput
\begin{comment}
\begin{algorithm}[h]
\caption{Sequential QR factorization of $qn \times n$ matrix
$A$, with structure as in Equation \eqref{eq:fact_2rs}}
\label{Alg:QR:qnxn}
\begin{algorithmic}[1]
\For{$j = 1$ to $n$}
\State{Let $\mathcal{I}_j$ be the index set $\{j$, $n+1 : n+j$,
$\dots$, $(q-1)n + 1 : (q-1)n + j\}$}
\State{$w := A(\mathcal{I}_j, j)$}\Comment{Gather pivot column of $A$ into $w$}
\State{$[\tau_j, v] := \House(w)$}\Comment{Compute Householder
reflection, normalized so that $v(1) = 1$}
\State{$X := A(\mathcal{I}_j, j+1:n)$}\Comment{Gather from $A$ into
$X$. One would normally perform the update in place; we use a
copy to improve clarity.}
\State{$X := (I - \tau_j v v^T) X$}\Comment{Apply Householder
reflection}
\State{$A(\mathcal{I}_j \setminus \{j\}, j) :=
v(2:end)$}\Comment{Scatter $v(2:end)$ back into $A$}
\State{$A(\mathcal{I}_j, j+1:n) := X$}\Comment{Scatter $X$ back into $A$}
\EndFor
\end{algorithmic}
\end{algorithm}
Algorithm \ref{Alg:QR:qnxn} shows a standard, column-by-column
sequential QR factorization of the $qn \times n$ matrix of upper
triangular $n \times n$ blocks, using structured Householder
reflectors. To analyze the cost, consider the components:
\begin{enumerate}
\item $\House(w)$: the cost of this is dominated by finding the norm of
the vector $w$ and scaling it.
\item Applying a length $n$ Householder reflector, whose vector
contains $k$ nonzeros, to an $n \times b$ matrix $A$. This is an
operation $(I - \tau v v^T) A = A - v (\tau (v^T A))$.
\end{enumerate}
Appendix \ref{S:localQR-flops} counts the arithmetic operations in
detail. There, we find that the total cost is about
\[
\frac{2}{3}(q-1)n^3
\]
flops, to factor a $qn \times n$ matrix (we showed the specific case
$q = 2$ above). The flop count increases by about a factor of $3
\times$ if we ignore the structure of the inputs.
\subsection{BLAS 3 structured Householder QR}\label{SS:TSQR:localQR:BLAS3structured}
\begin{algorithm}[h]
\caption{Computing $Y$ and $T$ in the $(Y,T)$ representation
of a collection of $n$ Householder reflectors. Modification
of an algorithm in \cite{schreiber1989storage} so that
$P_j = I - \tau_j v_j v_j^T$.}
\label{Alg:YT:scaled}
\begin{algorithmic}[1]
\Require{$n$ Householder reflectors $\rho_j = I - \tau_j v_j v_j^T$}
\For{$j = 1$ to $n$}
\If{$j = 1$}
\State{$Y := [ v_1 ]$}
\State{$T := [ -\tau_j ]$}
\Else
\State{$z := -\tau_j (T (Y^T v_j))$}
\State{$Y := \begin{pmatrix} Y & v_j \end{pmatrix}$}
\State{$T := \begin{pmatrix} T & z \\ 0 & -\tau_j \\ \end{pmatrix}$}
\EndIf
\EndFor
\Ensure{$Y$ and $T$ satisfy $\rho_1 \cdot \rho_2 \cdot \dots \rho_n
= I + Y T Y^T$}
\end{algorithmic}
\end{algorithm}
(Flop-count sketch for Algorithm \ref{Alg:YT:scaled}: for each $j = 2,
\dots, n$, step $j$ performs dot products of lengths $1 + (q-1)(i-1)$
and $i$ for $i = 1, \dots, j-1$, plus a scaling of length $j-1$; the
recorded total is about $qn^3/3 - qn^2 + 2qn/3 + 3n^2/2 - 3n/2$.)
Representing the local $Q$ factor as a collection of Householder
transforms means that the local QR factorization is dominated by BLAS
2 operations (dense matrix-vector products). A number of authors have
shown how to reformulate the standard Householder QR factorization so
as to coalesce multiple Householder reflectors into a block, so that
the factorization is dominated by BLAS 3 operations. For example,
Schreiber and Van Loan describe a so-called YT representation of a
collection of Householder reflectors \cite{schreiber1989storage}.
BLAS 3 transformations like this are now standard in LAPACK and
ScaLAPACK.
We can adapt these techniques in a straightforward way in order to
exploit the structured Householder vectors depicted in Equation
\eqref{eq:house:2nxn}. Schreiber and Van Loan use a slightly
different definition of Householder reflectors: $\rho_j = I - 2v_j
v_j^T$, rather than LAPACK's $\rho_j = I - \tau_j v_j v_j^T$.
Schreiber and Van Loan's $Y$ matrix is the matrix of Householder
vectors $Y = [v_1\, v_2\, \dots \, v_n]$; its construction requires no
additional computation as compared with the usual approach. However,
the $T$ matrix must be computed, which increases the flop count by a
constant factor. The cost of computing the $T$ factor for the $qn
\times n$ factorization above is about $qn^3 / 3$. Algorithm
\ref{Alg:YT:scaled} shows the resulting computation. Note that the
$T$ factor requires $n(n-1)/2$ additional storage per processor on
which the $T$ factor is required.
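For concreteness, a direct transcription of Algorithm
\ref{Alg:YT:scaled} (our sketch, assuming NumPy, with the reflector
vectors stored as the columns of \texttt{Y} and the scaling factors in
\texttt{tau}):
\begin{verbatim}
import numpy as np

def build_T(Y, tau):
    # Form T so that (I - tau_1 v_1 v_1^T) ... (I - tau_n v_n v_n^T)
    # equals I + Y T Y^T, where v_j is the j-th column of Y.
    n = Y.shape[1]
    T = np.array([[-tau[0]]])
    for j in range(1, n):
        vj = Y[:, j]
        z = -tau[j] * (T @ (Y[:, :j].T @ vj))
        T = np.block([[T, z[:, None]],
                      [np.zeros((1, j)), np.array([[-tau[j]]])]])
    return T
\end{verbatim}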
\subsection{Recursive Householder QR}\label{SS:TSQR:localQR:recursive}
In Section \ref{S:TSQR:perfres}, we show large performance gains
obtained by using Elmroth and Gustavson's recursive algorithm for the
local QR factorizations \cite{elmroth2000applying}. The authors
themselves observed that their approach works especially well with
``tall thin'' matrices, and others have exploited this effect in their
applications (see e.g., \cite{rabani2001outcore}). The recursive
approach outperforms LAPACK because it makes the panel factorization a
BLAS 3 operation. In LAPACK, the panel QR factorization consists only
of matrix-vector and vector-vector operations. This suggests why
recursion helps especially with tall, thin matrices. Elmroth and
Gustavson's basic recursive QR does not perform well when $n$ is
large, as the flop count grows cubically in $n$, so they opt for a
hybrid approach that divides the matrix into panels of columns, and
performs the panel QR factorizations using the recursive method.
Elmroth and Gustavson use exactly the same representation of the $Q$
factor as Schreiber and Van Loan \cite{schreiber1989storage}, so the
arguments of the previous section still apply.
\subsection{Trailing matrix update}\label{SS:TSQR:localQR:trailing}
Section \ref{S:CAQR} will describe how to use TSQR to factor matrices
in general 2-D layouts. For these layouts, once the current panel
(block column) has been factored, the panels to the right of the
current panel cannot be factored until the transpose of the current
panel's $Q$ factor has been applied to them. This is called a
\emph{trailing matrix update}. The update lies along the critical
path of the algorithm, and consumes most of the floating-point
operations in general. This holds regardless of whether the
factorization is left-looking, right-looking, or some hybrid of the
two.\footnote{For descriptions and illustrations of the difference
between left-looking and right-looking factorizations, see e.g.,
\cite{dongarra1996key}.} Thus, it's important to make the updates
efficient.
The trailing matrix update consists of a sequence of applications of
local $Q^T$ factors to groups of ``neighboring'' trailing matrix
blocks. (Section \ref{S:reduction} explains the meaning of the word
``neighbor'' here.) We now explain how to do one of these local $Q^T$
applications. (Do not confuse the local $Q$ factor, which we label
generically as $Q$, with the entire input matrix's $Q$ factor.)
Let the number of rows in a block be $M$, and the number of columns
in a block be $N$. We assume $M \geq N$. Suppose that we want to
apply the local $Q^T$ factor from the above $qN \times N$ matrix
factorization, to two blocks $C_0$ and $C_1$ of a trailing matrix
panel. (This is the case $q = 2$, which we assume for simplicity.)
We divide each of the $C_i$ into a top part and a bottom part:
\[
C_i =
\begin{pmatrix}
C_i(1:N, :) \\
C_i(N+1 : M, :)
\end{pmatrix} =
\begin{pmatrix}
C_i' \\
C_i''
\end{pmatrix}.
\]
Our goal is to perform the operation
\[
\begin{pmatrix}
R_0 & C_0' \\
R_1 & C_1' \\
\end{pmatrix}
=
\begin{pmatrix}
QR & C_0' \\
& C_1' \\
\end{pmatrix}
=
Q \cdot
\begin{pmatrix}
R & \hat{C}_0' \\
& \hat{C}_1' \\
\end{pmatrix},
\]
in which $Q$ is the local $Q$ factor and $R$ is the local $R$ factor
of $[R_0; R_1]$. Implicitly, the local $Q$ factor has the dimensions
$2M \times 2M$, as Section \ref{S:TSQR:algebra} explains. However, it
is not stored explicitly, and the implicit operator that is stored has
the dimensions $2N \times 2N$. We assume that processors $P_0$ and
$P_1$ each store a redundant copy of $Q$, that processor $P_2$ has
$C_0$, and that processor $P_3$ has $C_1$. We want to apply $Q^T$ to
the matrix
\[
C =
\begin{pmatrix}
C_0 \\
C_1 \\
\end{pmatrix}.
\]
First, note that $Q$ has a specific structure. If stored explicitly,
it would have the form
\[
Q =
\begin{pmatrix}
\begin{matrix}
U_{00} & \\
& I_{M - N}
\end{matrix} &
\begin{matrix}
U_{01} & \\
& \mathbf{0}_{M - N}
\end{matrix} \\
\begin{matrix}
U_{10} & \\
& \mathbf{0}_{M - N}
\end{matrix} &
\begin{matrix}
U_{11} & \\
& I_{M - N}
\end{matrix} \\
\end{pmatrix},
\]
in which the $U_{ij}$ blocks are each $N \times N$. This makes the
only nontrivial computation when applying $Q^T$ the following:
\begin{equation}\label{eq:localQTupdate}
\begin{pmatrix}
\hat{C}_0' \\
\hat{C}_1' \\
\end{pmatrix}
:=
\begin{pmatrix}
U_{00}^T & U_{10}^T \\
U_{01}^T & U_{11}^T
\end{pmatrix}
\cdot
\begin{pmatrix}
C_0' \\
C_1' \\
\end{pmatrix}.
\end{equation}
We see, in particular, that only the uppermost $N$ rows of each block
of the trailing matrix need to be read or written. Note that it is
not necessary to construct the $U_{ij}$ factors explicitly; we need
only operate on $C_0'$ and $C_1'$ with $Q^T$.
If we are using a standard Householder QR factorization (without BLAS
3 optimizations), then computing Equation \eqref{eq:localQTupdate} is
straightforward. When one wishes to exploit structure (as in Section
\ref{SS:TSQR:localQR:structured}) and use a local QR factorization
that exploits BLAS 3 operations (as in Section
\ref{SS:TSQR:localQR:BLAS3structured}), more interesting load balance
issues arise. We will discuss these in the following section.
\subsubsection{Trailing matrix update with structured BLAS 3 QR}
An interesting attribute of the YT representation is that the $T$
factor can be constructed using only the $Y$ factor and the $\tau$
multipliers. This means that it is unnecessary to send the $T$ factor
for updating the trailing matrix; the receiving processors can each
compute it themselves. However, one cannot compute $Y$ from $T$
and $\tau$ in general.
When the YT representation is used, the update of the trailing
matrices takes the following form:
\[
\begin{pmatrix}
\hat{C_0}' \\
\hat{C_1}' \\
\end{pmatrix}
:=
\left( I -
\begin{pmatrix}
I \\
Y_1 \\
\end{pmatrix}
\cdot
T^T
\cdot
\begin{pmatrix}
I \\
Y_1 \\
\end{pmatrix}^T
\right)
\begin{pmatrix}
C_0' \\
C_1' \\
\end{pmatrix}.
\]
Here, $Y_1$ starts on processor $P_1$, $C_0'$ on processor $P_2$, and
$C_1'$ on processor $P_3$. The matrix $T$ must be computed from
$\tau$ and $Y_1$; we can assume that $\tau$ is on processor $P_1$.
The updated matrices $\hat{C_0}'$ and $\hat{C_1}'$ are on processors
$P_2$ resp.\ $P_3$.
There are many different ways to perform this parallel update. The
data dependencies impose a directed acyclic graph (DAG) on the flow of
data between processors. One can find the best way to do the
update by realizing an optimal computation schedule on the DAG. Our
performance models can be used to estimate the cost of a particular
schedule.
Here is a straightforward but possibly suboptimal schedule. First,
assume that $Y_1$ and $\tau$ have already been sent to $P_3$. Then,
\begin{multicols}{2}
$P_2$'s tasks:
\begin{itemize}
\item Send $C_0'$ to $P_3$
\item Receive $W$ from $P_3$
\item Compute $\hat{C_0}' = C_0' - W$
\end{itemize}
\vspace{1cm}
$P_3$'s tasks:
\begin{itemize}
\item Compute the $T$ factor and $W := T^T (C_0' + Y_1^T C_1')$
\item Send $W$ to $P_2$
\item Compute $\hat{C_1}' := C_1' - Y_1 W$
\end{itemize}
\end{multicols}
However, this leads to some load imbalance, since $P_3$ performs more
computation than $P_2$. It does not help to compute $T$ on $P_0$ or
$P_1$ before sending it to $P_3$, because the computation of $T$ lies
on the critical path in any case. We will see in Section \ref{S:CAQR}
that part of this computation can be overlapped with the
communication.
For $q \geq 2$, we can write the update operation as
\[
\begin{pmatrix}
\hat{C_0}' \\
\hat{C_1}' \\
\vdots \\
\hat{C_{q-1}}' \\
\end{pmatrix}
:=
\left( I -
\begin{pmatrix}
I_{N \times N} \\
Y_1 \\
\vdots \\
Y_{q-1} \\
\end{pmatrix}
T^T
\begin{pmatrix}
I_{N \times N} & Y_1^T & \dots & Y_{q-1}^T \\
\end{pmatrix}
\right)
\begin{pmatrix}
C_0' \\
C_1' \\
\vdots \\
C_{q-1}' \\
\end{pmatrix}.
\]
If we let
\[
D := C_0' + Y_1^T C_1' + Y_2^T C_2' + \dots + Y_{q-1}^T C_{q-1}'
\]
be the ``inner product'' part of the update operation formulas, then
we can rewrite the update formulas as
\[
\begin{aligned}
\hat{C_0}' &:= C_0' - T^T D, \\
\hat{C_1}' &:= C_1' - Y_1 T^T D, \\
\vdots & \\
\hat{C_{q-1}}' &:= C_{q-1}' - Y_{q-1} T^T D.
\end{aligned}
\]
As the branching factor $q$ gets larger, the load imbalance becomes
less of an issue. The inner product $D$ should be computed as an
all-reduce in which the processor owning $C_i$ receives $Y_i$ and
$T$. Thus, all the processors but one will have the same
computational load.
\end{comment}
\begin{comment}
\subsection{Coalescing trailing matrix updates}
Usually, one panel factorization results in a whole sequence of
updates of the trailing matrix. One could exploit the fact that
consecutive updates are performed to the trailing matrix, and combine
them. Elmroth and Gustavson describe how to coalesce multiple
trailing matrix updates, in order to increase computational intensity
\cite{elmroth1998new}.
Suppose there are two consecutive transformations to apply, $(I - Y_1
T_1 Y_1^T)$ and $(I - Y_2 T_2 Y_2^T)$. They can be applied together
as $(I - Y T Y^T)$, where $Y$ is the concatenation of the two sets of
Householder vectors $Y_1$ and $Y_2$ (such that $Y = (Y_1 Y_2)$) and
$T$ is as follows:
\[
T =
\begin{pmatrix}
T_1 & -T_1 Y_1^T Y_2 T_2 \\
& T_2 \\
\end{pmatrix}.
\]
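A quick check of this formula (added for completeness): expanding $(I
- Y_1 T_1 Y_1^T)(I - Y_2 T_2 Y_2^T)$ produces the cross term $+ Y_1
(T_1 Y_1^T Y_2 T_2) Y_2^T$, which is exactly what the off-diagonal
block $-T_1 Y_1^T Y_2 T_2$ of $T$ contributes, with the correct sign,
inside $I - Y T Y^T$ with $Y = (Y_1\ Y_2)$.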
We have not yet investigated whether applying this technique would
improve performance in our case.
\end{comment}
\endinput
\begin{comment}
-------------------------------------
We begin with
parallel TSQR on a binary tree of four processors ($P = 4$), and later
show sequential TSQR on a flat tree with four blocks.
\subsection{Parallel TSQR on a binary tree}
\begin{figure}
\begin{center}
\includegraphics[angle=0,scale=0.7]{../TechReport2007/FIGURES/tsqr-binary-tree-4procs}
\caption{Execution of the parallel TSQR factorization on a binary
tree of four processors. The gray boxes indicate where local QR
factorizations take place. The $Q$ and $R$ factors each have
two subscripts: the first is the sequence number within that stage,
and the second is the stage number.}
\label{Fi:TSQR:algebra:par4}
\end{center}
\end{figure}
The basic idea of using a reduction on a binary tree to compute a tall
skinny QR factorization has been rediscovered more than once (see
e.g., \cite{cunha2002new,pothen1989distributed}). (TSQR was also
suggested by Golub et al.\ \cite{golub1988parallel}, but they did not
reduce the number of messages from $n \log P$ to $\log P$.) We repeat
it here in order to show its generalization to a whole space of
algorithms. First, we decompose the $m \times n$ matrix $A$ into four
$m/4 \times n$ block rows:
\[
A =
\begin{pmatrix}
A_0 \\
A_1 \\
A_2 \\
A_3 \\
\end{pmatrix}.
\]
Then, we independently compute the QR factorization of each block row:
\[
\begin{pmatrix}
A_0 \\
A_1 \\
A_2 \\
A_3 \\
\end{pmatrix}
=
\begin{pmatrix}
Q_{00} R_{00} \\
Q_{10} R_{10} \\
Q_{20} R_{20} \\
Q_{30} R_{30} \\
\end{pmatrix}.
\]
This is ``stage 0'' of the computation, hence the second subscript 0
of the $Q$ and $R$ factors. The first subscript indicates the block
index at that stage. (Abstractly, we use the Fortran convention that
the first index changes ``more frequently'' than the second index.)
Stage 0 operates on the $P = 4$ leaves of the tree. We can write this
decomposition instead as a block diagonal orthogonal matrix times a
column of blocks:
\[
A =
\begin{pmatrix}
Q_{00} R_{00} \\
Q_{10} R_{10} \\
Q_{20} R_{20} \\
Q_{30} R_{30} \\
\end{pmatrix}
=
\left(
\begin{array}{c | c | c | c}
Q_{00} & & & \\ \hline
& Q_{10} & & \\ \hline
& & Q_{20} & \\ \hline
& & & Q_{30} \\
\end{array}
\right)
\cdot
\begin{pmatrix}
R_{00} \\ \hline
R_{10} \\ \hline
R_{20} \\ \hline
R_{30} \\
\end{pmatrix},
\]
although we do not have to store it this way. After this stage 0,
there are $P = 4$ of the $R$ factors. We group them into successive
pairs $R_{i,0}$ and $R_{i+1,0}$, and do the QR factorizations of
grouped pairs in parallel:
\[
\begin{pmatrix}
R_{00} \\
R_{10} \\ \hline
R_{20} \\
R_{30} \\
\end{pmatrix}
=
\begin{pmatrix}
\begin{pmatrix}
R_{00} \\
R_{10} \\
\end{pmatrix} \\ \hline
\begin{pmatrix}
R_{20} \\
R_{30} \\
\end{pmatrix}
\end{pmatrix}
=
\begin{pmatrix}
Q_{01} R_{01} \\ \hline
Q_{11} R_{11} \\
\end{pmatrix}.
\]
As before, we can rewrite the last term as a block diagonal orthogonal
matrix times a column of blocks:
\[
\begin{pmatrix}
Q_{01} R_{01} \\ \hline
Q_{11} R_{11} \\
\end{pmatrix}
=
\left(
\begin{array}{c | c}
Q_{01} & \\ \hline
& Q_{11} \\
\end{array}
\right)
\cdot
\begin{pmatrix}
R_{01} \\ \hline
R_{11} \\
\end{pmatrix}.
\]
This is stage 1, as the second subscript of the $Q$ and $R$ factors
indicates. We iteratively perform stages until there is only one $R$
factor left, which is the root of the tree:
\[
\begin{pmatrix}
R_{01} \\
R_{11} \\
\end{pmatrix}
=
Q_{02} R_{02}.
\]
Equation \eqref{eq:TSQR:algebra:par4:final} shows the whole
factorization:
\begin{equation}\label{eq:TSQR:algebra:par4:final}
A =
\begin{pmatrix}
A_0 \\
A_1 \\
A_2 \\
A_3 \\
\end{pmatrix}
=
\left(
\begin{array}{c | c | c | c}
Q_{00} & & & \\ \hline
& Q_{10} & & \\ \hline
& & Q_{20} & \\ \hline
& & & Q_{30} \\
\end{array}
\right)
\cdot
\left(
\begin{array}{c | c}
Q_{01} & \\ \hline
& Q_{11} \\
\end{array}
\right)
\cdot
Q_{02} \cdot R_{02},
\end{equation}
in which the product of the first three matrices has orthogonal
columns, since each of these three matrices does. Note the binary
tree structure in the nested pairs of $R$ factors.
Figure \ref{Fi:TSQR:algebra:par4} illustrates the binary tree on which
the above factorization executes. Gray boxes highlight where local QR
factorizations take place. By ``local,'' we refer to a factorization
performed by any one processor at one node of the tree; it may involve
one or more than one block row. If we were to compute all the above
$Q$ factors explicitly as square matrices, each of the $Q_{i0}$ would
be $m/P \times m/P$, and $Q_{ij}$ for $j > 0$ would be $2n \times 2n$.
The final $R$ factor would be upper triangular and $m \times n$, with
$m - n$ rows of zeros. In a ``thin QR'' factorization, in which the
final $Q$ factor has the same dimensions as $A$, the final $R$ factor
would be upper triangular and $n \times n$. In practice, we prefer to
store all the local $Q$ factors implicitly until the factorization is
complete. In that case, the implicit representation of $Q_{i0}$ fits
in an $m/P \times n$ lower triangular matrix, and the implicit
representation of $Q_{ij}$ (for $j > 0$) fits in an $n \times n$ lower
triangular matrix (due to optimizations that will be discussed in
Section \ref{S:TSQR:localQR}).
Note that the maximum per-processor memory requirement is $\max\{mn/P,
n^2 + O(n)\}$, since any one processor need only factor two $n \times n$
upper triangular matrices at once, or a single $m/P \times n$ matrix.
\subsection{Sequential TSQR on a flat tree}
\begin{figure}
\begin{center}
\includegraphics[angle=0,scale=0.7]{../TechReport2007/FIGURES/tsqr-linear-tree-4procs}
\caption{Execution of the sequential TSQR factorization on a flat
tree with four submatrices. The gray boxes indicate where local
QR factorizations take place. The $Q$ and $R$ factors each have
two subscripts: the first is the sequence number for that stage,
and the second is the stage number.}
\label{Fi:TSQR:algebra:seq4}
\end{center}
\end{figure}
Sequential TSQR uses a similar factorization process, but with a
``flat tree'' (a linear chain). It may also handle the leaf nodes of
the tree slightly differently, as we will show below. Again, the
basic idea is not new; see e.g.,
\cite{buttari2007class,buttari2007parallel,gunter2005parallel,kurzak2008qr,quintana-orti2008scheduling,rabani2001outcore}.
(Some authors (e.g.,
\cite{buttari2007class,kurzak2008qr,quintana-orti2008scheduling})
refer to sequential TSQR as ``tiled QR.'' We use the phrase
``sequential TSQR'' because both our parallel and sequential
algorithms could be said to use tiles.) In particular, Gunter and van
de Geijn develop a parallel out-of-DRAM QR factorization algorithm
that uses a flat tree for the panel factorizations
\cite{gunter2005parallel}. Buttari et al.\ suggest using a QR
factorization of this type to improve performance of parallel QR on
commodity multicore processors \cite{buttari2007class}. Quintana-Orti
et al.\ develop two variations on block QR factorization algorithms,
and use them with a dynamic task scheduling system to parallelize the
QR factorization on shared-memory machines
\cite{quintana-orti2008scheduling}. Kurzak and Dongarra use similar
algorithms, but with static task scheduling, to parallelize the QR
factorization on Cell processors \cite{kurzak2008qr}.
The reason these authors use what we call sequential TSQR in a parallel
context ...
We will show that the basic idea of sequential TSQR fits into the same
general framework as the parallel QR decomposition illustrated above,
and also how this generalization expands the tuning space of QR
factorization algorithms. In addition, we will develop detailed
performance models of sequential TSQR and the current sequential QR
factorization implemented in LAPACK.
We start with the same block row decomposition as with parallel TSQR
above:
\[
A =
\begin{pmatrix}
A_0 \\
A_1 \\
A_2 \\
A_3 \\
\end{pmatrix}
\]
but begin with a QR factorization of $A_0$, rather than of all the
block rows:
\[
\begin{pmatrix}
A_0 \\
A_1 \\
A_2 \\
A_3 \\
\end{pmatrix}
=
\begin{pmatrix}
Q_{00} R_{00} \\
A_1 \\
A_2 \\
A_3 \\
\end{pmatrix}.
\]
This is ``stage 0'' of the computation, hence the second subscript 0
of the $Q$ and $R$ factors. We retain the first subscript for
generality, though in this example it is always zero. We can write
this decomposition instead as a block diagonal matrix times a column
of blocks:
\[
\begin{pmatrix}
Q_{00} R_{00} \\
A_1 \\
A_2 \\
A_3 \\
\end{pmatrix}
=
\left(
\begin{array}{c | c | c | c}
Q_{00} & & & \\ \hline
& I & & \\ \hline
& & I & \\ \hline
& & & I \\
\end{array}
\right)
\cdot
\begin{pmatrix}
R_{00} \\ \hline
A_1 \\ \hline
A_2 \\ \hline
A_3 \\
\end{pmatrix}.
\]
We then combine $R_{00}$ and $A_1$ using a QR factorization:
\[
\begin{pmatrix}
R_{00} \\
A_1 \\
A_2 \\
A_3 \\
\end{pmatrix}
=
\begin{pmatrix}
R_{00} \\
A_1 \\ \hline
A_2 \\
A_3 \\
\end{pmatrix}
=
\begin{pmatrix}
Q_{01} R_{01} \\ \hline
A_2 \\
A_3 \\
\end{pmatrix}
\]
This can be rewritten as a block diagonal matrix times a column of blocks:
\[
\begin{pmatrix}
Q_{01} R_{01} \\ \hline
A_2 \\
A_3 \\
\end{pmatrix}
=
\left(
\begin{array}{c | c | c}
Q_{01} & & \\ \hline
& I & \\ \hline
& & I \\
\end{array}
\right)
\cdot
\begin{pmatrix}
R_{01} \\ \hline
A_2 \\ \hline
A_3 \\
\end{pmatrix}.
\]
We continue this process until we run out of $A_i$ factors. The
resulting factorization has the following structure:
\begin{equation}\label{eq:TSQR:algebra:seq4:final}
\begin{pmatrix}
A_0 \\
A_1 \\
A_2 \\
A_3 \\
\end{pmatrix}
=
\left(
\begin{array}{c | c | c | c}
Q_{00} & & & \\ \hline
& I & & \\ \hline
& & I & \\ \hline
& & & I \\
\end{array}
\right)
\cdot
\left(
\begin{array}{c | c | c}
Q_{01} & & \\ \hline
& I & \\ \hline
& & I \\
\end{array}
\right)
\cdot
\left(
\begin{array}{c | c | c}
I & & \\ \hline
& Q_{02} & \\ \hline
& & I \\
\end{array}
\right)
\cdot
\left(
\begin{array}{c | c | c}
I & & \\ \hline
& I & \\ \hline
& & Q_{03} \\
\end{array}
\right)
R_{30}.
\end{equation}
Here, the $A_i$ blocks are $m/P \times n$. If we were to compute all
the above $Q$ factors explicitly as square matrices, then $Q_{00}$
would be $m/P \times m/P$ and $Q_{0j}$ for $j > 0$ would be $2m/P
\times 2m/P$. The above $I$ factors would be $m/P \times m/P$. The
final $R$ factor, as in the parallel case, would be upper triangular
and $m \times n$, with $m - n$ rows of zeros. In a ``thin QR''
factorization, in which the final $Q$ factor has the same dimensions
as $A$, the final $R$ factor would be upper triangular and $n \times
n$. In practice, we prefer to store all the local $Q$ factors
implicitly until the factorization is complete. In that case, the
implicit representation of $Q_{00}$ fits in an $m/P \times n$ lower
triangular matrix, and the implicit representation of $Q_{0j}$ (for $j
> 0$) fits in a full $m/P \times n$ array (due
to optimizations that will be discussed in Section
\ref{S:TSQR:localQR}).
Figure \ref{Fi:TSQR:algebra:seq4} illustrates the flat tree on which
the above factorization executes. Gray boxes highlight where
``local'' QR factorizations take place.
The sequential algorithm differs from the parallel one in that it does
not factor the individual blocks of the input matrix $A$, excepting
$A_0$. This is because in the sequential case, the input matrix has
not yet been loaded into working memory. In the fully parallel case,
each block of $A$ resides in some processor's working memory. It then
pays to factor all the blocks before combining them, as this reduces
the volume of communication (only the triangular $R$ factors need to
be exchanged) and reduces the amount of arithmetic performed at the
next level of the tree. In contrast, the sequential algorithm never
writes out the intermediate $R$ factors, so it does not need to
convert the individual $A_i$ into upper triangular factors. Factoring
each $A_i$ separately would require writing out an additional $Q$
factor for each block of $A$. It would also add another level to the
tree, corresponding to the first block $A_0$.
Note that the maximum per-processor memory requirement is $mn/P +
n^2/2 + O(n)$, since only an $m/P \times n$ block and an $n \times n$
upper triangular block reside in fast memory at one time. We could
save some fast memory by factoring each $A_i$ block separately before
combining it with the next block's $R$ factor, as long as each block's
$Q$ and $R$ factors are written back to slow memory before the next
block is loaded. One would then only need to fit no more than two $n
\times n$ upper triangular factors in fast memory at once. However,
this would result in more writes, as each $R$ factor (except the last)
would need to be written to slow memory and read back into fast
memory, rather than just left in fast memory for the next step.
In both the parallel and sequential algorithms, a vector or matrix is
multiplied by $Q$ or $Q^T$ by using the implicit representation of the
$Q$ factor, as shown in Equation \eqref{eq:TSQR:algebra:par4:final}
for the parallel case, and Equation \eqref{eq:TSQR:algebra:seq4:final}
for the sequential case. This is analogous to using the Householder
vectors computed by Householder QR as an implicit representation of
the $Q$ factor.
\subsection{TSQR on general trees}
\label{SS:TSQR:GeneralTrees}
\begin{figure}
\begin{center}
\includegraphics[angle=0,scale=0.35]{../TechReport2007/FIGURES/tsqr-hybrid-parallel-ooc}
\caption{Execution of a hybrid parallel / out-of-core TSQR
factorization. The matrix has 16 blocks, and four processors
can execute local QR factorizations simultaneously. The gray
boxes indicate where local QR factorizations take place. We
number the blocks of the input matrix $A$ in hexadecimal to save
space (which means that the subscript letter A is the number
$10_{10}$, but the non-subscript letter $A$ is a matrix block).
The $Q$ and $R$ factors each have two subscripts: the first is
the sequence number for that stage, and the second is the stage
number.}
\label{Fi:TSQR:algebra:pooc16}
\end{center}
\end{figure}
The above two algorithms are extreme points in a large set of possible
QR factorization methods, parametrized by the tree structure. Our
version of TSQR is novel because it works on any tree. In general,
the optimal tree may depend on both the architecture and the matrix
dimensions. This is because TSQR is a reduction (as we will discuss
further in Section \ref{S:reduction}). Trees of types other than
binary often result in better reduction performance, depending on the
architecture (see e.g., \cite{nishtala2008performance}). Throughout
this paper, we discuss two examples -- the binary tree and the flat
tree -- as easy extremes for illustration. We will show that the
binary tree minimizes the number of stages and messages in the
parallel case, and that the flat tree minimizes the number and volume
of input matrix reads and writes in the sequential case. Section
\ref{S:reduction} shows how to perform TSQR on any tree. Methods for
finding the best tree in the case of TSQR are future work.
Nevertheless, we can identify two regimes in which a ``nonstandard''
tree could improve performance significantly: parallel memory-limited
CPUs, and large distributed-memory supercomputers.
The advent of desktop and even laptop multicore processors suggests a
revival of parallel out-of-DRAM algorithms, for solving cluster-sized
problems while saving power and avoiding the hassle of debugging on a
cluster. TSQR could execute efficiently on a parallel memory-limited
device if a sequential flat tree were used to bring blocks into
memory, and a parallel tree (with a structure that reflects the
multicore memory hierarchy) were used to factor the blocks. Figure
\ref{Fi:TSQR:algebra:pooc16} shows an example with 16 blocks executing
on four processors, in which the factorizations are pipelined for
maximum utilization of the processors. The algorithm itself needs no
modification, since the tree structure itself encodes the pipelining.
This is, we believe, a novel extension of the parallel out-of-core QR
factorization of Gunter et al.\ \cite{gunter2005parallel}.
TSQR's choice of tree shape can also be optimized for modern
supercomputers. A tree with different branching factors at different
levels could naturally accommodate the heterogeneous communication
network of a cluster of multicores. The subtrees at the lowest level
may have the same branching factor as the number of cores per node (or
per socket, for a multisocket shared-memory architecture).
Note that the maximum per-processor memory requirement of all TSQR
variations is bounded above by
\[
\frac{q n(n+1)}{2} + \frac{mn}{P},
\]
in which $q$ is the maximum branching factor in the tree.
\endinput
\end{comment}
| {
"timestamp": "2008-08-19T23:53:43",
"yymm": "0808",
"arxiv_id": "0808.2664",
"language": "en",
"url": "https://arxiv.org/abs/0808.2664",
"abstract": "We present parallel and sequential dense QR factorization algorithms that are both optimal (up to polylogarithmic factors) in the amount of communication they perform, and just as stable as Householder QR.We prove optimality by extending known lower bounds on communication bandwidth for sequential and parallel matrix multiplication to provide latency lower bounds, and show these bounds apply to the LU and QR decompositions. We not only show that our QR algorithms attain these lower bounds (up to polylogarithmic factors), but that existing LAPACK and ScaLAPACK algorithms perform asymptotically more communication. We also point out recent LU algorithms in the literature that attain at least some of these lower bounds.",
"subjects": "Numerical Analysis (math.NA)",
"title": "Communication-optimal parallel and sequential QR and LU factorizations",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9702399026119353,
"lm_q2_score": 0.815232489352,
"lm_q1q2_score": 0.79097109107497
} |
https://arxiv.org/abs/1809.08460 | Some notes on the signed bad number in bipartite graphs | In this paper, we deal with the signed bad number and the negative decision number of graphs. We show that two upper bounds concerning these two parameters for bipartite graphs in papers [Discrete Math. Algorithms Appl. 1 (2011), 33--41] and [Australas. J. Combin. 41 (2008), 263--272] are not true as they stand. We correct them by presenting more general bounds for triangle-free graphs by using the classic theorem of Mantel from the extremal graph theory and characterize all triangle-free graphs attaining these bounds. | \section{Introduction}
\ \ \ Throughout this paper, let $G$ be a finite graph with vertex set $V(G)$ and edge set $E(G)$. We use \cite{we} as a reference for terminology and notation which are not defined here. The {\em open neighborhood} of a vertex $v$ is denoted by $N(v)$, and the {\em closed neighborhood} of $v$ is $N[v]=N(v)\cup \{v\}$. The {\em corona} of two graphs $G_{1}$ and $G_{2}$ is the graph $G=G_{1}\circ G_{2}$ formed from one copy of $G_{1}$ and $|V(G_{1})|$ copies of $G_{2}$, where the $i$th vertex of $G_{1}$ is adjacent to every vertex in the $i$th copy of $G_{2}$.
Let $S \subseteq V(G)$. For a real-valued function $f:V(G)\rightarrow \mathbb{R}$ we define $f(S)=\sum_{v\in S}f(v)$. Also, $f(V(G))$ is the weight of $f$. A {\em signed bad function} ({\em bad function}), abbreviated SBF (BF), of $G$ is a function $f:V(G)\rightarrow \{-1,1\}$ such that $f(N[v])\leq1$ ($f(N(v))\leq1$), for every $v\in V(G)$. The {\em signed bad number} ({\em negative decision number}) is $\beta_{s}(G)=\max \{f(V) | f \mbox{\ is a SBF of}\ G \}$ ($\beta_{D}(G)=\max \{f(V) | f \mbox{\ is a BF of}\ G \}$). Indeed, the negative decision number can be considered as the total version of the signed bad number. These two graph parameters have been studied in \cite{gks} and \cite{w}, respectively.
In 2011, Ghameshlou et al. \cite{gks} gave the following upper bound on $\beta_{s}(G)$ of a bipartite graph $G$.
\begin{theorem}\label{T1.1}\emph{(\cite{gks})}
If $G$ is a bipartite graph of order n, then
$$\beta_{s}(G)\leq n+2-2\lceil\sqrt{n+2}\rceil.$$
\end{theorem}
In 2008, Wang \cite{w} exhibited the following upper bound on $\beta_{D}(G)$ of a bipartite graph $G$.
\begin{theorem}\label{T1.2}\emph{(\cite{w})}
If $G$ is a bipartite graph of order $n$, then
$$\beta_{D}(G)\leq n+3-\sqrt{4n+9}.$$
\end{theorem}
The above inequalities are not true as they stand. For example, $P_{3}$ and the bistar $G=P_{2}\circ \overline{K_{3}}$, with $\beta_{s}(P_{3})=1$ and $\beta_{s}(G)=4$, are two counterexamples to Theorem \ref{T1.1} (we will show that this theorem is true for $n\neq3,8$; indeed, Part (i) of Theorem \ref{T3.4} is simultaneously a generalization and an improvement of it). An infinite family of counterexamples to Theorem \ref{T1.2} can be obtained as follows:\\
For any positive integer $p$, let $G$ be a bipartite graph formed from $G'=K_{p,p}\circ \overline{K_{p+1}}$ by joining two new vertices to each pendant vertex of $G'$. Then $n=6p^{2}+8p$. It is easy to see that the function $f$ with $f(u)=-1$ for all $u\in V(K_{p,p})$ and $f(v)=1$ for all $v\in V(G)\setminus V(K_{p,p})$ defines a maximum BF of $G$ with weight $\beta_{D}(G)=6p^{2}+4p> n+3-\sqrt{4n+9}$.
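To see explicitly that the bound fails here (a computation we include for completeness), note that $4n+9=24p^{2}+32p+9>(4p+3)^{2}$, so
$$n+3-\sqrt{4n+9}<6p^{2}+8p+3-(4p+3)=6p^{2}+4p\leq\beta_{D}(G).$$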
In this paper, we correct the theorems by exhibiting more general results for triangle-free graphs by using the classic theorem of Mantel from the extremal graph theory. Moreover, we characterize all triangle-free graphs attaining the upper bounds.
\section{Main Theorem}
\ \ \ We need the following useful lemma.
\begin{lemma}\label{L1}\emph{(\cite{m}) (Mantel's Theorem)}
If $G$ is a triangle-free graph of order $n$, then
$$|E(G)|\leq \lfloor n^{2}/4\rfloor$$
with equality if and only if $G$ is isomorphic to $K_{\lfloor\frac{n}{2}\rfloor,\lceil\frac{n}{2}\rceil}$.
\end{lemma}
Let $\Lambda$ be the family of all graphs formed from $K_{p,p}\circ \overline{K_{p+2}}$ by adding some new edges with end points in the vertices of the copies of $\overline{K_{p+2}}$ such that no triangle is induced and $\Delta(G[V(G)\setminus V(K_{p,p})])\leq1$, for some positive integer $p$.
Let $\Omega$ be the family of all graphs formed from $K_{p,p}\circ \overline{K_{p+1}}$ by adding some new edges with end points in the copies of the $\overline{K_{p+1}}$ such that no triangle is induced and $\Delta(G[V(G)\setminus V(K_{p,p})])\leq2$, for some positive integer $p$.\vspace{28mm}\\
\begin{picture}(269.518,188.518)(0,0)
\put(46,180){\circle*{6}}
\put(86,180){\circle*{6}}
\put(46,210){\circle*{6}}
\put(86,210){\circle*{6}}
\put(116,225){\circle*{6}}
\put(106,230){\circle*{6}}
\put(96,235){\circle*{6}}
\put(86,240){\circle*{6}}
\put(16,225){\circle*{6}}
\put(26,230){\circle*{6}}
\put(36,235){\circle*{6}}
\put(46,240){\circle*{6}}
\put(116,165){\circle*{6}}
\put(106,160){\circle*{6}}
\put(96,155){\circle*{6}}
\put(86,150){\circle*{6}}
\put(16,165){\circle*{6}}
\put(26,160){\circle*{6}}
\put(36,155){\circle*{6}}
\put(46,150){\circle*{6}}
\multiput(46,180)(.065,0.047){670}{\line(2,0){.9}}
\multiput(46,210)(.06,-0.045){670}{\line(2,0){.9}}
\multiput(46,180)(0,0.1){320}{\line(2,0){.9}}
\multiput(86,210)(0,-0.1){320}{\line(2,0){.9}}
\multiput(46,180)(-.044,-0.019){670}{\line(2,0){.9}}
\multiput(46,180)(-.034,-0.029){670}{\line(2,0){.9}}
\multiput(46,180)(-.017,-0.038){670}{\line(2,0){.9}}
\multiput(46,180)(-.001,-0.046){670}{\line(2,0){.9}}
\multiput(46,210)(-.029,0.027){670}{\line(2,0){.9}}
\multiput(46,210)(-.048,0.025){670}{\line(2,0){.9}}
\multiput(46,210)(-.015,0.038){670}{\line(2,0){.9}}
\multiput(46,210)(0,0.047){670}{\line(2,0){.9}}
\multiput(86,210)(0,0.047){670}{\line(2,0){.9}}
\multiput(86,210)(0.015,0.039){670}{\line(2,0){.9}}
\multiput(86,210)(0.031,0.033){670}{\line(2,0){.9}}
\multiput(86,210)(0.047,0.024){670}{\line(2,0){.9}}
\multiput(86,180)(0,-.044){670}{\line(2,0){.9}}
\multiput(86,180)(0.013,-.034){670}{\line(2,0){.9}}
\multiput(86,180)(0.032,-.032){670}{\line(2,0){.9}}
\multiput(86,180)(0.046,-.021){670}{\line(2,0){.9}}
\multiput(116,225)(-0.15,0){670}{\line(2,0){.9}}
\multiput(96,235)(-0.09,0){670}{\line(2,0){.9}}
\multiput(116,165)(-0.15,0){670}{\line(2,0){.9}}
\multiput(106,160)(-0.12,0){670}{\line(2,0){.9}}
\multiput(96,155)(-0.09,0){670}{\line(2,0){.9}}
\put(28,179){$-1$}
\put(89,178){$-1$}
\put(27,205){$-1$}
\put(88,205){$-1$}
\put(119,224){$1$}
\put(109,230){$1$}
\put(99,235){$1$}
\put(89,240){$1$}
\put(8,160){$1$}
\put(18,155){$1$}
\put(28,150){$1$}
\put(38,145){$1$}
\put(119,160){$1$}
\put(109,155){$1$}
\put(99,150){$1$}
\put(89,145){$1$}
\put(8,225){$1$}
\put(18,230){$1$}
\put(28,235){$1$}
\put(38,240){$1$}
\put(194,179){\circle*{6}}
\put(239,179){\circle*{6}}
\put(194,211){\circle*{6}}
\put(239,211){\circle*{6}}
\put(284,179){\circle*{6}}
\put(284,211){\circle*{6}}
\put(308,225){\circle*{6}}
\put(300,232){\circle*{6}}
\put(292,238){\circle*{6}}
\put(284,244){\circle*{6}}
\put(257,244){\circle*{6}}
\put(246,244){\circle*{6}}
\put(234,244){\circle*{6}}
\put(223,244){\circle*{6}}
\put(257,147){\circle*{6}}
\put(246,147){\circle*{6}}
\put(234,147){\circle*{6}}
\put(223,147){\circle*{6}}
\put(308,159){\circle*{6}}
\put(300,154){\circle*{6}}
\put(292,150){\circle*{6}}
\put(284,146){\circle*{6}}
\put(170,224){\circle*{6}}
\put(176,231){\circle*{6}}
\put(183,237){\circle*{6}}
\put(191,243){\circle*{6}}
\put(191,148){\circle*{6}}
\put(183,152){\circle*{6}}
\put(175,156){\circle*{6}}
\put(167,160){\circle*{6}}
\multiput(239,211)(.024,.048){670}{\line(2,0){.9}}
\multiput(239,211)(-.024,.047){670}{\line(2,0){.9}}
\multiput(194,179)(-.004,-.049){670}{\line(2,0){.9}}
\multiput(194,179)(-.04,-.026){670}{\line(2,0){.9}}
\multiput(239,179)(-.025,-.049){670}{\line(2,0){.9}}
\multiput(239,179)(.025,-.049){670}{\line(2,0){.9}}
\multiput(284,211)(.037,.022){670}{\line(2,0){.9}}
\multiput(284,211)(-.002,.052){670}{\line(2,0){.9}}
\multiput(194,211)(-.005,.052){670}{\line(2,0){.9}}
\multiput(194,211)(-.04,.021){670}{\line(2,0){.9}}
\multiput(283,179)(-.002,-.052){670}{\line(2,0){.9}}
\multiput(283,179)(.04,-.032){670}{\line(2,0){.9}}
\multiput(283,179)(0,.052){670}{\line(2,0){.9}}
\multiput(283,179)(-.069,.048){670}{\line(2,0){.9}}
\multiput(283,179)(-.14,.05){670}{\line(2,0){.9}}
\multiput(194,179)(0,.052){670}{\line(2,0){.9}}
\multiput(194,179)(.07,.052){670}{\line(2,0){.9}}
\multiput(194,179)(.142,.052){670}{\line(2,0){.9}}
\multiput(239,179)(-.076,.052){670}{\line(2,0){.9}}
\multiput(239,179)(-.001,.052){670}{\line(2,0){.9}}
\multiput(239,179)(.074,.052){670}{\line(2,0){.9}}
\multiput(194,179)(.058,-.045){670}{\line(2,0){.9}}
\multiput(194,179)(-.026,.077){670}{\line(2,0){.9}}
\multiput(239,179)(.081,-.043){670}{\line(2,0){.9}}
\multiput(239,179)(-.088,-.039){670}{\line(2,0){.9}}
\multiput(284,179)(-.056,-.046){670}{\line(2,0){.9}}
\multiput(284,179)(.023,.08){670}{\line(2,0){.9}}
\multiput(239,211)(-.088,.04){670}{\line(2,0){.9}}
\multiput(239,211)(.079,.04){670}{\line(2,0){.9}}
\multiput(284,211)(.023,-.088){670}{\line(2,0){.9}}
\multiput(284,211)(-.059,.051){670}{\line(2,0){.9}}
\multiput(194,211)(.06,.052){670}{\line(2,0){.9}}
\multiput(194,211)(-.03,-.084){670}{\line(2,0){.9}}
\multiput(257,244)(-.05,0){670}{\line(2,0){.9}}
\multiput(257,147)(-.05,0){670}{\line(2,0){.9}}
\multiput(308,225)(-.015,.012){670}{\line(2,0){.9}}
\multiput(292,238)(-.015,.012){670}{\line(2,0){.9}}
\multiput(308,159)(-.015,-.009){670}{\line(2,0){.9}}
\multiput(292,150)(-.015,-.009){670}{\line(2,0){.9}}
\multiput(168,224)(.014,.014){670}{\line(2,0){.9}}
\multiput(183,237)(.014,.014){670}{\line(2,0){.9}}
\multiput(191,148)(-.014,.008){670}{\line(2,0){.9}}
\multiput(175,156)(-.014,.008){670}{\line(2,0){.9}}
\put(199,175){$-1$}
\put(244,176){$-1$}
\put(199,209){$-1$}
\put(242,208){$-1$}
\put(264,174){$-1$}
\put(265,210){$-1$}
\put(309,227){$1$}
\put(301,234){$1$}
\put(293,240){$1$}
\put(285,246){$1$}
\put(257,247){$1$}
\put(246,247){$1$}
\put(234,247){$1$}
\put(223,247){$1$}
\put(259,138){$1$}
\put(247,138){$1$}
\put(235,138){$1$}
\put(224,138){$1$}
\put(308,150){$1$}
\put(300,145){$1$}
\put(292,141){$1$}
\put(284,137){$1$}
\put(165,228){$1$}
\put(170,235){$1$}
\put(180,241){$1$}
\put(187,247){$1$}
\put(191,136){$1$}
\put(183,142){$1$}
\put(175,146){$1$}
\put(167,150){$1$}
\end{picture}\vspace{-53mm}\\
\begin{center}
A member of $\Lambda$ for $p=2$\ \ \ \ \ \ \ \ \ \ \ \ \ A member of $\Omega$ for $p=3$
\end{center}\vspace{1.3mm}
\ \ \ For convenience, we make use of the following notation. Let $G$ be a graph and $f:V(G)\longrightarrow\{-1,1\}$ be a SBF or BF of $G$. Define $V_{+}=\{v\in V(G) \mid f(v)=1 \}$ and $V_{-}=\{v\in V(G) \mid f(v)=-1 \}$. Let $[V_{+},V_{-}]$ be the set of edges having one end point in $V_{+}$ and the other in $V_{-}$.
We are now in a position to present the main theorem of the paper.
\begin{theorem}\label{T3.4}
If $G$ is a triangle-free graph of order $n$, then\vspace{1mm}\\
\emph{(i)}\ If $\delta(G)\geq1$, then $\beta_{s}(G)\leq n+6-2\sqrt{9+2n}$.\vspace{1mm}\\
\emph{(ii)}\ If $\delta(G)\geq2$, then $\beta_{D}(G)\leq n+4-2\sqrt{4+2n}$.\vspace{1mm}\\
Furthermore, the first inequality holds with equality if and only if $G\in \Lambda$ and the second one holds with equality if and only if $G\in \Omega$.
\end{theorem}
\begin{proof}
(i)\ Let $f$ be a maximum SBF of $G$. Since $\delta(G)\geq1$, every vertex $v\in V_{+}$ has at least one neighbor in $V_{-}$ (otherwise $f(N[v])\geq2$). Also, $|N(v)\cap V_{+}|\leq|N(v)\cap V_{-}|+2$ for all $v\in V_{-}$. Furthermore, by Lemma \ref{L1} we have
\begin{equation}\label{EQ11}
\begin{array}{lcl}
|V_{+}|&\leq& |[V_{-},V_{+}]|=\sum_{v\in V_{-}}|N(v)\cap V_{+}|\leq \sum_{v\in V_{-}}(|N(v)\cap V_{-}|+2)\\
&=&2|E(G[V_{-}])|+2|V_{-}|\leq|V_{-}|^{2}/2+2|V_{-}|.
\end{array}
\end{equation}
Therefore,
$$|V_{-}|^2+6|V_{-}|-2n\geq0.$$
Solving the above inequality for $|V_{-}|$ we obtain
$$(n-\beta_{s}(G))/2=|V_{-}|\geq-3+\sqrt{9+2n},$$
implying the desired upper bound.
Let $G\in \Lambda$. We define $f:V(G)\rightarrow\{-1,1\}$ by,
$$f(v)=\left \{
\begin{array}{lll}
-1 & \mbox{;} & v\in V(K_{p,p}) \\
\ 1 & \mbox{;} & \mbox{otherwise}.
\end{array}
\right.$$
It is easy to check that $f$ is a SBF of $G$ with weight $f(V(G))=\beta_{s}(G)=n+6-2\sqrt{9+2n}$.
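For completeness, the weight computation: here $n=2p+2p(p+2)=2p^{2}+6p$, so $9+2n=4p^{2}+12p+9=(2p+3)^{2}$ and
$$f(V(G))=n-2(2p)=n-4p=n+6-2(2p+3)=n+6-2\sqrt{9+2n}.$$
(The analogous computation for $\Omega$ in Part (ii) uses $4+2n=(2p+2)^{2}$.)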
Now let $G$ be a graph for which the equality holds and $f$ be a maximum SBF of $G$. By the inequality (\ref{EQ11}), every vertex in $V_{+}$ must have exactly one neighbor in $V_{-}$. Also, $2|E(G[V_{-}])|=|V_{-}|^{2}/2$ along with Lemma \ref{L1} implies that $G[V_{-}]=K_{|V_{-}|/2,|V_{-}|/2}$. Moreover, $|N(v)\cap V_{+}|=|N(v)\cap V_{-}|+2$ for all $v\in V_{-}$ implies that every vertex in $V_{-}$ is adjacent to exactly $|V_{-}|/2+2$ vertices in $V_{+}$. Suppose, to the contrary, that there exists a vertex $v$ of the subgraph $H$ induced by $V(G)\setminus V(K_{|V_{-}|/2,|V_{-}|/2})$ with $\deg_{H}(v)\geq2$. Then $f(N[v])\geq2$, a contradiction. Therefore, $G\in \Lambda$.\\
(ii)\ The proof is almost along the lines of (i). Let $f$ be a maximum BF of $G$. Since $\delta(G)
\geq2$, every vertex in $V_{+}$ has at least one neighbor in $V_{-}$. Also, $|N(v)\cap V_{+}|\leq|N(v)
\cap V_{-}|+1$ for all $v\in V(G)$. Similar to the inequality chain (\ref{EQ11}) we conclude that $$|V_{-}|^2+4|V_{-}|-2n\geq0,$$
which implies the desired upper bound.
If $G\in \Omega$, then we can obtain a BF of $G$ with weight $\beta_{D}(G)=n+4-2\sqrt{4+2n}$ by assigning $-1$ to the vertices in $K_{p,p}$ and $1$ to the other ones.
Now let $G$ be a graph attaining the upper bound, and let $f$ be a maximum BF of $G$. Similar to part (i), we have $G[V_{-}]=K_{|V_{-}|/2,|V_{-}|/2}$. Also, every vertex in $V_{+}$ has exactly one neighbor in $V_{-}$ and $|N(v)\cap V_{+}|=|N(v)\cap V_{-}|+1$ for all $v\in V_{-}$. Finally, since $f(N(v))\leq1$, $|N(v)\cap(V(G)\setminus V(K_{|V_{-}|/2,|V_{-}|/2}))|\leq2$ for each $v\in V(G)\setminus V(K_{|V_{-}|/2,|V_{-}|/2})$. Therefore, $G\in \Omega$. This completes the proof.
\end{proof}
\begin{rem}
The proof of Theorem \ref{T1.1} in \cite{gks} contains a gap. For the sake of completeness, we point it out. As it is presented in \cite{gks}:\\
"Let $f$ be a maximum SBF of the bipartite graph $G$ with bipartition $X_{1}$ and $X_{2}$. Define $X^-_{i}=X_{i}\cap V_{-}$ and $X^+_{i}=X_{i}\cap V_{+}$ for $i=1,2$." They claimed that $|X^+_{1}|\leq|X^-_{2}|+|X^-_{2}||X^-_{1}|$ and $|X^+_{2}|\leq|X^-_{1}|+|X^-_{2}||X^-_{1}|$. These two inequalities do not hold in general, as non of them are true for the bistar $P_{2}\circ \overline{K_{3}}$ and one of them is not true for $P_{3}$.
Considering Theorem \ref{T3.4}, we have
\begin{equation*}
\beta_{s}(G)\leq n+6-\lceil2\sqrt{9+2n}\rceil\leq n+2-2\lceil\sqrt{n+2}\rceil
\end{equation*}
holds for each integer $1<n\notin\{3,4,5,8,9,10,15,16\}$. So, Theorem \ref{T1.1} is true for all bipartite graphs whose orders lie outside this set. On the other hand, $\beta_{s}(G)$ and $n$ have the same parity. Therefore, Part (i) of Theorem \ref{T3.4} implies that $\beta_{s}(G)\leq 0,1,3,4,7,8$ for $n=4,5,9,10,15,16$, respectively. This coincides with the upper bounds on $\beta_{s}(G)$ of a bipartite graph $G$ of these orders given by Theorem \ref{T1.1}.
\end{rem}
| {
"timestamp": "2018-09-25T02:08:10",
"yymm": "1809",
"arxiv_id": "1809.08460",
"language": "en",
"url": "https://arxiv.org/abs/1809.08460",
"abstract": "In this paper, we deal with the signed bad number and the negative decision number of graphs. We show that two upper bounds concerning these two parameters for bipartite graphs in papers [Discrete Math. Algorithms Appl. 1 (2011), 33--41] and [Australas. J. Combin. 41 (2008), 263--272] are not true as they stand. We correct them by presenting more general bounds for triangle-free graphs by using the classic theorem of Mantel from the extremal graph theory and characterize all triangle-free graphs attaining these bounds.",
"subjects": "Combinatorics (math.CO)",
"title": "Some notes on the signed bad number in bipartite graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9817357253887771,
"lm_q2_score": 0.8056321959813274,
"lm_q1q2_score": 0.7909179083182819
} |
https://arxiv.org/abs/1607.00351 | A Comparison of Preconditioned Krylov Subspace Methods for Large-Scale Nonsymmetric Linear Systems | Preconditioned Krylov subspace (KSP) methods are widely used for solving large-scale sparse linear systems arising from numerical solutions of partial differential equations (PDEs). These linear systems are often nonsymmetric due to the nature of the PDEs, boundary or jump conditions, or discretization methods. While implementations of preconditioned KSP methods are usually readily available, it is unclear to users which methods are the best for different classes of problems. In this work, we present a comparison of some KSP methods, including GMRES, TFQMR, BiCGSTAB, and QMRCGSTAB, coupled with three classes of preconditioners, namely Gauss-Seidel, incomplete LU factorization (including ILUT, ILUTP, and multilevel ILU), and algebraic multigrid (including BoomerAMG and ML). Theoretically, we compare the mathematical formulations and operation counts of these methods. Empirically, we compare the convergence and serial performance for a range of benchmark problems from numerical PDEs in 2D and 3D with up to millions of unknowns and also assess the asymptotic complexity of the methods as the number of unknowns increases. Our results show that GMRES tends to deliver better performance when coupled with an effective multigrid preconditioner, but it is less competitive with an ineffective preconditioner due to restarts. BoomerAMG with proper choice of coarsening and interpolation techniques typically converges faster than ML, but both may fail for ill-conditioned or saddle-point problems while multilevel ILU tends to succeed. We also show that right preconditioning is more desirable. This study helps establish some practical guidelines for choosing preconditioned KSP methods and motivates the development of more effective preconditioners. | \section{Introduction}
Preconditioned Krylov subspace (KSP) methods are widely used for solving
large sparse linear systems, especially those arising from discretizations
of partial differential equations. For most modern applications, these
linear systems are nonsymmetric due to various reasons, such as the
multiphysics nature of the PDEs, some sophisticated boundary or jump
conditions, or the discretization methods themselves. Although for
symmetric systems, conjugate gradient (CG) \cite{Hestenes52CG} and
MINRES \cite{Paige75MINRES} are well recognized as the best KSP methods
\cite{FS12CG}, the situation is far less clear for nonsymmetric systems.
Various KSP methods have been developed, such as GMRES \cite{Saad86GMRES},
CGS \cite{Sonneveld89CGS}, QMR \cite{FN91QMR}, TFQMR \cite{Freund93TFQMR},
BiCGSTAB \cite{vanderVorst92BiCGSTAB}, QMRCGSTAB \cite{CGS94QMRCGS},
etc. Most of these methods are described in detail in textbooks such
as \cite{BBC94Templates,Saad03IMS,Van-der-Vorst:2003aa}, and their
implementations are readily available in software packages such as
PETSc \cite{petsc-user-ref} and MATLAB \cite{MATLAB}. However, each
of these methods has its own advantages and disadvantages. Therefore,
it is difficult for practitioners to choose the proper methods for
their specific applications. To make matters worse, a KSP method
may perform well with one preconditioner but poorly with another.
As a result, users often spend a significant amount of time on trial
and error to find a reasonable combination of the KSP solvers and
preconditioners, and yet the final choice may still be far from optimal.
Therefore, a systematic comparison of the preconditioned KSP methods
is an important subject.
In the literature, various comparisons of KSP methods have been reported
previously. In \cite{comparison_trefethen}, Nachtigal, Reddy and
Trefethen presented some theoretical analysis and comparison of the
convergence properties of CGN, GMRES, and CGS, which were the leading
methods for nonsymmetric systems in the early 1990s. They showed that
the convergence of CGN is governed by singular values, whereas that
of GMRES and CGS by eigenvalues and pseudo-eigenvalues, and each of
these methods may significantly outperform the others for different
matrices. Their work did not consider preconditioners, and it is
now dated, because newer methods that are superior to CGN and CGS
have since been introduced. In Saad's textbook \cite{Saad03IMS},
some comparisons of various KSP methods, including GMRES, BiCGSTAB,
QMR, and TFQMR, were given in terms of computational cost and storage
requirements. The importance of preconditioners was emphasized, but
no detailed comparison for the different combinations of the KSP methods
and preconditioners was given. The same is also true for other textbooks,
such as \cite{Van-der-Vorst:2003aa}. In terms of empirical comparison,
Meister reported a comparison of a few preconditioned KSP methods
for several inviscid and viscous flow problems \cite{MEISTER1998311}.
His study focused on incomplete LU factorization as the preconditioner.
Benzi and coworkers \cite{Benzi99CSS,BENZI02PTL} compared a
number of preconditioners, again with a focus on incomplete factorizations
and their block variants. Notably missing from these previous
studies are multigrid preconditioners, which have advanced significantly
in recent years.
The goal of this paper is to perform a systematic comparison and in
turn establish some practical guidelines in choosing the best combinations
of the preconditioned KSP solvers. Our study is in spirit similar
to the recent work of Feng and Saunders in \cite{FS12CG}, which compared
CG and MINRES for symmetric systems. However, we focus on nonsymmetric
systems with a heavy emphasis on preconditioners. We consider four
KSP solvers, GMRES, TFQMR, BiCGSTAB and QMRCGSTAB. Among these, the
latter three enjoy three-term recurrences. We also consider three
preconditioners, namely Gauss-Seidel, incomplete LU factorization
(ILU), and algebraic multigrid (AMG). Each of these KSP methods and
preconditioners has its advantages and disadvantages, so theoretical
analysis alone is insufficient in establishing their suitability for
different types of problems. We compare the methods empirically in
terms of convergence and timing results for linear systems constructed
from four different numerical discretization methods for PDEs in both
2D and 3D. The sizes of these linear systems range from $10^{5}$
to $10^{7}$ unknowns, which are representative of modern industrial
applications. We also assess the scalability of different preconditioned
KSP solvers as the number of unknowns increases. To the best of our
knowledge, this is the most comprehensive comparison of the preconditioned
KSP solvers to date for large, sparse, nonsymmetric linear systems.
Our results show that the smoothed-aggregation AMG typically delivers
better performance and exhibits better scalability than classical
AMG, but it is less robust, especially for ill-conditioned systems.
These results help establish some practical guidelines for choosing
preconditioned KSP methods. They also motivate the further development
of more effective, scalable, and robust multigrid preconditioners
for large, sparse, nonsymmetric, and potentially ill-conditioned linear
systems.
The remainder of the paper is organized as follows. In Section~\ref{sec:background},
we review some background knowledge of KSP methods and preconditioners,
and compare these KSP methods in terms of their Krylov subspaces and
the iteration procedures in computing their basis vectors. In Section~\ref{sec:Analysis-KSP},
we outline a few KSP methods and compare their main properties in
terms of asymptotic convergence, number of operations per iteration,
and the storage requirement. This theoretical background will help
us predict the relative performance of the various methods and interpret
the numerical results. In Section~\ref{sec:PDE-Discretization-Methods},
we summarize the PDE discretization methods, with an emphasis on the
various sources of the nonsymmetry of the linear systems. In Section
\ref{sec:Results}, we present the empirical comparisons of the preconditioned
KSP methods for a number of test problems. Finally, Section~\ref{sec:Conclusions-and-Future}
concludes the paper with a discussion on future work.
\section{Background\label{sec:background}}
In this section, we give a general overview of Krylov subspace methods
and preconditioners for solving a linear system
\begin{equation}
\vec{A}\vec{x}=\vec{b},\label{eq:linear_system}
\end{equation}
where $\vec{A}\in\mathbb{R}^{n\times n}$ is large, sparse and nonsymmetric,
and $\vec{b}\in\mathbb{R}^{n}$. We consider only real matrices,
because they are the most common in applications. However, all the
methods are applicable to complex matrices, by replacing the matrix
transposes with the conjugate transposes. We focus on the Krylov subspaces
and the procedure in constructing the basis vectors of the subspaces,
which are often the determining factors in the overall performance
of different types of KSP methods. We defer more detailed discussions
and analysis of the individual methods to Section~\ref{sec:Analysis-KSP}.
\subsection{Krylov Subspaces}
Given a matrix $\vec{A}\in\mathbb{R}^{n\times n}$ and a vector $\vec{v}\in\mathbb{R}^{n}$,
the $k$th \emph{Krylov subspace} generated by them, denoted by $\mathcal{K}_{k}(\vec{A},\vec{v})$,
is given by
\begin{equation}
\mathcal{K}_{k}(\vec{A},\vec{v})=\mbox{span}\{\vec{v},\vec{A}\vec{v},\vec{A}^{2}\vec{v},\dots,\vec{A}^{k-1}\vec{v}\}.\label{eq:Krylov}
\end{equation}
To solve the linear system (\ref{eq:linear_system}), let $\vec{x}_{0}$
be some initial guess to the solution, and let $\vec{r}_{0}=\vec{b}-\vec{A}\vec{x}_{0}$
be the initial residual vector. A Krylov subspace method incrementally
finds approximate solutions within $\mathcal{K}_{k}(\vec{A},\vec{v})$,
sometimes through the aid of another Krylov subspace $\mathcal{K}_{k}(\vec{A}^{T},\vec{w})$,
where $\vec{v}$ and $\vec{w}$ typically depend on $\vec{r}_{0}$.
To construct the basis of the subspace $\mathcal{K}(\vec{A},\vec{v})$,
two procedures are commonly used: the (restarted) \emph{Arnoldi iteration}
\cite{Arnoldi51PMI}, and the \emph{bi-Lanczos iteration} \cite{Lan50,Van-der-Vorst:2003aa}
(a.k.a. Lanczos biorthogonalization \cite{Saad03IMS} or tridiagonal
biorthogonalization \cite{TB97NLA}).
\subsubsection{The Arnoldi Iteration}
The Arnoldi iteration is a procedure for constructing an orthonormal basis
of the Krylov subspace $\mathcal{K}(\vec{A},\vec{v})$. Starting from
a unit vector $\vec{q}_{1}=\vec{v}/\Vert\vec{v}\Vert$, it iteratively
constructs
\begin{equation}
\vec{Q}_{k+1}=[\vec{q}_{1}\mid\vec{q}_{2}\mid\dots\mid\vec{q}_{k}\mid\vec{q}_{k+1}]\label{eq:Arnoldi_basis}
\end{equation}
with orthonormal columns by solving
\begin{equation}
h_{k+1,k}\vec{q}_{k+1}=\vec{A}\vec{q}_{k}-h_{1k}\vec{q}_{1}-\cdots-h_{kk}\vec{q}_{k},\label{eq:Arnoldi_core}
\end{equation}
where $h_{ij}=\vec{q}_{i}^{T}\vec{A}\vec{q}_{j}$ for $i\leq j$,
and $h_{k+1,k}=\Vert\vec{A}\vec{q}_{k}-h_{1k}\vec{q}_{1}-\cdots-h_{kk}\vec{q}_{k}\Vert$,
i.e., the norm of the right-hand side of (\ref{eq:Arnoldi_core}).
This is analogous to Gram-Schmidt orthogonalization. If $\mathcal{K}_{k}\neq\mathcal{K}_{k-1}$,
then the columns of $\vec{Q}_{k}$ form an orthonormal basis of $\mathcal{K}_{k}(\vec{A},\vec{v})$,
and
\begin{equation}
\vec{A}\vec{Q}_{k}=\vec{Q}_{k+1}\tilde{\vec{H}}_{k},
\end{equation}
where $\tilde{\vec{H}}_{k}$ is a $(k+1)\times k$ upper Hessenberg
matrix whose entries $h_{ij}$ are those in (\ref{eq:Arnoldi_core})
for $i\leq j+1$, and $h_{ij}=0$ for $i>j+1$.
The KSP method GMRES \cite{Saad86GMRES} is based on the Arnoldi iteration,
with $\vec{v}=\vec{r}_{0}$. If $\vec{A}$ is symmetric, the Hessenberg
matrix $\tilde{\vec{H}}_{k}$ reduces to a tridiagonal matrix,
and the Arnoldi iteration reduces to the Lanczos iteration. The Lanczos
iteration enjoys a three-term recurrence. In contrast, the Arnoldi
iteration has a $k$-term recurrence, so its computational cost increases
as $k$ increases. For this reason, one almost always needs to restart
the Arnoldi iteration in practice, for example after every 30 iterations,
building a new Krylov subspace from $\vec{v}=\vec{r}_{k}$ at each restart.
Unfortunately, restarting may undermine the convergence of the KSP
methods.
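As a concrete illustration, the following sketch (in Python with NumPy; the
test matrix and step count are hypothetical, and this is not code from any
package used in this study) implements the Arnoldi iteration
(\ref{eq:Arnoldi_core}) and verifies the relation
$\vec{A}\vec{Q}_{k}=\vec{Q}_{k+1}\tilde{\vec{H}}_{k}$.
\begin{verbatim}
import numpy as np

def arnoldi(A, v, k):
    """k steps of the Arnoldi iteration.

    Returns Q (n x (k+1)) with orthonormal columns and the
    (k+1) x k upper Hessenberg matrix H with A @ Q[:, :k] == Q @ H.
    """
    n = A.shape[0]
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = v / np.linalg.norm(v)
    for j in range(k):
        w = A @ Q[:, j]
        for i in range(j + 1):       # Gram-Schmidt against q_1, ..., q_{j+1}
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] == 0.0:       # lucky breakdown: subspace is invariant
            return Q[:, :j + 2], H[:j + 2, :j + 1]
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H

# hypothetical test: a nonsymmetric, diagonally dominant matrix
rng = np.random.default_rng(0)
A = np.diag(np.arange(1.0, 101.0)) + 0.1 * rng.standard_normal((100, 100))
Q, H = arnoldi(A, rng.standard_normal(100), 20)
print(np.linalg.norm(A @ Q[:, :-1] - Q @ H))   # ~ machine precision
\end{verbatim}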
\subsubsection{The Bi-Lanczos Iteration}
The bi-Lanczos iteration, also known as Lanczos biorthogonalization
or tridiagonal biorthogonalization, offers an alternative procedure for
constructing a basis of the Krylov subspace $\mathcal{K}(\vec{A},\vec{v})$.
Unlike Arnoldi iterations, the bi-Lanczos iterations enjoy a three-term
recurrence. However, the basis will no longer be orthogonal, and we
need to use two matrix-vector multiplications per iteration, instead
of just one.
The bi-Lanczos iterations can be described as follows. Starting from
the vector $\vec{v}_{1}=\vec{v}/\Vert\vec{v}\Vert$, we iteratively
construct
\begin{equation}
\vec{V}_{k+1}=[\vec{v}_{1}\mid\vec{v}_{2}\mid\dots\mid\vec{v}_{k}\mid\vec{v}_{k+1}],\label{eq:nonorth_basis}
\end{equation}
by solving
\begin{equation}
\beta_{k}\vec{v}_{k+1}=\vec{A}\vec{v}_{k}-\gamma_{k-1}\vec{v}_{k-1}-\alpha_{k}\vec{v}_{k},\label{eq:beta_k}
\end{equation}
analogous to (\ref{eq:Arnoldi_core}). If $\mathcal{K}_{k}\neq\mathcal{K}_{k-1}$,
then the columns of $\vec{V}_{k}$ form a basis of $\mathcal{K}_{k}(\vec{A},\vec{v})$,
and
\begin{equation}
\vec{A}\vec{V}_{k}=\vec{V}_{k+1}\tilde{\vec{T}}_{_{k}},\label{eq:biorth_A}
\end{equation}
where
\begin{equation}
\tilde{\vec{T}}_{_{k}}=\begin{bmatrix}\alpha_{1} & \gamma_{1}\\
\beta_{1} & \alpha_{2} & \gamma_{2}\\
& \beta_{2} & \alpha_{3} & \ddots\\
& & \ddots & \ddots & \gamma_{k-1}\\
& & & \beta_{k-1} & \alpha_{k}\\
& & & & \beta_{k}
\end{bmatrix}
\end{equation}
is a $(k+1)\times k$ tridiagonal matrix. To determine the $\alpha_{i}$
and $\gamma_{i}$, we construct another Krylov subspace $\mathcal{K}(\vec{A}^{T},\vec{w})$,
whose basis is given by the column vectors of
\begin{equation}
\vec{W}_{k+1}=[\vec{w}_{1}\mid\vec{w}_{2}\mid\dots\mid\vec{w}_{k}\mid\vec{w}_{k+1}],\label{eq:biorth_basis_W}
\end{equation}
subject to the biorthogonality condition
\begin{equation}
\vec{W}_{k+1}^{T}\vec{V}_{k+1}=\vec{V}_{k+1}^{T}\vec{W}_{k+1}=\vec{I}_{k+1}.\label{eq:biorthogonal}
\end{equation}
Since
\begin{equation}
\vec{W}_{k+1}^{T}\vec{A}\vec{V}_{k}=\vec{W}_{k+1}^{T}\vec{V}_{k+1}\tilde{\vec{T}}_{k}=\tilde{\vec{T}}_{k},
\end{equation}
it follows that
\begin{equation}
\alpha_{k}=\vec{w}_{k}^{T}\vec{A}\vec{v}_{k}.\label{eq:alpha_k}
\end{equation}
Suppose $\vec{V}=\vec{V}_{n}$ and $\vec{W}=\vec{W}_{n}=\vec{V}^{-T}$
form complete bases of $\mathcal{K}_{n}(\vec{A},\vec{v})$
and $\mathcal{K}_{n}(\vec{A}^{T},\vec{w})$, respectively. Let $\vec{T}=\vec{V}^{-1}\vec{A}\vec{V}$
and $\vec{S}=\vec{T}^{T}$. Then,
\begin{equation}
\vec{W}^{-1}\vec{A}^{T}\vec{W}=\vec{V}^{T}\vec{A}^{T}\vec{V}^{-T}=\vec{T}^{T}=\vec{S},
\end{equation}
and
\begin{equation}
\vec{A}^{T}\vec{W}_{k}=\vec{W}_{k+1}\tilde{\vec{S}}_{_{k}},\label{eq:biorth_At}
\end{equation}
where $\tilde{\vec{S}}_{k}$ is the leading $(k+1)\times k$ submatrix
of $\vec{S}$. Therefore,
\begin{equation}
\gamma_{k}\vec{w}_{k+1}=\vec{A}^{T}\vec{w}_{k}-\beta_{k-1}\vec{w}_{k-1}-\alpha_{k}\vec{w}_{k}.\label{eq:gamma_k}
\end{equation}
Start from $\vec{v}_{1}$ and $\vec{w}_{1}$ with $\vec{v}_{1}^{T}\vec{w}_{1}=1$,
and let $\beta_{0}=\gamma_{0}=1$ and $\vec{v}_{0}=\vec{w}_{0}=\vec{0}$.
Then $\alpha_{k}$ is uniquely determined by (\ref{eq:alpha_k}),
and $\beta_{k}$ and $\gamma_{k}$ are determined by (\ref{eq:beta_k})
and (\ref{eq:gamma_k}) up to scalar factors, subject to $\vec{v}_{k+1}^{T}\vec{w}_{k+1}=1$.
A typical choice is to scale the right-hand sides of (\ref{eq:beta_k})
and (\ref{eq:gamma_k}) by scalars of the same modulus \cite[p. 230]{Saad03IMS}.
If $\vec{A}$ is symmetric and $\vec{v}_{1}=\vec{w}_{1}=\vec{v}/\Vert\vec{v}\Vert$,
then the bi-Lanczos iteration reduces to the classical Lanczos iteration
for symmetric matrices. Therefore, it can be viewed as a different
generalization of the Lanczos iteration to nonsymmetric matrices.
Unlike the Arnoldi iteration, the cost of bi-Lanczos iteration is
fixed per iteration, which may be advantageous in some cases. Some
KSP methods, in particular BiCG \cite{fletcher1976conjugate} and
QMR \cite{FN91QMR}, are based on bi-Lanczos iterations. A potential
issue of bi-Lanczos iteration is that it suffers from \emph{breakdown}
if $\vec{v}_{k+1}^{T}\vec{w}_{k+1}=0$ or \emph{near breakdown} if
$\vec{v}_{k+1}^{T}\vec{w}_{k+1}\approx0$. These can be resolved by
a \emph{look-ahead} strategy to build a block-tridiagonal matrix $\vec{T}$.
Fortunately, breakdowns are rare in practice, so look-ahead is rarely
implemented.
A disadvantage of the bi-Lanczos iteration is that it requires the
multiplication with $\vec{A}^{T}$. Although $\vec{A}^{T}$ is in
principle available in most applications, multiplication with $\vec{A}^{T}$
leads to additional difficulties in performance optimization and preconditioning.
Fortunately, in bi-Lanczos iteration, $\vec{V}_{k}$ can be computed
without forming $\vec{W}_{k}$ and vice versa. This observation leads
to the transpose-free variants of the KSP methods, such as TFQMR \cite{Freund93TFQMR},
which is a transpose-free variant of QMR, and CGS \cite{Sonneveld89CGS},
which is a transpose-free variant of BiCG. Two other examples include
BiCGSTAB \cite{vanderVorst92BiCGSTAB}, which is more stable than
CGS, and QMRCGSTAB \cite{CGS94QMRCGS}, which is a hybrid of QMR and
BiCGSTAB, with smoother convergence than BiCGSTAB. These transpose-free
methods enjoy three-term recurrences and require two multiplications
with $\vec{A}$ per iteration. Note that the transpose-free bi-Lanczos
iteration is not unique: there are primarily two variants, one used by
CGS and TFQMR, and another used by BiCGSTAB and QMRCGSTAB. We will address
them in more detail in Section~\ref{sec:Analysis-KSP}.
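For comparison with the Arnoldi sketch above, here is a minimal sketch of
the (unpreconditioned) bi-Lanczos iteration, using the equal-modulus scaling
convention described above. This is an illustration only: look-ahead is
omitted, and the starting vectors are assumed to satisfy
$\vec{v}^{T}\vec{w}\neq0$.
\begin{verbatim}
import numpy as np

def bi_lanczos(A, v, w, k):
    """k steps of bi-Lanczos (no look-ahead).

    Returns V, W (n x (k+1)) with W.T @ V ~ I and the (k+1) x k
    tridiagonal matrix T with A @ V[:, :k] == V @ T.
    """
    n = A.shape[0]
    v = v / np.linalg.norm(v)
    w = w / (w @ v)                       # enforce v_1^T w_1 = 1
    V = np.zeros((n, k + 1)); V[:, 0] = v
    W = np.zeros((n, k + 1)); W[:, 0] = w
    T = np.zeros((k + 1, k))
    beta = gamma = 0.0
    for j in range(k):
        av = A @ V[:, j]
        alpha = W[:, j] @ av              # eq. (alpha_k)
        t = av - alpha * V[:, j]          # right-hand side of eq. (beta_k)
        s = A.T @ W[:, j] - alpha * W[:, j]   # right-hand side of eq. (gamma_k)
        if j > 0:
            t -= gamma * V[:, j - 1]
            s -= beta * W[:, j - 1]
        delta = t @ s
        if delta == 0.0:
            raise RuntimeError("breakdown: look-ahead would be needed")
        beta = np.sqrt(abs(delta))        # scalars of equal modulus,
        gamma = delta / beta              # so that v_{j+1}^T w_{j+1} = 1
        V[:, j + 1] = t / beta
        W[:, j + 1] = s / gamma
        T[j, j] = alpha
        T[j + 1, j] = beta
        if j + 1 < k:
            T[j, j + 1] = gamma
    return V, W, T
\end{verbatim}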
\subsubsection{Comparison of the Iteration Procedures}
\begin{table}[tb]
\caption{\label{tab:KrylovSubspaces}Comparison of KSP methods based on their Krylov
subspaces and iteration procedures.}
\centering{}%
\begin{tabular}{c|c|c|c|c}
\hline
\multirow{2}{*}{Method} & \multirow{2}{*}{Iteration} & \multicolumn{2}{c|}{Matrix-Vector Prod.} & \multirow{2}{*}{Recurrence}\tabularnewline
\cline{3-4}
 &  & $\vec{A}^{T}$ & $\vec{A}$ & \tabularnewline
\hline
\hline
GMRES \cite{Saad86GMRES} & Arnoldi & 0 & 1 & $k$\tabularnewline
\hline
BiCG \cite{fletcher1976conjugate} & \multirow{2}{*}{bi-Lanczos} & \multirow{2}{*}{1} & \multirow{2}{*}{1} & \multirow{6}{*}{3}\tabularnewline
\cline{1-1}
QMR \cite{FN91QMR} &  &  &  & \tabularnewline
\cline{1-4}
CGS \cite{Sonneveld89CGS} & \multirow{2}{*}{transpose-free bi-Lanczos 1} & \multirow{4}{*}{0} & \multirow{4}{*}{2} & \tabularnewline
\cline{1-1}
TFQMR \cite{Freund93TFQMR} &  &  &  & \tabularnewline
\cline{1-2}
BiCGSTAB \cite{vanderVorst92BiCGSTAB} & \multirow{2}{*}{transpose-free bi-Lanczos 2} &  &  & \tabularnewline
\cline{1-1}
QMRCGSTAB \cite{CGS94QMRCGS} &  &  &  & \tabularnewline
\hline
\end{tabular}
\end{table}
Both the Arnoldi iteration and the bi-Lanczos iteration are based
on the Krylov subspace $\mathcal{K}(\vec{A},\vec{r}_{0})$. However,
these iteration procedures have very different properties, which are
inherited by their corresponding KSP methods, as summarized in Table~\ref{tab:KrylovSubspaces}.
These properties, for the most part, determine the cost per iteration
of the KSP methods. For KSP methods based on the Arnoldi iteration,
at the $k$th iteration the residual $\vec{r}_{k}=\mathcal{P}_{k}(\vec{A})\vec{r}_{0}$
for some degree-$k$ polynomial $\mathcal{P}_{k}$, so the asymptotic
convergence rates depend on the eigenvalues and the generalized eigenvectors
in the Jordan form of $\vec{A}$ \cite{comparison_trefethen,Saad03IMS}.
For methods based on transpose-free bi-Lanczos, in general $\vec{r}_{k}=\hat{\mathcal{P}}_{k}(\vec{A})\vec{r}_{0}$,
where $\hat{\mathcal{P}}_{k}$ is a polynomial of degree $2k$. Therefore,
the convergence of these methods also depends on the eigenvalues and
generalized eigenvectors of $\vec{A}$, but at different asymptotic
rates. Typically, the reduction of error in one iteration of a bi-Lanczos-based
KSP method is approximately equal to that of two iterations in an
Arnoldi-based KSP method. Since the Arnoldi iteration requires only
one matrix-vector multiplication per iteration, compared to two per
iteration for the bi-Lanczos iteration, the costs of the different KSP
methods are comparable in terms of the number of matrix-vector multiplications.
Theoretically, the Arnoldi iteration is more robust because of its
use of orthogonal basis, whereas the bi-Lanczos iteration may break
down if $\vec{v}_{k+1}^{T}\vec{w}_{k+1}=0$. However, the Arnoldi
iteration typically requires restarts, which can undermine convergence.
Therefore, the methods based on bi-Lanczos are often more robust than
GMRES with restarts. In general, if the iteration count is small compared
to the average number of nonzeros per row, the methods based on the
Arnoldi iteration may be more efficient; if the iteration count is
large, the cost of orthogonalization in Arnoldi iteration may become
higher than that of bi-Lanczos iteration. For these reasons, conflicting
results are often reported in the literature. However, the apparent
disadvantages of each KSP method may be overcome by effective preconditioners:
For Arnoldi iterations, if the KSP method converges before restart
is needed, then it may be the most effective method; for bi-Lanczos
iterations, if the KSP method converges before any breakdown, it
is typically more robust than the methods based on restarted Arnoldi
iterations. We will review the preconditioners in the next subsection.
Note that some KSP methods use a Krylov subspace other than $\mathcal{K}(\vec{A},\vec{r}_{0})$.
The most notable examples are LSQR \cite{Paige92LSQR} and LSMR \cite{Fong11LSMR},
which use the Krylov subspace $\mathcal{K}(\vec{A}^{T}\vec{A},\vec{A}^{T}\vec{r}_{0})$.
These methods are mathematically equivalent to applying CG or MINRES
to the normal equation, respectively, but with better numerical properties.
An advantage of these methods is that they are applicable to least
squares systems without modification. However, they are not transpose
free, they tend to converge slowly for square linear systems, and
they require special preconditioners. For these reasons, we do not
include them in this study.
\subsection{Preconditioners}
The convergence of KSP methods can be improved significantly by the
use of preconditioners. Various preconditioners have been proposed
for Krylov subspace methods over the past few decades. It is virtually
impossible to consider all of them. For this comparative study, we
focus on three preconditioners, which are representative of the state
of the art: Gauss-Seidel, incomplete LU factorization, and algebraic
multigrid.
\subsubsection{Left and Right Preconditioners}
Roughly speaking, a preconditioner is a matrix or transformation $\vec{M}$,
whose inverse $\vec{M}^{-1}$ approximates $\vec{A}^{-1}$, and $\vec{M}^{-1}\vec{v}$
can be computed efficiently. For nonsymmetric linear systems, a preconditioner
may be applied either to the left or the right of $\vec{A}$. With
a left preconditioner, instead of solving (\ref{eq:linear_system}),
one solves the linear system
\begin{equation}
\vec{M}^{-1}\vec{A}\vec{x}=\vec{M}^{-1}\vec{b}
\end{equation}
by utilizing the Krylov subspace $\mathcal{K}(\vec{M}^{-1}\vec{A},\vec{M}^{-1}\vec{b})$
instead of $\mathcal{K}(\vec{A},\vec{b})$. For a right preconditioner,
one solves the linear system
\begin{equation}
\vec{A}\vec{M}^{-1}\vec{y}=\vec{b}
\end{equation}
by utilizing the Krylov subspace $\mathcal{K}(\vec{A}\vec{M}^{-1},\vec{b})$,
and then $\vec{x}=\vec{M}^{-1}\vec{y}$. The convergence of a preconditioned
KSP method is then determined by the eigenvalues of $\vec{M}^{-1}\vec{A}$,
which are the same as those of $\vec{A}\vec{M}^{-1}$. Qualitatively,
$\vec{M}$ is a good preconditioner if $\vec{M}^{-1}\vec{A}$ is not
too far from normal and its eigenvalues are more clustered than those
of $\vec{A}$ \cite{TB97NLA}. However, this is more useful as a guideline
for developers of preconditioners, rather than for practitioners.
Although the left and right preconditioners have similar asymptotic
behavior, they can behave drastically differently in practice. This
is because the termination criterion of a Krylov subspace method is
typically based on the norm of the residual of the preconditioned
system. With a left preconditioner, the preconditioned residual $\Vert\vec{M}^{-1}\vec{r}_{k}\Vert$
may differ significantly from the true residual $\Vert\vec{r}_{k}\Vert$
if $\Vert\vec{M}^{-1}\Vert$ is far from 1, which unfortunately is
often the case. This in turn leads to erratic behavior, such as premature
termination or false stagnation of the preconditioned KSP method,
unless the true residual is calculated explicitly at the cost of additional
matrix-vector multiplications. In contrast, a right preconditioner
does not alter the residual, so the stopping criteria can use the
true residual with little or no extra cost. Unless $\vec{M}$ is very
ill-conditioned, computing $\vec{x}=\vec{M}^{-1}\vec{y}$ does not introduce
large errors into $\vec{x}$. For these reasons, we consider only right preconditioners
in this comparative study.
Note that a preconditioner may also be applied to both the left and
right of $\vec{A}$, leading to the so-called \emph{symmetric preconditioners}.
Such preconditioners are more commonly used for preserving the symmetry
of symmetric matrices. Like left preconditioners, they also alter
the norm of the residual. Therefore, we do not consider symmetric
preconditioners either.
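As a concrete illustration of right preconditioning, the following sketch
(SciPy-based, not the PETSc setup of our experiments; the test matrix is
hypothetical, and \texttt{spilu} serves as a stand-in for the ILU
preconditioner discussed below) wraps $\vec{A}\vec{M}^{-1}$ in a linear
operator and recovers $\vec{x}=\vec{M}^{-1}\vec{y}$ at the end.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# hypothetical nonsymmetric test matrix (2D convection-diffusion stencil)
n = 100
A = sp.diags([-1.0, -1.0, 4.2, -0.8, -1.0], [-n, -1, 0, 1, n],
             shape=(n * n, n * n), format="csc")
b = np.ones(n * n)

M = spla.spilu(A)          # incomplete LU; M.solve(y) approximates A^{-1} y

# right-preconditioned operator A M^{-1}
AMinv = spla.LinearOperator(A.shape, matvec=lambda y: A @ M.solve(y))

y, info = spla.gmres(AMinv, b, restart=30)   # solve A M^{-1} y = b
x = M.solve(y)                               # recover x = M^{-1} y
print(info, np.linalg.norm(b - A @ x) / np.linalg.norm(b))
\end{verbatim}
Note that the residual monitored by the solver here is the true residual
$\vec{b}-\vec{A}\vec{M}^{-1}\vec{y}$, consistent with the discussion above.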
\subsubsection{Gauss-Seidel}
Gauss-Seidel is one of the simplest preconditioners. Based on stationary
iterative methods, Gauss-Seidel is relatively easy to implement, so
it is often the choice if one must implement a preconditioner from
scratch.
Consider the partitioning $\vec{A}=\vec{D}+\vec{L}+\vec{U}$, where
$\vec{D}$ is the diagonal of $\vec{A}$, $\vec{L}$ is the strict
lower triangular part, and $\vec{U}$ is the strict upper triangular
part. Given $\vec{x}_{k}$ and $\vec{b}$, the Gauss-Seidel method
computes a new approximation $\vec{x}_{k+1}$ as
\begin{equation}
\vec{x}_{k+1}=(\vec{D}+\vec{L})^{-1}(\vec{b}-\vec{U}\vec{x}_{k}).
\end{equation}
Gauss-Seidel is a special case of SOR, which computes $\vec{x}_{k+1}$
as
\begin{equation}
\vec{x}_{k+1}=(\vec{D}+\omega\vec{L})^{-1}\left(\omega\left(\vec{b}-\vec{U}\vec{x}_{k}\right)+(1-\omega)\vec{D}\vec{x}_{k}\right).
\end{equation}
When $\omega=1$, SOR reduces to Gauss-Seidel; $\omega>1$
and $\omega<1$ correspond to over-relaxation and under-relaxation,
respectively. We choose to include Gauss-Seidel instead of SOR in
our comparison, because it is parameter free, and an optimal choice
of $\omega$ in SOR is problem dependent. Another related preconditioner
is the Jacobi or diagonal preconditioner, which is less effective
than Gauss-Seidel. A limitation of Gauss-Seidel, also shared by Jacobi
and SOR, is that the diagonal entries of $\vec{A}$ must be nonzero.
Fortunately, this condition is typically satisfied for linear systems
arising from PDE discretizations.
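A minimal sketch (SciPy-based, for illustration only) of the corresponding
preconditioner application $\vec{M}^{-1}\vec{v}=(\vec{D}+\vec{L})^{-1}\vec{v}$,
i.e., one forward Gauss-Seidel sweep with a zero initial guess; the resulting
operator can be plugged into the right-preconditioning construction sketched
earlier.
\begin{verbatim}
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def gauss_seidel_preconditioner(A):
    """LinearOperator applying M^{-1} v = (D + L)^{-1} v,
    one forward Gauss-Seidel sweep with zero initial guess.
    Requires the diagonal entries of A to be nonzero."""
    DL = sp.tril(A, k=0, format="csr")   # D + L: lower triangle with diagonal
    return spla.LinearOperator(
        A.shape,
        matvec=lambda v: spla.spsolve_triangular(DL, v, lower=True))
\end{verbatim}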
\subsubsection{Incomplete LU Factorization}
Incomplete LU factorization (ILU) is one of the most widely used ``black-box''
preconditioners. It performs an approximate factorization
\begin{equation}
\vec{A}\approx\vec{L}\vec{U},
\end{equation}
where $\vec{L}$ and $\vec{U}$ are far sparser than those in the
true LU factorization of $\vec{A}$. In its simplest form, ILU does
not introduce any fill, so that $\vec{L}$ and $\vec{U}$ preserve
the sparsity patterns of the lower and upper triangular parts of $\vec{A}$,
respectively. In this case, the diagonal entries of $\vec{A}$ must
be nonzero. ILU may be extended to retain relatively large fill entries
and to use partial pivoting. These extensions improve the stability of the factorization,
but also increase the computational cost and storage. The ILU factorization
is available in software packages, such as PETSc and MATLAB. Their
default options are typically no-fill, which is what we will use in
this study.
\subsubsection{Algebraic Multigrid}
Multigrid methods, including geometric multigrid (GMG) and algebraic
multigrid (AMG), are the most sophisticated preconditioners. These
methods are typically based on stationary iterative methods, and they
accelerate the convergence by constructing a series of coarser representations.
Compared to Gauss-Seidel and ILU, multigrid preconditioners, especially
GMG, are far more difficult to implement. Fortunately, AMG preconditioners
are more easily accessible through software libraries. There are primarily
two types of AMG methods: the classical AMG, and smoothed aggregation,
which are based on different coarsening strategies and different prolongation
and restriction operators. An efficient implementation of the former
is available in Hypre \cite{falgout2002hypre}, and that of the latter
is available in ML \cite{GeeSie06ML}, both accessible through PETSc.
Computationally, AMG is more expensive than Gauss-Seidel and ILU in
terms of both setup time and cost per iteration, but it is also
more scalable with respect to problem size. Therefore, AMG preconditioners are beneficial only
if they can accelerate the convergence of KSP methods significantly,
and the problem size is sufficiently large. In general, the classical
AMG is more expensive than smoothed aggregation in terms of cost per
iteration, but it tends to converge much faster. Depending on the
types of the problems, the classical AMG may outperform smoothed aggregation
and vice versa. The classical AMG may require more tuning to achieve
good performance. Therefore, in this work we primarily use the smoothed
aggregation, but we will also present a comparison between ML and
Hypre in Section \ref{sub:ML-VS-HYPRE}.
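In our experiments, ML and Hypre are configured through PETSc options rather
than code. Purely as an illustrative stand-in (an assumption: the PyAMG
package, which is not the ML implementation used in this study), a
smoothed-aggregation hierarchy can be built and applied as a right
preconditioner in a few lines; the model problem is hypothetical.
\begin{verbatim}
import numpy as np
import pyamg                                   # stand-in for ML (assumption)
import scipy.sparse.linalg as spla

A = pyamg.gallery.poisson((200, 200), format="csr")  # hypothetical model problem
b = np.ones(A.shape[0])

ml = pyamg.smoothed_aggregation_solver(A)      # setup: build the AMG hierarchy
M = ml.aspreconditioner(cycle="V")             # each apply runs one V-cycle

# right preconditioning, as in the earlier sketch
AMinv = spla.LinearOperator(A.shape, matvec=lambda y: A @ M.matvec(y))
y, info = spla.gmres(AMinv, b, restart=30)
x = M.matvec(y)
\end{verbatim}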
\section{Analysis of Preconditioned KSP Methods\label{sec:Analysis-KSP}}
In this section, we discuss a few Krylov subspace methods in more
detail, especially the preconditioned GMRES, TFQMR, BiCGSTAB, and
QMRCGSTAB with right preconditioners. In the literature, these methods
are typically given either without preconditioners or with left preconditioners.
We present their high-level descriptions with right preconditioners.
We also present some theoretical results in terms of operation counts
and storage, which are helpful in interpreting the numerical results.
The cost of the preconditioners is independent of the KSP methods,
so we do not include it in the comparison.
\subsection{GMRES}
Developed by Saad and Schultz \cite{Saad86GMRES}, GMRES, or the generalized
minimal residual method, is one of the best-known iterative methods
for solving large, sparse, nonsymmetric systems. GMRES is based on
the Arnoldi iteration. At the $k$th iteration, it minimizes $\Vert\vec{r}_{k}\Vert$
in $\mathcal{K}_{k}(\vec{A},\vec{b})$. Equivalently, it finds an
optimal degree-$k$ polynomial $\mathcal{P}_{k}(\vec{A})$ such that
$\vec{r}_{k}=\mathcal{P}_{k}(\vec{A})\vec{r}_{0}$ and $\Vert\vec{r}_{k}\Vert$
is minimized. Suppose the approximate solution has the form
\begin{equation}
\vec{x}_{k}=\vec{x}_{0}+\vec{Q}_{k}\vec{z},\label{eq:GMRES_sol}
\end{equation}
where $\vec{Q}_{k}$ was given in (\ref{eq:Arnoldi_basis}). Let $\beta=\Vert\vec{r}_{0}\Vert$
and $\vec{q}_{1}=\vec{r}_{0}/\Vert\vec{r}_{0}\Vert$. It then follows
that
\begin{equation}
\vec{r}_{k}=\vec{b}-\vec{A}\vec{x}_{k}=\vec{b}-\vec{A}(\vec{x}_{0}+\vec{Q}_{k}\vec{z})=\vec{r}_{0}-\vec{A}\vec{Q}_{k}\vec{z}=\vec{Q}_{k+1}(\beta\vec{e}_{1}-\tilde{\vec{H}}_{k}\vec{z}),\label{eq:GMRES_residual}
\end{equation}
and $\Vert\vec{r}_{k}\Vert=\Vert\beta\vec{e}_{1}-\tilde{\vec{H}}_{k}\vec{z}\Vert$.
Therefore, $\vec{r}_{k}$ is minimized by solving the least squares
system $\tilde{\vec{H}}_{k}\vec{z}\approx\beta\vec{e}_{1}$ using
QR factorization. In this sense, GMRES is closely related to MINRES
for solving symmetric systems \cite{Paige75MINRES}. Algorithm~1
gives a high-level pseudocode of the preconditioned GMRES with a right
preconditioner; for a more detailed pseudocode, see e.g. \cite[p. 294]{Saad03IMS}.
For nonsingular matrices, the convergence of GMRES depends on whether
$\vec{A}$ is close to normal, and also on the distribution of its
eigenvalues \cite{comparison_trefethen,TB97NLA}. At the $k$th iteration,
GMRES requires one matrix-vector multiplication, $k+1$ axpy operations
(i.e., $\alpha\vec{x}+\vec{y}$), and $k+1$ inner products. Let $\ell$
denote the average number of nonzeros per row. In total, GMRES requires
$2n(\ell+2k+2)$ floating-point operations per iteration, and requires
storing $k+5$ vectors in addition to the matrix itself. Due to the
high cost of orthogonalization in the Arnoldi iteration, GMRES in
practice needs to be restarted periodically. This leads to GMRES with
restart, denoted by GMRES($r$), where $r$ is the iteration count
before restart. A typical value of $r$ is 30.
\begin{algorithm}
\begin{minipage}[t]{0.45\textwidth}%
\textbf{\noun{Algorithm}}\textbf{ 1:}
\textbf{Preconditioned GMRES}
\textbf{input}: $\vec{x}_{0}$: initial guess
\hspace{1.25cm}$\vec{r}_{0}$: initial residual
\textbf{output}: $\vec{x}_{*}$: final solution
\begin{algorithmic}[1]
\STATE $\vec{q}_{1}\leftarrow\vec{r}_{0}/\Vert\vec{r}_{0}\Vert$;
$\beta\leftarrow\Vert\vec{r}_{0}\Vert$
\STATE \textbf{for} $k=1,2,\dots$
\STATE ~~~~obtain $\tilde{\vec{H}}_{k}$ and $\vec{Q}_{k}$ from
Arnoldi iteration s.t. $\vec{r}_{k}=\mathcal{P}_{k}(\vec{A}\vec{M}^{-1})\vec{r}_{0}$
\STATE ~~~~solve $\tilde{\vec{H}}_{k}\vec{z}\approx\beta\vec{e}_{1}$
\STATE ~~~~$\vec{y}_{k}\leftarrow\vec{Q}_{k}\vec{z}$
\STATE ~~~~check convergence of $\Vert\vec{r}_{k}\Vert$
\STATE \textbf{end for}
\STATE $\vec{x}_{*}\leftarrow\vec{M}^{-1}\vec{y}_{k}$
\end{algorithmic}%
\end{minipage}\hfill{} %
\begin{minipage}[t]{0.45\textwidth}%
\textbf{\noun{Algorithm}}\textbf{ 2:}
\textbf{Preconditioned TFQMR}
\textbf{input}: $\vec{x}_{0}$: initial guess
\hspace{1.25cm}$\vec{r}_{0}$: initial residual
\textbf{output}: $\vec{x}_{*}$: final solution
\begin{algorithmic}[1]
\STATE $\vec{v}_{1}\leftarrow\vec{r}_{0}/\Vert\vec{r}_{0}\Vert$;
$\beta\leftarrow\Vert\vec{r}_{0}\Vert$
\STATE \textbf{for} $k=1,2,\dots$
\STATE ~~~~obtain $\tilde{\vec{T}}_{k}$ and $\vec{V}_{k}$ from
bi-Lanczos s.t. $\vec{r}_{k}=\mathcal{\tilde{P}}_{k}^{2}(\vec{A}\vec{M}^{-1})\vec{r}_{0}$
\STATE ~~~~solve $\tilde{\vec{T}}_{k}\vec{z}\approx\beta\vec{e}_{1}$
\STATE ~~~~$\vec{y}_{k}\leftarrow\vec{V}_{k}\vec{z}$
\STATE ~~~~check convergence of $\Vert\vec{r}_{k}\Vert$
\STATE \textbf{end for}
\STATE $\vec{x}_{*}\leftarrow\vec{M}^{-1}\vec{y}_{k}$
\end{algorithmic}%
\end{minipage}
\end{algorithm}
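For illustration, combining the Arnoldi sketch from Section~\ref{sec:background}
with a dense least-squares solve yields a minimal, unrestarted and
unpreconditioned GMRES. This is a pedagogical sketch only, not the PETSc
implementation used in our experiments; a practical code would instead update
a QR factorization of $\tilde{\vec{H}}_{k}$ incrementally and monitor the
residual at every step.
\begin{verbatim}
import numpy as np

def gmres_basic(A, b, k, x0=None):
    """Minimize ||b - A x|| over x0 + K_k(A, r0), without restarts.

    Reuses the arnoldi() sketch given earlier; solves the small
    least-squares problem H z ~ beta e_1 once instead of updating
    a QR factorization at every step.
    """
    x0 = np.zeros_like(b) if x0 is None else x0
    r0 = b - A @ x0
    Q, H = arnoldi(A, r0, k)
    e1 = np.zeros(H.shape[0])
    e1[0] = np.linalg.norm(r0)               # beta e_1
    z, *_ = np.linalg.lstsq(H, e1, rcond=None)
    return x0 + Q[:, :H.shape[1]] @ z
\end{verbatim}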
\subsection{QMR and TFQMR}
Proposed by Freund and Nachtigal \cite{FN91QMR}, QMR, or the quasi-minimal
residual method, minimizes $\vec{r}_{k}$ with respect to a pseudonorm within the
Krylov subspace $\mathcal{K}(\vec{A},\vec{r}_{0})$. At the $k$th
step, suppose the approximate solution has the form
\begin{equation}
\vec{x}_{k}=\vec{x}_{0}+\vec{V}_{k}\vec{z},\label{eq:22}
\end{equation}
where $\vec{V}_{k}$ is the same as in (\ref{eq:nonorth_basis}).
Let $\beta=\Vert\vec{r}_{0}\Vert$ and $\vec{v}_{1}=\vec{r}_{0}/\Vert\vec{r}_{0}\Vert$.
It then follows that
\begin{equation}
\vec{r}_{k}=\vec{b}-\vec{A}\vec{x}_{k}=\vec{b}-\vec{A}(\vec{x}_{0}+\vec{V}_{k}\vec{z})=\vec{r}_{0}-\vec{A}\vec{V}_{k}\vec{z}=\vec{V}_{k+1}(\beta\vec{e}_{1}-\tilde{\vec{T}}_{k}\vec{z}).
\end{equation}
QMR minimizes $\Vert\beta\vec{e}_{1}-\tilde{\vec{T}}_{k}\vec{z}\Vert$
by solving the least-squares problem $\tilde{\vec{T}}_{k}\vec{z}\approx\beta\vec{e}_{1}$,
which is equivalent to minimizing the pseudonorm
\begin{equation}
\Vert\vec{r}_{k}\Vert_{\vec{W}_{k+1}^{T}}=\Vert\vec{W}_{k+1}^{T}\vec{r}_{k}\Vert_{2},
\end{equation}
where $\vec{W}_{k+1}$ was defined in (\ref{eq:biorth_basis_W}).
QMR requires explicit construction of $\vec{W}_{k}$. TFQMR \cite{Freund93TFQMR}
is a transpose-free variant, which constructs $\vec{V}_{k}$ without
forming $\vec{W}_{k}$. Motivated by CGS \cite{Sonneveld89CGS},
at the $k$th iteration, TFQMR finds a degree-$k$ polynomial $\tilde{\mathcal{P}}_{k}(\vec{A})$
such that $\vec{r}_{k}=\mathcal{\tilde{P}}_{k}^{2}(\vec{A})\vec{r}_{0}$.
This is what we refer to as ``transpose-free bi-Lanczos 1'' in Table~\ref{tab:KrylovSubspaces}.
Algorithm~2 outlines TFQMR with a right preconditioner. Its only
difference from GMRES is in lines 3--5. Detailed pseudocode without
preconditioners can be found in \cite{Freund93TFQMR} and \cite[p. 252]{Saad03IMS}.
At the $k$th iteration, TFQMR requires two matrix-vector multiplications,
ten axpy operations (i.e., $\alpha\vec{x}+\vec{y}$), and four inner
products. In total, TFQMR requires $4n(\ell+7)$ floating-point operations
per iteration, and requires storing eight vectors in addition to the
matrix itself. This is comparable to QMR, which requires 12 axpy operations
and two inner products, so QMR requires the same number of floating-point
operations. However, QMR requires storing twice as many vectors as
TFQMR. In practice, TFQMR often outperforms QMR, because the multiplication
with $\vec{A}^{T}$ is often less optimized. In addition, the preconditioning
of QMR is problematic, especially with multigrid preconditioners.
Therefore, TFQMR is in general preferred over QMR. Both QMR and TFQMR
may suffer from breakdowns, but these rarely happen in practice, especially
with a good preconditioner.
\subsection{BiCGSTAB}
Proposed by van der Vorst \cite{vanderVorst92BiCGSTAB}, BiCGSTAB
is a transpose-free version of BiCG, which has smoother convergence
than BiCG and CGS. Unlike CGS and TFQMR, at the $k$th iteration,
BiCGSTAB constructs another degree-$k$ polynomial
\begin{equation}
\mathcal{Q}_{k}(\vec{A})=(1-\omega_{1}\vec{A})(1-\omega_{2}\vec{A})\cdots(1-\omega_{k}\vec{A})\label{eq:bicgstab_poly}
\end{equation}
in addition to $\mathcal{\tilde{P}}_{k}(\vec{A})$ in CGS, such that
$\vec{r}_{k}=\mathcal{Q}_{k}(\vec{A})\mathcal{\tilde{P}}_{k}(\vec{A})\vec{r}_{0}$.
BiCGSTAB determines $\omega_{k}$ by minimizing $\Vert\vec{r}_{k}\Vert$
with respect to $\omega_{k}$. This is what we referred to as ``transpose-free
bi-Lanczos 2'' in Table~\ref{tab:KrylovSubspaces}. Like BiCG and
CGS, BiCGSTAB solves the linear system $\vec{T}_{k}\vec{z}=\beta\vec{e}_{1}$
using LU factorization without pivoting, which is analogous to solving
the tridiagonal system using Cholesky factorization in CG \cite{Hestenes52CG}.
Algorithm~3 outlines BiCGSTAB with a right preconditioner, whose
only difference from GMRES is in lines 3--5. Detailed pseudocode
without preconditioners can be found in \cite[p. 136]{Van-der-Vorst:2003aa}.
At the $k$th iteration, BiCGSTAB requires two matrix-vector multiplications,
six axpy operations, and four inner products. In total, it requires
$4n(\ell+5)$ floating-point operations per iteration, and requires
storing $10$ vectors in addition to the matrix itself. Like GMRES,
the convergence rate of BiCGSTAB also depends on the distribution
of the eigenvalues of $\vec{A}$. Unlike GMRES, however, BiCGSTAB
is ``parameter free.'' Its underlying bi-Lanczos iteration may break
down, but it is very rare in practice with a good preconditioner.
Therefore, BiCGSTAB is often more efficient and robust than restarted
GMRES.
\begin{algorithm}
\begin{minipage}[t]{0.45\textwidth}%
\textbf{\noun{Algorithm}}\textbf{ 3:}
\textbf{Preconditioned BiCGSTAB}
\textbf{input}: $\vec{x}_{0}$: initial guess
\hspace{1.25cm}$\vec{r}_{0}$: initial residual
\textbf{output}: $\vec{x}_{*}$: final solution
\begin{algorithmic}[1]
\STATE $\vec{v}_{1}\leftarrow\vec{r}_{0}/\Vert\vec{r}_{0}\Vert$;
$\beta\leftarrow\Vert\vec{r}_{0}\Vert$
\STATE \textbf{for} $k=1,2,\dots$
\STATE ~~~~obtain $\vec{T}_{k}$ \& $\vec{V}_{k}$ from bi-Lanczos
s.t. $\vec{r}_{k}=\mathcal{Q}_{k}\mathcal{\tilde{P}}_{k}(\vec{A}\vec{M}^{-1})\vec{r}_{0}$
\STATE ~~~~solve $\vec{T}_{k}\vec{z}=\beta\vec{e}_{1}$
\STATE ~~~~$\vec{y}_{k}\leftarrow\vec{V}_{k}\vec{z}$
\STATE ~~~~check convergence of $\Vert\vec{r}_{k}\Vert$
\STATE \textbf{end for}
\STATE $\vec{x}_{*}\leftarrow\vec{M}^{-1}\vec{y}_{k}$
\end{algorithmic}%
\end{minipage}\hfill{} %
\begin{minipage}[t]{0.45\textwidth}%
\textbf{\noun{Algorithm 4}}\textbf{:}
\textbf{Preconditioned QMRCGSTAB}
\textbf{input}: $\vec{x}_{0}$: initial guess
\hspace{1.25cm}$\vec{r}_{0}$: initial residual
\textbf{output}: $\vec{x}_{*}$: final solution
\begin{algorithmic}[1]
\STATE $\vec{v}_{1}\leftarrow\vec{r}_{0}/\Vert\vec{r}_{0}\Vert$;
$\beta\leftarrow\Vert\vec{r}_{0}\Vert$
\STATE \textbf{for} $k=1,2,\dots$
\STATE ~~~~obtain $\tilde{\vec{T}}_{k}$ \& $\vec{V}_{k}$ from
bi-Lanczos s.t. $\vec{r}_{k}=\mathcal{Q}_{k}\mathcal{\tilde{P}}_{k}(\vec{A}\vec{M}^{-1})\vec{r}_{0}$
\STATE ~~~~solve $\tilde{\vec{T}}_{k}\vec{z}\approx\beta\vec{e}_{1}$
\STATE ~~~~$\vec{y}_{k}\leftarrow\vec{V}_{k}\vec{z}$
\STATE ~~~~check convergence of $\Vert\vec{r}_{k}\Vert$
\STATE \textbf{end for}
\STATE $\vec{x}_{*}\leftarrow\vec{M}^{-1}\vec{y}_{k}$
\end{algorithmic}%
\end{minipage}
\end{algorithm}
\subsection{QMRCGSTAB}
One disadvantage of BiCGSTAB is that the residual does not decrease
monotonically, and is often quite oscillatory. Chan \emph{et al.}
\cite{CGS94QMRCGS} proposed QMRCGSTAB, which is a hybrid of QMR and
BiCGSTAB, to improve the smoothness of BiCGSTAB. Like BiCGSTAB, QMRCGSTAB
constructs a polynomial $\mathcal{Q}_{k}(\vec{A})$ as defined in
(\ref{eq:bicgstab_poly}) by minimizing $\Vert\vec{r}_{k}\Vert$
with respect to $\omega_{k}$, which they refer to as ``local quasi-minimization.''
Like QMR, it then minimizes $\Vert\vec{W}_{k+1}^{T}\vec{r}_{k}\Vert_{2}$
by solving the least-squares problem $\tilde{\vec{T}}_{k}\vec{z}\approx\beta\vec{e}_{1}$,
which they refer to as ``global quasi-minimization.'' Algorithm~4
outlines the high-level algorithm, whose only difference from
BiCGSTAB is in line 4. Detailed pseudocode without preconditioners
can be found in \cite{CGS94QMRCGS}.
At the $k$th iteration, QMRCGSTAB requires two matrix-vector multiplications,
eight axpy operations, and six inner products. In total, it requires
$4n(\ell+7)$ floating-point operations per iteration, and it requires
storing 13 vectors in addition to the matrix itself. Like QMR and
BiCGSTAB, the underlying bi-Lanczos may break down, but it is very
rare in practice with a good preconditioner.
\subsection{Comparison of Operation Counts and Storage}
We summarize the cost and storage comparison of the four KSP methods
in Table~\ref{tab:Comparison-of-operations}. Except for GMRES, the
other methods require two matrix-vector multiplications per iteration.
However, we should not expect GMRES to be twice as fast as the other
methods, because the reduction of error in one iteration of the other
methods is approximately equal to that of two iterations in GMRES.
Therefore, the costs of these methods are comparable in terms of matrix-vector
multiplications. Moreover, GMRES minimizes the true 2-norm of
the residual if no restart is needed, and its cost per iteration is smaller
for small iteration counts. Therefore, GMRES may indeed be the most
efficient, especially with an effective preconditioner. However, without
an effective preconditioner, the restarted GMRES may converge slowly
and even stagnate for large systems. For the three methods based on
bi-Lanczos, computing the $2$-norm of the residual for convergence
checking requires an extra inner product, as included in Table~\ref{tab:Comparison-of-operations}.
Among the three methods, BiCGSTAB is the most efficient, requiring
$8n$ fewer floating point operations per iteration than TFQMR and
QMRCGSTAB. In Section~\ref{sec:Results}, we will present the numerical
comparisons of the different methods, which mostly agree with the
above analysis.
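As a rough empirical analogue of this analysis (a sketch using SciPy's
built-in solvers on a hypothetical test matrix, not the PETSc setup used for
the results below; QMRCGSTAB has no SciPy counterpart, and \texttt{tfqmr}
requires a recent SciPy version), matrix-vector products can be counted with
a wrapper operator:
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

class CountingOperator(spla.LinearOperator):
    """Wrap a matrix and count its matrix-vector products."""
    def __init__(self, A):
        super().__init__(dtype=A.dtype, shape=A.shape)
        self.A, self.count = A, 0
    def _matvec(self, v):
        self.count += 1
        return self.A @ v

n = 100
A = sp.diags([-1.0, -1.0, 4.2, -0.8, -1.0], [-n, -1, 0, 1, n],
             shape=(n * n, n * n), format="csr")
b = np.ones(n * n)

solvers = [("GMRES(30)", lambda Op: spla.gmres(Op, b, restart=30)),
           ("BiCGSTAB", lambda Op: spla.bicgstab(Op, b)),
           ("TFQMR", lambda Op: spla.tfqmr(Op, b))]
for name, solve in solvers:
    Op = CountingOperator(A)
    x, info = solve(Op)
    print(name, "info:", info, "mat-vecs:", Op.count)
\end{verbatim}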
\begin{table}[tb]
\caption{\label{tab:Comparison-of-operations}Comparison of operations per
iteration and memory requirements of various KSP methods. $n$ denotes
the number of rows, $\ell$ the average number of nonzeros per row,
and $k$ the iteration count.}
\centering{}%
\begin{tabular}{c|c|c|c|c|c|c}
\hline
\multirow{2}{*}{Method} & \multirow{2}{*}{Minimizes} & Mat-Vec & \multirow{2}{*}{axpy} & Inner & \multirow{2}{*}{FLOPs} & Stored\tabularnewline
 &  & Prod. &  & Prod. &  & Vectors\tabularnewline
\hline
\hline
GMRES & $\Vert\vec{r}_{k}\Vert$ & 1 & $k+1$ & $k+1$ & $2n(\ell+2k+2)$ & $k+5$\tabularnewline
\hline
BiCGSTAB & $\Vert\vec{r}_{k}(\omega_{k})\Vert$ & \multirow{4}{*}{2} & 6 & 4 & $4n(\ell+5)$ & 10\tabularnewline
\cline{1-2} \cline{4-7}
TFQMR & $\Vert\vec{r}_{k}\Vert_{\vec{W}_{k+1}^{T}}$ &  & 10 & 4 & \multirow{3}{*}{$4n(\ell+7)$} & 8\tabularnewline
\cline{1-2} \cline{4-5} \cline{7-7}
\multirow{2}{*}{QMRCGSTAB} & $\Vert\vec{r}_{k}(\omega_{k})\Vert$ \& &  & \multirow{2}{*}{8} & \multirow{2}{*}{6} &  & \multirow{2}{*}{13}\tabularnewline
 & $\Vert\vec{r}_{k}\Vert_{\vec{W}_{k+1}^{T}}$ &  &  &  &  & \tabularnewline
\hline
\end{tabular}
\end{table}
In terms of storage, TFQMR requires the least amount of memory. BiCGSTAB
requires two more vectors than TFQMR, and QMRCGSTAB requires three
more vectors than BiCGSTAB. GMRES requires the most memory
when $k\gtrsim8$. These storage requirements are typically not large
enough to be a concern in practice.
The analysis above did not include the preconditioners. The computational
cost of Gauss-Seidel and ILU is approximately equal to one matrix-vector
multiplication per iteration. The cost of the multigrid preconditioner
is dominated by that of the setup and the smoothing steps at the finest
level, which is typically a few times that of Gauss-Seidel and
ILU, depending on how many times the smoother is called. Both ILU
and multigrid preconditioner require extra storage proportional to
the number of nonzeros in the coefficient matrices, which is rarely
a concern in practice.
\section{\label{sec:PDE-Discretization-Methods}PDE Discretization Methods}
For our comparative study, we construct test matrices from PDE discretizations
in 2D and 3D. In this section, we give a brief overview of
the discretization methods used in our tests, with a focus on the
origins of the nonsymmetry of the linear systems.
\subsection{Weighted Residual Formulation of a Model Problem}
Consider an abstract but general linear, time-independent PDE over
$\Omega$
\begin{equation}
\mathcal{P}\,u(\vec{x})=f(\vec{x}),\label{eq:linearPDE}
\end{equation}
with Dirichlet or Neumann boundary conditions over $\partial\Omega$,
where $\mathcal{P}$ is a linear differential operator, and $f$ is
a known function. A specific example is the model problem
\begin{align}
-\nabla^{2}u+c\nabla u+du & =f\quad\text{in }\Omega,\label{eq:model_problem}\\
u & =g\quad\text{on }\partial\Omega,
\end{align}
for which $\mathcal{P}=-\nabla^{2}+c\nabla+d$, where $c$ and $d$
are scalar constants or functions. When $d=0$, it is a convection-diffusion
equation; when $c=0$, it is a Helmholtz equation.
Most PDE discretization methods can be expressed in a weighted residual
formulation. In particular, consider a set of test (a.k.a. weight)
functions $\Psi(\vec{x})=\{\psi_{j}(\vec{x})\}$. The PDE (\ref{eq:linearPDE})
is then converted into a set of integral equations
\begin{equation}
\int_{\Omega}\mathcal{P}\,u(\vec{x})\,\psi_{j}\,d\vec{x}=\int_{\Omega}f(\vec{x})\,\psi_{j}\,d\vec{x}.\label{eq:weak_form}
\end{equation}
To discretize the system fully, we approximate $u$ by a set of basis
functions $\Phi(\vec{x})=\{\phi_{i}(\vec{x})\}$, i.e., $u\approx\vec{u}^{T}\vec{\Phi}=\sum_{i}u_{i}\phi_{i}$.
We then obtain a linear system
\begin{equation}
\vec{A}\vec{u}=\vec{b},\label{eq:linearsys}
\end{equation}
where
\begin{equation}
a_{ij}=\int_{\Omega}\left(\mathcal{P}\,\phi_{j}(\vec{x})\right)\,\psi_{i}(\vec{x})\,d\vec{x}\quad\text{and}\quad b_{i}=\int_{\Omega}f(\vec{x})\,\psi_{i}(\vec{x})\,d\vec{x}.
\end{equation}
The system needs to be further modified to apply the boundary conditions.
The test and basis functions typically have local support, and
therefore $\vec{A}$ is in general sparse.
\subsection{Galerkin Finite Element Methods}
The finite element methods (FEM) are widely used for discretizing
PDEs over complex geometries. For an introduction of finite element
methods, see e.g. \cite{ZTZ05FEM}. In the classical Galerkin FEM,
the basis functions $\vec{\Phi}$ and the test functions $\vec{\Psi}$
are equal. If $\mathcal{P}$ is the Laplacian operator $\nabla^{2}$
and $\phi_{i}$ vanishes along $\partial\Omega$, then after integration
by parts,
\begin{equation}
a_{ij}=\int_{\Omega}\left(\nabla^{2}\phi_{j}(\vec{x})\right)\phi_{i}(\vec{x})\,d\vec{x}=-\int_{\Omega}\nabla\phi_{j}(\vec{x})\cdot\nabla\phi_{i}(\vec{x})\,d\vec{x},
\end{equation}
so we obtain a symmetric linear system for Helmholtz equations. However,
for the convection-diffusion equation or more complicated PDEs, the
linear system is in general nonsymmetric. In this study, we will use
the convection-diffusion equation as the test problem for FEM in both
2D and 3D.
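To make the source of the nonsymmetry concrete, consider the hypothetical 1D
analogue $-u''+cu'=f$ on $(0,1)$ with homogeneous Dirichlet boundary
conditions, discretized by linear elements on a uniform mesh: the diffusion
term contributes a symmetric tridiagonal part, while the convection term
contributes a skew-symmetric part. A minimal sketch:
\begin{verbatim}
import scipy.sparse as sp

def conv_diff_1d(n, c):
    """Linear-FEM matrix for -u'' + c u' = f on (0,1), Dirichlet BCs.

    Interior stencils: diffusion (1/h)[-1, 2, -1]   (symmetric)
                       convection (c/2)[-1, 0, 1]   (skew-symmetric)
    """
    h = 1.0 / (n + 1)
    K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h
    C = sp.diags([-c / 2.0, c / 2.0], [-1, 1], shape=(n, n))
    return (K + C).tocsr()

A = conv_diff_1d(1000, c=10.0)
print(abs(A - A.T).max())   # equals |c|; nonzero whenever c != 0
\end{verbatim}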
\subsection{Petrov-Galerkin Methods}
Another source of nonsymmetric linear systems is the Petrov-Galerkin
methods, in which the test functions are different from the basis
functions. The Petrov-Galerkin methods are desirable, because the
basis and test functions have different desired properties for accuracy
and stability. An example of Petrov-Galerkin methods is AES-FEM \cite{CDJ16OEQ,Conley16HAES},
which uses generalized Lagrange polynomials for basis functions and
the standard linear finite-element basis functions as test functions.
Unlike Galerkin methods, the accuracy and stability of AES-FEM are
independent of the element quality of the meshes. The linear systems
from AES-FEM are always nonsymmetric, even for the Helmholtz equations.
We will consider some matrices arising from AES-FEM methods for the
convection-diffusion equation in both 2D and 3D.
\subsection{Finite Difference and Generalized Finite Difference}
The finite difference methods are often used to discretize PDEs on
structured or curvilinear meshes. For the Helmholtz equations with
Dirichlet boundary conditions, we may obtain a symmetric linear system
by using centered difference approximation on a uniform structured
mesh. However, the finite difference methods in general lead to nonsymmetric
matrices with more complicated PDEs, more sophisticated boundary or
jump conditions, higher-order discretizations, nonuniform meshes,
or curvilinear meshes. We will consider a nonsymmetric matrix from
a finite difference discretization of a Helmholtz equation on a nonuniform
structured mesh, arising from climate modeling.
The finite difference methods were traditionally limited to structured
meshes. However, they can be generalized to unstructured meshes by
the generalized finite difference methods, or GFD \cite{Benito08GFDM}.
These methods are weighted residual methods with Dirac delta functions
as the test functions and the generalized Lagrange polynomials as
basis functions. Similar to the Petrov-Galerkin methods, the generalized
finite difference methods in general result in nonsymmetric linear
systems. We will consider some test matrices from GFD for the convection-diffusion
equation.
\section{Numerical Results\label{sec:Results}}
In this section, we present some empirical comparisons of the preconditioned
KSP methods described in Section~\ref{sec:Analysis-KSP}. For GMRES,
TFQMR and BiCGSTAB, we use the built-in implementations in PETSc v3.7.1
\cite{petsc-user-ref}. For GMRES, we use 30 as the restart parameter,
the default in PETSc, so we denote the method by GMRES(30). QMRCGSTAB
is not available in PETSc. We implemented it ourselves using the lower-level
matrix-vector libraries in PETSc. We use the Gauss-Seidel, ILU and
AMG as right preconditioners for these KSP methods. The Gauss-Seidel
preconditioner is available in PETSc as SOR with the relaxation parameter
set to $1$. For ILU, we use the default options in PETSc, which allow
no fill. For AMG, we primarily use the smoothed aggregation in ML
v5.0 \cite{GeeSie06ML} with default parameters. We will also compare
ML against the classical AMG in Hypre v2.10 \cite{falgout2002hypre}.
We compare the convergence history and runtimes of these methods.
For the convergence criteria, we use the relative $2$-norm of the
residual, i.e. the 2-norm of the residual divided by the 2-norm of
the right-hand side. For all the cases, the tolerance is set to $10^{-10}$.
We conducted our tests on a single node of a cluster with two 2.6
GHz Intel Xeon E5-2690v3 processors and 128 GB of memory. Because
ILU is only available in serial in PETSc, we performed all the tests
using a single core, and defer comparisons of parallel algorithms
and implementations to future work.
\subsection{Test Matrices}
\begin{table}
\caption{\label{tab:test_matrices}Summary of test matrices. }
\centering{}{\footnotesize{}}%
\begin{tabular}{>{\centering}p{1.1cm}|>{\centering}p{2.1cm}|c|>{\raggedleft}p{1.5cm}|>{\raggedleft}p{1.7cm}|>{\centering}p{1.6cm}}
\hline
\textbf{\footnotesize{}Matrix} & \textbf{\footnotesize{}Discretization} & \textbf{\footnotesize{}PDE} & \textbf{\footnotesize{}Size } & \textbf{\footnotesize{}\#Nonzeros} & \textbf{\footnotesize{}Cond. No.}\tabularnewline
\hline
\hline
1 & \textbf{\footnotesize{}FEM 2D} & {\footnotesize{} conv. diff.} & {\footnotesize{}1,044,226} & {\footnotesize{}7,301,314} & {\footnotesize{}8.31e5}\tabularnewline
\hline
2 & \textbf{\footnotesize{}FEM 3D} & {\footnotesize{} conv. diff.} & {\footnotesize{}237,737} & {\footnotesize{}1,819,743} & {\footnotesize{}8.90e3}\tabularnewline
\hline
3 & \textbf{\footnotesize{}FEM 3D} & {\footnotesize{} conv. diff.} & {\footnotesize{}1,529,235} & {\footnotesize{}23,946,925 } & {\footnotesize{}3.45e4}\tabularnewline
\hline
4 & \textbf{\footnotesize{}FEM 3D} & {\footnotesize{} conv. diff.} & {\footnotesize{}13,110,809} & {\footnotesize{}197,881,373} & $-$\tabularnewline
\hline
5 & \textbf{\footnotesize{}AES-FEM 2D} & {\footnotesize{} conv. diff.} & {\footnotesize{}1,044,226 } & {\footnotesize{}13,487,418} & {\footnotesize{}9.77e5}\tabularnewline
\hline
6 & \textbf{\footnotesize{}AES-FEM 3D} & {\footnotesize{} conv. diff.} & {\footnotesize{}13,110,809} & {\footnotesize{}197,882,439} & $-$\tabularnewline
\hline
7 & \textbf{\footnotesize{}GFD 2D} & {\footnotesize{} conv. diff.} & {\footnotesize{}1,044,226 } & {\footnotesize{}7,476,484} & {\footnotesize{}2.38e6}\tabularnewline
\hline
8 & \textbf{\footnotesize{}GFD 3D} & {\footnotesize{} conv. diff.} & {\footnotesize{}1,529,235} & {\footnotesize{}23,948,687} & {\footnotesize{}6.56e4}\tabularnewline
\hline
9 & \textbf{\footnotesize{}FDM 2D} & {\footnotesize{} Helmholtz} & {\footnotesize{}1,340,640} & {\footnotesize{}6,694,058} & {\footnotesize{}7.23e8}\tabularnewline
\hline
\end{tabular}{\footnotesize \par}
\end{table}
We constructed the test matrices from PDE discretizations as described
in Section~\ref{sec:PDE-Discretization-Methods}. We selected nine
representative matrices from a much larger number of cases that we
have tested. The sizes of these matrices range from about $10^{5}$
to $10^{7}$ unknowns. Table~\ref{tab:test_matrices} summarizes
the PDE discretization, the size, and the condition number of each
test matrix. The condition numbers of the largest matrices are unavailable
because estimating them ran out of memory.
\begin{figure}
\begin{minipage}[t]{0.45\textwidth}%
\begin{center}
\includegraphics[width=1\columnwidth]{figures/Mesh2_2D_16}
\par\end{center}
\caption{\label{fig:Representative-example-2-D}Example unstructured 2D mesh
for convection-diffusion equation.}
\end{minipage}\hfill{} %
\begin{minipage}[t]{0.45\textwidth}%
\begin{center}
\includegraphics[width=0.98\columnwidth]{figures/climate_plot}
\par\end{center}
\begin{flushleft}
\caption{\label{fig:nonuniform-mesh}Example nonuniform structured mesh for
Helmholtz equation.}
\par\end{flushleft}%
\end{minipage}
\end{figure}
For the 2D FEM, AES-FEM, and GFD, our test matrices were obtained
with an unstructured mesh generated using Triangle \cite{ShewchukTRIANGLE96}.
Figure~\ref{fig:Representative-example-2-D} shows the qualitative
pattern of the mesh at a much coarser resolution than what was used
in our tests. For the 3D tests, we generated three unstructured meshes
of a cube at different resolutions using TetGen \cite{Si2006}, to
facilitate the scalability study of the preconditioned KSP methods
with respect to the number of unknowns. For the finite difference
method, we consider a matrix obtained from an unequally spaced structured
mesh for the Helmholtz equation with Neumann boundary conditions and
a very small constant $d$ in (\ref{eq:model_problem}), so the matrix
has a very large condition number. Figure~\ref{fig:nonuniform-mesh}
shows the qualitative pattern of the mesh at a much coarser resolution.
\subsection{Convergence Comparison}
\begin{figure}
\begin{minipage}[t]{0.45\textwidth}%
\begin{center}
\includegraphics[width=1\textwidth]{figures/Iteration_plots/line_fewer_markers_v4/Figures/matrix1_SOR}
\par\end{center}%
\end{minipage}\hfill{} %
\begin{minipage}[t]{0.45\textwidth}%
\begin{center}
\includegraphics[width=1\textwidth]{figures/Iteration_plots/line_fewer_markers_v4/Figures/matrix4_SOR}
\par\end{center}%
\end{minipage}
\begin{minipage}[t]{0.45\textwidth}%
\begin{center}
\includegraphics[width=1\textwidth]{figures/Iteration_plots/line_fewer_markers_v4/Figures/matrix1_ILU}
\par\end{center}%
\end{minipage}\hfill{} %
\begin{minipage}[t]{0.45\textwidth}%
\begin{center}
\includegraphics[width=1\textwidth]{figures/Iteration_plots/line_fewer_markers_v4/Figures/matrix4_ilu}
\par\end{center}%
\end{minipage}
\begin{minipage}[t]{0.45\textwidth}%
\begin{center}
\includegraphics[width=1\textwidth]{figures/Iteration_plots/line_fewer_markers_v4/Figures/matrix1_ml}
\par\end{center}%
\end{minipage}\hfill{} %
\begin{minipage}[t]{0.45\textwidth}%
\begin{center}
\includegraphics[width=1\textwidth]{figures/Iteration_plots/line_fewer_markers_v4/Figures/matrix4_ml}
\par\end{center}%
\end{minipage}
\caption{\label{fig:FEM-residual}Relative residuals versus numbers of matrix-vector
multiplications for the Gauss-Seidel, ILU and ML preconditioners for matrix
1 (left column) and matrix 4 (right column).}
\end{figure}
\begin{figure}
\begin{minipage}[t]{0.45\textwidth}%
\begin{center}
\includegraphics[width=1\textwidth]{figures/Iteration_plots/line_fewer_markers_v4/Figures/matrix7_sor}
\par\end{center}%
\end{minipage}\hfill{} %
\begin{minipage}[t]{0.45\textwidth}%
\begin{center}
\includegraphics[width=1\textwidth]{figures/Iteration_plots/line_fewer_markers_v4/Figures/matrix9_sor}
\par\end{center}%
\end{minipage}
\begin{minipage}[t]{0.45\textwidth}%
\begin{center}
\includegraphics[width=1\textwidth]{figures/Iteration_plots/line_fewer_markers_v4/Figures/matrix7_ilu}
\par\end{center}%
\end{minipage}\hfill{} %
\begin{minipage}[t]{0.45\textwidth}%
\begin{center}
\includegraphics[width=1\textwidth]{figures/Iteration_plots/line_fewer_markers_v4/Figures/matrix9_ilu}
\par\end{center}%
\end{minipage}
\begin{minipage}[t]{0.45\textwidth}%
\begin{center}
\includegraphics[width=1\textwidth]{figures/Iteration_plots/line_fewer_markers_v4/Figures/matrix7_ml}
\par\end{center}%
\end{minipage}\hfill{} %
\begin{minipage}[t]{0.45\textwidth}%
\begin{center}
\includegraphics[width=1\textwidth]{figures/Iteration_plots/line_fewer_markers_v4/Figures/matrix9_ml}
\par\end{center}%
\end{minipage}
\caption{\label{fig:FD-residual}Relative residuals versus numbers of matrix-vector
multiplications for the Gauss-Seidel, ILU and ML preconditioners for matrix
7 (left column) and matrix 9 (right column).}
\end{figure}
The Krylov subspace methods we consider are all theoretically based
on the same Krylov subspace, but their practical convergence is complicated
by the restarts in the Arnoldi iteration and the nonorthogonal basis in
the bi-Lanczos iteration. To supplement the theoretical results in
Section~\ref{sec:Analysis-KSP}, we present the convergence histories
of four test matrices, namely FEM 2D (matrix 1), FEM 3D (matrix
4), GFD 2D (matrix 7), and Helmholtz 2D (matrix 9), in Figures~\ref{fig:FEM-residual}
and \ref{fig:FD-residual}. These plots are representative of the
other test cases. Because the asymptotic convergence of the methods
depends on the degrees of the polynomials, or equivalently the number
of matrix-vector multiplications, we plot the relative residual with
respect to the number of matrix-vector multiplications instead of
iteration counts. For ease of cross-comparison of different preconditioners,
we truncated the $x$ axis to be the same for Gauss-Seidel and ILU
for each matrix.
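To make the counting convention concrete, the following minimal Python/SciPy
sketch shows one way to count matrix-vector multiplications by wrapping the
matrix in a counting operator. The matrix here is a small illustrative
stand-in, not one of our test matrices, and SciPy's solvers stand in for the
PETSc solvers used in our experiments.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Illustrative stand-in problem: a 2D Laplacian plus a first-order
# upwind convection term, which makes the matrix nonsymmetric.
n = 32
I = sp.identity(n, format='csr')
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')
U = sp.diags([-1.0, 1.0], [-1, 0], shape=(n, n), format='csr')
A = (sp.kron(I, T) + sp.kron(T, I) + 0.5 * sp.kron(I, U)).tocsr()
b = np.ones(A.shape[0])

matvecs = [0]
def counted_matvec(x):
    matvecs[0] += 1                      # one matrix-vector multiplication
    return A @ x

Aop = spla.LinearOperator(A.shape, matvec=counted_matvec, dtype=A.dtype)
x, info = spla.gmres(Aop, b, restart=30)
print('info =', info, ' matvecs =', matvecs[0])
\end{verbatim}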
Figure~\ref{fig:FEM-residual} shows the convergence history of the
FEM in 2D and 3D. We observe that with ML, the four methods had about
the same convergence trajectories, with GMRES(30) converging slightly
faster, and the convergence of all the methods was quite smooth,
without apparent oscillation. In contrast, with the Gauss-Seidel or ILU
preconditioners, GMRES(30) converged fast initially but then slowed
down drastically due to restarts, whereas BiCGSTAB had highly oscillatory
residuals. QMRCGSTAB was smoother than BiCGSTAB, and it sometimes
converged faster than BiCGSTAB. The convergence of TFQMR exhibited
a staircase pattern, indicating frequent near stagnation. These results
indicate that an effective multigrid preconditioner can effectively
overcome the disadvantages of each of these KSP methods, including
the oscillations of BiCGSTAB and the slow convergence of GMRES due to restarts.
Figure~\ref{fig:FD-residual} shows the convergence results for GFD
2D (matrix 7) and the finite-difference solution of the Helmholtz equation
in 2D (matrix 9). The result for matrix 7 is qualitatively similar to those
of the 2D FEM, except that the stagnation of TFQMR is even more apparent. Matrix
9 is much more problematic in all the cases. BiCGSTAB oscillated
wildly, and GMRES and TFQMR both stagnated with Gauss-Seidel and ILU.
Even with ML, all the methods required more than 300 matrix-vector products
to converge. We will address the efficiency of AMG preconditioners
further in Section~\ref{sub:ML-VS-HYPRE}, where we compare smoothed
aggregation with classical AMG.
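For readers who want to experiment with an AMG-preconditioned KSP method
without a full PETSc/Trilinos installation, the following minimal Python
sketch uses PyAMG's smoothed aggregation as a stand-in for ML; the matrix is
a small illustrative Laplacian, not one of our test matrices.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla
import pyamg    # open-source smoothed-aggregation AMG (assumed installed)

# Illustrative stand-in matrix: the 2D 5-point Laplacian.
n = 32
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))).tocsr()
b = np.ones(A.shape[0])

ml = pyamg.smoothed_aggregation_solver(A)   # setup phase
M = ml.aspreconditioner(cycle='V')          # one V-cycle per application
x, info = spla.gmres(A, b, M=M, restart=30)
print('info =', info)
# Note: as of this writing, SciPy applies M on the left, unlike the
# right-preconditioned runs reported in our experiments.
\end{verbatim}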
\subsection{Timing Comparison}
\begin{figure}
\begin{minipage}[t]{0.45\textwidth}%
\begin{center}
\includegraphics[width=1\textwidth]{figures/BAR_PLOT/matrix1_bar}
\par\end{center}%
\end{minipage}\hfill{} %
\begin{minipage}[t]{0.45\textwidth}%
\begin{center}
\includegraphics[width=1\textwidth]{figures/BAR_PLOT/matrix4_bar}
\par\end{center}%
\end{minipage}
\begin{minipage}[t]{0.45\textwidth}%
\begin{center}
\includegraphics[width=1\textwidth]{figures/BAR_PLOT/matrix7_bar}
\par\end{center}%
\end{minipage}\hfill{} %
\begin{minipage}[t]{0.45\textwidth}%
\begin{center}
\includegraphics[width=1\textwidth]{figures/BAR_PLOT/matrix8_bar}
\par\end{center}%
\end{minipage}
\begin{minipage}[t]{0.45\textwidth}%
\begin{center}
\includegraphics[width=1\textwidth]{figures/BAR_PLOT/matrix5_bar}
\par\end{center}%
\end{minipage}\hfill{} %
\begin{minipage}[t]{0.45\textwidth}%
\begin{center}
\includegraphics[width=1\textwidth]{figures/BAR_PLOT/matrix9_bar}
\par\end{center}%
\end{minipage}
\caption{\label{fig:Timing}Timing results for the convection-diffusion equation
and the Helmholtz equation. The encircled bars indicate the fastest solver-preconditioner
combination. For matrix 9, a star ({*}) indicates stagnation of the
solvers after 10,000 iterations.}
\end{figure}
The convergence plots are helpful in revealing the intrinsic properties
of the KSP methods, but for most applications the overall runtime
is the ultimate criterion. Figure~\ref{fig:Timing} compares the runtimes
for six matrices: FEM 2D and 3D (matrices 1 and 4), AES-FEM 2D and
3D (matrices 5 and 6), GFD 2D (matrix 7), and Helmholtz 2D (matrix
9). The results for GFD 3D were qualitatively the same as those for FEM 3D, so
we do not include them. We consider the combinations of all four
KSP methods with the three preconditioners, and encircle the ones
with the best performance for each matrix. It can be seen that ML
accelerated all the KSP methods significantly better than Gauss-Seidel
and ILU. With ML, GMRES(30) is slightly faster than the other KSP
methods in five out of six cases. However, GMRES(30) is also significantly
slower than the others when using the Gauss-Seidel or ILU preconditioners.
These results are consistent with the convergence results in Figures~\ref{fig:FEM-residual}
and \ref{fig:FD-residual}. Therefore, the numbers of matrix-vector
multiplications are fairly good predictors of the overall performance.
Among the bi-Lanczos-based methods, BiCGSTAB is usually the most efficient,
thanks to its lower cost per iteration, despite its less smooth convergence.
QMRCGSTAB is a good alternative to BiCGSTAB if smoother convergence
is desired, which may lead to earlier termination for relatively large
convergence tolerances. TFQMR is less reliable due to its frequent
stagnation. Between ILU and Gauss-Seidel, ILU consistently performs better.
Note that for the Helmholtz equation, none of the methods converged
with Gauss-Seidel after 10,000 iterations. These results suggest that
GMRES or BiCGSTAB with ML should be one's first choice. However,
if an AMG preconditioner is unavailable, then BiCGSTAB with ILU may
be a viable alternative for relatively small problems.
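The following minimal Python/SciPy sketch illustrates the ILU-preconditioned
BiCGSTAB combination on a small illustrative stand-in matrix; SuperLU's
threshold-based incomplete factorization plays the role of the ILUT
preconditioner used in our tests.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Illustrative stand-in matrix (nonsymmetric), as in the earlier sketch.
n = 32
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
U = sp.diags([-1.0, 1.0], [-1, 0], shape=(n, n))
A = (sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))
     + 0.5 * sp.kron(sp.identity(n), U)).tocsc()   # spilu expects CSC
b = np.ones(A.shape[0])

# Threshold-based incomplete LU, applied as a preconditioner.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, matvec=ilu.solve, dtype=A.dtype)
x, info = spla.bicgstab(A, b, M=M)
print('info =', info)
\end{verbatim}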
\subsection{ML Versus HYPRE Preconditioners\label{sub:ML-VS-HYPRE}}
\begin{figure}
\begin{minipage}[t]{0.45\textwidth}%
\begin{center}
\includegraphics[width=1\textwidth]{\string"figures/Iteration_plots/ML V_S HYPRE/matrix1\string".pdf}
\par\end{center}%
\end{minipage}\hfill{} %
\begin{minipage}[t]{0.45\textwidth}%
\begin{center}
\includegraphics[width=1\textwidth]{\string"figures/Iteration_plots/ML V_S HYPRE/matrix1_bar\string".pdf}
\par\end{center}%
\end{minipage}
\begin{minipage}[t]{0.45\textwidth}%
\begin{center}
\includegraphics[width=1\textwidth]{\string"figures/Iteration_plots/ML V_S HYPRE/matrix4\string".pdf}
\par\end{center}%
\end{minipage}\hfill{} %
\begin{minipage}[t]{0.45\textwidth}%
\begin{center}
\includegraphics[width=1\textwidth]{\string"figures/Iteration_plots/ML V_S HYPRE/matrix4_bar\string".pdf}
\par\end{center}%
\end{minipage}
\begin{minipage}[t]{0.45\textwidth}%
\begin{center}
\includegraphics[width=1\textwidth]{\string"figures/Iteration_plots/ML V_S HYPRE/matrix9\string".pdf}
\par\end{center}%
\end{minipage}\hfill{} %
\begin{minipage}[t]{0.45\textwidth}%
\begin{center}
\includegraphics[width=1\textwidth]{\string"figures/Iteration_plots/ML V_S HYPRE/matrix9_bar\string".pdf}
\par\end{center}%
\end{minipage}
\caption{\label{fig:ML_vs_Hypre}Convergence history (left) and runtimes (right)
of preconditioned KSP methods with ML and Hypre for FEM 2D, FEM 3D
and Helmholtz 2D. The encircled bars indicate the solver-preconditioner
combination with the best performance.}
\end{figure}
Our preceding results demonstrated the effectiveness of ML relative to
ILU and Gauss-Seidel. There are two primary types of AMG: smoothed
aggregation and classical AMG. We now compare their respective implementations
in ML and Hypre. We consider three representative cases: FEM 2D (matrix
1), FEM 3D (matrix 4), and Helmholtz 2D (matrix 9).
Figure~\ref{fig:ML_vs_Hypre} shows the convergence and runtimes
of the four KSP methods with ML and Hypre. For ML, we used the default
parameters. For Hypre, however, different ``strong thresholds''
are needed for 2D and 3D problems, as documented in the Hypre User's Manual.
This threshold controls the sparsity of the coarse levels. For 2D
problems, we used the default threshold, which is 0.25. For 3D FEM,
the value recommended in the manual is 0.5, but we found that 0.8 delivered
the best performance in our tests, and this is what we used for the results
in Figure~\ref{fig:ML_vs_Hypre}. ML outperformed Hypre for FEM 3D by
about a factor of 2, because of its lower cost per iteration. For
FEM 2D, ML also outperformed Hypre, but less significantly. However,
for the ill-conditioned 2D Helmholtz equation, Hypre outperformed
ML by a factor of 30. We tried various smoothers in ML, but the results
were quantitatively the same. This indicates that Hypre performs better
than ML for ill-conditioned systems, because its coarse-grid matrices
are denser and hence preserve more information than those of ML. Overall,
there is no clear winner between the two AMG methods. ML may be
preferred because it does not require manually tuning the parameters
based on the dimension of the PDE, and it performs better for well-conditioned
problems. These results also indicate that more research into multigrid
preconditioners is needed.
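For reference, the strong threshold can be set through standard PETSc/Hypre
options. The following petsc4py sketch is a minimal illustration; it assumes
PETSc was built with Hypre and that a matrix and vectors have been assembled
elsewhere.
\begin{verbatim}
from petsc4py import PETSc   # assumes PETSc configured with Hypre

opts = PETSc.Options()
opts['ksp_type'] = 'gmres'
opts['ksp_pc_side'] = 'right'
opts['pc_type'] = 'hypre'
opts['pc_hypre_type'] = 'boomeramg'
# Default is 0.25; 0.8 performed best in our 3D FEM tests.
opts['pc_hypre_boomeramg_strong_threshold'] = 0.8

ksp = PETSc.KSP().create()
ksp.setFromOptions()         # picks up the options set above
# ksp.setOperators(A); ksp.solve(b, x)  # with an assembled Mat and Vecs
\end{verbatim}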
\subsection{Scalability Comparison}
\begin{figure}
\begin{minipage}[t]{0.45\textwidth}%
\begin{center}
\includegraphics[width=1\textwidth]{figures/Scalability/scalability_gmres}
\par\end{center}%
\end{minipage}\hfill{} %
\begin{minipage}[t]{0.45\textwidth}%
\begin{center}
\includegraphics[width=1\textwidth]{figures/Scalability/scalability_bicgstab}
\par\end{center}%
\end{minipage}
\begin{minipage}[t]{0.45\textwidth}%
\begin{center}
\includegraphics[width=1\textwidth]{figures/Scalability/scalability_qmrcgstab}
\par\end{center}%
\end{minipage}\hfill{} %
\begin{minipage}[t]{0.45\textwidth}%
\begin{center}
\includegraphics[width=1\textwidth]{figures/Scalability/scalability_tfqmr}
\par\end{center}%
\end{minipage}
\caption{\label{fig:Scalability}Scalability results of the preconditioned solvers
for FEM 3D.}
\end{figure}
The relative performances of preconditioned KSP methods may depend
on problem sizes. To assess the scalability of different methods,
we consider the matrices 2, 3 and 4 from FEM 3D, whose numbers of
unknowns grow approximately by a factor of 8 between each adjacent
pair. Figure~\ref{fig:Scalability} shows the timing results of the
four Krylov subspace methods with Gauss-Seidel, ILU, ML, and Hypre
preconditioners. The $x$ axis corresponds to the number of unknowns,
and the $y$ axis corresponds to the runtimes, both in logarithmic
scale. For a perfectly scalable method, the slope should be 1. We
observe that with either ML or Hypre, the slopes for the four KSP methods
were all nearly 1, with ML having a slightly smaller slope. The slopes
for Gauss-Seidel and ILU are greater than 1, so the numbers of iterations
grow as the problem size grows. Therefore, the performance advantage
of multigrid preconditioners becomes even larger as the problem
size increases.
\section{Conclusions and Discussions\label{sec:Conclusions-and-Future}}
In this paper, we presented a systematic comparison of a few preconditioned
Krylov subspace methods, including GMRES, TFQMR, BiCGSTAB and QMRCGSTAB,
with Gauss-Seidel, ILU and AMG as right preconditioners. These methods
are representative of the state-of-the-art methods for solving large,
sparse, nonsymmetric linear systems arising from PDE discretizations.
We compared the methods theoretically regarding their cost per iteration,
and empirically regarding convergence, runtimes, and scalability.
Our results show that GMRES with a smoothed-aggregation AMG preconditioner
is often the most efficient method, because GMRES tends to be the most
efficient when the iteration count is low, which is the case with
an effective AMG preconditioner. However, GMRES is far less competitive
than the other methods with Gauss-Seidel or ILU, because the restarts
may cause slow convergence and even stagnation.
Based on our analysis, we make the following primary recommendation:
\begin{quotation}
\emph{For a very large, reasonably well-conditioned linear system,
use GMRES with smoothed-aggregation AMG as right preconditioner.}
\end{quotation}
With an AMG preconditioner, BiCGSTAB converges almost as smoothly
as the other methods, and it can be safely used in place of GMRES
with only a slight loss of performance. However, GMRES should not
be used for large systems without a multigrid preconditioner.
The easiest way to implement the above recommendation is to use existing
software packages. PETSc \cite{petsc-user-ref} is an excellent choice,
since it supports both left and right preconditioning for GMRES and
BiCGSTAB and supports smoothed aggregation through ML \cite{GeeSie06ML}
as an optional external package. Note that PETSc uses left preconditioning
by default. The user must explicitly set the option to use right preconditioning
to avoid premature termination or false stagnation.
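The following minimal petsc4py sketch illustrates our primary recommendation;
it assumes PETSc was configured with the ML package (e.g., --download-ml) and
that the matrix and vectors are assembled elsewhere.
\begin{verbatim}
from petsc4py import PETSc

ksp = PETSc.KSP().create()
ksp.setType(PETSc.KSP.Type.GMRES)
ksp.setPCSide(PETSc.PC.Side.RIGHT)     # right preconditioning (not default)
ksp.getPC().setType(PETSc.PC.Type.ML)  # smoothed aggregation via Trilinos/ML
ksp.setFromOptions()
# Equivalent command-line options:
#   -ksp_type gmres -ksp_pc_side right -pc_type ml
# ksp.setOperators(A); ksp.solve(b, x)  # with an assembled Mat and Vecs
\end{verbatim}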
Some software packages do not support right preconditioning or AMG
preconditioners. For example, as of Release 2016a, the built-in GMRES
solver in MATLAB only supports left preconditioning, and there is
no built-in support for AMG. In these cases, we make a secondary recommendation:
\begin{verse}
\emph{If AMG is unavailable and the problem size is moderate, BiCGSTAB
with ILU as right preconditioner is a reasonable choice.}
\end{verse}
This choice may be good for MATLAB users, because MATLAB has built-in
support for ILU, and the built-in BiCGSTAB uses right preconditioning.
If smoother convergence is desired, a custom implementation of QMRCGSTAB
with right preconditioning may be used. The built-in TFQMR in MATLAB
supports right preconditioning, but it stagnates frequently, so we
do not recommend it. The Gauss-Seidel preconditioner should be used only
as a last resort if neither AMG nor ILU is available.
We note that although smoothed aggregation is a good choice in many
cases, it is by no means bulletproof, especially for ill-conditioned
systems. For linear systems arising from elliptic PDEs, the condition
number typically grows in proportion to $h^{-2}$, so ill-conditioning
occurs quite frequently in practice for large-scale problems. For
relatively ill-conditioned systems, the classical AMG, as implemented
in Hypre \cite{falgout2002hypre}, may be a better choice than smoothed
aggregation. However, the classical AMG is not as scalable as smoothed
aggregation, and it requires tuning parameters for 3D problems \cite{falgout2002hypre}.
Further research and development are needed to match the efficiency
of smoothed aggregation and the robustness of classical AMG. A promising
direction is hybrid geometric+algebraic multigrid \cite{LJM14HYGA},
which we plan to explore in the future.
One limitation of this work is that we did not consider parallel performance
and the scalability of the iterative methods with respect to the number
of cores. This omission was necessary to make the scope of this study
manageable. Fortunately, for our primary recommendation, MPI-based
parallel implementations of right-preconditioned GMRES and BiCGSTAB
are available in PETSc, and an implementation of smoothed-aggregation AMG is
available in ML. These are excellent choices for distributed-memory
machines. For shared-memory machines and GPU acceleration, OpenMP-
and CUDA-based implementations are available in some software packages,
such as Paralution \cite{Paralution}, whose current implementation, however,
seems to support only left preconditioning. Further development and comparison
of different parallel algorithms are still needed, which we plan to
explore in the future. Another omission in this work was the solution
of nonsymmetric, rank-deficient linear systems, which is a challenging
problem in its own right.
\section*{Acknowledgements}
Results were obtained using the high-performance LI-RED computing
system at the Institute for Advanced Computational Science of Stony
Brook University, funded by the Empire State Development grant NYS
\#28451.
\bibliographystyle{siam}
| {
"timestamp": "2016-07-04T02:10:12",
"yymm": "1607",
"arxiv_id": "1607.00351",
"language": "en",
"url": "https://arxiv.org/abs/1607.00351",
"abstract": "Preconditioned Krylov subspace (KSP) methods are widely used for solving large-scale sparse linear systems arising from numerical solutions of partial differential equations (PDEs). These linear systems are often nonsymmetric due to the nature of the PDEs, boundary or jump conditions, or discretization methods. While implementations of preconditioned KSP methods are usually readily available, it is unclear to users which methods are the best for different classes of problems. In this work, we present a comparison of some KSP methods, including GMRES, TFQMR, BiCGSTAB, and QMRCGSTAB, coupled with three classes of preconditioners, namely Gauss-Seidel, incomplete LU factorization (including ILUT, ILUTP, and multilevel ILU), and algebraic multigrid (including BoomerAMG and ML). Theoretically, we compare the mathematical formulations and operation counts of these methods. Empirically, we compare the convergence and serial performance for a range of benchmark problems from numerical PDEs in 2D and 3D with up to millions of unknowns and also assess the asymptotic complexity of the methods as the number of unknowns increases. Our results show that GMRES tends to deliver better performance when coupled with an effective multigrid preconditioner, but it is less competitive with an ineffective preconditioner due to restarts. BoomerAMG with proper choice of coarsening and interpolation techniques typically converges faster than ML, but both may fail for ill-conditioned or saddle-point problems while multilevel ILU tends to succeed. We also show that right preconditioning is more desirable. This study helps establish some practical guidelines for choosing preconditioned KSP methods and motivates the development of more effective preconditioners.",
"subjects": "Numerical Analysis (math.NA)",
"title": "A Comparison of Preconditioned Krylov Subspace Methods for Large-Scale Nonsymmetric Linear Systems",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9817357195106374,
"lm_q2_score": 0.805632181981183,
"lm_q1q2_score": 0.7909178898382215
} |
https://arxiv.org/abs/1708.07093 | Where is the cone? | Real quadric curves are often referred to as "conic sections," implying that they can be realized as plane sections of circular cones. However, it seems that the details of this equivalence have been partially forgotten by the mathematical community. The definitive analytic treatment was given by Otto Staude in the 1880s and a non-technical description was given in the first chapter of Hilbert and Cohn-Vossen's "Geometry and the Imagination" (1932). The main theorem is elegant and easy to state but is surprisingly difficult to find in the literature. A synthetic version appears in The Universe of Conics (2016) but we still have not found a full analytic treatment written down. The goal of this note is to fill a surprising gap in the literature by advertising this beautiful theorem, and to provide the slickest possible analytic proof by using standard linear algebra that was not standard in 1932. | \section{Introduction}
\label{sec:introduction}
A {\em real quadric curve} is defined by an equation of the form
\begin{equation*}
ax^2+bxy+cy^2+dx+ey+f=0
\end{equation*}
for some real numbers $a,b,c,d,e,f\in\mathbb{R}$. One often sees real quadric curves described as ``conic sections." This terminology suggests that we can view this curve as the intersection of the $x,y$-plane with a circular cone living in $x,y,z$-space. However, it is very rare to see the details of this spelled out. The purpose of the present note is to answer the question: {\bf where is the cone?} More specifically, we will answer the following three questions:
\begin{itemize}
\item From which points in space does a given quadric curve look like a circle?\\ (This is the apex of the cone.)
\item In which direction should we look to see this circle?\\(This is the axis of symmetry of the cone.)
\item How big is the circle?\\(This is the angle of aperture of the cone.)
\end{itemize}
The answers to these questions were well known to geometers in the late nineteenth and early twentieth centuries. However, based on my several years of internet searching, it seems that the answers are not well known today. Eventually I found a clue in footnote 4 on page 24 of Hilbert and Cohn-Vossen's {\em Geometry and the Imagination} (1932).\footnote{For the precise statement see Section \ref{sec:symmetries} below.} This led me to a genre of textbooks on the ``analytic geometry of three dimensions" that were written around the same time. I found the text of D.M.Y.~Sommerville (1934) particularly helpful.
Without further ado, here is the main theorem describing cones over quadric curves.
\bigskip
{\bf Main Theorem.}\footnote{As stated this result applies only to {\em non-degenerate} and {\em central} quadric curves (ellipses and hyperbolas). Analogous results for non-central quadric curves (parabolas) and degenerate curves (line pairs) can be obtained from limiting arguments that are not so interesting, so I will omit them.} Let $a,b,c\in\mathbb{R}$ be distinct real numbers and consider the following space curves in the three principal coordinate planes:
\begin{align*}
x^2/(a-c)+y^2/(b-c)=1 \quad\text{and}\quad z=0,\\
x^2/(a-b)+z^2/(c-b)=1 \quad\text{and}\quad y=0,\\
y^2/(b-a)+z^2/(c-a)=1 \quad\text{and}\quad x=0.
\end{align*}
In general, two of these curves are real and the third is imaginary. If $\mathbf{u}$ is any point on one of the real curves then the cone from $\mathbf{u}$ to the other real curve is circular. Furthermore, the axis of symmetry of this cone is the tangent line to the first curve at $\mathbf{u}$.\footnote{and from this one can easily compute the angle of aperture} The cone from any {\bf other} point in space to any one of the real curves is {\bf not} circular. \hfill ///
\bigskip
For example, if $c<b<a$ then we have an ellipse in the $x,y$-plane and a hyperbola in the $x,z$-plane. Figure \ref{fig:ellipse} shows a typical circular cone over the ellipse and Figure \ref{fig:hyperbola} shows a typical circular cone over the hyperbola.\footnote{These pictures were made with GeoGebra.}
\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{cone_over_ellipse.png}
\end{center}
\caption{Realizing an ellipse as a conic section.}
\label{fig:ellipse}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{cone_over_hyperbola.png}
\end{center}
\caption{Realizing a hyperbola as a conic section.}
\label{fig:hyperbola}
\end{figure}
This theorem provides a completely satisfying answer to the question ``where is the cone?" and I am surprised by how difficult it was to track down. After reading the result in Hilbert and Cohn-Vossen I found it stated without proof in several textbooks of the period.\footnote{It even appears as Exercise 6 on page 100 of Barry Spain's {\em Analytic Quadrics} (1960).} More recently, the result appears as Theorem 4.2.1 in {\em The Universe of Conics} (2016), where the authors provide a case-by-case synthetic proof. However, I still have not found an analytic proof written down anywhere.
My goal in this note is to provide the slickest possible analytic treatment for the statement in Hilbert and Cohn-Vossen's footnote by employing standard linear algebra that was not standard in 1932. I don't claim to have found the ``book proof" but hopefully I have done something useful to fill this surprising gap in the literature.
\section{The Equation of a Circular Cone}
\label{sec:circcone}
To give an analytic treatment of conic sections we must first give an analytic treatment of cones. A {\em circular cone} in $\mathbb{R}^3$ is specified by the following data:
\begin{itemize}
\item A point in space $\mathbf{u}\in\mathbb{R}^3$ defining the apex,
\item A unit vector $\mathbf{r}\in\mathbb{R}^3$ defining the axis of symmetry,
\item An angle $\theta\in[0,\pi/2]$ defining the aperture of the cone.\footnote{The extreme values $\theta=0$ and $\theta=\pi/2$ correspond to a line and a plane, respectively.}
\end{itemize}
Geometrically, the cone consists of all points $\mathbf{x}\in\mathbb{R}^3$ such that the line connecting $\mathbf{x}$ and $\mathbf{u}$ makes an angle of $\theta$ (or $\pi-\theta$) with the axis of symmetry. Algebraically, we can express this situation with the dot product. Since $\mathbf{r}$ is a unit vector we have
\begin{equation*}
(\mathbf{x}-\mathbf{u})^T\mathbf{r}=\|\mathbf{x}-\mathbf{u}\|\cdot\|\mathbf{r}\|\cdot\cos\theta=\|\mathbf{x}-\mathbf{u}\|\cdot\cos\theta. \tag{$\ast$}
\end{equation*}
Furthermore, since every $1\times 1$ matrix is symmetric we have
\begin{equation*}
(\mathbf{x}-\mathbf{u})^T\mathbf{r}=\left[(\mathbf{x}-\mathbf{u})^T\mathbf{r}\right]^T=\mathbf{r}^T(\mathbf{x}-\mathbf{u}).
\end{equation*}
Then by squaring both sides of ($\ast$) we obtain the following equation for the circular cone:
\begin{align*}
[(\mathbf{x}-\mathbf{u})^T\mathbf{r}]^2&=\|\mathbf{x}-\mathbf{u}\|^2\cdot\cos^2\theta \\
[(\mathbf{x}-\mathbf{u})^T\mathbf{r}]\cdot [\mathbf{r}^T(\mathbf{x}-\mathbf{u})] &= (\mathbf{x}-\mathbf{u})^T(\mathbf{x}-\mathbf{u})\cdot\cos^2\theta \\
(\mathbf{x}-\mathbf{u})^T\left(\mathbf{r}\mathbf{r}^T\right)(\mathbf{x}-\mathbf{u})&= (\mathbf{x}-\mathbf{u})^T\left((\cos^2\theta)I\right)(\mathbf{x}-\mathbf{u})\\
(\mathbf{x}-\mathbf{u})^T\left(\mathbf{r}\mathbf{r}^T-(\cos^2\theta)I\right)(\mathbf{x}-\mathbf{u}) &=0.
\end{align*}
In summary, for any unit vector $\mathbf{r}=(r,s,t)$ and angle $\theta$ we define the matrix
\begin{equation*}
C_{\mathbf{r},\theta}:=\mathbf{r}\mathbf{r}^T-(\cos^2\theta)I=\begin{pmatrix} r^2-\cos^2\theta&rs&rt\\ rs&s^2-\cos^2\theta&st \\ rt& st& t^2-\cos^2\theta\end{pmatrix}.
\end{equation*}
Then the circular cone with apex $\mathbf{u}$, axis $\mathbf{r}$ and aperture $\theta$ is defined by the equation
\begin{equation*}
\boxed{(\mathbf{x}-\mathbf{u})^T C_{\mathbf{r},\theta}(\mathbf{x}-\mathbf{u})=0.}
\end{equation*}
Let us make a few observations about the {\em cone matrix} $C_{\mathbf{r},\theta}$. Since $\mathbf{r}^T\mathbf{r}=\|\mathbf{r}\|^2=1$ we observe that $\mathbf{r}$ is an eigenvector of $C_{\mathbf{r},\theta}$ with eigenvalue $1-\cos^2\theta$:
\begin{align*}
C_{\mathbf{r},\theta}\cdot \mathbf{r}=\left(\mathbf{r}\mathbf{r}^T-(\cos^2\theta)I\right)\mathbf{r}=\mathbf{r}(\mathbf{r}^T\mathbf{r})-(\cos^2\theta)\mathbf{r}=(1-\cos^2\theta)\mathbf{r}.
\end{align*}
And if $\mathbf{r}'$ is any vector perpendicular to $\mathbf{r}$ (i.e., if $\mathbf{r}^T\mathbf{r}'=0$) then we observe that $\mathbf{r}'$ is an eigenvector of $C_{\mathbf{r},\theta}$ with eigenvalue $-\cos^2\theta$:
\begin{align*}
C_{\mathbf{r},\theta}\cdot \mathbf{r}'=\left(\mathbf{r}\mathbf{r}^T-(\cos^2\theta)I\right)\mathbf{r}'=\mathbf{r}(\mathbf{r}^T\mathbf{r}')-(\cos^2\theta)\mathbf{r}'=-(\cos^2\theta)\mathbf{r}'.
\end{align*}
We conclude that the cone matrix $C_{\mathbf{r},\theta}$ has eigenvalues
\begin{equation*}
1-\cos^2\theta,\quad -\cos^2\theta,\quad -\cos^2\theta.
\end{equation*}
Conversely, let $C^T=C$ be {\bf any} real symmetric matrix with the same eigenvalues. In this case I claim that $C=C_{\mathbf{r},\theta}$ for some unit vector $\mathbf{r}\in\mathbb{R}^3$. Indeed, since any real symmetric matrix is orthogonally diagonalizable (by the Principal Axis Theorem), we know that there exists a real orthogonal matrix $Q^T=Q^{-1}$ such that
\begin{equation*}
Q^T C Q = \begin{pmatrix} 1-\cos^2\theta &0&0 \\ 0&-\cos^2\theta & 0 \\ 0&0&-\cos^2\theta\end{pmatrix}=\begin{pmatrix} 1&0&0\\ 0&0&0\\ 0&0&0\end{pmatrix}-(\cos^2\theta)I.
\end{equation*}
Now let $\mathbf{r}$ be the first column of the matrix $Q$, i.e., the eigenvector of $C$ corresponding to eigenvalue $1-\cos^2\theta$. Since $QQ^T=I$ we see that $\mathbf{r}$ is a unit vector and we observe that
\begin{equation*}
C=Q\left[\begin{pmatrix} 1&0&0\\ 0&0&0\\ 0&0&0\end{pmatrix}-(\cos^2\theta)I\right]Q^T=Q\begin{pmatrix} 1&0&0\\ 0&0&0\\ 0&0&0\end{pmatrix}Q^T-(\cos^2\theta)QQ^T=\mathbf{r}\mathbf{r}^T-(\cos^2\theta)I
\end{equation*}
as desired.
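\bigskip
This spectral characterization is easy to test numerically. Here is a minimal sketch in Python/NumPy (the vector and angle are arbitrary illustrative choices):
\begin{verbatim}
import numpy as np

theta = 0.6                                   # arbitrary aperture angle
r = np.array([1.0, 2.0, 2.0]) / 3.0           # a unit vector
C = np.outer(r, r) - np.cos(theta)**2 * np.eye(3)

# Expected spectrum: -cos^2(theta) twice and 1 - cos^2(theta) once.
print(np.sort(np.linalg.eigvalsh(C)))
print(-np.cos(theta)**2, 1 - np.cos(theta)**2)
\end{verbatim}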
\section{Quadric Cones in General}
\label{sec:quadcone}
More generally, for any real symmetric matrix $C^T=C$ we consider the quadratic equation
\begin{equation*}
(\mathbf{x}-\mathbf{u})^TC(\mathbf{x}-\mathbf{u})=0.
\end{equation*}
Clearly the point $\mathbf{x}=\mathbf{u}$ is a solution. If the (necessarily real) eigenvalues of $C$ are either all negative or all positive then $\mathbf{x}=\mathbf{u}$ is the {\bf only (real) solution}. We are interested in the case when the eigenvalues of $C$ are nonzero and not all of the same sign. In this case we say that $C$ is {\em indefinite} and we say that the equation $(\mathbf{x}-\mathbf{u})^TC(\mathbf{x}-\mathbf{u})=0$ defines a {\em quadric cone}.
The following theorem says that the reflection symmetries of a quadric cone are the same as the eigenvectors of its matrix. In fact it is true that {\bf every} symmetry of the cone is a product of {\bf reflection} symmetries,\footnote{This is the famous Cartan-Dieudonn\'e Theorem.} but we won't need that result.
\bigskip
{\bf Theorem.} Let $C^T=C$ be an indefinite real symmetric matrix (i.e., whose eigenvalues are nonzero and not all of the same sign). For any vector $\mathbf{p}\in\mathbb{R}^3$ the following are equivalent:
\begin{itemize}
\item We have $C\mathbf{p}=c\mathbf{p}$ for some scalar $c\in\mathbb{R}$.
\item The quadric cone $(\mathbf{x}-\mathbf{u})^TC(\mathbf{x}-\mathbf{u})=0$ is symmetric about the plane $\mathbf{u}+\mathbf{p}^\perp$ which is perpendicular to $\mathbf{p}$ and passes through the apex $\mathbf{u}$.
\end{itemize}
\hfill ///
\bigskip
To prove this we require a general lemma on quadratic forms, which is an easy consequence of Hilbert's Nullstellensatz. To keep the treatment self-contained I will present an elementary proof that I learned from John H.~Elton (2009).
\bigskip
{\bf Lemma.} Let $A^T=A$ and $B^T=B$ be real symmetric matrices of the same size, with $A$ indefinite. If the cone of $A$ is contained in the cone of $B$, i.e., if for all vectors $\mathbf{x}$ we have
\begin{equation*}
\mathbf{x}^TA\mathbf{x}=0 \quad\Longrightarrow\quad \mathbf{x}^TB\mathbf{x}=0,
\end{equation*}
then it follows that $B=\lambda A$ for some real scalar $\lambda\neq 0$.\hfill ///
\bigskip
{\bf Proof of the Lemma.} We will prove the result for $3\times 3$ matrices. (See Elton for the general case.) By the Principal Axis Theorem there exists an orthogonal matrix $Q^T=Q^{-1}$ such that $Q^TAQ$ is diagonal:
\begin{equation*}
Q^TAQ=\begin{pmatrix} \lambda_1 & & \\ & \lambda_2 & \\ &&\lambda_3\end{pmatrix}.
\end{equation*}
Since $Q$ is invertible, it is enough to show that $Q^TBQ$ is a scalar multiple of $Q^TAQ$. Furthermore, since $A$ is indefinite we can assume without loss of generality (replacing $A$ by $-A$ if necessary) that the eigenvalues of $A$ satisfy $\lambda_1 > 0 > \lambda_2,\lambda_3$. Let us define the positive real numbers
\begin{equation*}
\ell_1:=1/\sqrt{\lambda_1}, \quad \ell_2:=1/\sqrt{-\lambda_2}\quad\text{and}\quad \ell_3:=1/\sqrt{-\lambda_3}.
\end{equation*}
We observe that the following two equations hold:
\begin{equation*}
0=\begin{pmatrix} \ell_1 & \pm \ell_2 & 0\end{pmatrix} \begin{pmatrix} \lambda_1 & & \\ & \lambda_2 & \\ &&\lambda_3\end{pmatrix} \begin{pmatrix} \ell_1\\ \pm \ell_2 \\ 0 \end{pmatrix} = \begin{pmatrix} \ell_1 & \pm \ell_2 & 0\end{pmatrix}Q^TAQ \begin{pmatrix} \ell_1\\ \pm \ell_2 \\ 0 \end{pmatrix}.
\end{equation*}
Then the implication $(\mathbf{x}^TA\mathbf{x}=0)\Rightarrow(\mathbf{x}^TB\mathbf{x}=0)$ with $\mathbf{x}^T=\begin{pmatrix} \ell_1 &\pm\ell_2&0\end{pmatrix} Q^T$ tells us that
\begin{equation*}
0=\begin{pmatrix} \ell_1 & \pm \ell_2 & 0\end{pmatrix}Q^TBQ \begin{pmatrix} \ell_1\\ \pm \ell_2 \\ 0 \end{pmatrix}.
\end{equation*}
By writing $b_{ij}$ for the $(i,j)$-th entry of $Q^TBQ$, these two equations become
\begin{equation*}
\left\{
\begin{array}{ccccccc}
b_{11}/\lambda_1 & - & b_{22}/\lambda_2 & + & 2b_{12}\ell_1\ell_2 & = & 0, \\
b_{11}/\lambda_1 & - & b_{22}/\lambda_2 & - & 2b_{12}\ell_1\ell_2 & = & 0.
\end{array}
\right.
\end{equation*}
By adding and subtracting these equations we find that $b_{22}=\lambda_2(b_{11}/\lambda_1)$ and $b_{12}=0$, respectively. Then a similar argument shows that
\begin{equation*}
0=\begin{pmatrix} \ell_1 & 0 & \pm\ell_3\end{pmatrix}Q^TBQ \begin{pmatrix} \ell_1\\ 0 \\ \pm \ell_3 \end{pmatrix},
\end{equation*}
which implies that $b_{33}=\lambda_3(b_{11}/\lambda_1)$ and $b_{13}=0$. At this point we know that
\begin{equation*}
Q^TBQ-\left(\frac{b_{11}}{\lambda_1}\right)Q^TAQ = \begin{pmatrix} 0&0&0 \\ 0&0&b_{23} \\ 0&b_{23}&0 \end{pmatrix}
\end{equation*}
and it remains only to show that $b_{23}=0$. To do this we fix any real numbers $(\alpha,\beta,\gamma)$ with the properties $\alpha^2=\beta^2+\gamma^2$ and $\beta\gamma\neq 0$.\footnote{Elton chooses $(\alpha,\beta,\gamma)=(5,4,3)$.} Then we obtain
\begin{equation*}
\begin{pmatrix} \alpha\ell_1&\beta\ell_2&\gamma\ell_3\end{pmatrix} Q^TAQ \begin{pmatrix} \alpha\ell_1\\ \beta\ell_2\\ \gamma\ell_3\end{pmatrix}=\alpha^2-\beta^2-\gamma^2=0 \quad\Rightarrow\quad \begin{pmatrix} \alpha\ell_1&\beta\ell_2&\gamma\ell_3\end{pmatrix} Q^TBQ \begin{pmatrix} \alpha\ell_1\\ \beta\ell_2\\ \gamma\ell_3\end{pmatrix}=0,
\end{equation*}
which implies that
\begin{equation*}
2\beta\gamma\ell_2\ell_3 b_{23}=\begin{pmatrix} \alpha\ell_1&\beta\ell_2&\gamma\ell_3\end{pmatrix} \begin{pmatrix} 0&0&0 \\ 0&0&b_{23} \\ 0&b_{23}&0 \end{pmatrix} \begin{pmatrix} \alpha\ell_1\\ \beta\ell_2\\ \gamma\ell_3\end{pmatrix}=0.
\end{equation*}
Finally, since $\beta\gamma\neq 0$ we conclude that $b_{23}=0$. This completes the proof.\hfill $\qed$
\bigskip
{\bf Proof of the Theorem.} Recall that $C^T=C$ is an indefinite real symmetric matrix. It is sufficient to prove the theorem in the case when $\mathbf{p}$ is a unit vector and the apex is at the origin: $\mathbf{u}=\mathbf{0}$. Since $\mathbf{p}$ is a unit vector (i.e., $\mathbf{p}^T\mathbf{p}=1$) we observe that $P=\mathbf{p}\mathbf{p}^T$ is the matrix that projects orthogonally onto the line $\mathbb{R}\mathbf{p}$ and that
\begin{equation*}
R=I-2P=I-2\mathbf{p}\mathbf{p}^T
\end{equation*}
is the matrix that reflects orthogonally across the plane $\mathbf{p}^\perp$.
First let us suppose that $\mathbf{p}$ is an eigenvector of $C$, say $C\mathbf{p}=c\mathbf{p}$. To prove that the reflection $R$ leaves the cone invariant we must show for any vector $\mathbf{x}$ that
\begin{equation*}
\mathbf{x}^TC\mathbf{x}=0 \quad\Rightarrow\quad (R\mathbf{x})^T C(R\mathbf{x})=\mathbf{x}^T RCR \mathbf{x} =0.
\end{equation*}
By observing that
\begin{align*}
RCR &= (I-2\mathbf{p}\mathbf{p}^T)C(I-2\mathbf{p}\mathbf{p}^T) \\
&= C-2\mathbf{p}(\mathbf{p}^TC) - 2(C\mathbf{p})\mathbf{p}^T + 4\mathbf{p}\mathbf{p}^T(C\mathbf{p})\mathbf{p}^T \\
&= C -2\mathbf{p}(c\mathbf{p}^T)-2(c\mathbf{p})\mathbf{p}^T+4\mathbf{p}\mathbf{p}^T(c\mathbf{p})\mathbf{p}^T \\
&= C -2c\mathbf{p}\mathbf{p}^T-2c\mathbf{p}\mathbf{p}^T+4c\mathbf{p}(\mathbf{p}^T\mathbf{p})\mathbf{p}^T \\
&= C -2c\mathbf{p}\mathbf{p}^T-2c\mathbf{p}\mathbf{p}^T+4c\mathbf{p}\mathbf{p}^T \\
&= C,
\end{align*}
we see that the implication holds trivially. Conversely, let us suppose that
\begin{equation*}
\mathbf{x}^TC\mathbf{x}=0 \quad\Rightarrow\quad \mathbf{x}^T RCR\mathbf{x} =0
\end{equation*}
for all vectors $\mathbf{x}$. Since $C$ is indefinite, the Lemma tells us that $RCR=\lambda C$ for some scalar $\lambda\in\mathbb{R}$. Since $\det(C)\neq 0$ and $\det(R)=-1$ this implies that
\begin{align*}
\det(\lambda C) &= \det(RCR) \\
\lambda^3 \det(C) &= \det(R)^2 \det(C) \\
\lambda^3 \det(C) &= \det(C) \\
\lambda^3 &=1,
\end{align*}
which implies that $\lambda=1$ since $\lambda$ is real. Thus we conclude that $RCR=C$. Finally, since $R^2=I$ (indeed, $R$ is a reflection) we observe that
\begin{align*}
RC &= CR \\
(I-2\mathbf{p}\mathbf{p}^T)C &= C(I-2\mathbf{p}\mathbf{p}^T) \\
C-2\mathbf{p}(C\mathbf{p})^T &= C-2(C\mathbf{p})\mathbf{p}^T \\
\mathbf{p}(C\mathbf{p})^T &= (C\mathbf{p})\mathbf{p}^T.
\end{align*}
Let $\mathbf{p}_i$ and $(C\mathbf{p})_i$ denote the $i$-th entries of the vectors $\mathbf{p}$ and $C\mathbf{p}$, respectively. By comparing the $(i,j)$-th entry on both sides of the previous equation we find that $\mathbf{p}_i(C\mathbf{p})_j=(C\mathbf{p})_i\mathbf{p}_j$ for all $i$ and $j$. Since $\mathbf{p}\neq\mathbf{0}$ there exists some index $i$ such that $\mathbf{p}_i\neq 0$; let us define $c:=(C\mathbf{p})_i/\mathbf{p}_i$. Then for all indices $j$ we have $(C\mathbf{p})_j=c\mathbf{p}_j$ and it follows that $C\mathbf{p}=c\mathbf{p}$ as desired. \hfill $\qed$
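\bigskip
Here is a minimal numerical illustration of the theorem in Python/NumPy (the matrix and vectors are arbitrary illustrative choices): reflecting across a plane perpendicular to an eigenvector preserves the cone matrix, while reflecting across any other plane does not.
\begin{verbatim}
import numpy as np

C = np.diag([2.0, -1.0, -3.0])       # indefinite; eigenvectors e1, e2, e3
p = np.array([1.0, 0.0, 0.0])        # an eigenvector of C
R = np.eye(3) - 2.0 * np.outer(p, p)           # reflection across p-perp
print(np.allclose(R @ C @ R, C))               # True: a plane of symmetry

q = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)   # not an eigenvector of C
Rq = np.eye(3) - 2.0 * np.outer(q, q)
print(np.allclose(Rq @ C @ Rq, C))             # False: not a symmetry
\end{verbatim}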
\bigskip
The following theorem summarizes the results of Sections \ref{sec:circcone} and \ref{sec:quadcone}.
\bigskip
{\bf Theorem.} Let $C^T=C$ be an indefinite real symmetric matrix with (real, nonzero) eigenvalues $\lambda,\mu,\nu$, not all of the same sign. By the Principal Axis Theorem there exists an orthogonal basis of eigenvectors $\mathbf{r},\mathbf{s},\mathbf{t}$ corresponding to eigenvalues $\lambda,\mu,\nu$, respectively.
\begin{itemize}
\item If $\lambda,\mu,\nu$ are distinct then the quadric cone $(\mathbf{x}-\mathbf{u})^TC(\mathbf{x}-\mathbf{u})=0$ is symmetric with respect to the three mutually perpendicular planes $\mathbf{u}+\mathbf{r}^\perp, \mathbf{u}+\mathbf{s}^\perp, \mathbf{u}+\mathbf{t}^\perp$, called the {\em principal planes} of the cone. The cone is not symmetric with respect to any other plane.
\item If two eigenvalues collide, say $\mu=\nu$, then the cone $(\mathbf{x}-\mathbf{u})^TC(\mathbf{x}-\mathbf{u})=0$ is circular with axis of symmetry $\mathbf{u}+\mathbb{R}\mathbf{r}$ and angle of aperture $\theta$ satisfying
\begin{equation*}
\cos^2\theta=\frac{\mu}{\mu-\lambda}.
\end{equation*}
\end{itemize}
\hfill ///
\bigskip
{\bf Proof.} It remains only to prove the second statement. We assume that the eigenvalues of $C$ are $\lambda,\mu,\mu$ with corresponding orthogonal eigenbasis $\mathbf{r},\mathbf{s},\mathbf{t}$. Since the eigenvalues corresponding to $\mathbf{s}$ and $\mathbf{t}$ are equal, we see that the cone is symmetric with respect to any plane of the form $\mathbf{u}+(a\mathbf{s}+b\mathbf{t})^\perp$. In other words, the cone has rotational symmetry around the perpendicular axis $\mathbf{u}+\mathbb{R}\mathbf{r}$. Now let us consider the matrix $C':=C/(\lambda-\mu)$ with eigenvalues
\begin{equation*}
\frac{\lambda}{\lambda-\mu},\quad\frac{\mu}{\lambda-\mu},\quad\frac{\mu}{\lambda-\mu}.
\end{equation*}
Observe that the equations $(\mathbf{x}-\mathbf{u})^TC(\mathbf{x}-\mathbf{u})=0$ and $(\mathbf{x}-\mathbf{u})^T C'(\mathbf{x}-\mathbf{u})=0$ define the same cone. Since $\lambda$ and $\mu$ have opposite signs we observe that $0<\mu/(\mu-\lambda),\lambda/(\lambda-\mu)<1$ and $\mu/(\mu-\lambda)+\lambda/(\lambda-\mu)=1$, hence there exists a unique angle $\theta\in[0,\pi/2]$ satisfying
\begin{equation*}
\frac{\lambda}{\lambda-\mu}=1-\cos^2\theta \qquad\text{and}\qquad \frac{\mu}{\mu-\lambda}=\cos^2\theta.
\end{equation*}
It follows from the remarks of Section \ref{sec:circcone} that $C'=C_{\mathbf{r},\theta}$ as desired. \hfill $\qed$
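\bigskip
For a concrete instance of the aperture formula, consider the standard cone $x^2+y^2-z^2=0$, i.e., $C=\operatorname{diag}(1,1,-1)$ with apex $\mathbf{u}=\mathbf{0}$. Here the repeated eigenvalue is $\mu=\nu=1$ and the simple eigenvalue is $\lambda=-1$, with eigenvector $\mathbf{r}=(0,0,1)$, so the axis of symmetry is the $z$-axis and
\begin{equation*}
\cos^2\theta=\frac{\mu}{\mu-\lambda}=\frac{1}{1-(-1)}=\frac{1}{2},
\end{equation*}
giving $\theta=\pi/4$, as expected for this cone.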
\section{Tangent Cones Over Quadric Surfaces}
\label{sec:tangentcone}
Our goal is to study the cone from a point $\mathbf{u}$ in $x,y,z$-space to a quadric curve lying in the $x,y$-coordinate plane. Surprisingly, it turns out that the best way to do this is to first consider the {\bf tangent cones} from a given point $\mathbf{u}$ to a certain family of {\bf quadric surfaces} in $x,y,z$-space. Then we will allow these quadric surfaces to degenerate to the desired quadric curve.
The quadric surfaces we will consider have the form
\begin{equation*}
\mathbf{x}^T A\mathbf{x} =1,
\end{equation*}
where $A^T=A$ is a real symmetric matrix. If $\mathbf{u}$ is any point, then the {\em tangent cone from $\mathbf{u}$ to the surface} consists of all points $\mathbf{x}$ such that the line $t\mathbf{x}+(1-t)\mathbf{u}$ has double contact with the surface. Observe that the line and surface intersect when
\begin{align*}
(t\mathbf{x}+(1-t)\mathbf{u})^T A (t\mathbf{x}+(1-t)\mathbf{u}) &= 1 \\
t^2 \mathbf{x}^TA\mathbf{x}+2t(1-t)\mathbf{x}^TA\mathbf{u}+(1-t)^2\mathbf{u}^TA\mathbf{u} &= 1\\
t^2(\mathbf{x}^TA\mathbf{x}+\mathbf{u}^TA\mathbf{u}-2\mathbf{x}^TA\mathbf{u})+t(2\mathbf{x}^TA\mathbf{u}-2\mathbf{u}^TA\mathbf{u})+(\mathbf{u}^TA\mathbf{u}-1) &= 0.
\end{align*}
This quadratic equation in $t$ usually has two distinct roots corresponding to two distinct points of contact with the surface. Double contact occurs when the discriminant vanishes, i.e., when
\begin{align*}
(2\mathbf{x}^TA\mathbf{u}-2\mathbf{u}^TA\mathbf{u})^2-4(\mathbf{x}^TA\mathbf{x}+\mathbf{u}^TA\mathbf{u}-2\mathbf{x}^TA\mathbf{u})(\mathbf{u}^TA\mathbf{u}-1) &= 0 \\
(\mathbf{x}^TA\mathbf{u}-\mathbf{u}^TA\mathbf{u})^2-(\mathbf{x}^TA\mathbf{x}+\mathbf{u}^TA\mathbf{u}-2\mathbf{x}^TA\mathbf{u})(\mathbf{u}^TA\mathbf{u}-1) &= 0 \\
(\mathbf{x}^TA\mathbf{u})^2-2(\mathbf{x}^TA\mathbf{u})-(\mathbf{x}^TA\mathbf{x})(\mathbf{u}^TA\mathbf{u})+\mathbf{x}^TA\mathbf{x}+\mathbf{u}^TA\mathbf{u} &= 0 \\
(\mathbf{x}^TA\mathbf{u}-1)^2-(\mathbf{x}^TA\mathbf{x}-1)(\mathbf{u}^TA\mathbf{u}-1) &= 0.
\end{align*}
In summary, the tangent cone from $\mathbf{u}$ to the surface $\mathbf{x}^TA\mathbf{x}=1$ has the equation
\begin{equation*}
\boxed{(\mathbf{x}^TA\mathbf{u}-1)^2=(\mathbf{x}^TA\mathbf{x}-1)(\mathbf{u}^TA\mathbf{u}-1).}
\end{equation*}
However, since this is a quadric cone with apex at $\mathbf{u}$, we prefer to express it in the form $(\mathbf{x}-\mathbf{u})^TC(\mathbf{x}-\mathbf{u})=0$ for some real symmetric matrix $C^T=C$. To find the matrix $C$ we expand the equations in terms of $\mathbf{x}$ to get
\begin{align*}
(\mathbf{x}-\mathbf{u})^TC(\mathbf{x}-\mathbf{u}) &=0\\
\mathbf{x}^T[C]\mathbf{x}+[-2\mathbf{u}^TC]\mathbf{x}+[\mathbf{u}^TC\mathbf{u}]&=0
\end{align*}
and
\begin{align*}
(\mathbf{x}^TA\mathbf{u}-1)^2-(\mathbf{x}^TA\mathbf{x}-1)(\mathbf{u}^TA\mathbf{u}-1)&=0\\
(\mathbf{x}^TA\mathbf{u})^2-2(\mathbf{x}^TA\mathbf{u})-(\mathbf{x}^TA\mathbf{x})(\mathbf{u}^TA\mathbf{u})+\mathbf{x}^TA\mathbf{x}+\mathbf{u}^TA\mathbf{u} &= 0 \\
(\mathbf{x}^TA\mathbf{u})(\mathbf{x}^TA\mathbf{u})^T-2(\mathbf{x}^TA\mathbf{u})+(\mathbf{x}^TA\mathbf{x})(1-\mathbf{u}^TA\mathbf{u})+\mathbf{u}^TA\mathbf{u} &= 0 \\
\mathbf{x}^T(A\mathbf{u}\mathbf{u}^TA)\mathbf{x}+\mathbf{x}^T\left[(1-\mathbf{u}^TA\mathbf{u})A\right]\mathbf{x}-2(\mathbf{x}^TA\mathbf{u})+\mathbf{u}^TA\mathbf{u} &= 0 \\
\mathbf{x}^T\left[A\mathbf{u}\mathbf{u}^TA+(1-\mathbf{u}^TA\mathbf{u})A\right]\mathbf{x}+[-2\mathbf{u}^TA]\mathbf{x}+[\mathbf{u}^TA\mathbf{u}] &= 0.
\end{align*}
By comparing the leading quadratic forms we see that the cone matrix $C$ satisfies
\begin{equation*}
C=A\mathbf{u}\mathbf{u}^TA+(1-\mathbf{u}^TA\mathbf{u})A,
\end{equation*}
at least up to a scalar multiple. For this choice of $C$ we easily verify that $-2\mathbf{u}^TC=-2\mathbf{u}^TA$ and $\mathbf{u}^TC\mathbf{u}=\mathbf{u}^TA\mathbf{u}$, so the linear and constant terms also agree.
In summary, we find that the tangent cone from the point $\mathbf{u}$ to the quadric surface $\mathbf{x}^TA\mathbf{x}=1$ has the equation
\begin{equation*}
\boxed{(\mathbf{x}-\mathbf{u})^T \left[ A\mathbf{u}\mathbf{u}^TA+(1-\mathbf{u}^TA\mathbf{u})A \right] (\mathbf{x}-\mathbf{u})=0.}
\end{equation*}
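\bigskip
Since we matched the quadratic, linear and constant coefficients, the two boxed equations agree identically in $\mathbf{x}$, not merely on the cone. A minimal numerical sketch of this identity (arbitrary illustrative data):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
A = np.diag([1/3.0, 1/2.0, -1.0])    # an arbitrary symmetric matrix
u = rng.standard_normal(3)           # an arbitrary apex

# Cone matrix C = A u u^T A + (1 - u^T A u) A.
C = np.outer(A @ u, A @ u) + (1.0 - u @ A @ u) * A

for _ in range(3):                   # the identity holds for every x
    x = rng.standard_normal(3)
    lhs = (x @ A @ u - 1.0)**2 - (x @ A @ x - 1.0) * (u @ A @ u - 1.0)
    rhs = (x - u) @ C @ (x - u)
    print(np.isclose(lhs, rhs))      # True, True, True
\end{verbatim}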
\section{Confocal Quadric Surfaces}
\label{sec:confocal}
The specific surfaces that we need are called {\em confocal quadric surfaces}. This is the key idea that I learned from Hilbert and Cohn-Vossen.
\bigskip
{\bf Definition.} Fix distinct real numbers $a,b,c\in\mathbb{R}$. Then for any real parameter $k\in\mathbb{R}$ not in the set $\{a,b,c\}$ we consider the quadric surface
\begin{equation*}
\mathbf{x}^TA_k\mathbf{x}=\frac{x^2}{a-k}+\frac{y^2}{b-k}+\frac{z^2}{c-k}=1,
\end{equation*}
where the matrix $A_k$ is defined by
\begin{equation*}
A_k:=\begin{pmatrix} 1/(a-k) & & \\ & 1/(b-k) & \\ && 1/(c-k) \end{pmatrix}.
\end{equation*}
The surfaces $\mathbf{x}^TA_k\mathbf{x}=1$ for various $k$ describe a {\em confocal family of quadrics}. \hfill ///
\bigskip
For example, let us assume that $c<b<a$. Then the surface $\mathbf{x}^TA_k\mathbf{x}=1$ is
\begin{itemize}
\item an ellipsoid when $k<c$,
\item a hyperboloid of one sheet when $c<k<b$,
\item a hyperboloid of two sheets when $b<k<a$,
\item imaginary when $a<k$.
\end{itemize}
As $k$ approaches one of the critical values $\{a,b,c\}$, the surface degenerates to a space curve in one of the three principal planes of the family. For example, as $k\to c$ we must have $z\to 0$ and hence the surface $\mathbf{x}^TA_k\mathbf{x}=1$ degenerates to the space curve defined by
\begin{equation*}
\frac{x^2}{a-c}+\frac{y^2}{b-c}=1 \quad\text{and}\quad z=0.
\end{equation*}
Note that this is an ellipse in the $x,y$-plane, called the {\em focal ellipse} of the system. For values of $k$ near $c$ we obtain either a very thin ellipsoid on the inside of the ellipse ($k<c$) or a very thin hyperboloid of one sheet on the outside of the ellipse ($c<k$). Similarly, as $k\to b$ or $k\to a$ the surface $\mathbf{x}^TA_k\mathbf{x}=1$ degenerates to the space curve
\begin{equation*}
\frac{x^2}{a-b}+\frac{z^2}{c-b}=1 \quad\text{and}\quad y=0
\end{equation*}
or
\begin{equation*}
\frac{y^2}{b-a}+\frac{z^2}{c-a}=1 \quad\text{and}\quad x=0,
\end{equation*}
respectively. The first of these is a hyperbola in the $x,z$-plane, called the {\em focal hyperbola} of the system. For $k$ on either side of $b$ we obtain a very thin hyperboloid of one sheet ($k<b$) or a very thin hyperboloid of two sheets ($b<k$). As $k\to a$ from the left, the two sheets of the hyperboloid converge to the $y,z$-plane from opposite sides and then disappear. The focal curve in the $y,z$-plane is imaginary.
\bigskip
[Remark: The focal {\bf curves} of a confocal family of quadric {\bf surfaces} generalize the focal {\bf points} of a confocal family of quadric {\bf curves}. To see this, one should fix two real numbers $a\neq b$ and consider the family of quadric curves
\begin{equation*}
\frac{x^2}{a-k}+\frac{y^2}{b-k}=1.
\end{equation*}
For more on this topic see Chapter 1 of Hilbert and Cohn-Vossen.]
\bigskip
Here is the ``fundamental theorem" of the subject. We will prove the theorem for surfaces but it should be clear how to generalize the theorem to confocal quadrics in any number of dimensions.
\bigskip
{\bf Fundamental Theorem of Confocal Quadrics.} Fix real numbers $a,b,c\in\mathbb{R}$ satisfying $c<b<a$ and consider the confocal family $\mathbf{x}^TA_k\mathbf{x}=1$ of quadric surfaces. For each point $\mathbf{u}=(u,v,w)\in\mathbb{R}^3$ that is not on a principal plane (i.e., with $u,v,w\neq 0$) there exist exactly three surfaces of the family passing through $\mathbf{u}$. These surfaces correspond to parameters $k_1,k_2,k_3\in\mathbb{R}$ satisfying
\begin{equation*}
k_1<c<k_2<b<k_3<a,
\end{equation*}
hence there is one surface of each topological type. The tangent planes to the three surfaces at $\mathbf{u}$ are mutually perpendicular, and the {\em Cartesian coordinates} $(u,v,w)$ are related to the {\em confocal coordinates} $(k_1,k_2,k_3)$ as follows:
\begin{align*}
u^2 &= (a-k_1)(a-k_2)(a-k_3) / (b-a)(c-a) \\
v^2 &= (b-k_1)(b-k_2)(b-k_3) / (a-b)(c-b) \\
w^2 &= (c-k_1)(c-k_2)(c-k_3) / (a-c)(b-c).
\end{align*}
\hfill ///
\bigskip
Before proving this we must derive the equation of the tangent plane at a given point on a quadric surface. So consider a general quadric surface $\mathbf{x}^TA\mathbf{x}=1$ and let $\mathbf{u}$ be any point on the surface, so that $\mathbf{u}^TA\mathbf{u}=1$. Observe that
\begin{equation*}
\boxed{\mathbf{x}^TA\mathbf{u}=1}
\end{equation*}
is the equation of some plane passing through $\mathbf{u}$. I claim that this is the tangent plane.
To see why, we will show that any line $\mathbf{u}+t\mathbf{v}$ contained in the plane $\mathbf{x}^TA\mathbf{u}=1$ has at least double contact with the surface at $\mathbf{u}$. Indeed, since the line is contained in the plane we have for all $t$ that
\begin{align*}
(\mathbf{u}+t\mathbf{v})^TA\mathbf{u} &= 1 \\
\mathbf{u}^TA\mathbf{u}+t\,\mathbf{v}^TA\mathbf{u} &= 1\\
1+t\,\mathbf{v}^TA\mathbf{u} &= 1\\
t\,\mathbf{v}^TA\mathbf{u} &=0,
\end{align*}
and it follows that $\mathbf{v}^TA\mathbf{u}=0$. Then the intersection of the line with the surface is determined by the following equation in $t$:
\begin{align*}
(\mathbf{u}+t\mathbf{v})^TA(\mathbf{u}+t\mathbf{v}) &= 1 \\
\mathbf{u}^TA\mathbf{u}+2t\,\mathbf{v}^TA\mathbf{u}+t^2\,\mathbf{v}^TA\mathbf{v} &= 1\\
1+0+t^2\,\mathbf{v}^TA\mathbf{v} &= 1 \\
t^2\,\mathbf{v}^TA\mathbf{v} &= 0.
\end{align*}
If $\mathbf{v}^TA\mathbf{v}=0$ then we see that the line is completely contained in the surface, and if $\mathbf{v}^TA\mathbf{v}\neq 0$ then we see that $t=0$ is a double root as desired. In summary, we find that the plane $\mathbf{x}^TA\mathbf{u}=1$ is tangent to the surface $\mathbf{x}^TA\mathbf{x}=1$ at $\mathbf{x}=\mathbf{u}$.
\bigskip
{\bf Proof of the Fundamental Theorem.} We have assumed that $c<b<a$. Let $\mathbf{u}=(u,v,w)$ be any point that is not on a principal plane (i.e., such that $u,v,w\neq 0$). We are looking for values of $k$ such that $\mathbf{u}^TA_k\mathbf{u}=1$. In other words, we want
\begin{align*}
\begin{pmatrix} u & v & w\end{pmatrix} \begin{pmatrix} 1/(a-k) && \\ &1/(b-k)& \\ && 1/(c-k) \end{pmatrix} \begin{pmatrix} u\\ v\\ w\end{pmatrix} &= 1\\
\frac{u^2}{a-k}+\frac{v^2}{b-k}+\frac{w^2}{c-k} &=1 \\
(b-k)(c-k)u^2+(a-k)(c-k)v^2+(a-k)(b-k)w^2 &= (a-k)(b-k)(c-k).
\end{align*}
Therefore we will define the polynomial
\begin{equation*}
\boxed{\varphi_\mathbf{u}(k):=(b-k)(c-k)u^2+(a-k)(c-k)v^2+(a-k)(b-k)w^2- (a-k)(b-k)(c-k).}
\end{equation*}
We observe that $\varphi_\mathbf{u}(k)$ is a cubic polynomial in $k$ with leading coefficient $1$ and satisfying
\begin{align*}
\varphi_\mathbf{u}(c)&=(a-c)(b-c)w^2>0,\\
\varphi_\mathbf{u}(b)&=(a-b)(c-b)v^2<0,\\
\varphi_\mathbf{u}(a)&=(b-a)(c-a)u^2>0.
\end{align*}
It follows that $\varphi_\mathbf{u}(k)$ has three distinct real roots $k_1,k_2,k_3$ satisfying
\begin{equation*}
k_1<c<k_2<b<k_3<a.
\end{equation*}
By previous remarks we know that the surfaces corresponding to $k_1,k_2,k_3$ are an ellipsoid, a hyperboloid of one sheet and a hyperboloid of two sheets, respectively. Thus we have found one surface of each topological type passing through $\mathbf{u}$.
By the remarks after the statement of the theorem, the tangent planes to these three surfaces at $\mathbf{u}$ have the equations
\begin{equation*}
\mathbf{x}^TA_{k_1}\mathbf{u}=1, \quad \mathbf{x}^TA_{k_2}\mathbf{u}=1 \quad\text{and}\quad \mathbf{x}^TA_{k_3}\mathbf{u}=1,
\end{equation*}
respectively. To show that these planes are mutually perpendicular it is enough to show that the normal vectors $A_{k_1}\mathbf{u}$, $A_{k_2}\mathbf{u}$ and $A_{k_3}\mathbf{u}$ are mutually perpendicular, and for this we will use a clever trick. Observe that for any real numbers $k\neq\ell$ with $k,\ell\not\in\{a,b,c\}$ we have the following ``partial fractions" identity:
\begin{equation*}
\boxed{A_kA_\ell=\frac{A_k-A_\ell}{k-\ell}.}
\end{equation*}
Now if $k\neq \ell$ are two elements of the set $\{k_1,k_2,k_3\}$ then by definition we have $\mathbf{u}^TA_k\mathbf{u}=1$ and $\mathbf{u}^TA_\ell\mathbf{u}=1$, and it follows that the vectors $A_k\mathbf{u}$ and $A_\ell\mathbf{u}$ are perpendicular:
\begin{equation*}
(A_k\mathbf{u})^T(A_\ell\mathbf{u})=\mathbf{u}^T(A_kA_\ell)\mathbf{u}=\frac{\mathbf{u}^T(A_k-A_\ell)\mathbf{u}}{k-\ell}=\frac{\mathbf{u}^TA_k\mathbf{u}-\mathbf{u}^TA_\ell\mathbf{u}}{k-\ell}=\frac{1-1}{k-\ell}=0.
\end{equation*}
It only remains to solve for the Cartesian coordinates $u,v,w$ in terms of the confocal coordinates $k_1,k_2,k_3$. Since the cubic polynomial $\varphi_\mathbf{u}(k)$ has leading coefficient $1$ and distinct roots $k_1,k_2,k_3$ we must have
\begin{equation*}
\varphi_\mathbf{u}(k)=(k-k_1)(k-k_2)(k-k_3).
\end{equation*}
Then by substituting $k=a,b,c$ into $\varphi_\mathbf{u}(k)$ we obtain
\begin{equation*}
\begin{array}{rcccl}
(b-a)(c-a)u^2 &=& \varphi_\mathbf{u}(a) &=& (a-k_1)(a-k_2)(a-k_3),\\
(a-b)(c-b)v^2 &=& \varphi_\mathbf{u}(b) &=& (b-k_1)(b-k_2)(b-k_3),\\
(a-c)(b-c)w^2 &=& \varphi_\mathbf{u}(c) &=& (c-k_1)(c-k_2)(c-k_3),
\end{array}
\end{equation*}
as desired. \hfill $\qed$
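\bigskip
The Fundamental Theorem is easy to verify numerically. In the following minimal sketch the parameters $a,b,c$ and the point $\mathbf{u}$ are arbitrary illustrative choices, and the three roots of $\varphi_\mathbf{u}$ are bracketed using the sign changes computed above:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

a, b, c = 4.0, 2.0, 1.0                # fixed parameters with c < b < a
u = np.array([1.0, 1.5, 0.5])          # a generic point: nonzero coordinates

def g(k):                              # g(k) = u^T A_k u - 1
    return u[0]**2/(a-k) + u[1]**2/(b-k) + u[2]**2/(c-k) - 1.0

eps = 1e-9
k1 = brentq(g, c - 1e6, c - eps)       # ellipsoid
k2 = brentq(g, c + eps, b - eps)       # hyperboloid of one sheet
k3 = brentq(g, b + eps, a - eps)       # hyperboloid of two sheets
print(k1 < c < k2 < b < k3 < a)        # True

def Ak(k):
    return np.diag([1.0/(a-k), 1.0/(b-k), 1.0/(c-k)])

n1, n2, n3 = Ak(k1) @ u, Ak(k2) @ u, Ak(k3) @ u   # tangent-plane normals
print(np.allclose([n1 @ n2, n1 @ n3, n2 @ n3], 0.0, atol=1e-8))  # True

# Recover a Cartesian coordinate from the confocal coordinates.
print(np.isclose(u[0]**2, (a-k1)*(a-k2)*(a-k3)/((b-a)*(c-a))))   # True
\end{verbatim}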
\section{Symmetries of a Tangent Cone}
\label{sec:symmetries}
The final ingredient that we need is the result stated in the footnote of Hilbert and Cohn-Vossen that I alluded to in the Introduction. This fact was apparently well known in the early twentieth century\footnote{Hilbert and Cohn-Vossen refer to the work of Otto Staude but they don't provide a reference. It seems that Staude's work is summarized in his textbook (1896).} but I have not been able to find a proof in the literature. The selection of topics in this note was chosen to make the proof as slick as possible. Here is the footnote.
\bigskip
{\bf Footnote 4 from Hilbert and Cohn-Vossen (pg. 24).} {\em The following is another property of the confocal system, which, incidentally, includes the property just mentioned\,\footnote{Their ``property just mentioned" was a geometric description of our Main Theorem.} as a limiting case: The planes of symmetry of the tangent cone from any point $\mathbf{u}$ in space to any surface of the system which does not enclose $\mathbf{u}$ are the tangent planes at $\mathbf{u}$ to the three surfaces of the system that pass through $\mathbf{u}$.}
\bigskip
To express this in our language we fix distinct real numbers $c<b<a$ and consider the confocal system of quadric surfaces $\mathbf{x}^TA_k\mathbf{x}=1$ as in Section \ref{sec:confocal}. For a generic point in space $\mathbf{u}\in\mathbb{R}^3$ (i.e., not on the principal planes) recall from the Fundamental Theorem that there exist three confocal surfaces through $\mathbf{u}$ corresponding to some ``confocal parameters" $k_1,k_2,k_3\in\mathbb{R}$ satisfying
\begin{equation*}
k_1<c<k_2<b<k_3<a.
\end{equation*}
Furthermore, recall that the tangent planes to the three confocal surfaces at $\mathbf{u}$ are mutually perpendicular and are given by the equations
\begin{equation*}
\mathbf{x}^TA_{k_1}\mathbf{u}=1,\quad \mathbf{x}^TA_{k_2}\mathbf{u}=1 \quad\text{and}\quad \mathbf{x}^TA_{k_3}\mathbf{u}=1.
\end{equation*}
Here is the generic case of Hilbert and Cohn-Vossen's statement.
\bigskip
{\bf Theorem.} Let $\mathbf{u}\in\mathbb{R}^3$ be a generic point (i.e., with nonzero coordinates). Fix any generic confocal parameter $\ell\in\mathbb{R}\setminus\{a,b,c,k_1,k_2,k_3\}$ and consider the tangent cone from the point $\mathbf{u}$ to the confocal surface $\mathbf{x}^TA_\ell\mathbf{x}=1$. We know from Section \ref{sec:tangentcone} that this cone has equation $(\mathbf{x}-\mathbf{u})^TK_{\mathbf{u},\ell}(\mathbf{x}-\mathbf{u})=0$ where the cone matrix is defined by
\begin{equation*}
\boxed{K_{\mathbf{u},\ell}:=A_\ell\mathbf{u}\mathbf{u}^T A_\ell+(1-\mathbf{u}^T A_\ell\mathbf{u})A_\ell.}
\end{equation*}
I claim that this matrix has eigenvectors $A_{k_1}\mathbf{u}$, $A_{k_2}\mathbf{u}$ and $A_{k_3}\mathbf{u}$, with corresponding eigenvalues
\begin{equation*}
\begin{array}{rcl}
\lambda_1 &=& (\ell-k_2)(\ell-k_3)\,/\, (a-\ell)(b-\ell)(c-\ell), \\
\lambda_2 &=& (\ell-k_1)(\ell-k_3)\,/\, (a-\ell)(b-\ell)(c-\ell), \\
\lambda_3 &=& (\ell-k_1)(\ell-k_2)\,/\, (a-\ell)(b-\ell)(c-\ell). \\
\end{array}
\end{equation*}
Since the parameters $k_1,k_2,k_3$ are distinct we observe that the (nonzero) eigenvalues $\lambda_1,\lambda_2,\lambda_3$ are also distinct. If the eigenvalues are not all of the same sign (i.e., if $k_1<\ell<k_3$) then we conclude that $(\mathbf{x}-\mathbf{u})^TK_{\mathbf{u},\ell}(\mathbf{x}-\mathbf{u})=0$ is a real non-circular cone with planes of symmetry equal to the tangent planes $\mathbf{x}^TA_{k_1}\mathbf{u}=1$, $\mathbf{x}^TA_{k_2}\mathbf{u}=1$ and $\mathbf{x}^TA_{k_3}\mathbf{u}=1$ through $\mathbf{u}$. [Remark: It is interesting that the planes of symmetry depend only on the point $\mathbf{u}$ and not on the parameter $\ell$.] \hfill ///
\bigskip
{\bf Proof.} Consider any $i\in\{1,2,3\}$. To prove that $A_{k_i}\mathbf{u}$ is an eigenvector of $K_{\mathbf{u},\ell}$ we will use the partial fractions identity
\begin{equation*}
A_\ell A_{k_i} = \frac{A_\ell - A_{k_i}}{\ell-k_i},
\end{equation*}
which holds because $\ell\neq k_i$ and $\ell,k_i\not\in\{a,b,c\}$. Since $\mathbf{u}$ is on the surface $\mathbf{x}^T A_{k_i}\mathbf{x}=1$ (i.e., $\mathbf{u}^TA_{k_i}\mathbf{u}=1$) we have
\begin{align*}
K_{\mathbf{u},\ell} A_{k_i}\mathbf{u} &= \left[ A_\ell\mathbf{u}\mathbf{u}^T A_\ell+(1-\mathbf{u}^TA_\ell\mathbf{u})A_\ell\right] A_{k_i}\mathbf{u} \\
&= A_\ell\mathbf{u}\mathbf{u}^TA_\ell A_{k_i}\mathbf{u} +(1-\mathbf{u}^TA_\ell \mathbf{u}) A_\ell A_{k_i}\mathbf{u} \\
&= A_\ell\mathbf{u}\mathbf{u}^T\left( \frac{A_\ell-A_{k_i}}{\ell-k_i}\right)\mathbf{u}+(1-\mathbf{u}^TA_\ell \mathbf{u})\left( \frac{A_\ell-A_{k_i}}{\ell-k_i}\right)\mathbf{u} \\
&= A_\ell\mathbf{u}\left( \frac{\mathbf{u}^TA_\ell\mathbf{u}-\mathbf{u}^TA_{k_i}\mathbf{u}}{\ell-k_i}\right)+(1-\mathbf{u}^TA_\ell \mathbf{u})\left( \frac{A_\ell\mathbf{u}-A_{k_i}\mathbf{u}}{\ell-k_i}\right) \\
&= A_\ell\mathbf{u}\left( \frac{\mathbf{u}^TA_\ell\mathbf{u}-1}{\ell-k_i}\right)+(1-\mathbf{u}^TA_\ell \mathbf{u})\left( \frac{A_\ell\mathbf{u}-A_{k_i}\mathbf{u}}{\ell-k_i}\right) \\
&= \cancel{A_\ell\mathbf{u}\left( \frac{\mathbf{u}^TA_\ell\mathbf{u}-1}{\ell-k_i}\right)}+\cancel{\left( \frac{1-\mathbf{u}^TA_\ell\mathbf{u}}{\ell-k_i}\right)A_\ell\mathbf{u}}-\left(\frac{1-\mathbf{u}^TA_\ell\mathbf{u}}{\ell-k_i}\right) A_{k_i}\mathbf{u} \\
&=\left(\frac{\mathbf{u}^TA_\ell\mathbf{u}-1}{\ell-k_i}\right) A_{k_i}\mathbf{u}.
\end{align*}
It follows that $A_{k_i}\mathbf{u}$ is an eigenvector of $K_{\mathbf{u},\ell}$ with eigenvalue $\lambda_i=\left(\mathbf{u}^TA_\ell\mathbf{u}-1\right)/(\ell-k_i)$. To compute the eigenvalue explicitly we recall from the proof of the Fundamental Theorem that the cubic polynomial $\varphi_\mathbf{u}(k):=(a-k)(b-k)(c-k)\left(\mathbf{u}^TA_k\mathbf{u}-1\right)$ has distinct roots $k_1,k_2,k_3$ and hence
\begin{align*}
(a-k)(b-k)(c-k)\left(\mathbf{u}^T A_k\mathbf{u}-1\right) &= (k-k_1)(k-k_2)(k-k_3)\\
\mathbf{u}^TA_k\mathbf{u}-1 &= (k-k_1)(k-k_2)(k-k_3) / (a-k)(b-k)(c-k).
\end{align*}
By substituting $k=\ell$ we obtain the eigenvalue
\begin{equation*}
\lambda_i = \frac{\mathbf{u}^TA_\ell\mathbf{u}-1}{\ell-k_i}=\frac{(\ell-k_1)(\ell-k_2)(\ell-k_3)}{(a-\ell)(b-\ell)(c-\ell)(\ell-k_i)},\end{equation*}
which agrees with the claimed formulas for $\lambda_1,\lambda_2,\lambda_3$ when $i=1,2,3$. \hfill $\qed$
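\bigskip
As a sanity check (my own sketch, not needed for the proof), the claim is easy to test numerically, assuming NumPy; the values of $a,b,c$, the point $\mathbf{u}$ and the parameter $\ell$ below are arbitrary choices consistent with the hypotheses. The script finds $k_1,k_2,k_3$ as the roots of the cubic $\varphi_\mathbf{u}$ and verifies the eigenvector equation directly.
\begin{lstlisting}[language=Python]
import numpy as np

a, b, c = 4.0, 2.0, 1.0                    # fixed parameters, c < b < a
u = np.array([1.0, 0.7, 0.5])              # a generic point
A = lambda k: np.diag([1/(a-k), 1/(b-k), 1/(c-k)])

# Confocal parameters k1 < c < k2 < b < k3 < a: roots of the monic cubic
# phi(k) = (a-k)(b-k)(c-k)(u^T A_k u - 1).
pa, pb, pc = (np.poly1d([-1.0, t]) for t in (a, b, c))
phi = u[0]**2*pb*pc + u[1]**2*pa*pc + u[2]**2*pa*pb - pa*pb*pc
k1, k2, k3 = np.sort(phi.roots)

ell = 3.0                                  # generic parameter, k1 < ell < k3
Al = A(ell)
K = Al @ np.outer(u, u) @ Al + (1 - u @ Al @ u)*Al
for ki in (k1, k2, k3):
    vec = A(ki) @ u                        # claimed eigenvector
    lam = (u @ Al @ u - 1)/(ell - ki)      # claimed eigenvalue
    assert np.allclose(K @ vec, lam*vec)
\end{lstlisting}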
\bigskip
To complete the proof of the Main Theorem it only remains to examine what happens when the point $\mathbf{u}\in\mathbb{R}^3$ approaches a principal plane and when the confocal parameter $\ell\in\mathbb{R}$ approaches one of the critical values $\{a,b,c\}$.
\section{Proof of the Main Theorem}
\label{sec:mainthm}
In the previous section we computed the symmetries of the tangent cone from a generic point $\mathbf{u}\in\mathbb{R}^3$ to a generic surface $\mathbf{x}^TA_\ell\mathbf{x}=1$ in the confocal family corresponding to the fixed parameters $c<b<a$. If $k_1<c<k_2<b<k_3<a$ are the confocal coordinates of the point $\mathbf{u}$, we found that the tangent cone is real when $k_1<\ell<k_3$ and that it is never circular.
In this section we will complete the proof of the Main Theorem by allowing the point $\mathbf{u}$ to approach the principal planes and by allowing the parameter $\ell$ to approach one of the critical values $\{a,b,c\}$, i.e., by allowing the quadric surface $\mathbf{x}^TA_\ell \mathbf{x}=1$ to degenerate to one of the {\em focal curves} of the confocal system:
\begin{align*}
x^2/(a-c)+y^2/(b-c)=1 \quad\text{and}\quad z=0,\\
x^2/(a-b)+z^2/(c-b)=1 \quad\text{and}\quad y=0,\\
y^2/(b-a)+z^2/(c-a)=1 \quad\text{and}\quad x=0.
\end{align*}
Since $c<b<a$ we see that the first of these curves is an ellipse in the $x,y$-plane, the second is a hyperbola in the $x,z$-plane, and the third is an imaginary curve in the $y,z$-plane.
\bigskip
{\bf Letting the surface $\mathbf{x}^TA_\ell\mathbf{x}=1$ degenerate to a focal curve.}
Let $\mathbf{u}\in\mathbb{R}^3$ be a fixed point not on a principal plane, with confocal coordinates $k_1<c<k_2<b<k_3<a$. Let $(\mathbf{x}-\mathbf{u})^T K_{\mathbf{u},\ell} (\mathbf{x}-\mathbf{u})=0$ be the tangent cone from the point $\mathbf{u}$
to the surface $\mathbf{x}^TA_\ell\mathbf{x}=1$. From the previous section we know that the matrix $K_{\mathbf{u},\ell}$ has eigenvectors $A_{k_1}\mathbf{u}$, $A_{k_2}\mathbf{u}$ and $A_{k_3}\mathbf{u}$, with corresponding eigenvalues
\begin{equation*}
\begin{array}{rcl}
\lambda_1 &=& (\ell-k_2)(\ell-k_3)\,/\, (a-\ell)(b-\ell)(c-\ell), \\
\lambda_2 &=& (\ell-k_1)(\ell-k_3)\,/\, (a-\ell)(b-\ell)(c-\ell), \\
\lambda_3 &=& (\ell-k_1)(\ell-k_2)\,/\, (a-\ell)(b-\ell)(c-\ell). \\
\end{array}
\end{equation*}
Note that the eigen{\bf vectors} are independent of the parameter $\ell$. The eigen{\bf values} become undefined as $\ell$ approaches one of the critical values $\{a,b,c\}$; however, this is easy to fix.
To see what happens as $\ell\to c$ (i.e., as the surface $\mathbf{x}^T A_\ell \mathbf{x}=1$ degenerates to the focal ellipse) we observe that the matrix $(c-\ell)K_{\mathbf{u},\ell}$ has the same eigenvectors as $K_{\mathbf{u},\ell}$ but with eigenvalues
\begin{equation*}
\begin{array}{rcl}
\lambda_1 &=& (\ell-k_2)(\ell-k_3)\,/\, (a-\ell)(b-\ell), \\
\lambda_2 &=& (\ell-k_1)(\ell-k_3)\,/\, (a-\ell)(b-\ell), \\
\lambda_3 &=& (\ell-k_1)(\ell-k_2)\,/\, (a-\ell)(b-\ell). \\
\end{array}
\end{equation*}
Since these eigenvalues are well-defined when $\ell\to c$, we conclude that the matrix $K_{\mathbf{u},c}:=\lim_{\ell\to c} (c-\ell)K_{\mathbf{u},\ell}$ exists\footnote{Unfortunately it seems that this matrix does not have a nice closed formula.} and is uniquely determined by having eigenvectors $A_{k_1}\mathbf{u}, A_{k_2}\mathbf{u}, A_{k_3}\mathbf{u}$ with corresponding eigenvalues
\begin{equation*}
\begin{array}{rcl}
\lambda_1 &=& (c-k_2)(c-k_3)\,/\, (a-c)(b-c), \\
\lambda_2 &=& (c-k_1)(c-k_3)\,/\, (a-c)(b-c), \\
\lambda_3 &=& (c-k_1)(c-k_2)\,/\, (a-c)(b-c). \\
\end{array}
\end{equation*}
Now the equation $(\mathbf{x}-\mathbf{u})^T K_{\mathbf{u},c}(\mathbf{x}-\mathbf{u})=0$ defines the cone from the point $\mathbf{u}$ to the focal ellipse. Since $k_1<c<k_2<k_3$ we find that the eigenvalues of $K_{\mathbf{u},c}$ satisfy
\begin{equation*}
\lambda_1>0>\lambda_3>\lambda_2.
\end{equation*}
It follows that the cone from a generic point $\mathbf{u}$ to the focal ellipse is real and non-circular. Below we will see what happens when $\mathbf{u}$ is non-generic.
Similarly we can define the matrices $K_{\mathbf{u},b}:=\lim_{\ell\to b}(b-\ell)K_{\mathbf{u},\ell}$ and $K_{\mathbf{u},a}:=\lim_{\ell\to a}(a-\ell)K_{\mathbf{u},\ell}$, which both have the same eigenvectors $A_{k_1}\mathbf{u}, A_{k_2}\mathbf{u}$ and $A_{k_3}\mathbf{u}$. The eigenvalues of $K_{\mathbf{u},b}$ are
\begin{equation*}
\begin{array}{rcl}
\lambda_1 &=& (b-k_2)(b-k_3)\,/\, (a-b)(c-b), \\
\lambda_2 &=& (b-k_1)(b-k_3)\,/\, (a-b)(c-b), \\
\lambda_3 &=& (b-k_1)(b-k_2)\,/\, (a-b)(c-b), \\
\end{array}
\end{equation*}
which satisfy
\begin{equation*}
\lambda_3<0<\lambda_1<\lambda_2.
\end{equation*}
It follows that the cone $(\mathbf{x}-\mathbf{u})^T K_{\mathbf{u},b}(\mathbf{x}-\mathbf{u})=0$ from a generic point $\mathbf{u}$ to the focal hyperbola is real and non-circular.
Finally we observe that the eigenvalues of $K_{\mathbf{u},a}$ are
\begin{equation*}
\begin{array}{rcl}
\lambda_1 &=& (a-k_2)(a-k_3)\,/\, (b-a)(c-a), \\
\lambda_2 &=& (a-k_1)(a-k_3)\,/\, (b-a)(c-a), \\
\lambda_3 &=& (a-k_1)(a-k_2)\,/\, (b-a)(c-a), \\
\end{array}
\end{equation*}
which satisfy
\begin{equation*}
0<\lambda_1<\lambda_2<\lambda_3.
\end{equation*}
Thus the cone $(\mathbf{x}-\mathbf{u})^T K_{\mathbf{u},a} (\mathbf{x}-\mathbf{u})=0$ from a generic point $\mathbf{u}$ to the imaginary focal curve is imaginary, as expected.
\bigskip
{\bf Letting the point $\mathbf{u}$ approach a principal plane.}
Fix real numbers $c<b<a$ as before and recall from the Fundamental Theorem that each generic point $\mathbf{u}=(u,v,w)\in\mathbb{R}^3$ is contained in three (mutually perpendicular) confocal quadric surfaces corresponding to some parameters
\begin{equation*}
k_1<c<k_2<b<k_3<a.
\end{equation*}
Conversely, any real numbers $k_1,k_2,k_3$ satisfying these inequalities correspond to three confocal surfaces that intersect (perpendicularly) at the eight points $\mathbf{u}=(u,v,w)$ defined by
\begin{align*}
u^2 &= (a-k_1)(a-k_2)(a-k_3)\,/\, (b-a)(c-a) \\
v^2 &= (b-k_1)(b-k_2)(b-k_3)\,/\, (a-b)(c-b) \\
w^2 &= (c-k_1)(c-k_2)(c-k_3)\,/\, (a-c)(b-c).
\end{align*}
Thus we observe that the point $\mathbf{u}$ approaches the three principal planes precisely when the parameters $k_1,k_2,k_3$ approach the critical values $c,b,a$ from the left:
\begin{equation*}
\begin{array}{ccc}
u\to 0 &\Leftrightarrow& k_3\to a \text{ from the left}, \\
v\to 0 &\Leftrightarrow& k_2\to b \text{ from the left}, \\
w\to 0 &\Leftrightarrow& k_1\to c \text{ from the left}. \\
\end{array}
\end{equation*}
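In passing, the inversion formulas for $u^2,v^2,w^2$ above are easy to test numerically; here is a minimal sketch, assuming NumPy, with arbitrary parameter values. It reconstructs one of the eight points from $k_1,k_2,k_3$ and verifies that it lies on all three confocal surfaces.
\begin{lstlisting}[language=Python]
import numpy as np

a, b, c = 4.0, 2.0, 1.0
k1, k2, k3 = 0.5, 1.5, 3.0      # any choice with k1 < c < k2 < b < k3 < a

u2 = (a-k1)*(a-k2)*(a-k3)/((b-a)*(c-a))
v2 = (b-k1)*(b-k2)*(b-k3)/((a-b)*(c-b))
w2 = (c-k1)*(c-k2)*(c-k3)/((a-c)*(b-c))
p = np.sqrt([u2, v2, w2])       # one of the eight sign choices

for k in (k1, k2, k3):          # p lies on all three confocal surfaces
    Ak = np.diag([1/(a-k), 1/(b-k), 1/(c-k)])
    assert np.isclose(p @ Ak @ p, 1.0)
\end{lstlisting}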
As long as the values $k_1,k_2,k_3$ remain distinct we find that the matrix $K_{\mathbf{u},\ell}$ of the cone from $\mathbf{u}$ to any surface $\mathbf{x}^T A_\ell \mathbf{x}=1$ (including the degenerate cases when $\ell\in\{a,b,c\}$) has distinct eigenvalues $\lambda_1,\lambda_2,\lambda_3$, and thus it remains non-circular.
Under what conditions do we get a circular cone? In other words, under what conditions does the cone matrix $K_{\mathbf{u},\ell}$ have a repeated eigenvalue? As the confocal parameters $k_1,k_2,k_3$ move around, we observe from the explicit formulas for the eigenvalues $\lambda_1,\lambda_2,\lambda_3$ that
\begin{equation*}
\lim (\lambda_i-\lambda_j)=0 \quad\Leftrightarrow\quad \lim (k_i-k_j)=0.
\end{equation*}
In other words, the cone from $\mathbf{u}$ to a confocal surface or focal curve becomes circular precisely when two of the confocal parameters $k_1,k_2,k_3$ approach each other. Since the parameters of a generic point satisfy
\begin{equation*}
k_1<c<k_2<b<k_3<a,
\end{equation*}
we see that it is {\bf impossible} for $k_1$ and $k_3$ to approach each other. Thus we have two cases:
\bigskip
{\bf Case 1: The point $\mathbf{u}$ approaches the focal hyperbola.} As $k_2\to b\leftarrow k_3$ we find in the limit that the coordinates of the point $\mathbf{u}=(u,v,w)$ satisfy
\begin{equation*}
\begin{array}{rclcl}
u^2 &=& (a-k_1)(a-b)(a-b)\,/\, (b-a)(c-a) &=& (a-b)(a-k_1)\,/\, (a-c), \\
v^2 &=& (b-k_1)(b-b)(b-b)\,/\, (a-b)(c-b) &=& 0, \\
w^2 &=& (c-k_1)(c-b)(c-b)\,/\, (a-c)(b-c) &=& -(c-b)(c-k_1)\,/\, (a-c). \\
\end{array}
\end{equation*}
This implies that
\begin{equation*}
u^2/(a-b)+w^2/(c-b)=1 \quad\text{and}\quad v=0,
\end{equation*}
which tells us that $\mathbf{u}$ is on the focal hyperbola of the system. At the same time, the eigenvalues of the cone from $\mathbf{u}$ to the surface $\mathbf{x}^T A_\ell \mathbf{x}=1$ approach the values
\begin{equation*}
\begin{array}{rclcl}
\lambda_1 &=& (\ell-b)(\ell-b)\,/\, (a-\ell)(b-\ell)(c-\ell) &=& (b-\ell)\,/\,(a-\ell)(c-\ell), \\
\lambda_2 &=& (\ell-k_1)(\ell-b)\,/\, (a-\ell)(b-\ell)(c-\ell)&=& (k_1-\ell)\,/\,(a-\ell)(c-\ell), \\
\lambda_3 &=& (\ell-k_1)(\ell-b)\,/\, (a-\ell)(b-\ell)(c-\ell)&=& (k_1-\ell)\,/\,(a-\ell)(c-\ell). \\
\end{array}
\end{equation*}
Since $k_1<c<b$ implies $k_1\neq b$ we observe that $\lambda_1\neq\lambda_2=\lambda_3$. The cone is real precisely when $k_1<\ell<b$ and in this case the Theorem at the end of Section \ref{sec:circcone} says that the cone is {\bf circular} with angle of aperture $\theta$ satisfying
\begin{equation*}
\cos^2\theta=\frac{\lambda_3}{\lambda_3-\lambda_1}=\frac{k_1-\ell}{k_1-b}.
\end{equation*}
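As a concrete check of this aperture formula (again my own sketch assuming NumPy; the numbers are arbitrary, with $\ell<c$ so that the surface is an ellipsoid), one can build the cone matrix at a point of the focal hyperbola and inspect its spectrum:
\begin{lstlisting}[language=Python]
import numpy as np

a, b, c = 4.0, 2.0, 1.0
k1, ell = -1.0, 0.5              # k1 < ell < b, so the cone is real

# Point on the focal hyperbola (v = 0), from the limit formulas above.
p = np.array([np.sqrt((a-b)*(a-k1)/(a-c)), 0.0,
              np.sqrt(-(c-b)*(c-k1)/(a-c))])

Al = np.diag([1/(a-ell), 1/(b-ell), 1/(c-ell)])
K = Al @ np.outer(p, p) @ Al + (1 - p @ Al @ p)*Al
lam = np.sort(np.linalg.eigvalsh(K))

assert np.isclose(lam[0], lam[1])          # repeated pair: a circular cone
cos2 = lam[0]/(lam[0] - lam[2])            # = cos^2(theta)
assert np.isclose(cos2, (k1-ell)/(k1-b))   # matches the formula above
\end{lstlisting}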
We can view the point $\mathbf{u}$ locally as a function of the parameter $k_1$, which satisfies $k_1<\min\{c,\ell\}$. As $k_1\to-\infty$ the point $\mathbf{u}$ on the focal hyperbola goes to infinity and we have $\cos^2\theta\to 1$, or $\theta\to 0$. That is, from infinitely far away the surface $\mathbf{x}^T A_\ell \mathbf{x}=1$ looks like a point. If $\ell<c$ (i.e., if the surface $\mathbf{x}^T A_\ell\mathbf{x}=1$ is an ellipsoid) then as $k_1\to \ell$ the point $\mathbf{u}$ approaches an ``umbilic point" on the surface of the ellipsoid and the (circular) tangent cone flattens out into the tangent plane at the umbilic point. In the limiting case $\ell\to c$, the point $\mathbf{u}$ approaches one of the foci $(x,y,z)=(\pm\sqrt{a-b},0,0)$ of the focal ellipse. If $c<\ell$ (i.e., if the surface $\mathbf{x}^T A_\ell \mathbf{x}=1$ is a hyperboloid of one sheet) then as $k_1\to c$ we have $\cos^2\theta\to (c-\ell)/(c-b)$ and the angle of aperture reaches the {\bf maximum} value $\theta=\arccos\sqrt{(c-\ell)/(c-b)}$.
In all cases, the axis of symmetry of the cone is given by the eigenvector
\begin{equation*}
A_{k_1}\mathbf{u}=\left(\frac{u}{a-k_1},\frac{v}{b-k_1},\frac{w}{c-k_1}\right)=\left(\frac{u}{a-k_1},0,\frac{w}{c-k_1}\right),
\end{equation*}
which I claim is {\bf tangent} to the focal hyperbola at the point $\mathbf{u}$. Indeed, let us view the point $\mathbf{u}=(u,v,w)$ locally as a function of the parameter $k_1$. By differentiating the formula for $u^2$ with respect to $k_1$ we obtain
\begin{equation*}
2uu' = (u^2)' = \left[ (a-b)(a-k_1)\,/\, (a-c) \right]' = -(a-b)\,/\, (a-c) = -u^2\,/\, (a-k_1),
\end{equation*}
and hence $u'=(-1/2)\cdot u\,/\, (a-k_1)$. Similarly we see that $v'=(-1/2)\cdot v\,/\, (b-k_1)$ and $w'=(-1/2)\cdot w\,/\, (c-k_1)$, thus the tangent vector to the focal hyperbola at $\mathbf{u}$ is given by
\begin{equation*}
(u',v',w')=\left( \frac{-u}{2(a-k_1)},\frac{-v}{2(b-k_1)},\frac{-w}{2(c-k_1)}\right)=-\frac{1}{2} A_{k_1}\mathbf{u}
\end{equation*}
as desired. \hfill ///
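\bigskip
The tangency claim can also be checked by finite differences; here is a quick sketch (assuming NumPy, with arbitrary values of $a,b,c$ and $k_1$):
\begin{lstlisting}[language=Python]
import numpy as np

a, b, c = 4.0, 2.0, 1.0

def hyperbola_point(k1):         # Case 1 parametrization of the focal hyperbola
    return np.array([np.sqrt((a-b)*(a-k1)/(a-c)), 0.0,
                     np.sqrt(-(c-b)*(c-k1)/(a-c))])

k1, h = -1.0, 1e-6
p = hyperbola_point(k1)
fd = (hyperbola_point(k1+h) - hyperbola_point(k1-h))/(2*h)   # central difference
A1 = np.diag([1/(a-k1), 1/(b-k1), 1/(c-k1)])
assert np.allclose(fd, -0.5*A1 @ p, atol=1e-6)               # tangent = -A_{k1}u/2
\end{lstlisting}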
\bigskip
{\bf Case 2: The point $\mathbf{u}$ approaches the focal ellipse.} As $k_1\to c\leftarrow k_2$ we find in the limit that the coordinates of the point $\mathbf{u}=(u,v,w)$ satisfy
\begin{equation*}
\begin{array}{rclcl}
u^2 &=& (a-c)(a-c)(a-k_3)\,/\, (b-a)(c-a) &=& (a-c)(a-k_3)\,/\, (a-b), \\
v^2 &=& (b-c)(b-c)(b-k_3)\,/\, (a-b)(c-b) &=& -(b-c)(b-k_3)\,/\, (a-b), \\
w^2 &=& (c-c)(c-c)(c-k_3)\,/\, (a-c)(b-c) &=& 0.
\end{array}
\end{equation*}
This implies that
\begin{equation*}
u^2/(a-c)+v^2/(b-c)=1 \quad\text{and}\quad w=0,
\end{equation*}
which tells us that $\mathbf{u}$ is on the focal ellipse of the system. At the same time, the eigenvalues of the cone from $\mathbf{u}$ to the surface $\mathbf{x}^T A_\ell \mathbf{x}=1$ approach the values
\begin{equation*}
\begin{array}{rclcl}
\lambda_1 &=& (\ell-c)(\ell-k_3)\,/\, (a-\ell)(b-\ell)(c-\ell) &=& (k_3-\ell)\,/\,(a-\ell)(b-\ell), \\
\lambda_2 &=& (\ell-c)(\ell-k_3)\,/\, (a-\ell)(b-\ell)(c-\ell)&=& (k_3-\ell)\,/\,(a-\ell)(b-\ell), \\
\lambda_3 &=& (\ell-c)(\ell-c)\,/\, (a-\ell)(b-\ell)(c-\ell)&=& (c-\ell)\,/\,(a-\ell)(b-\ell). \\
\end{array}
\end{equation*}
Since $c<b<k_3$ implies $c\neq k_3$ we observe that $\lambda_1=\lambda_2\neq \lambda_3$. The cone is real precisely when $c<\ell<k_3$ and in this case the Theorem at the end of Section \ref{sec:circcone} says that the cone is {\bf circular} with angle of aperture $\theta$ satisfying
\begin{equation*}
\cos^2\theta=\frac{\lambda_1}{\lambda_1-\lambda_3}=\frac{k_3-\ell}{k_3-c}.
\end{equation*}
We can view the point $\mathbf{u}$ locally as a function of the parameter $k_3$, which satisfies $\max\{b,\ell\}<k_3<a$. If $b<\ell$ (i.e., if the surface $\mathbf{x}^TA_\ell\mathbf{x}=1$ is a hyperboloid of two sheets) then as $\ell\leftarrow k_3$ the point $\mathbf{u}$ approaches an ``umbilic point" on the surface and the (circular) tangent cone flattens out into the tangent plane at the umbilic point. If $\ell<b$ (i.e., if the surface $\mathbf{x}^T A_\ell\mathbf{x}=1$ is a hyperboloid of one sheet) then as $b\leftarrow k_3$ we have $\cos^2\theta\to (b-\ell)/(b-c)$ and the angle of aperture reaches the {\bf maximum} value $\theta=\arccos\sqrt{(b-\ell)/(b-c)}$. In the limiting case $\ell\to b$ the point $\mathbf{u}$ approaches one of the foci $(x,y,z)=(\pm\sqrt{a-c},0,0)$ of the focal hyperbola. For any value of $\ell$, the angle of aperture reaches the {\bf minimum} value $\theta=\arccos\sqrt{(a-\ell)/(a-c)}$ when $k_3\to a$, i.e., when $\mathbf{u}$ approaches one of the points $(x,y,z)=(0,\pm\sqrt{b-c},0)$. (Since the point $\mathbf{u}$ is trapped on an ellipse it can't get infinitely far away.)
Finally, the axis of symmetry of the cone is given by the eigenvector $A_{k_3}\mathbf{u}$. Using a similar argument to the previous case we see that this axis is tangent to the focal ellipse at $\mathbf{u}$. \hfill ///
\bigskip
These results hold for any value of $\ell$ as long as the corresponding cone is real. The Main Theorem is just a summary of these results for the cases when $\ell\in\{a,b,c\}$.
\section{Conclusion}
\label{sec:conclusion}
To conclude the note I will answer the three questions from the Introduction in plain language. Let us consider a central and non-degenerate quadric curve in the real $x,y$-plane:
\begin{equation*}
\frac{x^2}{\alpha}+\frac{y^2}{\beta}=1.
\end{equation*}
We assume that the parameters $\alpha,\beta\in\mathbb{R}$ satisfy $\alpha>\beta$ and are not both negative. Thus our curve is a non-circular ellipse (when $\alpha>\beta>0$) or a non-rectangular hyperbola (when $\alpha>0>\beta$). If we define $(a,b,c):=(\alpha,\beta,0)$ then our curve becomes
\begin{equation*}
\frac{x^2}{a-c}+\frac{y^2}{b-c}=1,
\end{equation*}
which we identify as either the focal ellipse or the focal hyperbola of a certain family of confocal quadric surfaces in $x,y,z$-space.
\bigskip
{\bf Question 1:} From which points in space does our curve look like a circle?
\bigskip
{\bf Answer:} It looks like a circle from points on the other real focal curve defined by:
\begin{equation*}
\frac{x^2}{a-b}+\frac{z^2}{c-b}=\frac{x^2}{\alpha-\beta}+\frac{z^2}{-\beta}=1 \quad\text{and}\quad y=0.
\end{equation*}
If our curve is an ellipse/hyperbola in the $x,y$-plane then the other focal curve is a hyperbola/ellipse in the $x,z$-plane, passing through the foci of the original curve.
\bigskip
{\bf Question 2:} In which direction should we look to see the circle?
\bigskip
{\bf Answer:} If we are sitting on the focal curve in the $x,z$-plane then we should look in the direction of the tangent line. The focal curve in the $x,y$-plane then looks like a circle centered on this line.
\bigskip
{\bf Question 3:} How big is the circle?
\bigskip
{\bf Answer:} The apparent size of the circle depends on the angle of aperture $\theta$ of the corresponding circular cone.
Suppose our curve is an ellipse ($\alpha>\beta>0$) and that $\mathbf{u}$ lies on the focal hyperbola. As $\mathbf{u}$ approaches one of the foci $(x,y,z)=(\pm\sqrt{\alpha-\beta},0,0)$ of the ellipse, the cone becomes flat and the ellipse looks like an infinitely big circle. As $\mathbf{u}$ goes to infinity the ellipse looks like an infinitesimally small circle.
On the other hand, suppose that our curve is a hyperbola ($\alpha>0>\beta$) and that $\mathbf{u}$ lies on the focal ellipse. As $\mathbf{u}$ approaches one of the foci $(x,y,z)=(\pm\sqrt{\alpha-\beta},0,0)$ of the hyperbola, the cone becomes flat and the hyperbola looks like an infinitely big circle. As $\mathbf{u}$ approaches one of the points $(x,y,z)=(0,0,\pm\sqrt{-\beta})$ the angle of aperture reaches the {\bf minimum} value $\theta=\arccos\sqrt{\alpha/(\alpha-\beta)}$.
| {
"timestamp": "2018-01-12T02:01:43",
"yymm": "1708",
"arxiv_id": "1708.07093",
"language": "en",
"url": "https://arxiv.org/abs/1708.07093",
"abstract": "Real quadric curves are often referred to as \"conic sections,\" implying that they can be realized as plane sections of circular cones. However, it seems that the details of this equivalence have been partially forgotten by the mathematical community. The definitive analytic treatment was given by Otto Staude in the 1880s and a non-technical description was given in the first chapter of Hilbert and Cohn-Vossen's \"Geometry and the Imagination\" (1932). The main theorem is elegant and easy to state but is surprisingly difficult to find in the literature. A synthetic version appears in The Universe of Conics (2016) but we still have not found a full analytic treatment written down. The goal of this note is to fill a surprising gap in the literature by advertising this beautiful theorem, and to provide the slickest possible analytic proof by using standard linear algebra that was not standard in 1932.",
"subjects": "History and Overview (math.HO)",
"title": "Where is the cone?",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9817357221825193,
"lm_q2_score": 0.8056321796478255,
"lm_q1q2_score": 0.7909178897000351
} |
https://arxiv.org/abs/1606.04975 | Sharp geometric requirements in the Wachspress interpolation error estimate | Geometric conditions on general polygons are given in [9] in order to guarantee the error estimate for interpolants built from generalized barycentric coordinates, and the question about identifying sharp geometric restrictions in this setting is proposed. In this work, we address the question when the construction is made by using Wachspress coordinates. We basically show that the imposed conditions: bounded aspect ratio property (barp), maximum angle condition (MAC) and minimum edge length property (melp) are actually equivalent to [MAC,melp], and if any of these conditions is not satisfied, then there is no guarantee that the error estimate is valid. In this sense, MAC and melp can be regarded as sharp geometric requirements in the Wachspress interpolation error estimate. | \section{Introduction}
Many different conditions on the geometry of finite elements have been required in order to guarantee optimal convergence in the interpolation error estimate. Some of them deal with interior angles, like the {\it maximum angle condition} (maximum interior angle bounded away from $\pi$) and the {\it minimum angle condition} (minimum interior angle bounded away from $0$), while others deal with certain lengths of the element, like the {\it minimum edge length property} (the diameter of the element is comparable to the length of the segment determined by any two vertices) and the {\it bounded aspect ratio property}, often called the {\it regularity condition} (the diameter of the element and the diameter of the largest inscribed ball are comparable).
Classical results on general Lagrange finite elements consider the regularity condition \cite{CR}. On triangular elements, the error estimate holds under the minimum angle condition \cite{Ze,Z}. However, on triangles, the minimum angle condition and the regularity condition are equivalent. From \cite{BA,BG,J} we know that the weakest sufficient condition on triangular elements is the maximum angle condition. Some examples can be constructed in order to show that if a family of triangles does not satisfy the maximum angle condition, then the error estimate on these elements does not hold.
Recently, it was proved \cite{AM:2} that, for quadrilateral elements, the minimum angle condition ($mac$) is the weakest known geometric condition required to obtain the classical $W^{1,p}$-error estimate, when $1 \leq p < 3$, for any arbitrary order $k$ greater than 1. Moreover, in this case, $mac$ is also necessary. In \cite{AM,AM:2} it was proved that the {\emph{double angle condition}} (any interior angle bounded away from zero and $\pi$) is a sufficient requirement to obtain the error estimate for any order and any $p \geq 1$. When $k=1$ and $1 \leq p < 3$, a less restrictive condition ensures the error estimate \cite{AD,AM}: the {\it regular decomposition property} ($RDP$). Property $RDP$ requires that, after dividing the quadrilateral into two triangles along one of its diagonals, each resulting triangle verifies the maximum angle condition and the quotient between the lengths of the diagonals is uniformly bounded.
This brief picture is intended to show that the study of sharp geometric restrictions on finite elements, under which the optimal error estimate remains valid, is an interesting and active field of research.
In \cite{GRB,GRB:2}, geometric conditions on general polygons are given in order to guarantee the error estimate for interpolants built from generalized barycentric coordinates, and the question about identifying sharp geometric restrictions in this setting is proposed. In this work, we address the question for the first-order Wachspress interpolation operator.
We show that the three sufficient conditions considered in \cite{GRB} ({\it regularity condition}, {\it maximum angle condition} and {\it minimum edge length property}) are actually equivalent to the last two, since the regularity condition is a consequence of the maximum angle condition and the minimum edge length property. Then we exhibit families of polygons satisfying only one of these conditions and show that the interpolation error estimate does not hold for suitably chosen functions. In this sense, the {\it maximum angle condition} and the {\it minimum edge length property} can be regarded as sharp geometric requirements to obtain the optimal error estimate.
This work is structured as follows: In Section \ref{geoimpl}, we introduce notation and exhibit some basic relationships between different geometric conditions on general convex polygons. Section \ref{wach} is devoted to recalling Wachspress coordinates and some elementary results associated with them; a brief overview of error estimates for the first-order Wachspress interpolation operator is also given there. Finally, in Section \ref{sharp}, we present two counterexamples to show that $MAC$ and $melp$ are sharp geometric requirements under which the optimal error estimate is valid.
\section{Geometric conditions}
\label{geoimpl}
\setcounter{equation}{0}
In order to introduce notation and formalize the requirements of each geometric condition, we give the following definitions. From now on, $\Omega$ will refer to a general convex polygon.
\medskip
\begin{enumerate}
\item[(i)] {\it (Bounded aspect ratio property)} We say that $\Omega$ satisfies the {\emph{bounded aspect ratio property}} (also called {\emph{regularity condition}}) if there exists a constant $\sigma>0$ such that
\begin{equation}
\label{barp}
\frac{diam(\Omega)}{\rho(\Omega)} \le \sigma,
\end{equation}
where $\rho(\Omega)$ is the diameter of the largest ball inscribed in $\Omega$. In this case, we write $barp(\sigma)$.
\item[(ii)] {\it (Minimum edge length property)} We say that $\Omega$ satisfies the {\emph{minimum edge length property}} if there exists a constant $d_m>0$ such that
\begin{equation}
\label{mel}
0<d_m \leq \frac{\left\| {\bf v}_i-{\bf v}_j \right\|}{diam(\Omega)}
\end{equation}
for all $i \neq j$, where ${\bf v}_1, {\bf v}_2, \dots, {\bf v}_n$ are the vertices of $\Omega$. In this case, we write $melp(d_m)$.
\item[(iii)] {\it (Maximum angle condition)} We say that $\Omega$ satisfies the {\emph{maximum angle condition}} if there exists a constant $\psi_M>0$ such that
\begin{equation}
\label{MAC}
\beta \leq \psi_M < \pi
\end{equation}
for every interior angle $\beta$ of $\Omega$. In this case, we write $MAC(\psi_M)$.
\item[(iv)] {\it (Minimum angle condition)} We say that $\Omega$ satisfies the {\emph{minimum angle condition}} if there exists a constant $\psi_m>0$ such that
\begin{equation}
\label{mac}
0 < \psi_m \leq \beta
\end{equation}
for every interior angle $\beta$ of $\Omega$. In this case, we write $mac(\psi_m)$.
\end{enumerate}
All along this work, when we say {\it regular polygon}, we refer to a polygon satisfying the regularity condition given by \eqref{barp}.
\subsection{Some basic relationships}
It is well known that the regularity assumption implies that the minimum interior angle is bounded away from zero. We state this result in the following lemma.
\begin{lemma}
\label{lemma:regmac}
If $\Omega$ is a convex polygon satisfying $barp(\sigma)$, then $\Omega$ verifies $mac(\psi_m)$ where $\psi_m$ is a constant depending only on $\sigma$.
\end{lemma}
\proof See for instance \cite[Proposition 4 (i)]{GRB}. \qed
\medskip
Considering the rectangle $R=[0,1] \times [0,s]$, where $0<s<1$, and taking $s \to 0^+$, we see that the converse of Lemma \ref{lemma:regmac} does not hold. Indeed, $R$ verifies $mac(\pi/2)$ (independently of $s$), but, when $s$ tends to zero, $R$ is not regular in the sense given by \eqref{barp}. However, on triangular elements, $barp$ and $mac$ are equivalent. We use this fact to show that, on general polygons, the regularity condition is a consequence of the minimum edge length property and the maximum angle condition. To our knowledge, this elementary result has not been stated or proved previously.
\begin{figure}[h]
\resizebox{11.5cm}{5.2cm}{\includegraphics{macmel=reg.png}}
\caption{(A): A polygon with its diameter attained as the length of the straight line joining two non-consecutive vertices. (B): A polygon with its diameter attained as the length of the straight line joining two consecutive vertices.}
\label{fig:macmel=reg}
\end{figure}
\begin{lemma}
\label{lemma:macmelpeqreg}
If $\Omega$ is a convex polygon satisfying $MAC(\psi_M)$ and $melp(d_m)$, then $\Omega$ verifies $barp(\sigma)$, where $\sigma=\sigma(\psi_M,d_m)$.
\end{lemma}
\proof We prove this by induction on the number $n$ of vertices of $\Omega$. If $n=3$, i.e., $\Omega$ is a triangle, the result follows from the law of sines. Indeed, we only have to prove that $\Omega$ has its minimum interior angle bounded away from zero. Let $\alpha$ be the minimum angle of $\Omega$ (if there is more than one choice, we choose it arbitrarily) and let $l$ be the length of its opposite side. Since $diam(\Omega)$ is attained on one side of $\Omega$, we can assume, without loss of generality, $l \neq diam(\Omega)$. We call $\beta$ the opposite angle to $diam(\Omega)$. It is clear that $\beta$ cannot approach zero and, since it is bounded above by $\psi_M$, we get that $1/\sin(\beta) \le C$ for some positive constant $C$. Then, from the law of sines and the assumption $melp(d_m)$, we have
$$\frac{\sin(\alpha)}{\sin(\beta)} = \frac{l}{diam(\Omega)} \geq d_m.$$
In consequence, $\sin(\alpha) \ge C^{-1} d_m$ which proves that $\alpha$ is bounded away from zero.
Let $n>3$. Since the diameter of $\Omega$ is realized as the length of its longest {\it diagonal}, i.e., the longest straight line joining two vertices of $\Omega$, we need to consider two cases depending on whether these vertices are consecutive or not.
Assume that $diam(\Omega)$ is attained as the length of the line joining two non-consecutive vertices (these may not be unique, in which case we choose them arbitrarily). We can divide $\Omega$ by this diagonal into two convex polygons $\Omega_1$ and $\Omega_2$ with fewer vertices (see Figure \ref{fig:macmel=reg} (A)). It is clear that both of them satisfy $MAC(\psi_M)$ and, since $diam(\Omega_i)=diam(\Omega)$ and the set of vertices of $\Omega_i$ is a subset of the vertices of $\Omega$, we conclude that $\Omega_i$ also verifies $melp(d_m)$. Therefore, by the inductive hypothesis, $\Omega_1$ and $\Omega_2$ verify $barp(\sigma_1)$ and $barp(\sigma_2)$, respectively, for some constants $\sigma_1, \sigma_2$ depending only on $\psi_M$ and $d_m$. Then, since $\rho(\Omega) \geq \rho(\Omega_i)$, $i=1,2$, we have
$$\displaystyle \frac{diam(\Omega)}{\rho(\Omega)} = \frac{diam(\Omega_i)}{\rho(\Omega)} \leq \frac{diam(\Omega_i)}{\rho(\Omega_i)} \leq \sigma_i.$$
Finally, if $diam(\Omega)$ is attained on a side of $\Omega$, i.e., is the length of the line joining two consecutive vertices ${\bf v}_{j-1}$ and ${\bf v}_j$ (these may not be unique, in this case we choose them arbitrarily), we divide $\Omega$ by the diagonal joining ${\bf v}_{j-1}$ and ${\bf v}_{j+1}$ into the triangle $T_1=\Delta({\bf v}_{j-1}{\bf v}_j{\bf v}_{j+1})$ and a convex polygon $\Omega_2$ (see Figure \ref{fig:macmel=reg} (B)). It is clear that $T_1$ verifies $melp(d_m)$ and $MAC(\psi_M)$, so (by the case $n=3$) we have that $T_1$ satisfies $barp(\sigma_1)$ for some positive constant $\sigma_1$. Then, since $diam(T_1)=diam(\Omega)$ and $\rho(\Omega) \geq \rho(T_1)$, we have
$$\displaystyle \frac{diam(\Omega)}{\rho(\Omega)} = \frac{diam(T_1)}{\rho(\Omega)} \leq \frac{diam(T_1)}{\rho(T_1)} \leq \sigma_1.$$ \qed
\begin{cor}
\label{cor:equiv}
$[MAC, melp]$ and $[barp, MAC, melp]$ are equivalent conditions.
\end{cor}
Finally, notice that the converse of Lemma \ref{lemma:macmelpeqreg} is false. Consider the following families of quadrilaterals: $\mathcal{F}_1=\{ K(1,1-s,s,1-s) \}_{0<s<1}$ where $K(1,1-s,s,1-s)$ denotes the convex quadrilateral with vertices $(0,0), (1,0), (s, 1-s)$ and $(0,1-s)$, and $\mathcal{F}_2=\{ K(1,1,s,s) \}_{1/2<s<1}$ where $K(1,1,s,s)$ denotes the convex quadrilateral with vertices $(0,0), (1,0), (s, s)$ and $(0,1)$. Clearly, any quadrilateral belonging to $\mathcal{F}_1 \cup \mathcal{F}_2$ is regular in the sense given by \eqref{barp}. Each element of $\mathcal{F}_1$ satisfies $MAC(3\pi/4)$, but taking $s \to 0^+$, we see that the minimum edge length property is violated. On the other hand, each element of $\mathcal{F}_2$ verifies $melp(1/2)$; but taking $s \to 1/2^+$, we see that the maximum angle condition is not satisfied.
\section{Wachspress coordinates and the error estimate}
\label{wach}
\setcounter{equation}{0}
\subsection{Wachspress coordinates}
We start this section by recalling the definition of Wachspress coordinates and some of their main properties \cite{Fl:2, W}. Henceforth, we denote by ${\bf v}_1, {\bf v}_2, \dots, {\bf v}_n$ the vertices of $\Omega$ enumerated in counterclockwise order starting at an arbitrary vertex. Let $\bf x$ denote an interior point of $\Omega$ and let $A_i(\bf x)$ denote the area of the triangle with vertices $\bf x$, ${\bf v}_i$ and ${\bf v}_{i+1}$, i.e., $A_i({\bf x})=|\Delta ({\bf x} {\bf v}_i {\bf v}_{i+1})|$, where, by convention, ${\bf v}_0:= {\bf v}_n$ and ${\bf v}_{n+1}:={\bf v}_1$. Let $B_i$ denote the area of the triangle with vertices ${\bf v}_{i-1}$, ${\bf v}_i$ and ${\bf v}_{i+1}$, i.e., $B_i=|\Delta ({\bf v}_{i-1} {\bf v}_i {\bf v}_{i+1})|$. We summarize the notation in Figure \ref{fig:notation}.
\begin{figure}[h]
\centering
\resizebox{11.6cm}{5cm}{\includegraphics{notation.png}}
\caption{(A): Notation for $A_i({\bf x})$. (B): Notation for $B_i$.}
\label{fig:notation}
\end{figure}
\medskip
Define the Wachspress weight function $w_i$ as the product of the area of the “boundary” triangle, formed by ${\bf v}_i$ and its two adjacent vertices, with the areas of the $n-2$ interior triangles, formed by the point ${\bf x}$ and consecutive pairs of vertices of the polygon (excluding the two interior triangles that contain the vertex ${\bf v}_i$), i.e.,
\begin{equation}
\label{wi}
\displaystyle w_i({\bf x}) = B_i \prod_{j \neq i,i-1} A_j(\bf x).
\end{equation}
After applying the standard normalization, Wachspress coordinates are then given by
\begin{equation}
\label{lambdai}
\displaystyle \lambda_i({\bf x}) = \frac{w_i({\bf x})}{\sum_{j=1}^n w_j({\bf x})}.
\end{equation}
An equivalent expression of \eqref{wi} for $w_i$ is given in \cite{Mey}; the main advantage of this alternative expression is that it is easy to implement and it shows that only the segment $\overline{{\bf x} {\bf v}_i}$ and its two adjacent angles $\alpha_i$ and $\delta_i$ are needed (see Figure \ref{fig:notation} (A)). Indeed, $w_i$ can be written as
\begin{equation}
\label{weights}
w_i({\bf x}) = \frac{\cot(\alpha_i)+\cot(\delta_i)}{\left\| {\bf x}-{\bf v}_i \right\|^2}
\end{equation}
where $\alpha_i=\angle\ {\bf x} {\bf v}_i {\bf v}_{i+1}$ and $\delta_i=\beta_i-\alpha_i$ with $\beta_i$ being the inner angle of $\Omega$ associated to ${\bf v}_i$ (see Figure \ref{fig:notation}). The evaluation of the Wachspress basis functions is carried out using elementary vector calculus operations: the angles $\alpha_i$ and $\delta_i$ are not explicitly computed; as suggested in \cite{Mey}, vector cross product and vector dot product formulas are used to find the cotangents.
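To make this concrete, here is a short sketch of the evaluation just described (our own illustration, not code from \cite{Mey}), assuming NumPy; it computes the weights \eqref{weights} for an interior point of a convex polygon with counterclockwise vertices and returns the normalized coordinates \eqref{lambdai}. The last two lines check linear precision (property (IV) below) on the unit square.
\begin{lstlisting}[language=Python]
import numpy as np

def wachspress(verts, x):
    # Wachspress coordinates of an interior point x of a convex polygon
    # with vertices listed counterclockwise, via the cotangent formula.
    v = np.asarray(verts, dtype=float)
    x = np.asarray(x, dtype=float)
    n = len(v)
    cross = lambda p, q: p[0]*q[1] - p[1]*q[0]
    w = np.empty(n)
    for i in range(n):
        d      = x - v[i]                         # segment from v_i to x
        e_prev = v[i-1] - v[i]
        e_next = v[(i+1) % n] - v[i]
        cot_alpha = (d @ e_next)/cross(e_next, d)     # cotangent of alpha_i
        cot_delta = (e_prev @ d)/cross(d, e_prev)     # cotangent of delta_i
        w[i] = (cot_alpha + cot_delta)/(d @ d)
    return w/w.sum()                              # normalization

square = np.array([(0., 0.), (1., 0.), (1., 1.), (0., 1.)])
lam = wachspress(square, (0.3, 0.6))
assert np.allclose(lam @ square, (0.3, 0.6))      # linear precision
\end{lstlisting}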
\medskip
Wachspress coordinates have the following well-known properties:
\begin{itemize}
\item[(I)] {\it (Non-negativeness)} $\lambda_i \geq 0$ on $\Omega$.
\item[(II)] {\it (Linear Completeness)} for any linear function $\ell :\Omega \to \mathbb R$, there holds $\ell = \sum_{i} \ell({\bf v}_i) \lambda_i$.
\item[] (Considering the linear map $\ell \equiv 1$ yields $\sum_{i} \lambda_i = 1$; this property is usually named {\it partition of unity}).
\item[(III)] {\it (Invariance)} If $L:\mathbb R^2 \to \mathbb R^2$ is a linear map and $S:\mathbb R^2 \to \mathbb R^2$ is a composition of rotation, translation and uniform scaling transformations, then $\lambda_i({\bf x})=\lambda_i^L(L({\bf x}))=\lambda_i^S(S({\bf x}))$, where $\lambda_i^F(F({\bf x}))$ denotes a set of barycentric coordinates on $F(\Omega)$.
\item[(IV)] {\it (Linear precision)} $\sum_{i} {\bf v}_i \lambda_i({\bf x})={\bf x}$, i.e., every point on $\Omega$ can be written as a convex combination of the vertices ${\bf v}_1, {\bf v}_2, \dots, {\bf v}_n$.
\item[(V)] {\it (Interpolation)} $\lambda_i({\bf v}_j)=\delta_{ij}$.
\end{itemize}
\subsection{Error estimate to the first-order Wachspress interpolation operator}
We only give a brief overview of some definitions and results which are of interest to us; for more details we refer to \cite{Das, GRB, Suk:2, Suk}.
Let $\{ \lambda_i \}$ be the Wachspress coordinates associated to $\Omega$ (see \eqref{lambdai}). Then, we can consider the first-order interpolation operator $I:H^2(\Omega) \to span \{ \lambda_i \} \subset H^1(\Omega)$ defined as
\begin{equation}
\label{defI}
\displaystyle I_{\Omega}u=Iu := \sum_{i} u({\bf v}_i) \lambda_i.
\end{equation}
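In code, and reusing the {\tt wachspress} sketch from earlier in this section (our illustration, assuming NumPy as {\tt np}), the interpolant is a one-liner:
\begin{lstlisting}[language=Python]
def wachspress_interpolant(verts, u_vals):
    # I u(x) = sum_i u(v_i) * lambda_i(x), with u_vals[i] = u(v_i)
    return lambda x: float(np.dot(u_vals, wachspress(verts, x)))
\end{lstlisting}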
Properties (I)-(V) of the Wachspress coordinates (more generally, generalized barycentric coordinates) guarantee that $I$ has the desirable properties of an interpolant. For this interpolant, called here the {\it first-order Wachspress interpolation operator}, the optimal convergence estimate
\begin{equation}
\label{errorestimate}
\left\| u-Iu \right\|_{H^1(\Omega)} \leq C diam(\Omega) |u|_{H^2(\Omega)}
\end{equation}
on polygons satisfying $[barp, MAC, melp]$ was proved \cite[Lemma 6]{GRB}.
\begin{rem}
\label{rem:red}
Thanks to {\rm Corollary \ref{cor:equiv}}, we can affirm that \eqref{errorestimate} holds on general convex polygons satisfying $[MAC, melp]$.
\end{rem}
\section{About sharpness on geometric restrictions}
\label{sharp}
\setcounter{equation}{0}
Since $[MAC, melp]$ are sufficient conditions to obtain \eqref{errorestimate}, we wonder if some of these requirements can be relaxed in order to obtain the error estimate. This question was partially answered in \cite{GRB}, where a counterexample, using pentagonal elements, is given in order to show that the $MAC$ cannot be removed. For the sake of completeness, in Counterexample \ref{necmac}, we give a family of quadrilateral elements which does not satisfy $MAC$ but verifies $melp$, and for which \eqref{errorestimate} does not hold. This example shows two things: $MAC$ is necessary in order to obtain the error estimate and, since every element in this family is regular in the sense given by \eqref{barp}, $barp$ is not enough to obtain \eqref{errorestimate}.
On the other hand, in Counterexample \ref{necmel}, we present a family of quadrilaterals which does not satisfy $melp$ but it verifies $MAC$ and \eqref{errorestimate} does not hold. Then, in order to obtain the interpolation error estimate, $melp$ is necessary.
In this sense, the question raised in \cite{GRB} about identifying sharp geometric restrictions under which the error estimate for the first-order Wachspress interpolation operator holds can be considered answered.
\begin{figure}[h]
\resizebox{12cm}{5cm}{\includegraphics{counterex.png}}
\caption{Schematic picture of $K_s$ and $T_s$ (hatched area) considered in Counterexample \ref{necmac}.}
\label{fig:cex1}
\end{figure}
\begin{cex}
\label{necmac}
Consider the convex quadrilateral $K_s$ with the vertices ${\bf v}_1=(0,0), {\bf v}_2=(1,0), {\bf v}_3=(s,s)$ and ${\bf v}_4=(0,1)$, where $1/2<s<1$. We will be interested in the case when $s$ tends to $1/2$ since then the family of quadrilaterals $\{ K_s \}$ does not satisfy the maximum angle condition although it satisfies $melp(1/2)$.
Consider the function $u({\bf x})=x(1-x)$. Since $u({\bf v}_1)=0=u({\bf v}_2)=u({\bf v}_4)$, we have
$$Iu({\bf x})=u({\bf v}_3) \lambda_3({\bf x})= s(1-s) \lambda_3({\bf x}).$$
A straightforward computation yields
$$\displaystyle \lambda_3({\bf x}) = \frac{(2s-1)x}{s} \frac{y}{(s-1)(x+y)+s},$$
therefore
$$\displaystyle \frac{\partial \lambda_3}{\partial y} = \frac{(2s-1)x}{s} \frac{(s-1)x+s}{[(s-1)(x+y)+s]^2}.$$
Consider the triangle $T_s$ with vertices $(1/4,3/4)$, $(1/2,1/2)$ and $(1/2, (3s-1)/(2s))$ {\rm (see Figure \ref{fig:cex1})}. Then, on $T_s$, we have $1/4 \leq x \leq 1/2$ and $1 \leq x+y \leq (4s-1)/(2s) < s/(1-s)$, so it follows that
$$0<(s-1)(x+y)+s \leq 2s-1
\quad \text{and} \quad
(s-1)x+s \geq (3s-1)/2$$
and hence
$$\displaystyle \frac{\partial \lambda_3}{\partial y} \geq \frac{(2s-1)}{4s} \frac{3s-1}{2(2s-1)^2}=\frac{3s-1}{8s(2s-1)}.$$
Then
$$|u-Iu|_{H^1(K_s)} \ge \left\| \frac{\partial (u-Iu)}{\partial y} \right\|_{L^2(K_s)} = \left\| \frac{\partial Iu}{\partial y} \right\|_{L^2(K_s)} = s(1-s)\left\| \frac{\partial \lambda_3}{\partial y} \right\|_{L^2(K_s)}$$
and, consequently,
$$|u-Iu|_{H^1(K_s)} \ge s(1-s)\left\| \frac{\partial \lambda_3}{\partial y} \right\|_{L^2(T_s)}.$$
Since $|T_s|=(2s-1)/(2^4s)$, we have
$$\left\| \frac{\partial \lambda_3}{\partial y} \right\|_{L^2(T_s)}^2 \geq \frac{(3s-1)^2}{(8s)^2(2s-1)^2}|T_s|=
\frac{(3s-1)^2}{2^{10}s^3(2s-1)} \to \infty$$
when $s \to 1/2^+$. Finally, as $|u|_{H^2(K_s)} = 2 |K_s|^{1/2} \leq 2$ and $diam(K_s)=\sqrt{2}$, we conclude that \eqref{errorestimate} cannot hold.
\end{cex}
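The closed form for $\lambda_3$ used above can be cross-checked against the general definition \eqref{lambdai}, e.g.\ with the {\tt wachspress} sketch from Section \ref{wach} (the values of $s$, $x$ and $y$ below are arbitrary):
\begin{lstlisting}[language=Python]
import numpy as np

s = 0.7
K_s = [(0., 0.), (1., 0.), (s, s), (0., 1.)]
x, y = 0.3, 0.4
lam = wachspress(K_s, (x, y))                    # sketch from Section 3
lam3 = (2*s - 1)*x/s * y/((s - 1)*(x + y) + s)   # closed form above
assert np.isclose(lam[2], lam3)
\end{lstlisting}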
\begin{figure}[h]
\centering
\resizebox{11.3cm}{5cm}{\includegraphics{counterex2.png}}
\caption{Schematic picture of $K_s$ and $D_s$ (hatched area) considered in Counterexample \ref{necmel}.}
\label{fig:cex2}
\end{figure}
\begin{cex}
\label{necmel}
Consider now the convex quadrilateral $K_s$ with the vertices ${\bf v}_1=(0,0), {\bf v}_2=(1,0), {\bf v}_3=(1-\sqrt[4]{s},s)$ and ${\bf v}_4=(0,s)$, where $0 < s < (1/2)^4$. Note that the family of quadrilaterals $\{ K_s \}$ satisfies $MAC(\pi/2+\tan^{-1}(2^3))$ $($independently of $s)$ but it does not satisfy the minimum edge length property when $s$ tends to zero since $\left\| {\bf v}_1-{\bf v}_4 \right\| = s \to 0^+$ and $diam(K_s) \sim 1$.
Consider the function $u({\bf x})=x^2$. Since $u({\bf v}_1)=0=u({\bf v}_4)$, we have, calling $a := 1-\sqrt[4]{s}$,
$$Iu({\bf x})=u({\bf v}_2) \lambda_2({\bf x})+u({\bf v}_3) \lambda_3({\bf x}) = \lambda_2({\bf x})+ a^2 \lambda_3({\bf x})$$
where
$$\lambda_2({\bf x})=\frac{x(s-y)}{s+y(a-1)} \quad \text{and} \quad
\lambda_3({\bf x})=\frac{xy}{s+y(a-1)}.$$
A simple computation yields
$$\frac{\partial (Iu-u)}{\partial y} = \frac{\partial Iu}{\partial y} = \frac{xsa(a-1)}{(s+y(a-1))^2}.$$
Let $D_s = K_s \cap \{ x \geq 1/2 \}$ $($see {\rm Figure \ref{fig:cex2})}. Since $a-1 <0$, we get $s+y(a-1) \leq s$ and then, on $D_s$, we have
$$\left| \frac{\partial (Iu-u)}{\partial y} \right| \geq \frac{xa(1-a)}{s} \geq \frac{a(1-a)}{2s}.$$
Therefore,
$$|Iu-u|_{H^1(K_s)}^2 \geq
\left\| \frac{\partial (Iu-u)}{\partial y} \right\|_{L^2(K_s)}^2 \geq
\left\| \frac{\partial (Iu-u)}{\partial y} \right\|_{L^2(D_s)}^2 \geq
\frac{a^2(1-a)^2}{4s^2} |D_s|,$$
and since $|D_s|=as/2$, we conclude that
$$|Iu-u|_{H^1(K_s)}^2 \geq
\frac{a^3(1-a)^2}{8s} =
\frac{(1-\sqrt[4]{s})^3}{8\sqrt{s}}$$
which tends to infinity when $s$ tends to zero. Finally, since $|u|_{H^2(K_s)} = 2 |K_s|^{1/2} \leq 2$ and $diam(K_s) \sim 1$, we conclude that \eqref{errorestimate} cannot hold.
\end{cex}
| {
"timestamp": "2017-07-03T02:06:28",
"yymm": "1606",
"arxiv_id": "1606.04975",
"language": "en",
"url": "https://arxiv.org/abs/1606.04975",
"abstract": "Geometric conditions on general polygons are given in [9] in order to guarantee the error estimate for interpolants built from generalized barycentric coordinates, and the question about identifying sharp geometric restrictions in this setting is proposed. In this work, we address the question when the construction is made by using Wachspress coordinates. We basically show that the imposed conditions: bounded aspect ratio property (barp), maximum angle condition (MAC) and minimum edge length property (melp) are actually equivalent to [MAC,melp], and if any of these conditions is not satisfied, then there is no guarantee that the error estimate is valid. In this sense, MAC and melp can be regarded as sharp geometric requirements in the Wachspress interpolation error estimate.",
"subjects": "Numerical Analysis (math.NA)",
"title": "Sharp geometric requirements in the Wachspress interpolation error estimate",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9908743644972026,
"lm_q2_score": 0.7981867729389246,
"lm_q1q2_score": 0.7909028113859299
} |
https://arxiv.org/abs/1711.10561 | Physics Informed Deep Learning (Part I): Data-driven Solutions of Nonlinear Partial Differential Equations | We introduce physics informed neural networks -- neural networks that are trained to solve supervised learning tasks while respecting any given law of physics described by general nonlinear partial differential equations. In this two part treatise, we present our developments in the context of solving two main classes of problems: data-driven solution and data-driven discovery of partial differential equations. Depending on the nature and arrangement of the available data, we devise two distinct classes of algorithms, namely continuous time and discrete time models. The resulting neural networks form a new class of data-efficient universal function approximators that naturally encode any underlying physical laws as prior information. In this first part, we demonstrate how these networks can be used to infer solutions to partial differential equations, and obtain physics-informed surrogate models that are fully differentiable with respect to all input coordinates and free parameters. | \section{Systematic studies}
\subsection{Continuous Time Models}
\subsubsection{Example (Burgers' Equation)}
As an example, let us consider Burgers' equation. This equation arises in various areas of applied mathematics, including fluid mechanics, nonlinear acoustics, gas dynamics, and traffic flow \cite{basdevant1986spectral}. It is a fundamental partial differential equation and can be derived from the Navier-Stokes equations for the velocity field by dropping the pressure gradient term.
For small values of the viscosity parameter, Burgers' equation can lead to shock formation that is notoriously hard to resolve by classical numerical methods. In one space dimension, Burgers' equation along with Dirichlet boundary conditions reads as
\begin{eqnarray}\label{eq:Burgers}
&& u_t + u u_x - (0.01/\pi) u_{xx} = 0,\ \ \ x \in [-1,1],\ \ \ t \in [0,1],\\
&& u(0,x) = -\sin(\pi x),\nonumber\\
&& u(t,-1) = u(t,1) = 0.\nonumber
\end{eqnarray}
Let us define $f(t,x)$ to be given by
\[
f := u_t + u u_x - (0.01/\pi) u_{xx},
\]
and proceed by approximating $u(t,x)$ by a deep neural network. To highlight the simplicity of implementing this idea we have included a Python code snippet using TensorFlow \cite{abadi2016tensorflow}, currently one of the most popular and well-documented open-source libraries for machine learning computations. To this end, $u(t,x)$ can simply be defined as
\begin{lstlisting}[language=Python]
def neural_net(H, weights, biases):
    num_layers = len(weights) + 1
    for l in range(0, num_layers - 2):
        W = weights[l]; b = biases[l]
        H = tf.tanh(tf.add(tf.matmul(H, W), b))  # hidden layer: tanh(HW + b)
    W = weights[-1]; b = biases[-1]
    H = tf.add(tf.matmul(H, W), b)               # linear output layer: HW + b
    return H

def u(t, x):
    u = neural_net(tf.concat([t, x], 1), weights, biases)
    return u
\end{lstlisting}
Correspondingly, the \emph{physics informed neural network} $f(t,x)$ takes the form
\begin{lstlisting}[language=Python]
def f(t, x):
    u_ = u(t, x)   # local name; avoids shadowing the function u defined above
    u_t = tf.gradients(u_, t)[0]
    u_x = tf.gradients(u_, x)[0]
    u_xx = tf.gradients(u_x, x)[0]
    f = u_t + u_*u_x - (0.01/np.pi)*u_xx   # assumes numpy is imported as np
    return f
\end{lstlisting}
The parameters shared between the neural networks $u(t,x)$ and $f(t,x)$ can be learned by minimizing the mean squared error loss
\begin{equation}\label{eq:MSE_Burgers_CT_inference}
MSE = MSE_u + MSE_f,
\end{equation}
where
\[
MSE_u = \frac{1}{N_u}\sum_{i=1}^{N_u} |u(t^i_u,x_u^i) - u^i|^2,
\]
and
\[
MSE_f = \frac{1}{N_f}\sum_{i=1}^{N_f}|f(t_f^i,x_f^i)|^2.
\]
Here, $\{t_u^i, x_u^i, u^i\}_{i=1}^{N_u}$ denote the initial and boundary training data on $u(t,x)$ and $\{t_f^i, x_f^i\}_{i=1}^{N_f}$ specify the collocations points for $f(t,x)$. The loss $MSE_u$ corresponds to the initial and boundary data while $MSE_f$ enforces the structure imposed by equation \eqref{eq:Burgers} at a finite set of collocation points.
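For concreteness, this loss can be assembled in a few lines in the same TensorFlow style as above; in this minimal sketch the placeholder names {\tt t\_u, x\_u, u\_obs} (training data) and {\tt t\_f, x\_f} (collocation points) are our own:
\begin{lstlisting}[language=Python]
u_pred = u(t_u, x_u)   # network prediction at the initial/boundary points
f_pred = f(t_f, x_f)   # PDE residual at the collocation points
MSE_u = tf.reduce_mean(tf.square(u_pred - u_obs))
MSE_f = tf.reduce_mean(tf.square(f_pred))
loss = MSE_u + MSE_f
\end{lstlisting}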
In all benchmarks considered in this work, the total number of training data $N_u$ is relatively small (a few hundred up to a few thousand points), and we chose to optimize all loss functions using L-BFGS, a quasi-Newton, full-batch gradient-based optimization algorithm \cite{liu1989limited}. For larger data-sets a more computationally efficient mini-batch setting can be readily employed using stochastic gradient descent and its modern variants \cite{goodfellow2016deep}. Despite the fact that this procedure is only guaranteed to converge to a local minimum, our empirical evidence indicates that, if the given partial differential equation is well-posed and its solution is unique, our method is capable of achieving good prediction accuracy given a sufficiently expressive neural network architecture and a sufficient number of collocation points $N_f$.
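One possible way to drive such a full-batch L-BFGS optimization from TensorFlow 1.x is through the SciPy-backed interface in {\tt tf.contrib.opt}; the sketch below assumes the {\tt loss} tensor from above and a hypothetical feed dictionary {\tt feed} holding the training and collocation points.
\begin{lstlisting}[language=Python]
optimizer = tf.contrib.opt.ScipyOptimizerInterface(
    loss, method='L-BFGS-B', options={'maxiter': 50000, 'ftol': 1e-10})
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    optimizer.minimize(sess, feed_dict=feed)   # full-batch quasi-Newton steps
\end{lstlisting}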
This general observation will be quantified by specific sensitivity studies that accompany the numerical examples presented in the following.
Figure~\ref{fig:Burgers_CT_inference} summarizes our results for the
data-driven solution of the Burgers equation. Specifically, given a set of $N_u = 100$ randomly distributed initial and boundary data, we learn the latent solution $u(t,x)$ by training all 3021 parameters of a 9-layer deep neural network using the mean squared error loss of \eqref{eq:MSE_Burgers_CT_inference}. Each hidden layer contained $20$ neurons and used a hyperbolic tangent activation function. In general, the neural network should be given sufficient approximation capacity in order to accommodate the anticipated complexity of $u(t,x)$. However, in this example, our choice aims to highlight the robustness of the proposed method with respect to the well known issue of over-fitting. Specifically, the term $MSE_f$ in equation \eqref{eq:MSE_Burgers_CT_inference} acts as a regularization mechanism that penalizes solutions that do not satisfy equation \eqref{eq:Burgers}. Therefore, a key property of {\em physics informed neural networks} is that they can be effectively trained using small data sets; a setting often encountered in the study of physical systems for which the cost of data acquisition may be prohibitive.
The top panel of Figure~\ref{fig:Burgers_CT_inference} shows the predicted spatio-temporal solution $u(t,x)$, along with the locations of the initial and boundary training data. We must underline that, unlike any classical numerical method for solving partial differential equations, this prediction is obtained without any sort of discretization of the spatio-temporal domain. The exact solution for this problem is analytically available \cite{basdevant1986spectral}, and the resulting prediction error is measured at $6.7 \cdot 10^{-4}$ in the relative $\mathbb{L}_2$-norm. Note that this error is about two orders of magnitude lower than the one reported in our previous work on data-driven solution of partial differential equation using Gaussian processes \cite{raissi2017numerical}. A more detailed assessment of the predicted solution is presented in the bottom panel of Figure~\ref{fig:Burgers_CT_inference}. In particular, we present a comparison between the exact and the predicted solutions at different time instants $t=0.25,0.50,0.75$.
Using only a handful of initial data, the {\em physics informed neural network} can accurately capture the intricate nonlinear behavior of the Burgers equation that leads to the development of a sharp internal layer around $t = 0.4$. The latter is notoriously hard to accurately resolve with classical numerical methods and requires a laborious spatio-temporal discretization of Eq.~\eqref{eq:Burgers}.
\begin{figure}
\includegraphics[width = 1.0\textwidth]{Burgers_CT_inference.pdf}
\caption{{\em Burgers equation:} {\it Top:} Predicted solution $u(t,x)$ along with the initial and boundary training data. In addition we are using 10,000 collocation points generated using a Latin Hypercube Sampling strategy. {\it Bottom:} Comparison of the predicted and exact solutions corresponding to the three temporal snapshots depicted by the dashed vertical lines in the top panel. The relative $\mathbb{L}_{2}$ error for this case is $6.7 \cdot 10^{-4}$, with model training taking approximately 60 seconds on one NVIDIA Titan X GPU.}
\label{fig:Burgers_CT_inference}
\end{figure}
To further analyze the performance of our method, we have performed a systematic study to quantify its predictive accuracy for different numbers of training and collocation points, as well as for different neural network architectures. In Table~\ref{tab:Burgers_CT_inference_1} we report the resulting relative $\mathbb{L}_{2}$ error for different numbers of initial and boundary training data $N_u$ and different numbers of collocation points $N_f$, while keeping the 9-layer network architecture fixed. The general trend shows increased prediction accuracy as the total number of training data $N_u$ is increased, given a sufficient number of collocation points $N_f$. This observation highlights the key strength of {\em physics informed neural networks}: by encoding the structure of the underlying physical law through the collocation points $N_f$ one can obtain a more accurate and data-efficient learning algorithm.
\footnote{Note that the case $N_f = 0$ corresponds to a standard neural network model, i.e. a neural network that does not take into account the underlying governing equation.}
Finally, Table~\ref{tab:Burgers_CT_inference_2} shows the resulting relative $\mathbb{L}_{2}$ error for different numbers of hidden layers and different numbers of neurons per layer, while the total number of training and collocation points is kept fixed at $N_u = 100$ and $N_f=10000$, respectively. As expected, we observe that as the number of layers and neurons is increased (hence the capacity of the neural network to approximate more complex functions), the predictive accuracy increases.
\begin{table}
\centering
\begin{tabular}{|l||cccccc|}
\hline
\diagbox{$N_u$}{$N_f$} & 2000 & 4000 & 6000 & 7000 & 8000 & 10000 \\ \hline\hline
20 & 2.9e-01 & 4.4e-01 & 8.9e-01 & 1.2e+00 & 9.9e-02 & 4.2e-02 \\
40 & 6.5e-02 & 1.1e-02 & 5.0e-01 & 9.6e-03 & 4.6e-01 & 7.5e-02 \\
60 & 3.6e-01 & 1.2e-02 & 1.7e-01 & 5.9e-03 & 1.9e-03 & 8.2e-03 \\
80 & 5.5e-03 & 1.0e-03 & 3.2e-03 & 7.8e-03 & 4.9e-02 & 4.5e-03 \\
100 & 6.6e-02 & 2.7e-01 & 7.2e-03 & 6.8e-04 & 2.2e-03 & 6.7e-04 \\
200 & 1.5e-01 & 2.3e-03 & 8.2e-04 & 8.9e-04 & 6.1e-04 & 4.9e-04 \\ \hline
\end{tabular}
\caption{{\em Burgers' equation:} Relative $\mathbb{L}_{2}$ error between the predicted and the exact solution $u(t,x)$ for different number of initial and boundary training data $N_u$, and different number of collocation points $N_f$. Here, the network architecture is fixed to 9 layers with 20 neurons per hidden layer.}
\label{tab:Burgers_CT_inference_1}
\end{table}
\begin{table}
\centering
\begin{tabular}{|c||ccc|}
\hline
\diagbox{Layers}{Neurons} & 10 & 20 & 40 \\ \hline\hline
2 & 7.4e-02 & 5.3e-02 & 1.0e-01 \\
4 & 3.0e-03 & 9.4e-04 & 6.4e-04 \\
6 & 9.6e-03 & 1.3e-03 & 6.1e-04 \\
8 & 2.5e-03 & 9.6e-04 & 5.6e-04 \\ \hline
\end{tabular}
\caption{{\em Burgers' equation:} Relative $\mathbb{L}_{2}$ error between the predicted and the exact solution $u(t,x)$ for different number of hidden layers, and different number of neurons per layer. Here, the total number of training and collocation points is fixed to
$N_u = 100$ and $N_f=10000$, respectively.}
\label{tab:Burgers_CT_inference_2}
\end{table}
\subsection{Discrete Time Models}
\subsubsection{Example (Burgers' Equation)}
To highlight the key features of the discrete time representation we revisit the problem of data-driven solution of the Burgers' equation. The nonlinear operator in equation \eqref{eq:RungeKutta_inference_rearranged} is given by
\[
\mathcal{N}[u^{n+c_j}] = u^{n+c_j} u^{n+c_j}_x - (0.01/\pi)u^{n+c_j}_{xx},
\]
and the shared parameters of the neural networks \eqref{eq:RungeKutta_PU_prior_inference} and \eqref{eq:RungeKutta_PI_prior_inference} can be learned by minimizing the sum of squared errors
\[
SSE = SSE_n + SSE_b
\]
where
\[
SSE_n = \sum_{j=1}^{q+1} \sum_{i=1}^{N_n} |u^n_j(x^{n,i}) - u^{n,i}|^2,
\]
and
\[
SSE_b = \sum_{i=1}^q \left(|u^{n+c_i}(-1)|^2 + |u^{n+c_i}(1)|^2\right) + |u^{n+1}(-1)|^2 + |u^{n+1}(1)|^2.
\]
Here, $\{x^{n,i}, u^{n,i}\}_{i=1}^{N_n}$ corresponds to the data at time $t^n$.
The Runge-Kutta scheme now allows us to infer the latent solution $u(t,x)$ in a sequential fashion. Starting from initial data $\{x^{n,i}, u^{n,i}\}_{i=1}^{N_n}$ at time $t^n$ and data at the domain boundaries $x = -1$ and $x = 1$, we can use the aforementioned loss function to train the networks of \eqref{eq:RungeKutta_PU_prior_inference}, \eqref{eq:RungeKutta_PI_prior_inference}, and predict the solution at time $t^{n+1}$. A Runge-Kutta time-stepping scheme would then use this prediction as initial data for the next step and proceed to train again and predict $u(t^{n+2},x)$, $u(t^{n+3},x)$, etc., one step at a time.
In classical numerical analysis, these steps are usually confined to be small due to stability constraints for explicit schemes or computational complexity constraints for implicit formulations \cite{iserles2009first}.
These constraints become more severe as the total number of Runge-Kutta stages $q$ is increased, and, for most problems of practical interest, one needs to take thousands to millions of such steps until the solution is resolved up to a desired final time. In sharp contrast to classical methods, here we can employ implicit Runge-Kutta schemes with an arbitrarily large number of stages at effectively no extra cost.\footnote{To be precise, it is only the number of parameters in the last layer of the neural network that increases linearly with the total number of stages.} This enables us to take very large time steps while retaining stability and high predictive accuracy, therefore allowing us to resolve the entire spatio-temporal solution in a single step.
The result of applying this process to the Burgers' equation is presented in Figure~\ref{fig:Burgers_DT_inference}. For illustration purposes, we start with a set of $N_n=250$ initial data at $t = 0.1$, and employ a {\em physics informed neural network} induced by an implicit Runge-Kutta scheme with 500 stages to predict the solution at time $t=0.9$ in a single step. The theoretical error estimates for this scheme predict a temporal error accumulation of $\mathcal{O}(\Delta{t}^{2q})$ \cite{iserles2009first}, which in our case translates into an error way below machine precision, i.e., $\Delta{t}^{2q} = 0.8^{1000} \approx 10^{-97}$. To our knowledge, this is the first time that an implicit Runge-Kutta scheme of such high order has ever been used. Remarkably, starting from smooth initial data at $t=0.1$ we can predict the nearly discontinuous solution at $t=0.9$ in a single time-step with a relative $\mathbb{L}_{2}$ error of $8.2 \cdot 10^{-4}$ (see Table~\ref{tab:Burgers_DT_inference_1}). This error is two orders of magnitude lower than the one reported in \cite{raissi2017numerical}, and it is entirely attributed to the neural network's capacity to approximate $u(t,x)$, as well as to the degree that the sum of squared errors loss allows interpolation of the training data. The network architecture used here consists of 4 layers with 50 neurons in each hidden layer.
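For a quick sanity check of this estimate, note that $\log_{10}\big(0.8^{1000}\big) = 1000 \log_{10} 0.8 \approx -96.9$, so that $\Delta{t}^{2q} \approx 1.2 \cdot 10^{-97}$, indeed some eighty orders of magnitude below double-precision machine epsilon ($\approx 2.2 \cdot 10^{-16}$).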
\begin{figure}
\includegraphics[width = 1.0\textwidth]{Burgers_DT_inference.pdf}
\caption{{\em Burgers' equation:} {\it Top:} Solution $u(t,x)$ along with the location of the initial training snapshot at $t=0.1$ and the final prediction snapshot at $t=0.9$. {\it Bottom:} Initial training data and final prediction at the snapshots depicted by the white vertical lines in the top panel. The relative $\mathbb{L}_{2}$ error for this case is $8.2 \cdot 10^{-4}$, with model training taking approximately 60 seconds on one NVIDIA Titan X GPU.}
\label{fig:Burgers_DT_inference}
\end{figure}
A detailed systematic study to quantify the effect of different network architectures is presented in Table~\ref{tab:Burgers_DT_inference_2}. By keeping the number of Runge-Kutta stages fixed to 500 and the time-step size to $\Delta{t}=0.8$, we have varied the number of hidden layers and the number of neurons per layer, and monitored the resulting relative $\mathbb{L}_{2}$ error for the predicted solution at time $t=0.9$. Evidently, as the neural network capacity is increased the predictive accuracy is enhanced.
\begin{table}
\centering
\begin{tabular}{|c||ccc|}
\hline
\diagbox{Layers}{Neurons} & 10 & 25 & 50 \\ \hline\hline
1 & 4.1e-02 & 4.1e-02 & 1.5e-01 \\
2 & 2.7e-03 & 5.0e-03 & 2.4e-03 \\
3 & 3.6e-03 & 1.9e-03 & 9.5e-04 \\ \hline
\end{tabular}
\caption{{\em Burgers' equation:} Relative final prediction error measured in the $\mathbb{L}_{2}$ norm for different numbers of hidden layers and neurons in each layer. Here, the number of Runge-Kutta stages is fixed to 500 and the time-step size to $\Delta{t}=0.8$.} \label{tab:Burgers_DT_inference_2}
\end{table}
The key parameters controlling the performance of our discrete time algorithm are the total number of Runge-Kutta stages $q$ and the time-step size $\Delta{t}$. In Table~\ref{tab:Burgers_DT_inference_1} we summarize the results of an extensive systematic study where we fix the network architecture to 4 hidden layers with 50 neurons per layer, and vary the number of Runge-Kutta stages $q$ and the time-step size $\Delta{t}$. Specifically, we see how cases with low numbers of stages fail to yield accurate results when the time-step size is large. For instance, the case $q=1$ corresponding to the classical trapezoidal rule, and the case $q=2$ corresponding to the $4^{\text{th}}$-order Gauss-Legendre method, cannot retain their predictive accuracy for time-steps larger than 0.2, thus mandating a solution strategy with multiple time-steps of small size. On the other hand, the ability to push the number of Runge-Kutta stages to 32 and even higher allows us to take very large time steps, and effectively resolve the solution in a single step without sacrificing the accuracy of our predictions. Moreover, numerical stability is not sacrificed either, as implicit Runge-Kutta methods are the only family of time-stepping schemes that remain A-stable regardless of their order, making them ideal for stiff problems \cite{iserles2009first}. These properties are unprecedented for an algorithm of such implementation simplicity, and illustrate one of the key highlights of our discrete time approach.
\begin{table}
\centering
\begin{tabular}{|l||cccc|}
\hline
\diagbox{$q$}{$\Delta{t}$} & 0.2 & 0.4 & 0.6 & 0.8 \\ \hline\hline
1 & 3.5e-02 & 1.1e-01 & 2.3e-01 & 3.8e-01 \\
2 & 5.4e-03 & 5.1e-02 & 9.3e-02 & 2.2e-01 \\
4 & 1.2e-03 & 1.5e-02 & 3.6e-02 & 5.4e-02 \\
8 & 6.7e-04 & 1.8e-03 & 8.7e-03 & 5.8e-02 \\
16 & 5.1e-04 & 7.6e-02 & 8.4e-04 & 1.1e-03 \\
32 & 7.4e-04 & 5.2e-04 & 4.2e-04 & 7.0e-04 \\
64 & 4.5e-04 & 4.8e-04 & 1.2e-03 & 7.8e-04 \\
100 & 5.1e-04 & 5.7e-04 & 1.8e-02 & 1.2e-03 \\
500 & 4.1e-04 & 3.8e-04 & 4.2e-04 & 8.2e-04 \\ \hline
\end{tabular}
\caption{{\em Burgers' equation:} Relative final prediction error measured in the $\mathbb{L}_{2}$ norm for different numbers of Runge-Kutta stages $q$ and time-step sizes $\Delta{t}$. Here, the network architecture is fixed to 4 hidden layers with 50 neurons in each layer.} \label{tab:Burgers_DT_inference_1}
\end{table}
\subsection{Continuous Time Models}
\subsubsection{Example (Burgers' Equation)}
As a first example, let us again consider the Burgers' equation. In one space dimension the equation reads as
\begin{eqnarray}
&& u_t + \lambda_1 u u_x - \lambda_2 u_{xx} = 0.
\end{eqnarray}
Let us define $f(t,x)$ to be given by
\[
f := u_t + \lambda_1 u u_x - \lambda_2 u_{xx},
\]
and proceed by approximating $u(t,x)$ by a deep neural network. This will result in the \emph{physics informed neural network} $f(t,x)$. The shared parameters of the neural networks $u(t,x)$ and $f(t,x)$ along with the parameters $\lambda = (\lambda_1, \lambda_2)$ of the differential operator can be learned by minimizing the mean squared error loss
\begin{equation}\label{eq:MSE_Burgers_CT_inference}
MSE = MSE_u + MSE_f,
\end{equation}
where
\[
MSE_u = \frac{1}{N}\sum_{i=1}^{N} |u(t^i_u,x_u^i) - u^i|^2,
\]
and
\[
MSE_f = \frac{1}{N}\sum_{i=1}^{N}|f(t_f^i,x_f^i)|^2.
\]
Here, $\{t_u^i, x_u^i, u^i\}_{i=1}^{N}$ denote the training data on $u(t,x)$. The loss $MSE_u$ corresponds to the training data on $u(t,x)$, while $MSE_f$ enforces the structure imposed by equation \eqref{eq:Burgers} at a finite set of collocation points, whose number and locations are taken to coincide with those of the training data.
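For concreteness, the following minimal TensorFlow sketch illustrates one way the unknown parameters can enter the residual network in this setting. Here \texttt{u} is assumed to be the neural network approximation of $u(t,x)$, the initial guesses are illustrative, and the exponential parametrization of $\lambda_2$ (which keeps the identified viscosity positive) is one possible choice rather than a requirement:
\begin{lstlisting}[language=Python]
import tensorflow as tf

# trainable PDE parameters (initial guesses are illustrative)
lambda_1 = tf.Variable([0.0], dtype=tf.float32)
lambda_2 = tf.Variable([-6.0], dtype=tf.float32)

def f(t, x):
    u_val = u(t, x)  # u: neural network approximation of u(t,x)
    u_t = tf.gradients(u_val, t)[0]
    u_x = tf.gradients(u_val, x)[0]
    u_xx = tf.gradients(u_x, x)[0]
    # exp(lambda_2) keeps the identified viscosity positive
    return u_t + lambda_1*u_val*u_x - tf.exp(lambda_2)*u_xx
\end{lstlisting}
The two parameters are then simply included among the variables updated by the optimizer, alongside the network weights and biases.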
To illustrate the effectiveness of our approach we have created a training data-set by randomly generating $N = 2000$ points across the entire spatio-temporal domain from the exact solution corresponding to $\lambda_1 = 1.0$ and $\lambda_2 = 0.01/\pi$. The locations of the training points are illustrated in the top panel of Figure~\ref{fig:Burgers_CT_identification}.
This data-set is then used to train a 9-layer deep neural network with 20 neurons per hidden layer by minimizing the mean squared error loss of \eqref{eq:MSE_Burgers_CT_inference} using the L-BFGS optimizer \cite{liu1989limited}. Upon training, the network parameters are calibrated to predict the entire solution $u(t,x)$, as well as the unknown parameters $\lambda = (\lambda_1, \lambda_2)$ that define the underlying dynamics. A visual assessment of the predictive accuracy of the {\em physics informed neural network} is given in the middle and bottom panels of Figure~\ref{fig:Burgers_CT_identification}. The network is able to identify the underlying partial differential equation with remarkable accuracy, even in the case where the scattered training data are corrupted with 1\% uncorrelated noise.
\begin{figure}
\includegraphics[width = 1.0\textwidth]{Burgers_CT_identification.pdf}
\caption{{\em Burgers' equation:} {\it Top:} Predicted solution $u(t,x)$ along with the training data. {\it Middle:} Comparison of the predicted and exact solutions corresponding to the three temporal snapshots depicted by the dashed vertical lines in the top panel. {\it Bottom:} Correct partial differential equation along with the identified one obtained by learning $\lambda_1, \lambda_2$.}
\label{fig:Burgers_CT_identification}
\end{figure}
To further scrutinize the performance of our algorithm, we have performed a systematic study with respect to the total number of training data, the noise corruption levels, and the neural network architecture. The results are summarized in Tables~\ref{tab:Burgers_CT_identification_1} and~\ref{tab:Burgers_CT_identification_2}. The key observation here is that the proposed methodology appears to be very robust with respect to noise levels in the data, and yields reasonable identification accuracy even for noise corruption up to 10\%. This enhanced robustness seems to greatly outperform competing approaches using Gaussian process regression as previously reported in \cite{raissi2017hidden}, as well as approaches relying on sparse regression that require relatively clean data for accurately computing numerical gradients \cite{brunton2016discovering}.
\begin{table}
\centering
\begin{tabular}{|l||cccc||cccc|} \hline
& \multicolumn{4}{c||}{\% error in $\lambda_1$} & \multicolumn{4}{c|} {\% error in $\lambda_2$} \\ \hline
\diagbox{$N_u$}{noise} & 0\% & 1\% & 5\% & 10\% & 0\% & 1\% & 5\% & 10\% \\ \hline\hline
500 & 0.131 & 0.518 & 0.118 & 1.319 & 13.885 & 0.483 & 1.708 & 4.058 \\
1000 & 0.186 & 0.533 & 0.157 & 1.869 & 3.719 & 8.262 & 3.481 & 14.544 \\
1500 & 0.432 & 0.033 & 0.706 & 0.725 & 3.093 & 1.423 & 0.502 & 3.156 \\
2000 & 0.096 & 0.039 & 0.190 & 0.101 & 0.469 & 0.008 & 6.216 & 6.391 \\ \hline
\end{tabular}
\caption{{\em Burgers' equation:} Percentage error in the identified parameters $\lambda_1$ and $\lambda_2$ for different numbers of training data $N_u$ corrupted by different noise levels. Here, the neural network architecture is kept fixed to 9 layers and 20 neurons per layer.} \label{tab:Burgers_CT_identification_1}
\end{table}
\begin{table}
\centering
\begin{tabular}{|c||ccc||ccc|} \hline
& \multicolumn{3}{c||}{\% error in $\lambda_1$} & \multicolumn{3}{c|} {\% error in $\lambda_2$} \\ \hline
\diagbox{Layers}{Neurons} & 10 & 20 & 40 & 10 & 20 & 40 \\ \hline\hline
2 & $11.696$ & $2.837$ & $1.679$ & $103.919$ & $67.055$ & $49.186$ \\
4 & $0.332$ & $0.109$ & $0.428$ & $4.721$ & $1.234$ & $6.170$ \\
6 & $0.668$ & $0.629$ & $0.118$ & $3.144$ & $3.123$ & $1.158$ \\
8 & $0.414$ & $0.141$ & $0.266$ & $8.459$ & $1.902$ & $1.552$ \\ \hline
\end{tabular}
\caption{{\em Burgers' equation:} Percentage error in the identified parameters $\lambda_1$ and $\lambda_2$ for different numbers of hidden layers and neurons per layer. Here, the training data is considered to be noise-free and fixed to $N = 2000$.} \label{tab:Burgers_CT_identification_2}
\end{table}
\subsection{Discrete Time Models}
\subsubsection{Example (Burgers' Equation)}
Let us again illustrate the key features of this method through the lens of the Burgers' equation. Recall the equation's form
\begin{equation}\label{eq:Burgers_DT_identification}
u_t + \lambda_1 u u_x - \lambda_2 u_{xx} = 0,
\end{equation}
and notice that the nonlinear operator in equation \eqref{eq:RungeKutta_identification_rearranged} is given by
\[
\mathcal{N}[u^{n+c_j}] = \lambda_1 u^{n+c_j} u^{n+c_j}_x - \lambda_2 u^{n+c_j}_{xx}.
\]
Given merely two training data snapshots, the shared parameters of the neural networks along with the parameters $\lambda = (\lambda_1, \lambda_2)$ can be learned by minimizing the sum of squared errors \eqref{eq:SSE_identification}. Here, we have created a training data-set comprising $N_n=199$ and $N_{n+1}=201$ spatial points by randomly sampling the exact solution at time instants $t^n=0.1$ and $t^{n+1}=0.9$, respectively. The training data, along with the predictions of the trained network, are shown in the top and middle panels of Figure~\ref{fig:Burgers_DT_identification}. The neural network architecture used here includes 4 hidden layers with 50 neurons each, while the number of Runge-Kutta stages is empirically chosen to yield a temporal error accumulation of the order of machine precision $\epsilon$ by setting\footnote{This is motivated by the theoretical error estimates for implicit Runge-Kutta schemes suggesting a truncation error of $\mathcal{O}(\Delta{t}^{2q})$ \cite{iserles2009first}.}
\begin{equation}\label{eq:Runge-Kutta_stages}
q = 0.5\log{\epsilon}/\log(\Delta{t}),
\end{equation}
where the time-step for this example is $\Delta{t}=0.8$. The bottom panel of Figure~\ref{fig:Burgers_DT_identification} summarizes the identified parameters $\lambda = (\lambda_1, \lambda_2)$ for the cases of noise-free data, as well as noisy data with 1\% uncorrelated noise corruption. For both cases, the proposed algorithm is able to learn the correct parameter values $\lambda_1=1.0$ and $\lambda_2=0.01/\pi$ with remarkable accuracy, despite the fact that the two data snapshots used for training are very far apart, and potentially describe different regimes of the underlying dynamics.
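As an illustration of this rule, a short computation (a minimal sketch, assuming double-precision machine epsilon as the target accuracy $\epsilon$) yields the corresponding number of stages:
\begin{lstlisting}[language=Python]
import numpy as np

eps = np.finfo(np.float64).eps   # ~2.2e-16
dt = 0.8
q = int(np.ceil(0.5*np.log(eps)/np.log(dt)))
print(q)   # 81 stages for dt = 0.8
\end{lstlisting}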
\begin{figure}
\includegraphics[width = 1.0\textwidth]{Burgers_DT_identification.pdf}
\caption{{\em Burgers' equation:} {\it Top:} Predicted solution $u(t,x)$ along with the temporal locations of the two training snapshots. {\it Middle:} Training data and exact solution corresponding to the two temporal snapshots depicted by the dashed vertical lines in the top panel. {\it Bottom:} Correct partial differential equation along with the identified one obtained by learning $\lambda_1, \lambda_2$.}
\label{fig:Burgers_DT_identification}
\end{figure}
A further sensitivity analysis is performed to quantify the accuracy of our predictions with respect to the gap between the training snapshots $\Delta{t}$, the noise levels in the training data, and the {\em physics informed neural network} architecture. As shown in Table~\ref{tab:Burgers_DT_identification_1}, the proposed algorithm is quite robust to both $\Delta{t}$ and the noise corruption levels, and it consistently returns reasonable estimates for the unknown parameters. This robustness is mainly attributed to the flexibility of the underlying implicit Runge-Kutta scheme to admit an arbitrarily high number of stages, allowing the data snapshots to be very far apart in time, while not compromising the accuracy with which the nonlinear dynamics of Eq.~\eqref{eq:Burgers_DT_identification} are resolved. This is the key highlight of our discrete time formulation for identification problems, setting it apart from competing approaches \cite{raissi2017hidden,brunton2016discovering}. Lastly, Table~\ref{tab:Burgers_DT_identification_2} presents the percentage error in the identified parameters, demonstrating the robustness of our estimates with respect to the underlying neural network architecture.
\begin{table}
\centering
\begin{tabular}{|l||cccc||cccc|} \hline
& \multicolumn{4}{c||}{\% error in $\lambda_1$} & \multicolumn{4}{c|} {\% error in $\lambda_2$} \\ \hline
\diagbox{$\Delta{t}$}{noise} & 0\% & 1\% & 5\% & 10\% & 0\% & 1\% & 5\% & 10\% \\ \hline\hline
0.2 & $0.002$ & $0.435$ & $6.073$ & $3.273$ & $0.151$ & $4.982$ & $59.314$ & $83.969$ \\
0.4 & $0.001$ & $0.119$ & $1.679$ & $2.985$ & $0.088$ & $2.816$ & $8.396$ & $8.377$ \\
0.6 & $0.002$ & $0.064$ & $2.096$ & $1.383$ & $0.090$ & $0.068$ & $3.493$ & $24.321$ \\
0.8 & $0.010$ & $0.221$ & $0.097$ & $1.233$ & $1.918$ & $3.215$ & $13.479$ & $1.621$ \\ \hline
\end{tabular}
\caption{{\em Burgers' equation:} Percentage error in the identified parameters $\lambda_1$ and $\lambda_2$ for different gap sizes $\Delta{t}$ between the two training snapshots and for different noise levels.} \label{tab:Burgers_DT_identification_1}
\end{table}
\begin{table}
\centering
\begin{tabular}{|c||ccc||ccc|} \hline
& \multicolumn{3}{c||}{\% error in $\lambda_1$} & \multicolumn{3}{c|} {\% error in $\lambda_2$} \\ \hline
\diagbox{Layers}{Neurons} & 10 & 25 & 50 & 10 & 25 & 50 \\ \hline\hline
1 & $1.868$ & $4.868$ & $1.960$ & $180.373$ & $237.463$ & $123.539$ \\
2 & $0.443$ & $0.037$ & $0.015$ & $29.474$ & $2.676$ & $1.561$ \\
3 & $0.123$ & $0.012$ & $0.004$ & $7.991$ & $1.906$ & $0.586$ \\
4 & $0.012$ & $0.020$ & $0.011$ & $1.125$ & $4.448$ & $2.014$ \\ \hline
\end{tabular}
\caption{{\em Burgers' equation:} Percentage error in the identified parameters $\lambda_1$ and $\lambda_2$ for different numbers of hidden layers and neurons in each layer.} \label{tab:Burgers_DT_identification_2}
\end{table}
\section{Introduction}
With the explosive growth of available data and computing resources, recent advances in machine learning and data analytics have yielded transformative results across diverse scientific disciplines, including image recognition \cite{krizhevsky2012imagenet}, natural language processing \cite{lecun2015deep}, cognitive science \cite{lake2015human}, and genomics \cite{alipanahi2015predicting}. However, more often than not, in the course of analyzing complex physical, biological or engineering systems, the cost of data acquisition is prohibitive, and we are inevitably faced with the challenge of drawing conclusions and making decisions under partial information. In this {\em small data} regime, the vast majority of state-of-the-art machine learning techniques (e.g., deep/convolutional/recurrent neural networks) lack robustness and fail to provide any guarantees of convergence.\\
At first sight, the task of training a deep learning algorithm to accurately identify a nonlinear map from a few -- potentially very high-dimensional -- input and output data pairs seems at best naive. Coming to our rescue, for many cases pertaining to the modeling of physical and biological systems, there exists a vast amount of prior knowledge that is currently not being utilized in modern machine learning practice. Be it principled physical laws that govern the time-dependent dynamics of a system, or some empirically validated rules or other domain expertise, this prior information can act as a regularization agent that constrains the space of admissible solutions to a manageable size (e.g., in incompressible fluid dynamics problems, by discarding any unrealistic flow solutions that violate the conservation of mass principle). In return, encoding such structured information into a learning algorithm results in amplifying the information content of the data that the algorithm sees, enabling it to quickly steer itself towards the right solution and generalize well even when only a few training examples are available.\\
The first glimpses of promise for exploiting structured prior information to construct data-efficient and physics-informed learning machines have already been showcased in the recent studies of \cite{raissi2017inferring, raissi2017machine, owhadi2015bayesian}. There, the authors employed Gaussian process regression \cite{Rasmussen06gaussianprocesses} to devise functional representations that are tailored to a given linear operator, and were able to accurately infer solutions and provide uncertainty estimates for several prototype problems in mathematical physics. Extensions to nonlinear problems were proposed in subsequent studies by Raissi {\em et al.} \cite{raissi2017numerical, raissi2017hidden} in the context of both inference and systems identification. Despite the flexibility and mathematical elegance of Gaussian processes in encoding prior information, the treatment of nonlinear problems introduces two important limitations. First, in \cite{raissi2017numerical,raissi2017hidden} the authors had to locally linearize any nonlinear terms in time, thus limiting the applicability of the proposed methods to discrete-time domains and compromising the accuracy of their predictions in strongly nonlinear regimes. Secondly, the Bayesian nature of Gaussian process regression requires certain prior assumptions that may limit the representation capacity of the model and give rise to robustness/brittleness issues, especially for nonlinear problems \cite{owhadi2015brittleness}.
\subsection{Problem setup and summary of contributions}
In this work we take a different approach by employing deep neural networks and leveraging their well-known capability as universal function approximators \cite{hornik1989multilayer}. In this setting, we can directly tackle nonlinear problems without the need for committing to any prior assumptions, linearization, or local time-stepping. We exploit recent developments in automatic differentiation \cite{baydin2015automatic} -- one of the most useful but perhaps underused techniques in scientific computing -- to differentiate neural networks with respect to their input coordinates and model parameters to obtain {\em physics informed neural networks}. Such neural networks are constrained to respect any symmetry, invariance, or conservation principles originating from the physical laws that govern the observed data, as modeled by general time-dependent and nonlinear partial differential equations. This simple yet powerful construction allows us to tackle a wide range of problems in computational science and introduces a potentially disruptive technology leading to the development of new data-efficient and physics-informed learning machines, new classes of numerical solvers for partial differential equations, as well as new data-driven approaches for model inversion and systems identification.\\
The general aim of this work is to set the foundations for a new paradigm in modeling and computation that enriches deep learning with the longstanding developments in mathematical physics. These developments are presented in the context of two main problem classes: data-driven solution and data-driven discovery of partial differential equations. To this end, let us consider parametrized and nonlinear partial differential equations of the general form
\begin{equation*}
u_t + \mathcal{N}[u;\lambda] = 0,
\end{equation*}
where $u(t,x)$ denotes the latent (hidden) solution and $\mathcal{N}[\cdot;\lambda]$ is a nonlinear operator parametrized by $\lambda$. This setup encapsulates a wide range of problems in mathematical physics including conservation laws, diffusion processes, advection-diffusion-reaction systems, and kinetic equations. As a motivating example, the one dimensional Burgers' equation \cite{basdevant1986spectral} corresponds to the case where $\mathcal{N}[u;\lambda] = \lambda_1 u u_x - \lambda_2 u_{xx}$ and $\lambda = (\lambda_1, \lambda_2)$. Here, the subscripts denote partial differentiation in either time or space. Given noisy measurements of the system, we are interested in the solution of two distinct problems. The first problem is that of predictive inference, filtering and smoothing, or data-driven solutions of partial differential equations \cite{raissi2017numerical, raissi2017inferring}, which states: given fixed model parameters $\lambda$, what can be said about the unknown hidden state $u(t,x)$ of the system? The second problem is that of learning, system identification, or data-driven discovery of partial differential equations \cite{raissi2017hidden,raissi2017machine, Rudye1602614}, stating: what are the parameters $\lambda$ that best describe the observed data?\\
In this first part of our two-part treatise, we focus on computing data-driven solutions to partial differential equations of the general form
\begin{eqnarray}\label{eq:PDE}
&&u_t + \mathcal{N}[u] = 0,\ x \in \Omega, \ t\in[0,T],
\end{eqnarray}
where $u(t,x)$ denotes the latent (hidden) solution, $\mathcal{N}[\cdot]$ is a nonlinear differential operator, and $\Omega$ is a subset of $\mathbb{R}^D$. In what follows, we put forth two distinct classes of algorithms, namely continuous and discrete time models, and highlight their properties and performance through the lens of different benchmark problems. All code and data-sets accompanying this manuscript are available at \url{https://github.com/maziarraissi/PINNs}.
\section{Continuous Time Models}
We define $f(t,x)$ to be given by the left-hand side of equation \eqref{eq:PDE}; i.e.,
\begin{equation}
f := u_t + \mathcal{N}[u],\label{eq:PDE_RHS}
\end{equation}
and proceed by approximating $u(t,x)$ by a deep neural network. This assumption along with equation \eqref{eq:PDE_RHS} result in a \emph{physics informed neural network} $f(t,x)$. This network can be derived by applying the chain rule for differentiating compositions of functions using automatic differentiation \cite{baydin2015automatic}.
\subsection{Example (Burgers' Equation)}
As an example, let us consider the Burgers' equation. This equation arises in various areas of applied mathematics, including fluid mechanics, nonlinear acoustics, gas dynamics, and traffic flow \cite{basdevant1986spectral}. It is a fundamental partial differential equation and can be derived from the Navier-Stokes equations for the velocity field by dropping the pressure gradient term. For small values of the viscosity parameter, Burgers' equation can lead to shock formation that is notoriously hard to resolve by classical numerical methods. In one space dimension, the Burgers' equation along with Dirichlet boundary conditions reads as
\begin{eqnarray}\label{eq:Burgers}
&& u_t + u u_x - (0.01/\pi) u_{xx} = 0,\ \ \ x \in [-1,1],\ \ \ t \in [0,1],\\
&& u(0,x) = -\sin(\pi x),\nonumber\\
&& u(t,-1) = u(t,1) = 0.\nonumber
\end{eqnarray}
Let us define $f(t,x)$ to be given by
\[
f := u_t + u u_x - (0.01/\pi) u_{xx},
\]
and proceed by approximating $u(t,x)$ by a deep neural network. To highlight the simplicity in implementing this idea we have included a Python code snippet using TensorFlow \cite{abadi2016tensorflow}, currently one of the most popular and well-documented open-source libraries for machine learning computations. To this end, $u(t,x)$ can be simply defined as
\begin{lstlisting}[language=Python]
def u(t, x):
    # neural_net, weights, and biases are assumed defined elsewhere
    u = neural_net(tf.concat([t,x],1), weights, biases)
    return u
\end{lstlisting}
Correspondingly, the \emph{physics informed neural network} $f(t,x)$ takes the form
\begin{lstlisting}[language=Python]
def f(t, x):
    u_val = u(t, x)  # renamed to avoid shadowing the function u above
    u_t = tf.gradients(u_val, t)[0]
    u_x = tf.gradients(u_val, x)[0]
    u_xx = tf.gradients(u_x, x)[0]
    # np.pi from NumPy, since TensorFlow defines no pi constant
    f = u_t + u_val*u_x - (0.01/np.pi)*u_xx
    return f
\end{lstlisting}
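For completeness, a minimal usage sketch in the same style (the placeholder names are illustrative) evaluates this residual on a batch of collocation points:
\begin{lstlisting}[language=Python]
t_f = tf.placeholder(tf.float32, shape=[None, 1])
x_f = tf.placeholder(tf.float32, shape=[None, 1])
f_pred = f(t_f, x_f)   # Burgers' residual at the collocation points
\end{lstlisting}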
The shared parameters between the neural networks $u(t,x)$ and $f(t,x)$ can be learned by minimizing the mean squared error loss
\begin{equation}\label{eq:MSE_Burgers_CT_inference}
MSE = MSE_u + MSE_f,
\end{equation}
where
\[
MSE_u = \frac{1}{N_u}\sum_{i=1}^{N_u} |u(t^i_u,x_u^i) - u^i|^2,
\]
and
\[
MSE_f = \frac{1}{N_f}\sum_{i=1}^{N_f}|f(t_f^i,x_f^i)|^2.
\]
Here, $\{t_u^i, x_u^i, u^i\}_{i=1}^{N_u}$ denote the initial and boundary training data on $u(t,x)$ and $\{t_f^i, x_f^i\}_{i=1}^{N_f}$ specify the collocation points for $f(t,x)$. The loss $MSE_u$ corresponds to the initial and boundary data while $MSE_f$ enforces the structure imposed by equation \eqref{eq:Burgers} at a finite set of collocation points.\\
In all benchmarks considered in this work, the total number of training data $N_u$ is relatively small (a few hundred up to a few thousand points), and we chose to optimize all loss functions using L-BFGS, a quasi-Newton, full-batch gradient-based optimization algorithm \cite{liu1989limited}. For larger data-sets a more computationally efficient mini-batch setting can be readily employed using stochastic gradient descent and its modern variants \cite{goodfellow2016deep,kingma2014adam}. Despite the fact that there is no theoretical guarantee that this procedure converges to a global minimum, our empirical evidence indicates that, if the given partial differential equation is well-posed and its solution is unique, our method is capable of achieving good prediction accuracy given a sufficiently expressive neural network architecture and a sufficient number of collocation points $N_f$. This general observation deeply relates to the resulting optimization landscape induced by the mean squared error loss of equation \eqref{eq:MSE_Burgers_CT_inference}, and defines an open question for research that is in sync with recent theoretical developments in deep learning \cite{choromanska2015loss,shwartz2017opening}.
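Putting the pieces together, the following minimal sketch assembles the loss of equation \eqref{eq:MSE_Burgers_CT_inference} and invokes L-BFGS through the SciPy interface of TensorFlow 1.x's \texttt{contrib} module; the placeholder names extend the snippets above, and \texttt{feed\_dict}, mapping placeholders to the training arrays, is assumed to be defined elsewhere:
\begin{lstlisting}[language=Python]
t_u = tf.placeholder(tf.float32, shape=[None, 1])
x_u = tf.placeholder(tf.float32, shape=[None, 1])
u_data = tf.placeholder(tf.float32, shape=[None, 1])

MSE_u = tf.reduce_mean(tf.square(u(t_u, x_u) - u_data))
MSE_f = tf.reduce_mean(tf.square(f(t_f, x_f)))
loss = MSE_u + MSE_f

# full-batch quasi-Newton optimization, as described above
optimizer = tf.contrib.opt.ScipyOptimizerInterface(
    loss, method='L-BFGS-B', options={'maxiter': 50000})
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    optimizer.minimize(sess, feed_dict=feed_dict)
\end{lstlisting}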
Here, we will test the robustness of the proposed methodology using a series of systematic sensitivity studies that accompany the numerical results presented in the following.\\
Figure \ref{fig:Burgers_CT_inference} summarizes our results for the data-driven solution of the Burgers' equation. Specifically, given a set of $N_u = 100$ randomly distributed initial and boundary data, we learn the latent solution $u(t,x)$ by training all $3021$ parameters of a 9-layer deep neural network using the mean squared error loss of \eqref{eq:MSE_Burgers_CT_inference}. Each hidden layer contained $20$ neurons and employed a hyperbolic tangent activation function. In general, the neural network should be given sufficient approximation capacity in order to accommodate the anticipated complexity of $u(t,x)$. However, in this example, our choice aims to highlight the robustness of the proposed method with respect to the well-known issue of over-fitting. Specifically, the term $MSE_f$ in equation \eqref{eq:MSE_Burgers_CT_inference} acts as a regularization mechanism that penalizes solutions that do not satisfy equation \eqref{eq:Burgers}. Therefore, a key property of {\em physics informed neural networks} is that they can be effectively trained using small data sets; a setting often encountered in the study of physical systems for which the cost of data acquisition may be prohibitive.\\
The top panel of Figure \ref{fig:Burgers_CT_inference} shows the predicted spatio-temporal solution $u(t,x)$, along with the locations of the initial and boundary training data. We must underline that, unlike any classical numerical method for solving partial differential equations, this prediction is obtained without any sort of discretization of the spatio-temporal domain. The exact solution for this problem is analytically available \cite{basdevant1986spectral}, and the resulting prediction error is measured at $6.7 \cdot 10^{-4}$ in the relative $\mathcal{L}_2$-norm. Note that this error is about two orders of magnitude lower than the one reported in our previous work on data-driven solution of partial differential equations using Gaussian processes \cite{raissi2017numerical}. A more detailed assessment of the predicted solution is presented in the bottom panel of figure \ref{fig:Burgers_CT_inference}. In particular, we present a comparison between the exact and the predicted solutions at different time instants $t=0.25,0.50,0.75$. Using only a handful of initial and boundary data, the {\em physics informed neural network} can accurately capture the intricate nonlinear behavior of the Burgers' equation that leads to the development of a sharp internal layer around $t = 0.4$. The latter is notoriously hard to accurately resolve with classical numerical methods and requires a laborious spatio-temporal discretization of equation \eqref{eq:Burgers}.\\
\begin{figure}[!t]
\includegraphics[width = 1.0\textwidth]{Burgers_CT_inference.pdf}
\caption{{\em Burgers' equation:} {\it Top:} Predicted solution $u(t,x)$ along with the initial and boundary training data. In addition we are using 10,000 collocation points generated using a Latin Hypercube Sampling strategy. {\it Bottom:} Comparison of the predicted and exact solutions corresponding to the three temporal snapshots depicted by the white vertical lines in the top panel. The relative $\mathcal{L}_{2}$ error for this case is $6.7 \cdot 10^{-4}$. Model training took approximately 60 seconds on a single NVIDIA Titan X GPU card.}
\label{fig:Burgers_CT_inference}
\end{figure}
To further analyze the performance of our method, we have performed the following systematic studies to quantify its predictive accuracy for different numbers of training and collocation points, as well as for different neural network architectures. In table \ref{tab:Burgers_CT_inference_1} we report the resulting relative $\mathcal{L}_{2}$ error for different numbers of initial and boundary training data $N_u$ and collocation points $N_f$, while keeping the 9-layer network architecture fixed. The general trend shows increased prediction accuracy as the total number of training data $N_u$ is increased, given a sufficient number of collocation points $N_f$. This observation highlights a key strength of {\em physics informed neural networks}: by encoding the structure of the underlying physical law through the collocation points $N_f$, one can obtain a more accurate and data-efficient learning algorithm.\footnote{Note that the case $N_f = 0$ corresponds to a standard neural network model, i.e., a neural network that does not take into account the underlying governing equation.} Finally, table \ref{tab:Burgers_CT_inference_2} shows the resulting relative $\mathcal{L}_{2}$ error for different numbers of hidden layers and neurons per layer, while the total number of training and collocation points is kept fixed to $N_u = 100$ and $N_f=10,000$, respectively. As expected, we observe that as the number of layers and neurons is increased (and hence the capacity of the neural network to approximate more complex functions), the predictive accuracy is increased.
\begin{table}[!t]
\centering
\begin{tabular}{|l||cccccc|}
\hline
\diagbox{$N_u$}{$N_f$} & 2000 & 4000 & 6000 & 7000 & 8000 & 10000 \\ \hline\hline
20 & 2.9e-01 & 4.4e-01 & 8.9e-01 & 1.2e+00 & 9.9e-02 & 4.2e-02 \\
40 & 6.5e-02 & 1.1e-02 & 5.0e-01 & 9.6e-03 & 4.6e-01 & 7.5e-02 \\
60 & 3.6e-01 & 1.2e-02 & 1.7e-01 & 5.9e-03 & 1.9e-03 & 8.2e-03 \\
80 & 5.5e-03 & 1.0e-03 & 3.2e-03 & 7.8e-03 & 4.9e-02 & 4.5e-03 \\
100 & 6.6e-02 & 2.7e-01 & 7.2e-03 & 6.8e-04 & 2.2e-03 & 6.7e-04 \\
200 & 1.5e-01 & 2.3e-03 & 8.2e-04 & 8.9e-04 & 6.1e-04 & 4.9e-04 \\ \hline
\end{tabular}
\caption{{\em Burgers' equation:} Relative $\mathcal{L}_{2}$ error between the predicted and the exact solution $u(t,x)$ for different numbers of initial and boundary training data $N_u$ and collocation points $N_f$. Here, the network architecture is fixed to 9 layers with 20 neurons per hidden layer.} \label{tab:Burgers_CT_inference_1}
\end{table}
\begin{table}[!t]
\centering
\begin{tabular}{|c||ccc|}
\hline
\diagbox{Layers}{Neurons} & 10 & 20 & 40 \\ \hline\hline
2 & 7.4e-02 & 5.3e-02 & 1.0e-01 \\
4 & 3.0e-03 & 9.4e-04 & 6.4e-04 \\
6 & 9.6e-03 & 1.3e-03 & 6.1e-04 \\
8 & 2.5e-03 & 9.6e-04 & 5.6e-04 \\ \hline
\end{tabular}
\caption{{\em Burgers' equation:} Relative $\mathcal{L}_{2}$ error between the predicted and the exact solution $u(t,x)$ for different numbers of hidden layers and neurons per layer. Here, the total number of training and collocation points is fixed to $N_u = 100$ and $N_f=10,000$, respectively.} \label{tab:Burgers_CT_inference_2}
\end{table}
\subsection{Example (Schr\"{o}dinger Equation)} \label{sec:schrodinger_CT}
This example aims to highlight the ability of our method to handle periodic boundary conditions, complex-valued solutions, as well as different types of nonlinearities in the governing partial differential equations. The one-dimensional nonlinear Schr\"{o}dinger equation is a classical field equation that is used to study quantum mechanical systems, including nonlinear wave propagation in optical fibers and/or waveguides, Bose-Einstein condensates, and plasma waves. In optics, the nonlinear term arises from the intensity dependent index of refraction of a given material. Similarly, the nonlinear term for Bose-Einstein condensates is a result of the mean-field interactions of an interacting $N$-body system. The nonlinear Schr\"{o}dinger equation along with periodic boundary conditions is given by
\begin{eqnarray}\label{eq:Schrodinger}
&& i h_t + 0.5 h_{xx} + |h|^2 h = 0,\ \ \ x \in [-5, 5],\ \ \ t \in [0, \pi/2],\\
&& h(0,x) = 2\ \text{sech}(x),\nonumber\\
&& h(t,-5) = h(t, 5),\nonumber\\
&& h_x(t,-5) = h_x(t, 5),\nonumber
\end{eqnarray}
where $h(t,x)$ is the complex-valued solution. Let us define $f(t,x)$ to be given by
\[
f := i h_t + 0.5 h_{xx} + |h|^2 h,
\]
and proceed by placing a complex-valued neural network prior on $h(t,x)$. In fact, if $u$ denotes the real part of $h$ and $v$ is the imaginary part, we are placing a multi-output neural network prior on $h(t,x) = \begin{bmatrix}
u(t,x) & v(t,x)
\end{bmatrix}$. This will result in the complex-valued (multi-output) \emph{physics informed neural network} $f(t,x)$. The shared parameters of the neural networks $h(t,x)$ and $f(t,x)$ can be learned by minimizing the mean squared error loss
\begin{equation}\label{eq:MSE_Schrodinger}
MSE = MSE_0 + MSE_b + MSE_f,
\end{equation}
where
\[
MSE_0 = \frac{1}{N_0}\sum_{i=1}^{N_0} |h(0,x_0^i) - h^i_0|^2,
\]
\[
MSE_b = \frac{1}{N_b}\sum_{i=1}^{N_b} \left(|h^i(t^i_b,-5) - h^i(t^i_b,5)|^2 + |h^i_x(t^i_b,-5) - h^i_x(t^i_b,5)|^2\right),
\]
and
\[
MSE_f = \frac{1}{N_f}\sum_{i=1}^{N_f}|f(t_f^i,x_f^i)|^2.
\]
Here, $\{x_0^i, h^i_0\}_{i=1}^{N_0}$ denotes the initial data, $\{t^i_b\}_{i=1}^{N_b}$ corresponds to the collocation points on the boundary, and $\{t_f^i,x_f^i\}_{i=1}^{N_f}$ represents the collocation points on $f(t,x)$. Consequently, $MSE_0$ corresponds to the loss on the initial data, $MSE_b$ enforces the periodic boundary conditions, and $MSE_f$ penalizes the Schr\"{o}dinger equation not being satisfied on the collocation points.\\
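In terms of implementation, writing $h = u + iv$ and separating real and imaginary parts, the residual splits into $f_u = -v_t + 0.5\,u_{xx} + (u^2+v^2)\,u$ and $f_v = u_t + 0.5\,v_{xx} + (u^2+v^2)\,v$. A minimal sketch in the same TensorFlow style is given below; the two-output network \texttt{uv\_net} (together with its \texttt{weights} and \texttt{biases}) is a hypothetical stand-in for the multi-output prior on $h(t,x)$:
\begin{lstlisting}[language=Python]
def f_uv(t, x):
    uv = uv_net(tf.concat([t,x],1), weights, biases)
    u, v = uv[:, 0:1], uv[:, 1:2]
    u_t = tf.gradients(u, t)[0]; v_t = tf.gradients(v, t)[0]
    u_x = tf.gradients(u, x)[0]; v_x = tf.gradients(v, x)[0]
    u_xx = tf.gradients(u_x, x)[0]; v_xx = tf.gradients(v_x, x)[0]
    sq = u**2 + v**2
    f_u = -v_t + 0.5*u_xx + sq*u   # real part of the residual
    f_v =  u_t + 0.5*v_xx + sq*v   # imaginary part of the residual
    return f_u, f_v
\end{lstlisting}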
In order to assess the accuracy of our method, we have simulated equation \eqref{eq:Schrodinger} using conventional spectral methods to create a high-resolution data set. Specifically, starting from an initial state $h(0,x) = 2\ \text{sech}(x)$ and assuming periodic boundary conditions $h(t,-5) = h(t,5)$ and $h_x(t,-5) = h_x(t,5)$, we have integrated equation \eqref{eq:Schrodinger} up to a final time $t=\pi/2$ using the Chebfun package \cite{driscoll2014chebfun} with a spectral Fourier discretization with 256 modes and a fourth-order explicit Runge-Kutta temporal integrator with time-step $\Delta{t} = \pi/2 \cdot 10^{-6}$. Under our data-driven setting, all we observe are measurements $\{x_0^i, h^i_0\}_{i=1}^{N_0}$ of the latent function $h(t,x)$ at time $t=0$. In particular, the training set consists of a total of $N_0 = 50$ data points on $h(0,x)$ randomly parsed from the full high-resolution data-set, as well as $N_b = 50$ randomly sampled collocation points $\{t^i_b\}_{i=1}^{N_b}$ for enforcing the periodic boundaries. Moreover, we have assumed $N_f=20,000$ randomly sampled collocation points used to enforce equation \eqref{eq:Schrodinger} inside the solution domain. All randomly sampled point locations were generated using a space filling Latin Hypercube Sampling strategy \cite{stein1987large}.\\
Here our goal is to infer the entire spatio-temporal solution $h(t,x)$ of the Schr\"{o}dinger equation (\ref{eq:Schrodinger}). We chose to jointly represent the latent function $h(t,x) = [u(t,x)\ v(t,x)]$ using a 5-layer deep neural network with $100$ neurons per layer and a hyperbolic tangent activation function. Figure \ref{fig:NLS} summarizes the results of our experiment. Specifically, the top panel of figure \ref{fig:NLS} shows the magnitude of the predicted spatio-temporal solution $|h(t,x)|=\sqrt{u^2(t,x) + v^2(t,x)}$, along with the locations of the initial and boundary training data. The resulting prediction error is validated against the test data for this problem, and is measured at $1.97 \cdot 10^{-3}$ in the relative $\mathcal{L}_2$-norm. A more detailed assessment of the predicted solution is presented in the bottom panel of Figure~\ref{fig:NLS}. In particular, we present a comparison between the exact and the predicted solutions at different time instants $t=0.59,0.79,0.98$. Using only a handful of initial data, the {\em physics informed neural network} can accurately capture the intricate nonlinear behavior of the Schr\"{o}dinger equation.\\
\begin{figure}[!t]
\includegraphics[width = 1.0\textwidth]{NLS.pdf}
\caption{{\em Shr\"{o}dinger equation:} {\it Top:} Predicted solution $|h(t,x)|$ along with the initial and boundary training data. In addition we are using 20,000 collocation points generated using a Latin Hypercube Sampling strategy. {\it Bottom:} Comparison of the predicted and exact solutions corresponding to the three temporal snapshots depicted by the dashed vertical lines in the top panel. The relative $\mathcal{L}_{2}$ error for this case is $1.97 \cdot 10^{-3}$.}
\label{fig:NLS}
\end{figure}
One potential limitation of the continuous time neural network models considered so far stems from the need to use a large number of collocation points $N_f$ in order to enforce physics informed constraints in the entire spatio-temporal domain. Although this poses no significant issues for problems in one or two spatial dimensions, it may introduce a severe bottleneck in higher dimensional problems, as the total number of collocation points needed to globally enforce a physics informed constraint (i.e., in our case a partial differential equation) will increase exponentially. In the next section, we put forth a different approach that circumvents the need for collocation points by introducing a more structured neural network representation leveraging the classical Runge-Kutta time-stepping schemes \cite{iserles2009first}.
\section{Discrete Time Models}\label{sec:DT_models}
Let us apply the general form of Runge-Kutta methods with $q$ stages \cite{iserles2009first} to equation (\ref{eq:PDE}) and obtain
\begin{equation}\label{eq:RungeKutta}
\arraycolsep=1.0pt\def\arraystretch{1.5}
\begin{array}{ll}
u^{n+c_i} = u^n - \Delta t \sum_{j=1}^q a_{ij} \mathcal{N}[u^{n+c_j}], \ \ i=1,\ldots,q,\\
u^{n+1} = u^{n} - \Delta t \sum_{j=1}^q b_j \mathcal{N}[u^{n+c_j}].
\end{array}
\end{equation}
Here, $u^{n+c_j}(x) = u(t^n + c_j \Delta t, x)$ for $j=1, \ldots, q$. This general form encapsulates both implicit and explicit time-stepping schemes, depending on the choice of the parameters $\{a_{ij},b_j,c_j\}$. Equations (\ref{eq:RungeKutta}) can be equivalently expressed as
\begin{equation}
\arraycolsep=1.0pt\def\arraystretch{1.5}
\begin{array}{ll}
u^{n} = u^n_i, \ \ i=1,\ldots,q,\\
u^n = u^n_{q+1},
\end{array}
\end{equation}
where
\begin{equation}\label{eq:RungeKutta_inference_rearranged}
\arraycolsep=1.0pt\def\arraystretch{1.5}
\begin{array}{ll}
u^n_i := u^{n+c_i} + \Delta t \sum_{j=1}^q a_{ij} \mathcal{N}[u^{n+c_j}], \ \ i=1,\ldots,q,\\
u^n_{q+1} := u^{n+1} + \Delta t \sum_{j=1}^q b_j \mathcal{N}[u^{n+c_j}].
\end{array}
\end{equation}
We proceed by placing a multi-output neural network prior on
\begin{equation}\label{eq:RungeKutta_PU_prior_inference}
\begin{bmatrix}
u^{n+c_1}(x), \ldots, u^{n+c_q}(x), u^{n+1}(x)
\end{bmatrix}.
\end{equation}
This prior assumption along with equations (\ref{eq:RungeKutta_inference_rearranged}) result in a \emph{physics informed neural network} that takes $x$ as an input and outputs
\begin{equation}\label{eq:RungeKutta_PI_prior_inference}
\begin{bmatrix}
u^n_1(x), \ldots, u^n_q(x), u^n_{q+1}(x)
\end{bmatrix}.
\end{equation}
\subsection{Example (Burgers' Equation)}
To highlight the key features of the discrete time representation we revisit the problem of data-driven solution of the Burgers' equation. For this case, the nonlinear operator in equation \eqref{eq:RungeKutta_inference_rearranged} is given by
\[
\mathcal{N}[u^{n+c_j}] = u^{n+c_j} u^{n+c_j}_x - (0.01/\pi)u^{n+c_j}_{xx},
\]
and the shared parameters of the neural networks \eqref{eq:RungeKutta_PU_prior_inference} and \eqref{eq:RungeKutta_PI_prior_inference} can be learned by minimizing the sum of squared errors
\begin{equation}\label{eq:SSE_Burgers_DT_inference}
SSE = SSE_n + SSE_{b},
\end{equation}
where
\[
SSE_n = \sum_{j=1}^{q+1} \sum_{i=1}^{N_n} |u^n_j(x^{n,i}) - u^{n,i}|^2,
\]
and
\[
SSE_b = \sum_{i=1}^q \left(|u^{n+c_i}(-1)|^2 + |u^{n+c_i}(1)|^2\right) + |u^{n+1}(-1)|^2 + |u^{n+1}(1)|^2.
\]
Here, $\{x^{n,i}, u^{n,i}\}_{i=1}^{N_n}$ corresponds to the data at time $t^n$. The Runge-Kutta scheme now allows us to infer the latent solution $u(t,x)$ in a sequential fashion. Starting from initial data $\{x^{n,i}, u^{n,i}\}_{i=1}^{N_n}$ at time $t^n$ and data at the domain boundaries $x = -1$ and $x = 1$, we can use the aforementioned loss function \eqref{eq:SSE_Burgers_DT_inference} to train the networks of \eqref{eq:RungeKutta_PU_prior_inference}, \eqref{eq:RungeKutta_PI_prior_inference}, and predict the solution at time $t^{n+1}$. A Runge-Kutta time-stepping scheme would then use this prediction as initial data for the next step and proceed to train again and predict $u(t^{n+2},x)$, $u(t^{n+3},x)$, etc., one step at a time.\\
In classical numerical analysis, these steps are usually confined to be small due to stability constraints for explicit schemes or computational complexity constraints for implicit formulations \cite{iserles2009first}. These constraints become more severe as the total number of Runge-Kutta stages $q$ is increased, and, for most problems of practical interest, one needs to take thousands to millions of such steps until the solution is resolved up to a desired final time. In sharp contrast to classical methods, here we can employ implicit Runge-Kutta schemes with an arbitrarily large number of stages at effectively no extra cost.\footnote{To be precise, it is only the number of parameters in the last layer of the neural network that increases linearly with the total number of stages.} This enables us to take very large time steps while retaining stability and high predictive accuracy, therefore allowing us to resolve the entire spatio-temporal solution in a single step.\\
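To put the preceding footnote in perspective, assuming a standard dense output layer: for the 4-hidden-layer, 50-neuron architecture used below, the output layer contributes $50(q+1)$ weights and $q+1$ biases, i.e., $51(q+1)$ parameters in total, so moving from $q=1$ to $q=500$ adds only about $2.5\cdot 10^{4}$ parameters.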
The result of applying this process to the Burgers' equation is presented in figure \ref{fig:Burgers_DT_inference}. For illustration purposes, we start with a set of $N_n=250$ initial data at $t = 0.1$, and employ a {\em physics informed neural network} induced by an implicit Runge-Kutta scheme with 500 stages to predict the solution at time $t=0.9$ in a single step. The theoretical error estimates for this scheme predict a temporal error accumulation of $\mathcal{O}(\Delta{t}^{2q})$ \cite{iserles2009first}, which in our case translates into an error way below machine precision, i.e., $\Delta{t}^{2q} = 0.8^{1000} \approx 10^{-97}$. To our knowledge, this is the first time that an implicit Runge-Kutta scheme of such high order has ever been used. Remarkably, starting from smooth initial data at $t=0.1$ we can predict the nearly discontinuous solution at $t=0.9$ in a single time-step with a relative $\mathcal{L}_{2}$ error of $8.2 \cdot 10^{-4}$. This error is two orders of magnitude lower than the one reported in \cite{raissi2017numerical}, and it is entirely attributed to the neural network's capacity to approximate $u(t,x)$, as well as to the degree that the sum of squared errors loss allows interpolation of the training data. The network architecture used here consists of 4 layers with 50 neurons in each hidden layer.\\
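Although the precise implementation may vary, the following minimal TensorFlow sketch illustrates one way to assemble the outputs \eqref{eq:RungeKutta_inference_rearranged} for this example. The names \texttt{stage\_net}, \texttt{weights}, \texttt{biases}, \texttt{dt}, and \texttt{q} are illustrative assumptions, and \texttt{IRK\_weights} is assumed to be a float32 array of shape $(q+1)\times q$ stacking the Runge-Kutta matrix $(a_{ij})$ on top of the weight vector $(b_j)$:
\begin{lstlisting}[language=Python]
import numpy as np
import tensorflow as tf

# x: placeholder of shape [N, 1] holding the N_n spatial points
U = stage_net(x, weights, biases)                    # [N, q+1]
U_x = tf.concat([tf.gradients(U[:, j], x)[0]
                 for j in range(q + 1)], axis=1)     # [N, q+1]
U_xx = tf.concat([tf.gradients(U_x[:, j:j+1], x)[0]
                  for j in range(q + 1)], axis=1)    # [N, q+1]

# Burgers' nonlinear operator N[u^{n+c_j}] on the q stage columns
F = U[:, :q]*U_x[:, :q] - (0.01/np.pi)*U_xx[:, :q]   # [N, q]

# columns [u^n_1, ..., u^n_q, u^n_{q+1}] of the rearranged scheme
U0 = U + dt*tf.matmul(F, IRK_weights.T)              # [N, q+1]
\end{lstlisting}
The columns of \texttt{U0} can then be compared against the measured values $u^{n,i}$ when forming $SSE_n$ above.\\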
\begin{figure}[!t]
\includegraphics[width = 1.0\textwidth]{Burgers_DT_inference.pdf}
\caption{{\em Burgers' equation:} {\it Top:} Solution $u(t,x)$ along with the location of the initial training snapshot at $t=0.1$ and the final prediction snapshot at $t=0.9$. {\it Bottom:} Initial training data and final prediction at the snapshots depicted by the white vertical lines in the top panel. The relative $\mathcal{L}_{2}$ error for this case is $8.2 \cdot 10^{-4}$.}
\label{fig:Burgers_DT_inference}
\end{figure}
A detailed systematic study to quantify the effect of different network architectures is presented in table \ref{tab:Burgers_DT_inference_2}. By keeping the number of Runge-Kutta stages fixed to $q = 500$ and the time-step size to $\Delta{t}=0.8$, we have varied the number of hidden layers and the number of neurons per layer, and monitored the resulting relative $\mathcal{L}_{2}$ error for the predicted solution at time $t=0.9$. Evidently, as the neural network capacity is increased the predictive accuracy is enhanced.\\
\begin{table}[!t]
\centering
\begin{tabular}{|c||ccc|}
\hline
\diagbox{Layers}{Neurons} & 10 & 25 & 50 \\ \hline\hline
1 & 4.1e-02 & 4.1e-02 & 1.5e-01 \\
2 & 2.7e-03 & 5.0e-03 & 2.4e-03 \\
3 & 3.6e-03 & 1.9e-03 & 9.5e-04 \\ \hline
\end{tabular}
\caption{{\em Burgers' equation:} Relative final prediction error measured in the $\mathcal{L}_{2}$ norm for different numbers of hidden layers and neurons in each layer. Here, the number of Runge-Kutta stages is fixed to 500 and the time-step size to $\Delta{t}=0.8$.} \label{tab:Burgers_DT_inference_2}
\end{table}
The key parameters controlling the performance of our discrete time algorithm are the total number of Runge-Kutta stages $q$ and the time-step size $\Delta{t}$. In table \ref{tab:Burgers_DT_inference_1} we summarize the results of an extensive systematic study where we fix the network architecture to 4 hidden layers with 50 neurons per layer, and vary the number of Runge-Kutta stages $q$ and the time-step size $\Delta{t}$. Specifically, we see how cases with low numbers of stages fail to yield accurate results when the time-step size is large. For instance, the case $q=1$ corresponding to the classical trapezoidal rule, and the case $q=2$ corresponding to the $4^{\text{th}}$-order Gauss-Legendre method, cannot retain their predictive accuracy for time-steps larger than 0.2, thus mandating a solution strategy with multiple time-steps of small size. On the other hand, the ability to push the number of Runge-Kutta stages to 32 and even higher allows us to take very large time steps, and effectively resolve the solution in a single step without sacrificing the accuracy of our predictions. Moreover, numerical stability is not sacrificed either as implicit Runge-Kutta is the only family of time-stepping schemes that remain A-stable regardless of their order, thus making them ideal for stiff problems \cite{iserles2009first}. These properties are unprecedented for an algorithm of such implementation simplicity, and illustrate one of the key highlights of our discrete time approach.
\begin{table}[!t]
\centering
\begin{tabular}{|l||cccc|}
\hline
\diagbox{$q$}{$\Delta{t}$} & 0.2 & 0.4 & 0.6 & 0.8 \\ \hline\hline
1 & 3.5e-02 & 1.1e-01 & 2.3e-01 & 3.8e-01 \\
2 & 5.4e-03 & 5.1e-02 & 9.3e-02 & 2.2e-01 \\
4 & 1.2e-03 & 1.5e-02 & 3.6e-02 & 5.4e-02 \\
8 & 6.7e-04 & 1.8e-03 & 8.7e-03 & 5.8e-02 \\
16 & 5.1e-04 & 7.6e-02 & 8.4e-04 & 1.1e-03 \\
32 & 7.4e-04 & 5.2e-04 & 4.2e-04 & 7.0e-04 \\
64 & 4.5e-04 & 4.8e-04 & 1.2e-03 & 7.8e-04 \\
100 & 5.1e-04 & 5.7e-04 & 1.8e-02 & 1.2e-03 \\
500 & 4.1e-04 & 3.8e-04 & 4.2e-04 & 8.2e-04 \\ \hline
\end{tabular}
\caption{{\em Burgers' equation:} Relative final prediction error measured in the $\mathcal{L}_{2}$ norm for different number of Runge-Kutta stages $q$ and time-step sizes $\Delta{t}$. Here, the network architecture is fixed to 4 hidden layers with 50 neurons in each layer.} \label{tab:Burgers_DT_inference_1}
\end{table}
\subsection{Example (Allen-Cahn Equation)}
This example aims to highlight the ability of the proposed discrete time models to handle different types of nonlinearity in the governing partial differential equation. To this end, let us consider the Allen-Cahn equation along with periodic boundary conditions
\begin{eqnarray} \label{eq:Allen-Cahn}
&&u_t - 0.0001 u_{xx} + 5 u^3 - 5 u = 0, \ \ \ x \in [-1,1], \ \ \ t \in [0,1],\\
&&u(0, x) = x^2 \cos(\pi x),\nonumber\\
&&u(t,-1) = u(t,1),\nonumber\\
&&u_x(t,-1) = u_x(t,1).\nonumber
\end{eqnarray}
The Allen-Cahn equation is a well-known equation from the area of reaction-diffusion systems. It describes the process of phase separation in multi-component alloy systems, including order-disorder transitions. For the Allen-Cahn equation, the nonlinear operator in equation \eqref{eq:RungeKutta_inference_rearranged} is given by
\[
\mathcal{N}[u^{n+c_j}] = -0.0001 u^{n+c_j}_{xx} + 5 \left(u^{n+c_j}\right)^3 - 5 u^{n+c_j},
\]
and the shared parameters of the neural networks \eqref{eq:RungeKutta_PU_prior_inference} and \eqref{eq:RungeKutta_PI_prior_inference} can be learned by minimizing the sum of squared errors
\begin{equation}\label{eq:SSE_Allen-Cahn}
SSE = SSE_n + SSE_b,
\end{equation}
where
\[
SSE_n = \sum_{j=1}^{q+1} \sum_{i=1}^{N_n} |u^n_j(x^{n,i}) - u^{n,i}|^2,
\]
and
\begin{eqnarray*}
SSE_b &=& \sum_{i=1}^q |u^{n+c_i}(-1) - u^{n+c_i}(1)|^2 + |u^{n+1}(-1) - u^{n+1}(1)|^2 \\
&+& \sum_{i=1}^q |u_x^{n+c_i}(-1) - u_x^{n+c_i}(1)|^2 + |u_x^{n+1}(-1) - u_x^{n+1}(1)|^2.
\end{eqnarray*}
Here, $\{x^{n,i}, u^{n,i}\}_{i=1}^{N_n}$ corresponds to the data at time $t^n$. We have generated a training and test data-set by simulating the Allen-Cahn equation \eqref{eq:Allen-Cahn} using conventional spectral methods. Specifically, starting from an initial condition $u(0,x) = x^2 \cos(\pi x)$ and assuming periodic boundary conditions $u(t,-1) = u(t,1)$ and $u_x(t,-1) = u_x(t,1)$, we have integrated equation \eqref{eq:Allen-Cahn} up to a final time $t=1.0$ using the Chebfun package \cite{driscoll2014chebfun} with a spectral Fourier discretization with 512 modes and a fourth-order explicit Runge-Kutta temporal integrator with time-step $\Delta{t} = 10^{-5}$.\\
In this example, we assume $N_n = 200$ initial data points that are randomly sub-sampled from the exact solution at time $t=0.1$, and our goal is to predict the solution at time $t=0.9$ using a single time-step with size $\Delta{t}=0.8$. To this end, we employ a discrete time {\em physics informed neural network} with 4 hidden layers and 200 neurons per layer, while the output layer predicts 101 quantities of interest corresponding to the $q=100$ Runge-Kutta stages $u^{n+c_i}(x)$, $i=1,\dots,q$, and the solution at final time $u^{n+1}(x)$. Figure \ref{fig:AC_DT_inference} summarizes our predictions after the network has been trained using the loss function of equation \eqref{eq:SSE_Allen-Cahn}. Evidently, despite the complex dynamics leading to a solution with two sharp internal layers, we are able to obtain an accurate prediction of the solution at $t=0.9$ using only a small number of scattered measurements at $t=0.1$.
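In terms of implementation, only the columnwise nonlinear operator changes relative to the Burgers' sketch given earlier in this section; reusing the same (illustrative) names:
\begin{lstlisting}[language=Python]
# Allen-Cahn operator N[u^{n+c_j}] on the q stage columns
F = -0.0001*U_xx[:, :q] + 5.0*U[:, :q]**3 - 5.0*U[:, :q]
\end{lstlisting}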
\begin{figure}[!t]
\includegraphics[width = 1.0\textwidth]{AC.pdf}
\caption{{\em Allen-Cahn equation:}
{\it Top:} Solution $u(t,x)$ along with the location of the initial training snapshot at $t=0.1$ and the final prediction snapshot at $t=0.9$. {\it Bottom:} Initial training data and final prediction at the snapshots depicted by the white vertical lines in the top panel. The relative $\mathcal{L}_{2}$ error for this case is $6.99\cdot 10^{-3}$.}
\label{fig:AC_DT_inference}
\end{figure}
\section{Summary and Discussion}
We have introduced {\em physics informed neural networks}, a new class of universal function approximators capable of encoding any underlying physical laws that govern a given data set and that can be described by partial differential equations. In this work, we design data-driven algorithms for inferring solutions to general nonlinear partial differential equations, and for constructing computationally efficient physics-informed surrogate models. The resulting methods showcase a series of promising results for a diverse collection of problems in computational science, and open the path for endowing deep learning with the powerful capacity of mathematical physics to model the world around us. As deep learning technology continues to grow rapidly both in terms of methodological and algorithmic developments, we believe that this is a timely contribution that can benefit practitioners across a wide range of scientific domains. Specific applications that can readily enjoy these benefits include, but are not limited to, data-driven forecasting of physical processes, model predictive control, and multi-physics/multi-scale modeling and simulation.\\
We must note however that the proposed methods should not be viewed as replacements of classical numerical methods for solving partial differential equations (e.g., finite elements, spectral methods, etc.). Such methods have matured over the last 50 years and, in many cases, meet the robustness and computational efficiency standards required in practice. Our message here, as advocated in Section~\ref{sec:DT_models}, is that classical methods such as the Runge-Kutta time-stepping schemes can coexist in harmony with deep neural networks, and offer invaluable intuition in constructing structured predictive algorithms. Moreover, the implementation simplicity of the latter greatly favors rapid development and testing of new ideas, potentially opening the path for a new era in data-driven scientific computing. This will be further highlighted in the second part of this paper in which {\em physics informed neural networks} are put to the test of data-driven discovery of partial differential equations.\\
Finally, in terms of future work, one pressing question is how to quantify the uncertainty associated with neural network predictions. Although this important element was naturally addressed in previous work employing Gaussian processes \cite{raissi2017numerical}, it is not captured by the proposed methodology in its present form and requires further investigation.
\section*{Acknowledgements}
This work received support by the DARPA EQUiPS grant N66001-15-2-4055, the MURI/ARO grant W911NF-15-1-0562, and the AFOSR grant FA9550-17-1-0013. All data and codes used in this manuscript are publicly available on GitHub at \url{https://github.com/maziarraissi/PINNs}.
\bibliographystyle{model1-num-names}
| {
"timestamp": "2017-11-30T02:01:47",
"yymm": "1711",
"arxiv_id": "1711.10561",
"language": "en",
"url": "https://arxiv.org/abs/1711.10561",
"abstract": "We introduce physics informed neural networks -- neural networks that are trained to solve supervised learning tasks while respecting any given law of physics described by general nonlinear partial differential equations. In this two part treatise, we present our developments in the context of solving two main classes of problems: data-driven solution and data-driven discovery of partial differential equations. Depending on the nature and arrangement of the available data, we devise two distinct classes of algorithms, namely continuous time and discrete time models. The resulting neural networks form a new class of data-efficient universal function approximators that naturally encode any underlying physical laws as prior information. In this first part, we demonstrate how these networks can be used to infer solutions to partial differential equations, and obtain physics-informed surrogate models that are fully differentiable with respect to all input coordinates and free parameters.",
"subjects": "Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Dynamical Systems (math.DS); Numerical Analysis (math.NA); Machine Learning (stat.ML)",
"title": "Physics Informed Deep Learning (Part I): Data-driven Solutions of Nonlinear Partial Differential Equations",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9877587247081929,
"lm_q2_score": 0.8006920116079208,
"lm_q1q2_score": 0.7908905202698775
} |
https://arxiv.org/abs/2104.15025 | The Maximax Minimax Quotient Theorem | We present an optimization problem emerging from optimal control theory and situated at the intersection of fractional programming and linear max-min programming on polytopes. A naïve solution would require solving four nested, possibly nonlinear, optimization problems. Instead, relying on numerous geometric arguments we determine an analytical solution to this problem. In the course of proving our main theorem we also establish another optimization result stating that the minimum of a specific minimax optimization is located at a vertex of the constraint set. | \section{Introduction}
The field of fractional programming studies the optimization of a ratio of functions and made its debut in the 1960s with Charnes and Cooper \citep{charnes}. It has since then expanded to more complex and more general problems \citep{phuong}. However, outside of linear fractional programming, very few analytical results are available; the focus has now largely shifted to developing search algorithms \citep{abdel, pardalos_global}.
In this paper we are interested in a specific fractional optimization problem introduced in \citep{SIAM_CT} and composed of four nested optimization problems. For this reason, a search algorithm would have a high computational cost and would be especially wasteful since an analytical solution exists.
Our ratio of interest features a max-min optimization \citep{pardalos_book} belonging to the setting of semi-infinite programming \citep{semi-infinite_programming}. Because of the infinite number of constraints, it is not possible to immediately apply the classical results of linear max-min theory \citep{max-min_programming} stating that the maximum is attained on the boundary of the constraint set.
Nonetheless, thanks to the specific geometry of our problem we are able to prove a very similar result, first mentioned as Theorem 3.1 in the authors' work \citep{SIAM_CT}, where its proof was omitted.
Armed with this preliminary result on max-min programming, we formulate and establish the Maximax Minimax Quotient Theorem. This result concerns the maximization of a ratio of a maximum and a minimax over two polytopes. In the special case where these polytopes are symmetric, this result reduces to Theorem 3.2 of \citep{SIAM_CT}, whose proof was again omitted for reasons of length.
The remainder of this paper is organized as follows.
Section~\ref{sec:prelim} establishes the existence of the Maximax Minimax Quotient and proves a preliminary optimization result.
Section~\ref{sec:Maximax Minimax} states our central theorem and provides its proof.
Section~\ref{sec:lemmas} gathers all the lemmas involved in the proof of the Maximax Minimax Quotient Theorem.
Section~\ref{sec:continuity} justifies the continuity of two maxima functions used during the proof of our main result.
Finally, Section~\ref{sec:example} illustrates the proof of our theorem on a simple example.
\emph{Notation:}
We use $\partial X$ to denote the boundary of a set $X$ and its interior is denoted $X^\circ := X \backslash \partial X$.
In $\mathbb{R}^n$ we denote the unit sphere with $\mathbb{S} := \{ x \in \mathbb{R}^n : \|x\| = 1\}$ and the ball of radius $\varepsilon$ centered on $x$ with $B_\varepsilon(x) := \big\{ y \in \mathbb{R}^n : \|y - x\| \leq \varepsilon \big\}$.
The scalar product of vectors is denoted by $\langle \cdot, \cdot \rangle$.
For $x \in \mathbb{R}^n$ and $y \in \mathbb{R}^n$ both nonzero we denote as $\widehat{x, y}$ the signed angle from $x$ to $y$ in the 2D plane containing both of them. We take the convention that the angles are positive when going in the clockwise orientation.
\section{Preliminaries}\label{sec:prelim}
\begin{definition}
A \emph{polytope} in $\mathbb{R}^n$ is a compact intersection of finitely many half-spaces.
\end{definition}
Thus, this work only considers convex polytopes.
If $X$ and $Y$ are two nonempty polytopes in $\mathbb{R}^n$ with $-X \subset Y^\circ$, and $d \in \mathbb{S}$, we define the \emph{Maximax Minimax Quotient} as
\begin{equation}\label{eq:r_(X,Y)}
r_{X,Y}(d) := \frac{\underset{x\, \in\, X,\ y\, \in\, Y}{\max} \big\{ \|x + y\| : x + y \in \mathbb{R}^+d \big\} }{ \underset{x\, \in\, X}{\min} \big\{ \underset{y\, \in\, Y}{\max} \big\{ \|x + y\| : x + y \in \mathbb{R}^+d \big\} \big\} }.
\end{equation}
The objective of the Maximax Minimax Quotient Theorem is to determine the direction $d$ that maximizes $r_{X,Y}(d)$.
Note that in the numerator of \eqref{eq:r_(X,Y)}, $x$ and $y$ are chosen together to satisfy the constraint $x+y \in \mathbb{R}^+ d$, while in the denominator this constraint only applies to $y$.
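Writing $\lambda^*(x,d)$ for the inner maximum over $y$ at a fixed $x$ (formalized in Proposition~\ref{prop: r_X,Y well-defined} below), the numerator of \eqref{eq:r_(X,Y)} equals $\max_{x \in X} \lambda^*(x,d)$ and the denominator equals $\min_{x \in X} \lambda^*(x,d)$. The following numerical sketch evaluates $r_{X,Y}(d)$ by brute force on an illustrative example of our own (a square $Y$ given by half-spaces and a segment $X$, neither taken from the paper); each inner maximum is a one-variable linear program.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

# Brute-force sketch on an illustrative example: Y = {y : A_Y y <= c_Y}
# is a square, X the segment [x1, x2], and -X lies in the interior of Y
# as required.
A_Y = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
c_Y = np.array([2., 2., 2., 2.])
x1, x2 = np.array([-0.5, -0.3]), np.array([0.5, 0.3])

def lambda_star(x, d):
    # max lambda >= 0 such that lambda*d - x is in Y,
    # i.e. (A_Y d) lambda <= c_Y + A_Y x : a one-variable LP
    res = linprog(c=[-1.0], A_ub=(A_Y @ d).reshape(-1, 1),
                  b_ub=c_Y + A_Y @ x, bounds=[(0, None)])
    return res.x[0]

def r_XY(d, n_grid=201):
    # numerator = max over (x,y) jointly, denominator = min over x of the
    # inner max; both reduce to optimizing lambda_star over the segment X
    ts = np.linspace(0.0, 1.0, n_grid)
    lams = np.array([lambda_star((1 - t) * x1 + t * x2, d) for t in ts])
    return lams.max() / lams.min()

print(r_XY(np.array([1.0, 0.0])))
\end{verbatim}
The grid over $X$ is a brute-force stand-in for the minimization; Theorem~\ref{thm:minimum on the vertices} below shows that the minimum is in fact attained at a vertex of $X$.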
Before starting the actual proof of this theorem, we first need to justify the existence of the minimum and the maxima appearing in \eqref{eq:r_(X,Y)}.
\begin{proposition}\label{prop: r_X,Y well-defined}
Let $X$, $Y$ be two nonempty polytopes in $\mathbb{R}^n$ with $-X \subset Y^\circ$, $\dim Y = n$ and $d \in \mathbb{S}$. Then,
\begin{enumerate}[(i)]
\item $\underset{x\, \in\, X,\, y\, \in\, Y}{\max} \big\{ \|x + y\| : x + y \in \mathbb{R}^+d \big\}$ exists,
\item $\lambda^*(x,d) := \underset{y\, \in\, Y}{\max} \big\{ \|x + y\| : x + y \in \mathbb{R}^+d \big\}$ exists for all $x \in X$,
\item $\underset{x\, \in\, X}{\min} \big\{ \lambda^*(x,d) \big\}$ exists,
\item and $\underset{x\, \in\, X}{\min} \big\{\lambda^*(x,d) \big\} > 0$.
\end{enumerate}
\end{proposition}
\begin{proof}
(i) Let $S := \big\{ (x, y) \in X \times Y : x + y \in \mathbb{R}^+ d\big\}$. Set $S$ is a closed subset of the compact set $X \times Y$, so $S$ is compact. Since $X$ is nonempty, we take $x \in X$. Using $-X \subset Y$ we have $-x \in Y$ and $x + (-x) = 0 \in \mathbb{R}^+ d$. Then, $(x,-x) \in S$, so $S$ is nonempty. Function $f : S \rightarrow \mathbb{R}$ defined as $f(x,y) := \|x + y\|$ is continuous, so it reaches a maximum over $S$.
\vspace{2mm}
(ii) For $x \in X$ define $S(x) := \big\{ y \in Y : x + y \in \mathbb{R}^+ d \big\}$. Since $S(x)$ is a closed subset of the compact set $Y$, $S(x)$ is compact. Since $-X \subset Y$, we have $-x \in S(x)$ and so $S(x) \neq \emptyset$. Function $f_x : S(x) \rightarrow \mathbb{R}$ defined as $f_x(y) := \|x + y\|$ is continuous, so it reaches a maximum over $S(x)$, i.e., $\lambda^*$ exists.
\vspace{2mm}
(iii) For $x \in X$ and $d \in \mathbb{S}$, the argument of $\lambda^*(x,d)$ is uniquely defined as $y^*(x,d) := \lambda^*(x,d)d - x$ since $\|d\| = 1$ and
\begin{equation}\label{eq:y^*(x,d)}
y^*(x, d) = \arg \underset{y\, \in\, Y}{\max} \big\{ \|x + y\| : x + y \in \mathbb{R}^+ d \big\}.
\end{equation}
Lemma~\ref{lemma: lambda continuous} shows that $\lambda^*$ is continuous in $x$ and $d$, so $y^*$ is also continuous in $x$ and $d$.
Then, function $f : X \rightarrow \mathbb{R}$ defined as $f(x) := \|x + y^*(x,d)\|$ is continuous, so it reaches a minimum over the compact and nonempty set $X$.
\vspace{2mm}
(iv) Note that $y^*(x,d) \in \partial Y$ for all $x \in X$. Indeed, assume for contradiction purposes that there exists $\varepsilon > 0$ such that $B_\varepsilon\big(y^*(x,d) \big) \subset Y$. The assumption $\dim Y = n$ makes this ball full-dimensional, so that $z := y^*(x,d) + \varepsilon d \in Y$. Then, $x + z = \big(\lambda^*(x,d) + \varepsilon\big) d \in \mathbb{R}^+ d$ and $\|x + z\| = \lambda^*(x,d) + \varepsilon > \lambda^*(x,d)$, contradicting the optimality of $\lambda^*$.
Thus, $y^*(x,d) \in \partial Y$. Since $-X \subset Y^\circ$, we have $\|x + y^*(x,d)\| > 0$ for all $x \in X$.
\end{proof}
Then, with the assumptions of Proposition~\ref{prop: r_X,Y well-defined} the Maximax Minimax Quotient is well-defined.
The proof of our main theorem relies on another optimization result stating that the argument of the minimum in \eqref{eq:r_(X,Y)} lies at a vertex of $X$.
\begin{definition}
A \emph{vertex} of a set $X \subset \mathbb{R}^n$ is a point $x \in X$ such that if there are $x_1 \in X$, $x_2 \in X$ and $\lambda \in [0,1]$ with $x = \lambda x_1 + (1-\lambda)x_2$, then $x = x_1 = x_2$.
\end{definition}
With this definition, a vertex of a polytope corresponds to the usual geometric notion.
\begin{theorem}\label{thm:minimum on the vertices}
Let $d \in \mathbb{S}$, and let $X$ and $Y$ be two polytopes of $\mathbb{R}^n$ with $-X \subset Y$ and $\dim Y = n$. Then, there exists a vertex $v$ of $X$ where $\underset{x\, \in\, X}{\min}\big\{ \lambda^*(x,d) \big\}$ is reached.
\end{theorem}
\begin{proof}
According to Proposition~\ref{prop: r_X,Y well-defined} the minimum of $\lambda^*$ exists. Then, let $x^* \in X$ be such that $\lambda^*(x^*, d) = \underset{x\, \in\, X}{\min}\big\{ \lambda^*(x,d) \big\}$, i.e., $\|y^*(x^*) + x^*\| = \underset{x\, \in\, X}{\min}\ \|y^*(x) + x\|$.
Since $x^*$ minimizes the distance between $-x$ and $y^*(x) \in \partial Y$ over $x \in X$, and $-X \subset Y$, it follows that $x^* \in \partial X$.
Assume now that $x^*$ is not on a vertex of $\partial X$. Let $S_x$ be the surface of lowest dimension in $\partial X$ such that $x^* \in S_x$ and $\dim S_x \geq 1$.
Let $v$ be a vertex of $S_x$ and $x(\alpha) := x^* + \alpha (v - x^*)$ for $\alpha \in \mathbb{R}$. Notice that $x(0) = x^*$ and $x(1) = v$. Due to the choice of $v$, the convexity of $S_x$ and $x^*$ not being a vertex, there exists $\varepsilon > 0$ such that $x(\alpha) \in S_x$ for all $\alpha \in [-\varepsilon, 1]$.
We also define the lengths $L(\alpha) := \|y^*\big(x(\alpha)\big) + x(\alpha)\|$ and $L^* := L(0)$.
Since $\|d\| = 1$ and $y^*\big(x(\alpha)\big) + x(\alpha) \in \mathbb{R}^+ d$, we have $L(\alpha) = \langle y^*\big(x(\alpha)\big) + x(\alpha), d \rangle$.
By definition of $x^*$, we know that $L^* \leq L(\alpha)$ for all $\alpha \in [-\varepsilon, 1]$. For contradiction purposes assume that there exists $\alpha_0 \in (0, 1]$ such that $L^* < L(\alpha_0)$. We introduce the convexity coefficient $\beta := \frac{\alpha_0}{\alpha_0 + \varepsilon} > 0$ and then
\begin{align*}
L^* &= \beta L^* + (1 - \beta)L^* < \beta L(-\varepsilon) + (1-\beta)L(\alpha_0) \\
&= \beta \langle y^*\big(x(-\varepsilon)\big) + x(-\varepsilon), d \rangle + (1-\beta) \langle y^*\big(x(\alpha_0)\big) + x(\alpha_0), d \rangle = \langle z + x^*, d\rangle,
\end{align*}
with $z := \beta y^*\big(x(-\varepsilon)\big) + (1-\beta) y^*\big(x(\alpha_0)\big)$. Indeed, note that $\beta x(-\varepsilon) + (1-\beta) x(\alpha_0) = x^*$, and $z + x^* \in \mathbb{R}^+ d$.
Note that $L^* = \underset{y\, \in\, Y}{\max} \big\{ \langle x^* + y, d\rangle : x^* + y \in \mathbb{R}^+ d \big\}$, but $L^* < \langle x^* + z, d\rangle$.
Given that $z \in Y$ by convexity of $Y$ and $x^* + z \in \mathbb{R}^+ d$, we have reached a contradiction.
Thus, there is no $\alpha_0 \in (0,1]$ such that $L^* < L(\alpha_0)$. Therefore, for all $\alpha \in [0,1]$, $L(\alpha) = L^*$. By taking $\alpha = 1$, we have $x(\alpha) = v$, so the minimum $L^*$ is also reached on the vertex $v$ of $X$.
\end{proof}
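As a quick numerical illustration, the brute-force sketch from above (on our illustrative square example, an assumption of ours rather than anything taken from the theorem) locates the minimizer on the grid:
\begin{verbatim}
# Continuing the sketch above: the minimizing x should be an endpoint of X.
import numpy as np
d = np.array([0.6, 0.8])                 # any unit direction
ts = np.linspace(0.0, 1.0, 201)
lams = [lambda_star((1 - t) * x1 + t * x2, d) for t in ts]
print(ts[int(np.argmin(lams))])  # expected: 0.0 or 1.0, i.e. a vertex of X
                                 # (up to ties when X is parallel to a face)
\end{verbatim}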
We have now all the preliminary results necessary to state our central theorem.
\section{The Maximax Minimax Quotient Theorem}\label{sec:Maximax Minimax}
\begin{theorem}[Maximax Minimax Quotient Theorem]\label{thm:varying x_M and x_N}
If $X$ and $Y$ are two polytopes in $\mathbb{R}^n$ with $-X \subset Y^\circ$, $\dim X = 1$, $\partial X = \{x_1, x_2\}$ with $x_2 \neq 0$ and $\dim Y = n$, then $\underset{d\, \in\, \mathbb{S}}{\max}\ r_{X,Y}(d) = \max \big\{ r_{X,Y}(x_2), r_{X,Y}(-x_2) \big\}$.
\end{theorem}
\begin{proof}
Since $\dim X = 1$, its extremities $x_1$ and $x_2$ are distinct, so at least one of them is nonzero. Hence, imposing $x_2 \neq 0$ does not restrict the generality of our result.
Following Proposition~\ref{prop: r_X,Y well-defined}, $r_{X,Y}$ is well-defined. Reusing $y^*$ defined in \eqref{eq:y^*(x,d)}, we introduce $x_M^*(d) := \arg\underset{x\, \in\, X}{\min} \big\{ \|x + y^*(x,d)\| \big\}$ and $x_N^*(d) := \arg \underset{x\, \in\, X}{\max} \big\{ \|x + y^*(x,d)\| : x + y^*(x,d) \in \mathbb{R}^+ d \big\}$.
According to Theorem~\ref{thm:minimum on the vertices}, $x_M^*(d) \in \partial X$ for all $d \in \mathbb{S}$ and following Lemma~\ref{lemma: continuity of x_N}, $x_N^*$ is a continuous function of $d$.
For some $d \in \mathbb{S}$ the $\arg\min$ and $\arg\max$ in the definitions of $x_M^*$ and $x_N^*$ might not be unique; if so we take the arguments ensuring that $x_M^*(d) \in \partial X$ and that $x_N^*$ is continuous.
We also define $y_N^*(d) := y^*\big( x_N^*(d), d\big)$ and $y_M^*(d) := y^*\big( x_M^*(d), d\big)$. Then,
\begin{equation*}
r_{X,Y}(d) = \frac{\underset{y\, \in\, Y}{\max} \big\{ \|y + x_N^*(d)\| : y + x_N^*(d) \in \mathbb{R}^+d \big\} }{ \underset{y\, \in\, Y}{\max} \big\{ \|y + x_M^*(d)\| : y + x_M^*(d) \in \mathbb{R}^+d \big\} } = \frac{\|x_N^*(d) + y_N^*(d)\|}{\|x_M^*(d) + y_M^*(d)\|}.
\end{equation*}
Since $\dim X = 1$, we can take $\mathcal{P}$ to be a two-dimensional plane containing $X$ and study how $r_{X,Y}(d)$ varies when $d$ takes values in $\mathbb{S} \cap \mathcal{P}$.
We introduce the signed angles $\alpha := \widehat{d, \partial Y}$ and $\beta := \widehat{x_2, d}$. These angles are represented on Figure~\ref{fig:angles illustration} and take values in $[0, 2\pi)$.
We parametrize all directions $d \in \mathbb{S} \cap \mathcal{P}$ by the angle $\beta$, so that the study reduces to the variation of $r_{X,Y}(d)$ as $\beta$ ranges over $[0, 2\pi)$.
\begin{figure}[htbp!]
\centering
\begin{tikzpicture}[scale = 0.76]
\draw[|->] (11.5, 0) -- (12.5, 0);
\node at (12, 0.5) {$d$};
\draw (-2, 0) -- (10.8, 0);
\draw (8.6, 2) -- (10, -2.5);
\node at (9.1, 1.8) {$\partial Y$};
\draw[<-, blue] (-2.5, -1) -- (-2, 0);
\draw[->, red] (-2, 0) -- (-1, 2);
\node at (-1.6, 1.5) {\textcolor{red}{$x_2$}};
\node at (-2.5, -0.4) {\textcolor{blue}{$x_1$}};
\draw[blue] (-2,0) -- (8.9, 1);
\node at (4, 0.9) {\textcolor{blue}{$y_M^*$}};
\draw[->, blue] (8.9, 1) -- (8.4, 0);
\node at (8.2, 0.5) {\textcolor{blue}{$x_M^*$}};
\draw[red] (-2,0) -- (9.8,-2);
\node at (4,-0.7) {\textcolor{orange}{$y_N^*$}};
\draw[->, red] (9.8, -2) -- (10.8, 0);
\node at (10.7, -1) {\textcolor{red}{$x_N^*$}};
\draw[<-] (-0.5, 0) arc (10:67:1.5);
\node at (-0.6, 0.9) {$\beta$};
\draw[<-] (8.8, 0) arc (0:60:0.4);
\node at (8.88, 0.32) {$\beta$};
\draw[<-] (10.5, 0) arc (180:245:0.3);
\node at (10.3, -0.3) {$\beta$};
\draw[->] (9.65, 0) arc (0:-70:0.4);
\node at (9.65, -0.5) {$\alpha$};
\end{tikzpicture}
\caption{Illustration of $y_N^*$, $x_N^*$, $y_M^*$ and $x_M^*$ for a direction $d$ parametrized by $\beta$.}
\label{fig:angles illustration}
\end{figure}
We first establish in Lemma~\ref{lemma: x_N^*(d) and x_M^*(d) cst on faces} that $x_N^*(d)$ and $x_M^*(d)$ are constant, different and both belong to $\partial X$ when $y_M^*(d)$, $d$ and $y_N^*(d)$ all intersect the same face of $\partial Y$, as illustrated on Figure~\ref{fig:angles illustration}.
In these situations, Lemma~\ref{lemma:r(d) cst on faces} shows that the ratio $r_{X,Y}$ is constant. Thus, $r_{X,Y}$ can only change when one of the three rays intersects a different face of $\partial Y$ than the other two. We refer to these situations as vertex crossings. Lemma~\ref{lemma: v_pi and v_2pi} introduces the vertices $v_\pi$ and $v_{2\pi}$.
We study the crossing of vertices before $v_\pi$ in Lemma~\ref{lemma: x_N^*(d) and x_M^*(d) cst before v_pi}. During these crossings Lemma~\ref{lemma: crossing before v_pi} shows that $r_{X,Y}$ decreases as $\beta$ increases.
Lemma~\ref{lemma: crossing v_pi} states that $r_{X,Y}$ reaches a local minimum during the crossing of $v_\pi$.
As $\beta$ increases between $v_\pi$ and $\pi$, Lemmas~\ref{lemma: x_N^*(d) and x_M^*(d) cst after v_pi} and \ref{lemma: crossing after v_pi} prove that $r_{X,Y}$ increases during the crossing of vertices. Finally, Lemma~\ref{lemma: beta > pi} completes the revolution by showing that $r_{X,Y}$ decreases after $\beta = \pi$ until a local minimum at $v_{2\pi}$ and then increases again until $\beta = 2\pi$. Thus, the directions $d \in \mathcal{P} \cap \mathbb{S}$ maximizing $r_{X,Y}(d)$ are collinear with the set $X$. Note that Figure~\ref{fig:angles illustration} implicitly assumes that $0 \in X$. Lemma~\ref{lemma: 0 notin X} proves that even if $0 \notin X$ all above results still hold.
Therefore, $\underset{d\, \in\, \mathbb{S}}{\max}\ r_{X,Y}(d) = \underset{\mathcal{P}}{\max} \big\{ \underset{d\, \in\, \mathcal{P}\, \cap\, \mathbb{S} }{\max} r_{X,Y}(d) \big\} = \max\big\{ r_{X,Y}(x_2), r_{X,Y}(-x_2) \big\}$.
\end{proof}
In the special case where $X$ and $Y$ are symmetric polytopes, this result reduces to Theorem 3.2 of \citep{SIAM_CT}. Indeed, $r_{X,Y}$ becomes an even function which leads to $r_{X,Y}(x_2) = r_{X,Y}(-x_2)$.
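Numerically, the whole mechanism of the proof, piecewise constancy of $r_{X,Y}$ on the faces of $\partial Y$, the decrease down to a local minimum at $v_\pi$, and the global maximum at directions collinear with $X$, can be observed with the brute-force sketch of Section~\ref{sec:prelim} (still on our illustrative square example):
\begin{verbatim}
# Continuing the sketch of Section 2: sweep r_{X,Y} over directions d(beta).
import numpy as np
u = x2 / np.linalg.norm(x2)
rot = lambda b: np.array([[np.cos(b), -np.sin(b)],
                          [np.sin(b),  np.cos(b)]])
betas = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)
rs = np.array([r_XY(rot(b) @ u, n_grid=51) for b in betas])
print(betas[int(np.argmax(rs))])  # the maximizing plateau of r_{X,Y}
                                  # contains beta = 0 or beta = pi,
                                  # i.e. d collinear with x2
\end{verbatim}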
\section{Supporting Lemmata}\label{sec:lemmas}
In this section we establish all the lemmas involved in the proof of the Maximax Minimax Quotient Theorem.
\begin{lemma}\label{lemma: x_N^*(d) and x_M^*(d) cst on faces}
If $d$, $y_N^*(d)$ and $y_M^*(d)$ all intersect the same face of $\partial Y$, then $x_N^*(d)$ and $x_M^*(d)$ are constant, different and both belong to $\partial X$.
\end{lemma}
\begin{proof}
We introduce the angles $\beta_M := \widehat{x_2, y_M^*}$ and $\beta_N := \widehat{x_2, y_N^*}$.
Let $\alpha_0$ be the value of $\alpha$ when $\beta = 0$, i.e., when $d$ is positively collinear with $x_2$.
We say that $y_N^*$ is \emph{leading} and $y_M^*$ is \emph{trailing} when $\beta_M < \beta_N$; conversely, when $\beta_N < \beta_M$, we say that $y_M^*$ is \emph{leading} and $y_N^*$ is \emph{trailing}.
For each $d \in \mathbb{S} \cap \mathcal{P}$ we define $D(d) := \underset{y\, \in\, Y}{\max} \big\{ \|y\| : y \in \mathbb{R}^+ d \big\}$, whose existence is justified by the compactness of $Y$.
We say that $y_N^*$ or $y_M^*$ is \emph{outside} when $\|y_N^* + x_N^*\| > D$ or $\|y_M^* + x_M^*\| > D$ respectively. Otherwise, $y_N^*$ or $y_M^*$ is \emph{inside}.
Directly related to the previous definition, we introduce
\begin{equation}\label{eq:delta}
\delta_M(d) := D(d) - \|x_M^*(d) + y_M^*(d)\|\ \ \text{and} \ \ \delta_N(d) := \|x_N^*(d) + y_N^*(d)\| - D(d).
\end{equation}
\begin{figure}[htbp!]
\centering
\begin{tikzpicture}[scale = 0.76]
\draw[|->] (11.5,0) -- (12.5,0);
\node at (12, 0.5) {$d$};
\draw (-2, 0) -- (8.4, 0);
\draw[dotted] (-2, 0) -- (-2, -2.5);
\draw[dotted] (9.25, 0) -- (9.25, -2.5);
\draw[<->] (-2, -2.5) -- (9.25, -2.5);
\node at (4, -2.2) {$D(d)$};
\draw[red] (9.25, 0) -- (10.8, 0);
\node at (10, 0.25) {\textcolor{red}{$\delta_N$}};
\draw[blue] (8.4, 0) -- (9.25, 0);
\node at (8.8, -0.25) {\textcolor{blue}{$\delta_M$}};
\draw (8.6, 2) -- (10, -2.5);
\node at (9.1, 1.8) {$\partial Y$};
\node at (-2.4, 0.2) {$X$};
\draw[->, black] (-2, 0) -- (-1, 2);
\node at (-0.8, 1.5) {\textcolor{black}{$x_2$}};
\draw[<-, black] (-2.5, -1) -- (-2, 0);
\node at (-2.6, -0.5) {\textcolor{black}{$x_1$}};
\draw[blue] (-2,0) -- (8.9, 1);
\node at (4, 0.9) {\textcolor{blue}{$y_M^*(d)$}};
\draw[->, blue] (8.9, 1) -- (8.4, 0);
\node at (7.5, 0.4) {\textcolor{blue}{$x_M^*(d) = x_1$}};
\draw[red] (-2,0) -- (9.8,-2);
\node at (4.8, -0.8) {\textcolor{red}{$y_N^*(d)$}};
\draw[->, red] (9.8, -2) -- (10.8, 0);
\node at (11, -1) {\textcolor{red}{$x_N^*(d)$}};
\draw[->] (8.95, 0) arc (180:110:0.3);
\node at (8.8, 0.3) {$\alpha$};
\draw[->] (9.65, 0) arc (0:-70:0.4);
\node at (9.65, -0.5) {$\alpha$};
\draw[<-] (10.4, 0) arc (180:245:0.4);
\node at (10.3, -0.45) {$\beta$};
\draw[<-] (-1.3, 0.1) arc (10:60:0.8);
\node at (-1.05, 0.6) {$\beta_M$};
\draw[<-] (3.5, 0) arc (0:5:5.5);
\node at (4.3, 0.25) {$\beta - \beta_M$};
\draw[->] (1.5, 0) arc (0:-9:3.5);
\node at (2.25, -0.3) {$\beta_N - \beta$};
\end{tikzpicture}
\caption{Illustration of $y_N^*(d)$ leading and outside, while $y_M^*(d)$ is trailing and inside the same face of $\partial Y$.}
\label{fig: x_N^*(d) = -x_M^*(d)}
\end{figure}
We know from Theorem~\ref{thm:minimum on the vertices} that $x_M^*(d) \in \partial X$ for all $d \in \mathbb{S}$. In the case illustrated on Figure~\ref{fig: x_N^*(d) = -x_M^*(d)}, $x_M^*(d) = x_1$ because it maximizes $\delta_M$.
If $\alpha + \beta \in \{\pi, 2\pi\}$, then $X$ is parallel with a face of $\partial Y$ making $x_N^*$ and $x_M^*$ not uniquely defined. Regardless, we can still take $x_N^*(d) \neq x_M^*(d)$, with $x_N^*(d) \in \partial X$ and $x_M^*(d) \in \partial X$.
Otherwise, $x_N^*$ and $x_M^*$ are uniquely defined. Since $x_N^*(d) \in X$, $x_M^*(d) \in X$ for all $d \in \mathbb{S}$ and $\dim X = 1$, the vectors $x_N^*(d)$ and $x_M^*(d)$ are always collinear. We then use Thales's theorem and obtain $\delta_N(d) = \delta_M(d) \frac{\|x_N^*(d)\|}{\|x_M^*(d)\|}$. Since $x_N^*(d)$ is chosen to maximize $\delta_N$ and is independent of $\delta_M$, it must have the greatest norm, so $x_N^*(d) \in \partial X$.
In the case where $\alpha + \beta \notin \{\pi, 2\pi\}$, $\|x+y\|$ depends on the value of $x$.
Because $x_N^*(d)$ is chosen to maximize $\|x+y\|$ while $x_M^*(d)$ is minimizing it, we have $x_N^*(d) \neq x_M^*(d)$.
Since $x_N^*$ is continuous according to Lemma~\ref{lemma: continuity of x_N} and $x_N^*(d) \in \big\{x_1, x_2\big\}$, $x_N^*(d)$ is constant on the faces of $\partial Y$. Because $x_M^*(d) \in \partial X$ too, it must also be constant.
\end{proof}
\begin{lemma}\label{lemma:r(d) cst on faces}
When $d$, $y_N^*(d)$ and $y_M^*(d)$ all intersect the same face of $\partial Y$, the ratio $r_{X,Y}(d)$ is constant.
\end{lemma}
\begin{proof}
Based on Figure \ref{fig: x_N^*(d) = -x_M^*(d)}, we apply the sine law in the triangle bounded by $\partial Y$, $\delta_M$ and $x_M^*$:
\begin{equation*}
\frac{\|x_M^*(d)\|}{\sin \alpha} = \frac{\delta_M(d)}{\sin(\pi - \alpha -\beta)} = \frac{\delta_M(d)}{\sin(\alpha + \beta)}, \hspace{2.4mm} \text{so} \hspace{2.4mm} \frac{\delta_M(d)}{D(d)} = \frac{\|x_M^*(d)\| \sin(\alpha + \beta)}{D(d)\sin \alpha}.
\end{equation*}
Similarly for the triangle bounded by $\partial Y$, $\delta_N$ and $x_N^*$, the law of sines yields
\begin{equation*}
\frac{\|x_N^*(d)\|}{\sin \alpha} = \frac{\delta_N(d)}{\sin(\pi - \alpha - \beta)} = \frac{\delta_N(d)}{\sin(\alpha + \beta)}, \hspace{2.5mm} \text{so} \hspace{2.5mm} \frac{\delta_N(d)}{D(d)} = \frac{\|x_N^*(d)\| \sin(\alpha + \beta)}{D(d)\sin \alpha}.
\end{equation*}
Even if the two equations above were derived for the specific situation of Figure \ref{fig: x_N^*(d) = -x_M^*(d)}, they hold as long as $y_N^*$, $D$ and $y_M^*$ intersect the same face of $\partial Y$.
Based on \eqref{eq:delta} we have
\begin{equation}\label{eq: r_X,Y}
r_{X,Y}(d) = \frac{D(d) + \delta_N(d)}{D(d) - \delta_M(d)} = \frac{1 + \frac{\delta_N}{D} }{1 - \frac{\delta_M}{D}}.
\end{equation}
We will now prove that the ratios $\delta_N / D$ and $\delta_M / D$ do not change on a face of $\partial Y$. Let $d_1 \in \mathcal{P} \cap \mathbb{S}$ and $d_2 \in \mathcal{P} \cap \mathbb{S}$ such that $D(d_1)$, $D(d_2)$, $y_M^*(d_1)$, $y_M^*(d_2)$, $y_N^*(d_1)$ and $y_N^*(d_2)$ all intersect the same face of $\partial Y$, as illustrated on Figure~\ref{fig:ratio constant on faces}.
\begin{figure}[htbp!]
\centering
\begin{tikzpicture}[scale = 0.7]
\draw (8.375, 3) -- (10, -2);
\node at (9.7, 0.4) {$\partial Y$};
\draw[<-] (-2.25, -0.5) -- (-2, 0);
\draw[->] (-2, 0) -- (-1.5, 1);
\draw[|->] (10, 2.2) -- (11, 2.42);
\node at (10.5, 2.7) {$d_1$};
\draw (-2, 0) -- (8.7, 2);
\node at (4, 1.5) {$D(d_1)$};
\draw[<-] (-1.2, 0.16) arc (0:65:0.7);
\node at (-1.1, 0.9) {$\beta_1$};
\draw[->] (8.3, 1.95) arc (190:120:0.5);
\node at (8.05, 2.4) {$\alpha_1$};
\draw[|->] (11,-1.7) -- (12,-1.8);
\node at (11.5, -1.4) {$d_2$};
\draw (-2, 0) -- (9.8, -1.5);
\node at (4, -0.4) {$D(d_2)$};
\draw[<-] (-1.5, -0.05) arc (-2:65:0.5);
\node at (-1.5, -0.4) {$\beta_2$};
\draw[->] (9.3, -1.4) arc (170:106:0.5);
\node at (9, -1) {$\alpha_2$};
\draw[<-] (0, -0.2) arc (-15:20:1);
\node at (1,0) {$\beta_2 - \beta_1$};
\end{tikzpicture}
\caption{Ratio $r_{X,Y}(d)$ is constant on a face of $\partial Y$.}
\label{fig:ratio constant on faces}
\end{figure}
The sum of the angles of the triangle in Figure~\ref{fig:ratio constant on faces} is
\begin{equation}\label{eq:alpha + beta = cst on a face}
(\beta_2 - \beta_1) + \alpha_2 + (\pi - \alpha_1) = \pi \qquad \text{so} \qquad \beta_2 + \alpha_2 = \beta_1 + \alpha_1.
\end{equation}
Therefore, $\alpha + \beta$ is constant on faces of $\partial Y$.
We use the sine law in the triangle in Figure~\ref{fig:ratio constant on faces} and obtain
\begin{equation*}
\frac{D(d_1)}{\sin \alpha_2} = \frac{D(d_2)}{\sin (\pi - \alpha_1)} = \frac{D(d_2)}{\sin \alpha_1}, \quad \text{so} \quad D(d_1)\sin \alpha_1 = D(d_2) \sin \alpha_2.
\end{equation*}
According to Lemma~\ref{lemma: x_N^*(d) and x_M^*(d) cst on faces} we also know that $x_N^*(d_1) = x_N^*(d_2)$, thus
\begin{equation*}
\frac{\delta_N(d_1)}{D(d_1)} = \frac{\|x_N^*(d_1)\| \sin(\alpha_1 + \beta_1)}{D(d_1) \sin \alpha_1} = \frac{\|x_N^*(d_2)\| \sin(\alpha_2 + \beta_2)}{D(d_2) \sin \alpha_2} = \frac{\delta_N(d_2)}{D(d_2)}.
\end{equation*}
The same holds for $\delta_M / D$. Hence, \eqref{eq: r_X,Y} yields $r_{X,Y}(d_1) = r_{X,Y}(d_2)$.
\end{proof}
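On the illustrative square example of Section~\ref{sec:prelim}, this constancy is easy to observe: two nearby directions whose three rays meet the same face of $\partial Y$ (here the right-hand face of the square, an assumption checked by inspection for this geometry) give the same ratio.
\begin{verbatim}
# Continuing the sketch of Section 2: r_{X,Y} agrees for two directions
# whose rays hit the same face of the square Y.
import numpy as np
d1 = np.array([np.cos(0.1), np.sin(0.1)])
d2 = np.array([np.cos(0.2), np.sin(0.2)])
print(r_XY(d1), r_XY(d2))  # expected to agree up to grid/LP tolerance
\end{verbatim}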
\vspace{3mm}
\begin{lemma}\label{lemma: v_pi and v_2pi}
There are two vertices of $Y \cap \mathcal{P}$, namely $v_\pi$ and $v_{2\pi}$, whose crossing by $d$ makes the angle $\alpha+\beta$ exceed $\pi$ and $2\pi$ respectively.
\end{lemma}
\begin{proof}
We have taken the convention that the angles are positively oriented in the clockwise orientation.
According to \eqref{eq:alpha + beta = cst on a face}, the angle $\alpha + \beta$ is constant on a face of $\partial Y$. When $d$ crosses a vertex of external angle $\varepsilon$ as represented on Figure~\ref{fig: x_v}, the value of $\alpha$ has a discontinuity of $+\varepsilon$. Let $q$ be the number of vertices of $\partial Y$ and $\varepsilon_i$ the external angle of the $i^{th}$ vertex $v_i$. Since $Y \cap \mathcal{P}$ is a polygon, $\sum_{i = 1}^q \varepsilon_i = 2\pi$.
We can then represent the evolution of $\alpha + \beta$ as a function of $\beta$ with Figure~\ref{fig:graph of alpha + beta}.
Instead of labeling the horizontal axis with the values taken by $\beta$ as the corresponding vector $d(\beta)$ crosses the vertex $v_i$, we directly use $v_i$ with a slight abuse of notation.
\begin{figure}[htbp!]
\centering
\begin{tikzpicture}[scale = 0.7]
\draw[<->] (0, 5) -- (0, 0) -- (10.5, 0);
\node at (11, 0) {$\beta$};
\node at (0, 5.3) {$\alpha + \beta$};
\node at (0, -0.5) {$0$};
\draw (10, -0.1) -- (10, 0.1);
\node at (10, -0.5) {$2\pi$};
\draw (-0.1, 0.4) -- (2, 0.4) -- (2, 0.9) -- (3, 0.9) -- (3, 1.5) -- (5, 1.5) -- (5, 2.5) -- (6, 2.5) -- (6, 2.8) -- (7, 2.8) -- (7, 3.5) -- (8.5, 3.5) -- (8.5, 4.2) -- (10, 4.2);
\draw (2, -0.1) -- (2, 0.1);
\node at (2, -0.5) {$v_1$};
\draw (3, -0.1) -- (3, 0.1);
\node at (3, -0.5) {$v_2$};
\draw (5, -0.1) -- (5, 0.1);
\node at (5, -0.5) {$v_3$};
\draw (6, -0.1) -- (6, 0.1);
\draw (7, -0.1) -- (7, 0.1);
\draw (8.5, -0.1) -- (8.5, 0.1);
\node at (8.5, -0.5) {$v_q$};
\node at (-0.5, 0.4) {$\alpha_0$};
\draw (-0.1, 0.9) -- (0.1, 0.9);
\node at (-1, 0.9) {$\alpha_0 + \varepsilon_1$};
\draw (-0.1, 1.5) -- (0.1, 1.5);
\node at (-1.6, 1.5) {$\alpha_0 + \varepsilon_1 + \varepsilon_2$};
\draw (-0.1, 2.5) -- (0.1, 2.5);
\node at (-2.2, 2.5) {$\alpha_0 + \varepsilon_1 + \varepsilon_2+\varepsilon_3$};
\draw (-0.1, 2.8) -- (0.1, 2.8);
\draw (-0.1, 3.5) -- (0.1, 3.5);
\draw (-0.1, 4.2) -- (0.1, 4.2);
\node at (-1, 4.2) {$\alpha_0 + 2\pi$};
\end{tikzpicture}
\caption{Evolution of $\alpha + \beta$ with $\beta$ increasing clockwise in $[0, 2\pi)$.}
\label{fig:graph of alpha + beta}
\end{figure}
Recall that $\alpha_0$ is the value of $\alpha$ when $\beta = 0$. After a whole revolution $\alpha + \beta = \alpha_0 + 2\pi$. So there are two vertices $v_{\pi}$ and $v_{2\pi}$ where $\alpha + \beta$ first crosses $\pi$ and then $2\pi$.
In the eventuality that $\alpha + \beta = \pi$ or $2\pi$ on a face of $\partial Y$, we define $v_\pi$ or $v_{2\pi}$ as the vertex preceding the face.
\end{proof}
\begin{lemma}\label{lemma: x_N^*(d) and x_M^*(d) cst before v_pi}
During the crossing of vertices before $v_\pi$ as $\beta$ increases, $x_N^*(d) = x_2$ and $x_M^*(d) = x_1$. They are constant, different and both belong to $\partial X$.
\end{lemma}
\begin{proof}
We study the crossing of a vertex $v$ of angle $\varepsilon$ between the faces $F_1$ and $F_2$ of $\partial Y$.
For each vertex $v$ we introduce $x_v$ the vector collinear with $X$, going from $v$ to the ray directed by $d$, as illustrated on Figure~\ref{fig: x_v} and we say that the crossing of $v$ is ongoing as long as $\|x_v\| < \max\{\|x_1\|, \|x_2\|\}$. We also define $\delta_v := \|v + x_v\| - D$.
\begin{figure}[htbp!]
\centering
\begin{tikzpicture}[scale = 0.8]
\draw (-6, 0) -- (0, 0) -- (5, -2.1);
\draw[dashed] (0.1, 0) -- (5,0);
\node at (-5.5, 0.3) {$F_1$};
\node at (5, -1.8) {$F_2$};
\node at (0, -0.3) {$v$};
\draw[->] (4,0) arc (0:-22:4);
\node at (4.1, -0.8) {$\varepsilon$};
\draw (-3.45, -2) -- (-2.4, -1);
\draw (-0.6, 0.6) -- (0.65, 1.8);
\node at (-2.8, -1.6) {$d$};
\draw[red] (-0.5, -2) -- (2, -0.8);
\node at (1.1, -1.6) {\textcolor{red}{$y_N^*$}};
\draw[->, red] (2, -0.8) -- (0, 1.2);
\node at (1.2, 0.5) {\textcolor{red}{$x_N^*$}};
\draw[red, dotted] (-1.3, 0) -- (-2.1, 0.8);
\draw[red, dotted] (0, 1.2) -- (-0.8, 2);
\draw[red, <->] (-2.1, 0.8) -- (-0.8, 2);
\node at (-1.6, 1.7) {\textcolor{red}{$\delta_N$}};
\draw[blue] (-4.9, -2) -- (-3.4, 0);
\node at (-4.7, -1) {\textcolor{blue}{$y_M^*$}};
\draw[->, blue] (-3.4, 0) -- (-2.4, -1);
\node at (-3.1, -0.7) {\textcolor{blue}{$x_M^*$}};
\draw[blue] (-2.4, -1) -- (-1.3, 0);
\node at (-1.5, -0.7) {\textcolor{blue}{$\delta_M$}};
\draw[->, green] (0, 0) -- (-0.6, 0.6);
\node at (-0.1, 0.4) {\textcolor{green}{$x_v$}};
\draw[green] (-1.3, 0) -- (-0.6, 0.6);
\node at (-1.1, 0.5) {\textcolor{green}{$\delta_v$}};
\draw[<-] (-2.2, -0.8) arc (60:114:0.5);
\node at (-2.5, -0.5) {$\beta$};
\draw[<-] (-1.9, 0) arc (180:230:0.5);
\node at (-2.1, -0.3) {$\alpha$};
\end{tikzpicture}
\caption{Illustration of $x_v$ during the crossing of a vertex $v$, with $y_N^*$ leading.}
\label{fig: x_v}
\end{figure}
Before starting the crossing of $v_\pi$ we have $\alpha + \beta \in (\alpha_0, \pi)$. This situation is depicted on Figure~\ref{fig: x_N^*(d) = -x_M^*(d)}, where $y_N^*$ is leading and outside, so $y_N^*$ reaches the vertex before $y_M^*$ and $d$. The length of $x_N^*(d)$ can vary to maximize $\delta_N$, so $y_N^*$ could still intersect $F_1$, even if the crossing is ongoing. We have seen in Lemma~\ref{lemma: x_N^*(d) and x_M^*(d) cst on faces} that if $y_N^*$ is still on $F_1$, then it must be the furthest possible to maximize $\delta_N$; in that case $y_N^* = v$. Otherwise, $y_N^*$ intersects $F_2$. We want to establish a criterion to distinguish these two possible scenarios.
We first consider the scenario where $y_N^* = v$ and $x_N^*(d) = x_v$. We take $y \in F_2 \backslash \{v\}$ such that $x_2 + y \in \mathbb{R}^+d$ as represented on Figure~\ref{fig: y_N^* in v} and we define $\delta := \|x_2 + y\| - D$.
\begin{figure}[htbp!]
\centering
\begin{tikzpicture}[scale = 0.9]
\draw (-5, 0) -- (1.5, 0) -- (3.6, -2);
\draw[dashed] (1.6, 0) -- (3.5,0);
\node at (-4.5, 0.3) {$F_1$};
\node at (3.5, -1.5) {$F_2$};
\draw[->] (2.4,0) arc (0:-28:1.5);
\node at (2.6, -0.4) {$\varepsilon$};
\draw (-2.05, -2) -- (-0.9, -0.7);
\draw (0, 0.35) -- (0.55, 1);
\node at (-1.4, -1.5) {$d$};
\draw[red] (0.9, -2) -- (2.5, -1);
\node at (1.9, -1.6) {\textcolor{red}{$y$}};
\draw[->, red] (2.5, -1) -- (0, 0.35);
\node at (1.2, -0.6) {\textcolor{red}{$x_2$}};
\draw[red] (-0.3, 0) -- (0, 0.35);
\node at (-0.25, 0.3) {\textcolor{red}{$\delta$}};
\draw[green] (-0.7, -2) -- (1.5, 0);
\node at (0.4, -1.4) {\textcolor{green}{$y_N^*$}};
\draw[green, ->] (1.5, 0) -- (0.25, 0.675);
\node at (1.5, 0.5) {\textcolor{green}{$x_v = x_N^*$}};
\draw[green, dotted] (-0.3, 0) -- (-1, 0.4);
\draw[green, dotted] (0.25, 0.675) -- (-0.45, 1.075);
\draw[green, <->] (-1, 0.4) -- (-0.45, 1.075);
\node at (-1.4, 0.9) {\textcolor{green}{$\delta_N = \delta_v$}};
\draw[blue] (-3.5, -2) -- (-2.15, 0);
\node at (-3.2, -1) {\textcolor{blue}{$y_M^*$}};
\draw[->, blue] (-2.15, 0) -- (-0.9, -0.7);
\node at (-1.7, -0.5) {\textcolor{blue}{$x_M^*$}};
\draw[blue] (-0.9, -0.7) -- (-0.3, 0);
\node at (-0.2, -0.4) {\textcolor{blue}{$\delta_M$}};
\draw[<-] (-0.75, -0.55) arc (60:135:0.3);
\node at (-1.1, -0.35) {$\beta$};
\draw[<-] (-0.6, 0) arc (180:230:0.3);
\node at (-0.8, -0.2) {$\alpha$};
\end{tikzpicture}
\caption{Illustration of the crossing scenario where $y_N^* = v$.}
\label{fig: y_N^* in v}
\end{figure}
Since $\delta_N$ must be maximized by the choice of $y_N^*$ and $y \neq y_N^*$, we have $\delta < \delta_N = \delta_v$. But $\|x_2\| > \|x_v\|$, so the line segment corresponding to $x_2$ crosses the interior of $Y$. Focusing on this part of Figure~\ref{fig: y_N^* in v} we obtain Figure~\ref{fig: zoom x inside}.
\begin{figure}[htbp!]
\centering
\begin{tikzpicture}[scale = 0.8]
\draw (-5, 0) -- (1.5, 0) -- (3.6, -2);
\draw[dashed] (1.7, 0) -- (3.5,0);
\node at (0.3, 0.25) {$F_1$};
\node at (3, -1) {$F_2$};
\node at (1.5, 0.2) {$v$};
\draw[->] (2.4,0) arc (0:-28:1.5);
\node at (2.6, -0.4) {$\varepsilon$};
\draw (-5, -1) -- (-4, 0);
\draw (-3, 1) -- (-2.75, 1.25);
\node at (-4.5, -0.8) {$d$};
\draw[red] (-4, 0) -- (-3, 1);
\node at (-3.7, 0.6) {\textcolor{red}{$\delta$}};
\draw[red, ->] (3.3, -1.7) -- (-3, 1);
\node at (0, -0.6) {\textcolor{red}{$x_2$}};
\draw[red] (2.7, -2) -- (3.3, -1.7);
\draw[<-] (-3.2, 0.8) arc (230:320:0.4);
\node at (-2.8, 0.4) {$\beta$};
\draw[<-] (-3.6, 0) arc (0:47:0.4);
\node at (-3.4, 0.2) {$\alpha$};
\end{tikzpicture}
\caption{Illustration of the line segment corresponding to $x_2$ crossing the interior of $Y$ in Figure~\ref{fig: y_N^* in v}.}
\label{fig: zoom x inside}
\end{figure}
Two of the angles of the triangle delimited by $F_1$, $F_2$ and $x_2$ are $\pi - \alpha - \beta$ and $\pi - \varepsilon$. Therefore, their sum is in $(0, \pi)$ and thus $\alpha + \beta + \varepsilon > \pi$. Since we assumed that $\alpha + \beta \in (\alpha_0, \pi)$, the vertex $v$ must in fact be $v_\pi$ for this scenario to happen.
\vspace{2mm}
Thus, the crossing of a vertex preceding $v_\pi$ follows the second scenario as depicted on Figure~\ref{fig: x_v} with $y_N^* \in F_2$. We study Figure~\ref{fig: zoom x_v x_N^*} which is a more detailed view of Figure~\ref{fig: x_v}, with $\delta_0$ depending solely on $d$ and $\varepsilon$.
\begin{figure}[htbp!]
\centering
\begin{tikzpicture}[scale = 0.7]
\draw (-5, 0) -- (0, 0) -- (2.5, -0.5);
\draw[dashed] (-2.6, 0.55) -- (0, 0) -- (2.5,0);
\node at (-4.5, 0.3) {$F_1$};
\node at (2.9, -0.6) {$F_2$};
\node at (0, -0.3) {$v$};
\draw[->] (2,0) arc (0:-11:2);
\node at (2.3, -0.2) {$\varepsilon$};
\draw (-4, -1) -- (-3, 0);
\node at (-3.3, -0.5) {$d$};
\draw[blue] (-3, 0) -- (-2.5, 0.5);
\node at (-3.1, 0.4) {\textcolor{blue}{$\delta_0$}};
\draw[green] (-2.5, 0.5) -- (-1.5, 1.5);
\node at (-2.8, 1.2) {\textcolor{green}{$\delta_v - \delta_0$}};
\draw[red] (-1.5, 1.5) -- (-1, 2);
\node at (-2.1, 2) {\textcolor{red}{$\delta_N - \delta_v$}};
\draw (-1, 2) -- (-0.75, 2.25);
\draw[green, ->] (0,0) -- (-1.5, 1.5);
\node at (-1.1, 0.7) {\textcolor{green}{$x_v$}};
\draw[red, ->] (1.3, -0.27) -- (-1, 2);
\node at (0.5, 1.1) {\textcolor{red}{$x_N^*$}};
\end{tikzpicture}
\caption{Illustration of $x_v$ and $x_N^*$ in Figure~\ref{fig: x_v}.}
\label{fig: zoom x_v x_N^*}
\end{figure}
Since $x_v$ and $x_N^*(d)$ are collinear, we can apply Thales's theorem in Figure~\ref{fig: zoom x_v x_N^*} and obtain that $\delta_N - \delta_0 = (\delta_v - \delta_0) \frac{\|x_N^*(d)\|}{\|x_v(d)\|}$. Then, $\delta_N$ is maximized when $\|x_N^*(d)\|$ is maximal, so $x_N^*(d) = x_2$ during the crossing.
We know from Theorem~\ref{thm:minimum on the vertices} that $x_M^*(d) \in \partial X$ for all $d \in \mathbb{S}$.
Then, as in Lemma~\ref{lemma: x_N^*(d) and x_M^*(d) cst on faces}, $x_N^*$ and $x_M^*$ are constant and different since $x_N^*$ is continuous in $d$, so $x_M^*(d) = x_1$.
\end{proof}
\vspace{3mm}
\begin{lemma}\label{lemma: crossing before v_pi}
During the crossing of vertices before $v_\pi$ as $\beta$ increases, $r_{X,Y}(d)$ decreases.
\end{lemma}
\begin{proof}
The leading vector $y_N^*$ is outside and crosses a vertex $v$ between the faces $F_1$ and $F_2$ of $\partial Y$ while $\beta$ increases. We separate the vertex crossing into two parts: when only $y_N^* \in F_2$, and when both $d \in F_2$ and $y_N^* \in F_2$.
Let $\varepsilon > 0$ be the external angle of the vertex as shown on Figure~\ref{fig:y_N^* crossing a vertex}.
\begin{figure}[htbp!]
\centering
\begin{tikzpicture}[scale = 0.8]
\draw (-5, 0) -- (0.2,0) -- (5, -2.1);
\node at (0.3, 0.15) {$v$};
\draw[dashed] (0.1, 0) -- (5,0);
\node at (-4.5, 0.2) {$F_1$};
\node at (5, -1.8) {$F_2$};
\draw[->] (4,0) arc (0:-22:4);
\node at (4.2, -0.8) {$\varepsilon$};
\draw (-1.98, -2) -- (-1.15, -1);
\draw (1.3, 2) -- (1.45, 2.2);
\node at (-1.4, -1.6) {$d$};
\draw[red] (0.75, -2) -- (2.5, -1);
\node at (1, -1.4) {\textcolor{red}{$y_N^*$}};
\draw[->, red] (2.5, -1) -- (0.5, 1);
\node at (1.55, 0.4) {\textcolor{red}{$x_N^*$}};
\draw[blue] (-3.2, -2) -- (-2.2, 0);
\node at (-3.2, -1) {\textcolor{blue}{$y_M^*$}};
\draw[->, blue] (-2.2, 0) -- (-1.15, -1);
\node at (-1.9, -0.65) {\textcolor{blue}{$x_M^*$}};
\draw[->, red] (3.3, 0) -- (1.3, 2);
\node at (2.65, 1.1) {\textcolor{red}{$x_2$}};
\draw[green] (2.5, -1) -- (3.3, 0);
\node at (3.1, -0.6) {\textcolor{green}{$l$}};
\draw[green] (0.5, 1) -- (1.3, 2);
\node at (0.8, 1.8) {\textcolor{green}{$l$}};
\draw[<-] (2.67, -0.75) arc (60:114:0.5);
\node at (2.2, -0.5) {$\beta$};
\draw[<-] (-0.8, 0) arc (180:230:0.5);
\node at (-1, -0.2) {$\alpha$};
\draw[blue] (-1.15, -1) -- (-0.3, 0);
\node at (-0.3, -0.5) {\textcolor{blue}{$\delta_M$}};
\draw[red] (-0.3, 0) -- (0.5, 1);
\node at (-0.2, 0.6) {\textcolor{red}{$\delta_N$}};
\draw[<-] (2.8, 0) arc (180:230:0.5);
\node at (2.6, -0.2) {$\alpha$};
\draw[dotted] (1.5, 0) -- (1.5, -1.9);
\draw[dotted] (3.3, 0) -- (3.3, -1.9);
\draw[<->] (1.5, -1.9) -- (3.3, -1.9);
\node at (2.4, -1.7) {$m$};
\end{tikzpicture}
\caption{Part I of the crossing of vertex $v$ by $y_N^*$ leading and outside as $\beta$ increases.}
\label{fig:y_N^* crossing a vertex}
\end{figure}
According to Lemma~\ref{lemma:r(d) cst on faces}, $r_{X,Y}$ is constant on faces of $\partial Y$ and we call $r_{F_1}$ its value on the face $F_1$.
If $F_1$ was prolonged past $v$ with a straight line (dashed line on Figure~\ref{fig:y_N^* crossing a vertex}), then we would have $y_N^*(d) \in F_1$ and $r_{X,Y}(d) = r_{F_1}$.
But $y_N^*(d) \in F_2$, as proven in Lemma~\ref{lemma: x_N^*(d) and x_M^*(d) cst before v_pi}, because the crossing occurs before $v_\pi$. We call $l$ the resulting difference in $\delta_N$, as illustrated on Figure~\ref{fig:y_N^* crossing a vertex}. Notice that the two green segments of length $l$ in Figure~\ref{fig:y_N^* crossing a vertex} are parallel. We parametrize the position of $y_N^*$ on $F_2$ with the length $m$ as defined on Figure~\ref{fig:y_N^* crossing a vertex}. When $y_N^* = v$, $m = 0$, and $m$ increases with $\beta$. Using the sine law we obtain
\begin{equation}\label{eq:loss crossing part 1}
\frac{m}{\sin \beta} = \frac{l}{\sin (\pi - \alpha - \beta)} = \frac{l}{\sin(\alpha + \beta)}.
\end{equation}
Then,
\begin{equation}\label{eq:r(d) r_F_1}
r_{X,Y}(d) = \frac{D + \delta_N}{D - \delta_M} = \frac{D + \delta_N + l}{D - \delta_M} - \frac{l}{D - \delta_M} = r_{F_1} - \frac{m \sin (\alpha + \beta)}{(D - \delta_M) \sin(\beta)}.
\end{equation}
By definition the length $m$ is positive. Since $-x_M^* \in Y^\circ$ but $y_M^* \in \partial Y$, we have $D - \delta_M = \|y_M^* + x_M^* \| > 0$. Before $v_\pi$ we have $\alpha + \beta \in (\alpha_0, \pi)$. In that case $\sin(\alpha + \beta) > 0$ and $\sin(\beta) > 0$.
Therefore, the term subtracted from $r_{F_1}$ is positive, i.e., $r_{X,Y}(d) < r_{F_1}$.
\vspace{2mm}
We can now tackle the second part of the crossing, when $y_N^*$ and $d$ both have crossed the vertex as illustrated on Figure~\ref{fig:y_N^* and d crossing a vertex}.
\begin{figure}[htbp!]
\centering
\begin{tikzpicture}[scale = 0.8]
\draw (-5, 0) -- (0,0) -- (5, -2);
\node at (0.05, 0.2) {$v$};
\draw[dashed] (-4, 1.6) -- (0,0) -- (5,0);
\node at (-5.4, 0) {$F_1$};
\node at (5.3, -2) {$F_2$};
\draw[->] (1.5,0) arc (0:-21:1.5);
\node at (1.7, -0.2) {$\varepsilon$};
\draw (-2, -2.5) -- (-1.35, -1.95);
\draw (1.7, 0.9) -- (2.9, 2);
\node at (2.2, 1.8) {$d$};
\draw[red] (0.8, -2.5) -- (4.5, -1.8);
\node at (2.3, -1.9) {\textcolor{red}{$y_N^*$}};
\draw[->, red] (4.5, -1.8) -- (1.7, 0.9);
\node at (2.7, -0.5) {\textcolor{red}{$x_N^*$}};
\draw[blue] (-4.25, -2.5) -- (-3.35, 0);
\node at (-4.3, -1.5) {\textcolor{blue}{$y_M^*$}};
\draw[->, blue] (-3.35, 0) -- (-1.35, -2);
\node at (-2.6, -1.2) {\textcolor{blue}{$x_M^*$}};
\draw[->, blue] (-2.4, 0.95) -- (-0.4, -1.05);
\draw[green] (-3.35, 0) -- (-2.4, 0.95);
\node at (-3, 0.7) {\textcolor{green}{$l$}};
\draw[green] (-1.35, -1.95) -- (-0.4, -1.05);
\node at (-0.7, -1.6) {\textcolor{green}{$l$}};
\draw[<-] (-2.9, 0) arc (0:50:0.4);
\node at (-2.4, 0.2) {$\alpha - \varepsilon$};
\draw[->] (-2.15, 0.7) arc (-50:-130:0.4);
\node at (-2.3, 1.3) {$\beta$};
\draw[<-] (-1.1, -1.7) arc (50:130:0.4);
\node at (-1.5, -1.4) {$\beta$};
\draw[<-] (0.16, -0.1) arc (140:230:0.3);
\node at (-0.15, -0.3) {$\alpha$};
\draw[blue] (0.5, -0.25) -- (-0.4, -1.05);
\node at (0.5, -0.8) {\textcolor{blue}{$\delta_M - l$}};
\draw[red] (0.5, -0.25) -- (1.7, 0.9);
\node at (1, 0.6) {\textcolor{red}{$\delta_N$}};
\draw[dotted] (-3.35, 0) -- (-3.35, 1.8);
\draw[dotted] (-1.5, 0) -- (-1.5, 1.8);
\draw[<->] (-3.35, 1.8) -- (-1.5, 1.8);
\node at (-2.4, 2) {$m$};
\end{tikzpicture}
\caption{Part II of the crossing of vertex $v$ by $y_N^*$ leading and outside as $\beta$ increases.}
\label{fig:y_N^* and d crossing a vertex}
\end{figure}
If $F_2$ was prolonged with a straight line before $v$ and $y_M^* \in F_2$, then we would have $r_{X,Y}(d) = r_{F_2}$, the value of $r_{X,Y}$ on $F_2$. But that is not the case: $y_M^*(d) \in F_1$, and the resulting difference in $\delta_M$ is called $l$. Using the sine law in Figure~\ref{fig:y_N^* and d crossing a vertex}, we can relate $l$ to $m$:
\begin{equation}\label{eq:gain crossing part 2}
\frac{m}{\sin \beta} = \frac{l}{\sin(\pi - \beta - \alpha + \varepsilon)} = \frac{l}{\sin(\alpha + \beta - \varepsilon)}.
\end{equation}
We have $\alpha + \beta \in (\alpha_0, \pi)$, so $\sin(\beta) > 0$.
If $\alpha$ was still measured between $d$ and $F_1$, then its value would be $\alpha_{F_1} = \alpha - \varepsilon$. Since we are before the crossing of $v_\pi$, $\alpha_{F_1} + \beta \in (\alpha_0, \pi)$, i.e., $\alpha + \beta - \varepsilon \in (\alpha_0, \pi)$.
This yields $\sin(\alpha + \beta - \varepsilon) > 0$, which makes $l > 0$, because the length $m$ is positive by definition.
Then,
\begin{equation}\label{eq:r(d) r_F_2}
r_{F_2} = \frac{D + \delta_N}{D - (\delta_M - l)} = \frac{D + \delta_N}{D - \delta_M + l} < \frac{D + \delta_N}{D - \delta_M} = r_{X,Y}(d).
\end{equation}
Thus, the ratio $r_{X,Y}$ decreases during the crossing of a vertex before $v_\pi$.
\end{proof}
\vspace{3mm}
\begin{lemma}\label{lemma: crossing v_pi}
During the crossing of $v_\pi$, the ratio $r_{X,Y}(d)$ reaches a local minimum.
\end{lemma}
\begin{proof}
Recall that before the crossing, $x_N^*(d) = x_2$ and $x_M^*(d) = x_1$.
During the crossing of $v_\pi$, i.e., when $\| x_{v_\pi} \| < \max \{ \|x_1\|, \|x_2\| \}$, we have $\alpha + \beta \leq \pi$ but $\alpha + \beta + \varepsilon > \pi$. The situation is illustrated on Figure~\ref{fig: y_N^* crossing v_pi}. We showed in Lemma~\ref{lemma: x_N^*(d) and x_M^*(d) cst before v_pi} that $y_N^* = v_\pi$ and $x_N^*(d) = x_{v_\pi}$.
\begin{figure}[htbp!]
\centering
\begin{tikzpicture}[scale = 0.8]
\draw (-5, 0) -- (1.5, 0) -- (3.6, -2);
\draw[dashed] (1.7, 0) -- (3.5,0);
\node at (-4.5, 0.3) {$F_1$};
\node at (3.6, -1.5) {$F_2$};
\draw[->] (2.4,0) arc (0:-28:1.5);
\node at (2.6, -0.4) {$\varepsilon$};
\draw (-2.05, -2) -- (-1.5, -1.35);
\draw (0.75, 1.25) -- (1, 1.5);
\node at (-1.6, -1.7) {$d$};
\draw[red] (-0.7, -2) -- (1.5, 0);
\node at (0.8, -1.1) {\textcolor{red}{$y_N^*$}};
\draw[red, ->] (1.5, 0) -- (0.25, 0.675);
\node at (1.1, 0.5) {\textcolor{red}{$x_{v_\pi}$}};
\draw[red] (-0.3, 0) -- (0.25, 0.675);
\node at (-1, 0.5) {\textcolor{red}{$\delta_N = \delta_{v_\pi}$}};
\draw[blue] (-4.5, -2) -- (-4, 0);
\node at (-4.6, -1) {\textcolor{blue}{$y_M^*$}};
\draw[->, blue] (-4, 0) -- (-1.5, -1.35);
\node at (-3, -0.9) {\textcolor{blue}{$x_M^*$}};
\draw[blue] (-1.5, -1.35) -- (-0.3, 0);
\node at (-0.5, -0.8) {\textcolor{blue}{$\delta_M$}};
\draw[<-] (-1.25, -1.1) arc (60:130:0.5);
\node at (-1.7, -0.8) {$\beta$};
\draw[<-] (-0.8, 0) arc (180:230:0.5);
\node at (-1, -0.2) {$\alpha$};
\draw[->, green] (3, 0) -- (0.75, 1.25);
\node at (2, 0.8) {\textcolor{green}{$x_2$}};
\draw[green] (0.25, 0.675) -- (0.75, 1.25);
\node at (0.45, 1.2) {\textcolor{green}{$l$}};
\end{tikzpicture}
\caption{Crossing of $v_\pi$, with $y_N^* = v_\pi$.}
\label{fig: y_N^* crossing v_pi}
\end{figure}
If $F_1$ was prolonged with a straight line (dashed line of Figure~\ref{fig: y_N^* crossing v_pi}), we would have $y_N^* \neq v_\pi$, $x_N^*(d) = x_2$ and the ratio would be $r_{F_1} = \frac{D + \delta_{v_\pi} + l}{D - \delta_M}$, which is the value of $r_{X,Y}$ on $F_1$. Since $d$ has not yet crossed $v_\pi$, $\alpha + \beta < \pi$ and thus \eqref{eq:loss crossing part 1} and \eqref{eq:r(d) r_F_1} still hold, leading to $r_{X,Y}(d) < r_{F_1}$.
\vspace{2mm}
Once $d$ has crossed $v_\pi$, we still have $y_N^* = v_\pi$ to maximize $\delta_N$. Then, the equality $x_N^*(d) = x_{v_\pi}$ holds during the whole crossing, i.e., as $x_{v_\pi}$ goes from $x_2$ to $x_1$. The second part of the crossing is illustrated on Figure~\ref{fig: d passed v_pi}.
\begin{figure}[htbp!]
\centering
\begin{tikzpicture}[scale = 0.9]
\draw (-5, -1.15) -- (0, 0) -- (3.4, -2.65);
\draw[dashed] (0, 0) -- (-1.4, 1);
\node at (-1.8, -0.2) {$F_1$};
\node at (1.9, -1.2) {$F_2$};
\draw (-2.7, -2.65) -- (-0.95, -1.5);
\draw (1.6, 0.25) -- (2, 0.5);
\node at (2.1, 0.4) {$d$};
\draw[blue, ->] (-3.75, -0.85) -- (-0.95, -1.5);
\node at (-2.4, -0.9) {\textcolor{blue}{$x_M^* = x_1$}};
\draw[blue] (-4.5, -2.5) -- (-3.75, -0.85);
\node at (-3.8, -1.8) {\textcolor{blue}{$y_M^*$}};
\draw[blue] (-0.95, -1.5) -- (0.6, -0.45);
\node at (0.2, -1.1) {\textcolor{blue}{$\delta_M$}};
\draw[blue, ->] (3.2, -2.5) -- (-0.95, -1.5);
\node at (1, -1.8) {\textcolor{blue}{$x_2$}};
\draw[blue] (2.2, -2.65) -- (3.2, -2.5);
\draw[red, ->] (0, 0) -- (0.9, -0.22);
\node at (0.55, 0) {\textcolor{red}{$x_{v_\pi}$}};
\draw[red] (-3.4, -2.65) -- (0, 0);
\node at (-0.5, -0.65) {\textcolor{red}{$y_N^*$}};
\draw[red] (0.6, -0.45) -- (0.9, -0.22);
\node at (1, -0.5) {\textcolor{red}{$\delta_N$}};
\draw[green, ->] (-1.2, 0.9) -- (1.6, 0.25);
\node at (0.4, 0.7) {\textcolor{green}{$x_1$}};
\draw[green] (0.9, -0.22) -- (1.6, 0.25);
\node at (1.4, -0.15) {\textcolor{green}{$l$}};
\end{tikzpicture}
\caption{Illustration of the endpoint of $y_M^*$ switching from $F_1$ to $F_2$ during the crossing of $v_\pi$.}
\label{fig: d passed v_pi}
\end{figure}
Assume that during the entire crossing of $v_\pi$, $x_M^*(d) = x_1$. Then, at the end of the crossing we would have $y_M^* = v_\pi$ and $x_M^*(d) = x_{v_\pi} = x_N^*(d)$, which contradicts the definitions of $x_M^*(d)$ and $x_N^*(d)$: they must be different. Thus, $x_M^*(d)$ does not remain equal to $x_1$ during the entire crossing. Since $x_M^* \in \big\{x_1, x_2\big\}$, at some point $x_M^*$ switches to $x_2$ as $y_M^*$ switches from $F_1$ to $F_2$. This switching point is illustrated on Figure~\ref{fig: d passed v_pi}, and $y_M^*$ becomes the leading vector.
After this switch, $y_M^* \in F_2$ and $x_M^*(d) = x_2$. If $F_2$ was prolonged with the dashed line on Figure~\ref{fig: d passed v_pi}, we would have $x_N^* = x_1$ instead of $x_{v_\pi}$ with a gain of $l$ for $\delta_N$ making the ratio equal to $r_{F_2} = \frac{D + \delta_N + l}{D - \delta_M}$, value of $r_{X,Y}$ on $F_2$. But $x_N^* = x_{v_\pi}$ and $l > 0$, thus $r_{F_2} > \frac{D + \delta_N}{D - \delta_M} = r_{X,Y}(d)$. Therefore, $r_{X,Y}$ reaches a local minimum during the crossing of $v_\pi$.
\end{proof}
\vspace{3mm}
\begin{lemma}\label{lemma: x_N^*(d) and x_M^*(d) cst after v_pi}
During the crossing of vertices after $v_\pi$ as $\beta$ increases until $\pi$, $x_N^*(d) = x_1$ and $x_M^*(d) = x_2$. They are constant, different and both belong to $\partial X$.
\end{lemma}
\begin{proof}
After the crossing of $v_\pi$, $\alpha + \beta \in (\pi, \alpha_0 + \pi)$ and $y_M^*$ is leading and inside as established in Lemma~\ref{lemma: crossing v_pi}. Thus, $y_M^*$ is the first to reach vertex $v$. Since $x_M^* \in \{x_1, x_2\}$ we cannot have $x_M^* = x_v$ during the entire crossing because $x_v$ is a continuous function of $\beta$. Thus $y_M^*$ passes $v$ and belongs to $F_2$.
In Lemma~\ref{lemma: continuity of x_N} we showed that $x_N^*$ is continuous in $d$. Thus, $x_N^*(d)$ cannot switch like $x_M^*(d)$ did around $v_\pi$ to take the lead. Instead, $x_N^*(d)$ is trailing as illustrated on Figure~\ref{fig: crossing after v_pi}.
\begin{figure}[htbp!]
\centering
\begin{tikzpicture}[scale = 0.8]
\draw[->] (0, 0) -- (-3.2, 0);
\node at (-2, 0.2) {$x_2$};
\draw[->] (0, 0) -- (2, 0);
\node at (2.3, 0.1) {$x_1$};
\draw (-3, 4) -- (3, 3) -- (6, 0);
\node at (-2, 3.5) {$F_1$};
\node at (5.5, 1) {$F_2$};
\draw[dashed] (3, 3) -- (6, 2.5);
\draw (0, 0) -- (1.2, 1.6);
\draw (2.5, 3.4) -- (2.95, 4);
\node at (3.1, 3.9) {$d$};
\draw[red] (0, 0) -- (0.5, 3.4);
\node at (-0.1, 2) {\textcolor{red}{$y_N^*$}};
\draw[red, ->] (0.5, 3.4) -- (2.5, 3.4);
\node at (1.4, 3.65) {\textcolor{red}{$x_1 = x_N^*$}};
\draw[red] (2.5, 3.4) -- (2.3, 3.1);
\node at (2.9, 3.3) {\textcolor{red}{$\delta_N$}};
\draw[blue] (0, 0) -- (4.4, 1.6);
\node at (1.9, 1) {\textcolor{blue}{$y_M^*$}};
\draw[blue, ->] (4.4, 1.6) -- (1.2, 1.6);
\node at (2.8, 1.85) {\textcolor{blue}{$x_M^* = x_2$}};
\draw[blue] (1.2, 1.6) -- (2.3, 3.1);
\node at (1.4, 2.4) {\textcolor{blue}{$\delta_M$}};
\draw[->] (-0.5, 0) arc (180:54:0.5);
\node at (-0.4, 0.6) {$\beta$};
\end{tikzpicture}
\caption{Crossing of a vertex $v$ after $v_\pi$.}
\label{fig: crossing after v_pi}
\end{figure}
Since $y_N^* \in F_1$ during the crossing, we can apply Thales's theorem on Figure~\ref{fig: crossing after v_pi} and obtain that for a fixed $d$, $\delta_N$ is proportional to $\|x_N^*(d)\|$. Thus, to maximize $\delta_N$ we have $x_N^*(d) \in \partial X$ and, since $y_N^*$ is trailing, we have $x_N^*(d) = x_1$ during the entire crossing. By the definitions of $x_N^*(d)$ and $x_M^*(d)$, we have $x_N^*(d) \neq x_M^*(d)$. Since both $x_N^*(d)$ and $x_M^*(d)$ belong to $\partial X = \big\{x_1, x_2\big\}$, then $x_M^*(d) = x_2$ during the entire crossing.
\end{proof}
\vspace{3mm}
\begin{lemma}\label{lemma: crossing after v_pi}
During the crossing of vertices after $v_\pi$ as $\beta$ increases until $\pi$, $r_{X,Y}(d)$ increases.
\end{lemma}
\begin{proof}
The leading vector $y_M^*$ is inside and crosses a vertex $v$ between faces $F_1$ and $F_2$ as $\beta$ increases.
We define $\beta' := \pi - \beta$.
Then, reversing the crossing illustrated on Figure~\ref{fig: crossing after v_pi} is exactly the crossing illustrated on Figure~\ref{fig:y_N^* crossing a vertex} with $\beta'$ increasing and $x_1$ and $x_2$ exchanged. According to Lemma~\ref{lemma: crossing before v_pi}, in that reversed crossing $r_{X,Y}$ is decreasing.
Therefore, $r_{X,Y}$ increases during the crossing of vertices after $v_\pi$ as $\beta$ increases until $\pi$.
\end{proof}
\vspace{3mm}
\begin{lemma}\label{lemma: beta > pi}
For $\beta > \pi$, $r_{X,Y}(d)$ decreases until $v_{2\pi}$ where it reaches a local minimum. After $v_{2\pi}$ as $\beta$ increases until $2\pi$, $r_{X,Y}(d)$ increases.
\end{lemma}
\begin{proof}
Let us change the angle convention, so that angles are now positively oriented in the counterclockwise orientation. The vertex that was previously labeled as $v_{2\pi}$ becomes the new $v_\pi$.
Then, we only need to apply Lemmas~\ref{lemma: x_N^*(d) and x_M^*(d) cst before v_pi}, \ref{lemma: crossing before v_pi}, \ref{lemma: crossing v_pi}, \ref{lemma: x_N^*(d) and x_M^*(d) cst after v_pi} and \ref{lemma: crossing after v_pi} to this new configuration to conclude the proof.
\end{proof}
\vspace{3mm}
\begin{lemma}\label{lemma: 0 notin X}
All above results hold even if $0 \notin X$.
\end{lemma}
\begin{proof}
In all the figures we made the implicit assumption that $0 \in X$, so that $x_1$ and $x_2$ were negatively collinear.
Assume now that $x_1$ is positively collinear with $x_2$, with $\|x_2\| > \|x_1\|$.
On Figure~\ref{fig: x_N^*(d) = -x_M^*(d)}, we would now have $y_N^*(d)$ and $y_M^*(d)$ both outside. Then, the definition of $\delta_M$ must be adapted. Let $\delta_M(d) := \|x_M^*(d) + y_M^*(d)\| - D(d)$, so that $r_{X,Y}(d) = \frac{D+ \delta_N}{D + \delta_M}$. Except for this modification, we would still have $x_N^*(d) = x_2$ and $x_M^*(d) = x_1$. Thales's theorem can be used similarly to show that $x_N^*(d) \in \partial X$. Therefore, Lemma~\ref{lemma: x_N^*(d) and x_M^*(d) cst on faces} holds.
In the proof of Lemma~\ref{lemma:r(d) cst on faces} we still have $\delta_N / D$ and $\delta_M / D$ invariant with respect to $d$ on a given face of $\partial Y$, so $r_{X,Y}$ is still constant on faces.
Lemma~\ref{lemma: v_pi and v_2pi} is not affected at all.
The first part of the crossing of a vertex before $v_\pi$ as $\beta$ increases is illustrated by Figure~\ref{fig: y_N^* crossing with 0 not in X}.
\begin{figure}[htbp!]
\centering
\begin{tikzpicture}[scale = 0.9]
\draw[->] (0, 0) -- (-0.5, 0.5);
\node at (-0.5, 0.2) {$x_1$};
\draw[->] (-0.5, 0.5) -- (-1.5, 1.5);
\node at (-1.1, 0.8) {$x_2$};
\draw (-3, 2) -- (2, 2) -- (5, 1.25);
\node at (-2.7, 2.25) {$F_1$};
\node at (4.5, 1.6) {$F_2$};
\draw[dashed] (2.1, 2) -- (5, 2);
\draw[->, blue] (1.8, 2) -- (1.3, 2.5);
\node at (1.85, 2.27) {\textcolor{blue}{$x_M^*$}};
\draw[blue] (0, 0) -- (1.8, 2);
\node at (1.5, 1.3) {\textcolor{blue}{$y_M^*$}};
\draw[blue] (1.05, 2) -- (1.3, 2.5);
\node at (0.9, 2.3) {\textcolor{blue}{$\delta_M$}};
\draw[->, red] (3.2, 1.7) -- (1.7, 3.2);
\node at (1.9, 2.8) {\textcolor{red}{$x_N^*$}};
\draw[red] (0, 0) -- (3.2, 1.7);
\node at (2, 0.8) {\textcolor{red}{$y_N^*$}};
\draw[red] (1.3, 2.5) -- (1.7, 3.2);
\node at (0.8, 2.9) {\textcolor{red}{$\delta_N - \delta_M$}};
\draw[->, green] (3.4, 2) -- (1.9, 3.5);
\node at (2.8, 3) {\textcolor{green}{$x_2$}};
\draw[green] (1.7, 3.2) -- (1.9, 3.5);
\node at (1.6, 3.4) {\textcolor{green}{$l$}};
\draw (0,0) -- (1.05, 2);
\draw (1.9, 3.5) -- (2.2, 4);
\node at (2.2, 3.7) {$d$};
\draw[->] (-0.3, 0.3) arc (135:45:0.3);
\node at (-0.1, 0.6) {$\beta$};
\draw[<-] (0.75, 2) arc (180:240:0.3);
\node at (0.6, 1.8) {$\alpha$};
\end{tikzpicture}
\caption{Part I of the crossing of a vertex before $v_\pi$ with $0 \notin X$.}
\label{fig: y_N^* crossing with 0 not in X}
\end{figure}
For $\delta_M$ to be minimized and $\delta_N$ to be maximized, Thales's theorem again shows that $x_M^* \in \partial X$ and $x_N^* \in \partial X$ during the crossing. We still have $x_N^*(d) = x_2$ and $x_M^*(d) = x_1$, so Lemma~\ref{lemma: x_N^*(d) and x_M^*(d) cst before v_pi} holds.
Following the reasoning in Lemma~\ref{lemma: crossing before v_pi}, we have $l > 0$, which leads to
\begin{equation*}
r_{F_1} = \frac{D + \delta_N + l}{D + \delta_M} > \frac{D + \delta_N}{D + \delta_M} = r_{X,Y}(d).
\end{equation*}
During the second part, both $y_N^*$ and $y_M^*$ belong to $F_2$, while $d \in F_1$. This situation is illustrated on Figure~\ref{fig: y_N^* and y_M^* crossing with 0 not in X}.
\begin{figure}[htbp!]
\centering
\begin{tikzpicture}[scale = 0.9]
\draw[->] (-1, 0) -- (-1.6, 0.8);
\node at (-1.5, 0.2) {$x_1$};
\draw[->] (-1.6, 0.8) -- (-2.5, 1.9);
\node at (-2.3, 1.2) {$x_2$};
\draw (-3, 2) -- (1, 2) -- (4, 0.5);
\node at (-2.7, 2.25) {$F_1$};
\node at (3.7, 0.9) {$F_2$};
\draw[dashed] (1.1, 2) -- (4, 2);
\draw[dashed] (1, 2) -- (-1, 3);
\draw[->, blue] (1.6, 1.7) -- (1, 2.5);
\node at (1.6, 2.2) {\textcolor{blue}{$x_M^*$}};
\draw[blue] (-1, 0) -- (1.6, 1.7);
\node at (1, 1) {\textcolor{blue}{$y_M^*$}};
\draw[blue] (0.7, 2.15) -- (1, 2.5);
\node at (0.4, 2.5) {\textcolor{blue}{$\delta_M - l$}};
\draw[->, red] (3, 1) -- (1.5, 3.15);
\node at (2.4, 2.4) {\textcolor{red}{$x_N^*$}};
\draw[red] (-1, 0) -- (3, 1);
\node at (1.5, 0.3) {\textcolor{red}{$y_N^*$}};
\draw[red] (1, 2.5) -- (1.5, 3.15);
\node at (0.7, 3.1) {\textcolor{red}{$\delta_N - \delta_M$}};
\draw[green] (0.6, 2) -- (0.7, 2.15);
\node at (0.5, 2.15) {\textcolor{green}{$l$}};
\draw (-1,0) -- (0.6, 2);
\draw (1.5, 3.15) -- (1.75, 3.5);
\node at (-0.3, 1.3) {$d$};
\end{tikzpicture}
\caption{Part II of the crossing of a vertex before $v_\pi$ with $0 \notin X$.}
\label{fig: y_N^* and y_M^* crossing with 0 not in X}
\end{figure}
We compare the current value of $r_{X,Y}(d)$ with $r_{F_2}$, its value on $F_2$:
\begin{equation*}
r_{F_2} = \frac{D + (\delta_N - l)}{D + (\delta_M - l)} \qquad \text{and} \qquad r_{X,Y}(d) = \frac{D + \delta_N}{D + \delta_M}.
\end{equation*}
Since $l > 0$ and $\delta_N > \delta_M$, a direct computation gives
\begin{equation*}
r_{F_2} - r_{X,Y}(d) = \frac{l(\delta_N - \delta_M)}{\big(D + \delta_M - l\big)\big(D + \delta_M\big)} > 0,
\end{equation*}
so $r_{X,Y}(d) < r_{F_2}$. Therefore, $r_{X,Y}$ is decreasing during the crossing of a vertex before $v_\pi$ as $\beta$ increases, and Lemma~\ref{lemma: crossing before v_pi} holds.
During the crossing of $v_\pi$, $y_N^* = v_\pi$ and $x_N^* = x_{v_\pi}$, with its norm decreasing continuously until $x_N^* = x_1$, while $x_M^*$ switches to $x_2$ in order to minimize $\delta_M$. This is the same process as described in Lemma~\ref{lemma: crossing v_pi}, so $r_{X,Y}$ again reaches a local minimum.
Since all the results established so far still hold, Lemmas~\ref{lemma: x_N^*(d) and x_M^*(d) cst after v_pi}, \ref{lemma: crossing after v_pi} and \ref{lemma: beta > pi}, which rely only on those earlier results, hold as well.
\end{proof}
We have now established all the lemmas directly involved in the proof of the Maximax Minimax Quotient Theorem, but we still have a few claims of continuity to prove.
\section{Continuity of Extrema}\label{sec:continuity}
In Proposition~\ref{prop: r_X,Y well-defined} (iii) we needed the continuity of $\lambda^*$ to prove that it attains a minimum, and in Lemma~\ref{lemma: x_N^*(d) and x_M^*(d) cst on faces} we used the continuity of $x_N^*$ and $y_N^*$.
In this section we therefore prove the continuity of these two maxima functions, relying on the Berge Maximum Theorem \citep{inf_dim_analysis}.
\begin{lemma}\label{lemma:phi continuous}
Let $X$ and $Y$ be two nonempty polytopes in $\mathbb{R}^n$ with $-X \subset Y$. Then, the set-valued function $\varphi : X \times \mathbb{S} \rightrightarrows Y$ defined as $\varphi(x,d) := Y \cap \big\{ \lambda d - x : \lambda \geq 0 \big\}$ satisfies Definition 17.2 of \citep{inf_dim_analysis}.
\end{lemma}
\begin{proof}
We define $\Omega := X \times \mathbb{S}$, so that $\varphi : \Omega \rightrightarrows Y$. On the space $\Omega$ we introduce $\|(x,d)\|_\Omega := \|x\| + \|d\|$. Since $\|\cdot\|$ is the Euclidean norm, $\|\cdot\|_\Omega$ is the restriction to $\Omega$ of a norm on $\mathbb{R}^n \times \mathbb{R}^n$, and it metrizes $\Omega$.
By Definition 17.2 of \citep{inf_dim_analysis}, we need to prove that $\varphi$ is both upper and lower hemicontinuous at all points of $\Omega$.
\vspace{2mm}
First, using Lemma~17.5 of \citep{inf_dim_analysis} we will prove that $\varphi$ is lower hemicontinuous by showing that for an open subset $A$ of $Y$, $\varphi^l(A)$ is open. The lower inverse image of $A$ is defined in \citep{inf_dim_analysis} as
\begin{align*}
\varphi^l(A) &:= \big\{ \omega \in \Omega : \varphi(\omega) \cap A \neq \emptyset \big\} \\
&= \big\{(x,d) \in X \times \mathbb{S} : Y \cap \{ \lambda d - x : \lambda \geq 0 \} \cap A \neq \emptyset\big\} \\
&= \big\{(x,d) \in X \times \mathbb{S} : \{\lambda d - x : \lambda \geq 0\} \cap A \neq \emptyset\big\},
\end{align*}
because $A \subset Y$.
Let $\omega = (x,d) \in \varphi^l(A)$. Then, there exists $\lambda \geq 0$ such that $\lambda d - x \in A$. Since $A$ is open, there exists $\varepsilon > 0$ such that the ball $B_\varepsilon(\lambda d - x) \subset A$. Now let $\omega_1 = (x_1, d_1) \in \Omega$ and denote $\varepsilon_x := \|x_1 - x\|$ and $\varepsilon_d := \|d_1 - d\|$. Then,
\begin{align*}
\|\lambda d_1 - x_1 - (\lambda d - x) \| &= \| \lambda (d_1 - d) - (x_1 - x)\| \leq \lambda \varepsilon_d + \varepsilon_x.
\end{align*}
Since $\lambda \geq 0$ is fixed, we can choose $\varepsilon_d$ and $\varepsilon_x$ positive and small enough so that $\lambda \varepsilon_d + \varepsilon_x \leq \varepsilon$.
Then, we have shown that every $\omega_1 = (x_1, d_1) \in \Omega$ with $\|\omega - \omega_1\|_\Omega \leq \min(\varepsilon_d, \varepsilon_x)$, so that in particular $\|x_1 - x\| \leq \varepsilon_x$ and $\|d_1 - d\| \leq \varepsilon_d$, satisfies $\lambda d_1 - x_1 \in B_\varepsilon(\lambda d - x) \subset A$, i.e., $\omega_1 \in \varphi^l(A)$. Therefore, $\varphi^l(A)$ is open, and so $\varphi$ is lower hemicontinuous.
\vspace{3mm}
To prove the upper hemicontinuity of $\varphi$, we will use Lemma~17.4 of \citep{inf_dim_analysis} and prove that for a closed subset $A$ of $Y$, the lower inverse image of $A$ is closed. Let $\{\omega_k\}$ be a sequence in $\varphi^l(A)$ converging to $\omega = (x,d) \in \Omega$. We want to prove that the limit $\omega \in \varphi^l(A)$.
For $k \geq 0$, we have $\omega_k = (x_k, d_k)$ and define $\Lambda_k := \big\{ \lambda_k \geq 0 : \lambda_k d_k - x_k \in A \big\} \neq \emptyset$. Since $A$ is a closed subset of the compact set $Y$, then $A$ is compact. Thus $\Lambda_k$ has a minimum and a maximum; we denote them by $\lambda_k^{min}$ and $\lambda_k^{max}$ respectively.
Since the sequences $\{d_k\}$ and $\{x_k\}$ converge, they are bounded. The set $A$ is also bounded, thus the sequence $\{\lambda_k^{max}\}$ is bounded. Let $\lambda^{max} := \underset{k\, \geq\, 0}{\sup}\ \lambda_k^{max} > 0$.
For $k \geq 0$, we define segments $S_k := \big\{\lambda d_k - x_k : \lambda \in [0, \lambda^{max}] \big\}$, and $S := \big\{\lambda d - x : \lambda \in [0, \lambda^{max}] \big\}$. These segments are all compact sets.
We also introduce the sequences $a_k := \lambda_k^{min}d_k - x_k \in A \cap S_k$ and $b_k := \lambda_k^{min}d - x \in S$.
Take $\varepsilon > 0$. Since the sequences $\{d_k\}$ and $\{x_k\}$ converge toward $d$ and $x$ respectively, there exists $N \geq 0$ such that for $k \geq N$ we have $\|d_k - d\| \leq \frac{\varepsilon}{2 \lambda^{max}}$ and $\|x_k - x\| \leq \frac{\varepsilon}{2}$. Then, for any $\lambda_k \in [0, \lambda^{max}]$,
\begin{equation*}
\| \lambda_k d_k - x_k - (\lambda_k d - x)\| = \| \lambda_k(d_k - d) - (x_k - x)\| \leq \lambda_k \frac{\varepsilon}{2 \lambda^{max}} + \frac{\varepsilon}{2} \leq \varepsilon.
\end{equation*}
Since $\lambda_k^{min} \in [0, \lambda^{max}]$, we have $\|a_k - b_k\| \xrightarrow[k \rightarrow \infty]{} 0$. We define the distance between the sets $A$ and $S$ as
\begin{equation*}
dist(A,S) := \min \big\{ \|a - s_\lambda\| : a \in A,\ s_\lambda \in S\big\}.
\end{equation*}
The minimum exists because $A$ and $S$ are both compact and the norm is continuous. Since $a_k \in A$ and $b_k \in S$, we have $dist(A, S) \leq \|a_k - b_k\|$ for all $k \geq 0$. Therefore, $dist(A, S) = 0$. So, $A \cap S \neq \emptyset$, leading to $\omega \in \varphi^l(A)$. Then, $\varphi^l(A)$ is closed and so $\varphi$ is upper hemicontinuous.
\end{proof}
\vspace{3mm}
\begin{lemma}\label{lemma: lambda continuous}
Let $X$ and $Y$ be two nonempty polytopes in $\mathbb{R}^n$ with $-X \subset Y$. Then, $\lambda^*(x ,d) := \underset{y\, \in\, Y}{\max} \big\{ \|x + y\| : x + y \in \mathbb{R}^+ d \big\}$ is continuous in $x \in X$ and $d \in \mathbb{S}$.
\end{lemma}
\begin{proof}
According to Proposition~\ref{prop: r_X,Y well-defined} (ii), whose proof does not rely on the current lemma, $\lambda^*$ is well-defined. We introduce the set-valued function $\varphi : X \times \mathbb{S} \rightrightarrows Y$ defined by $\varphi(x,d) := \big\{ y \in Y : x + y \in \mathbb{R}^+ d \big\} = Y \cap \big( \mathbb{R}^+ d - \{x\} \big)$, where $\mathbb{R}^+ d - \{x\} = \big\{ \lambda d - x : \lambda \geq 0\big\}$.
We define the graph of $\varphi$ as $\text{Gr}\, \varphi := \big\{ (x,d,y) \in X \times \mathbb{S} \times Y : y \in \varphi(x,d) \big\}$, and the continuous function $f: \text{Gr}\, \varphi \rightarrow \mathbb{R}^+$ as $f(x,d,y) = \|x + y\|$. The set $X \times \mathbb{S}$ is compact and nonempty. Since $Y$ is compact and $\mathbb{R}^+ d - \{x\}$ is closed, their intersection $\varphi(x,d)$ is compact.
Because $-X \subset Y$, for all $x \in X$ we have $-x \in \varphi(x,d)$, so $\varphi(x,d) \neq \emptyset$. According to Lemma~\ref{lemma:phi continuous}, $\varphi$ satisfies Definition $17.2$ of \citep{inf_dim_analysis}. Then, we can apply the Berge Maximum Theorem \citep{inf_dim_analysis} and conclude that $\lambda^*$ is continuous in $x$ and $d$.
\end{proof}
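For concrete polytopes given by vertex lists, $\lambda^*(x,d)$ can also be evaluated numerically: since $\lambda^*(x,d) = \max\{\lambda \ge 0 : \lambda d - x \in Y\}$ and membership in $Y = \mathrm{conv}(v_1,\ldots,v_m)$ is a linear feasibility condition, the computation is a small linear program. The following Python sketch is only a numerical illustration of the quantity studied in this section, not part of any proof; the function name is ours, and we assume $\|d\| = 1$ so that the optimal $\lambda$ equals $\|x+y\|$.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def lambda_star(x, d, V):
    # max{lam >= 0 : lam*d - x in conv(V)} as an LP in (lam, mu),
    # where lam*d - x = V^T mu, sum(mu) = 1, lam >= 0, mu >= 0.
    m, n = V.shape
    c = np.zeros(m + 1)
    c[0] = -1.0                          # maximize lam = minimize -lam
    A_eq = np.zeros((n + 1, m + 1))
    A_eq[:n, 0] = d                      # lam*d ...
    A_eq[:n, 1:] = -V.T                  # ... - V^T mu = x
    A_eq[n, 1:] = 1.0                    # sum(mu) = 1
    b_eq = np.concatenate([x, [1.0]])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (m + 1))
    assert res.success                   # feasible since -x in Y
    return res.x[0]                      # equals ||x + y|| when ||d|| = 1

# Example: Y is the hexagon of the Illustration section below.
V = np.array([[3, 0], [1, 2], [-1, 2], [-3, 0], [-1, -2], [1, -2]], float)
print(lambda_star(np.array([0.0, 1.0]), np.array([1.0, 0.0]), V))  # 2.0
\end{verbatim}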
\vspace{3mm}
\begin{lemma}\label{lemma: continuity of x_N}
Let $X$ and $Y$ be two nonempty polytopes in $\mathbb{R}^n$ with $-X \subset Y$. Then, the functions $\big( x_N^*, y_N^* \big) (d) = \arg \underset{x\, \in\, X,\, y\, \in\, Y}{\max} \big\{ \|x+y\| : x+y \in \mathbb{R}^+ d \big\}$ are continuous in $d \in \mathbb{S}$.
\end{lemma}
\begin{proof}
Let $Z := X + Y = \big\{x+y : x \in X,\ y \in Y\big\}$. Then $Z$ is the Minkowski sum of two polytopes, so it is also a polytope \citep{sum_polytopes}. According to Proposition~\ref{prop: r_X,Y well-defined} (i), whose proof does not rely on the current lemma, $\underset{x\, \in\, X,\, y\, \in\, Y}{\max} \big\{ \|x+y\| : x+y \in \mathbb{R}^+ d \big\}$ exists and thus $\underset{z\, \in\, Z}{\max} \big\{ \|z\| : z \in \mathbb{R}^+ d \big\}$ is also well-defined.
Since $-X \subset Y$, for all $x \in X$, $-x \in Y$ and thus $0 \in Z$. Then, $\{0\}$ and $Z$ are two polytopes in $\mathbb{R}^n$ with $-\{0\} \subset Z$ (since $0 \in Z$). According to Lemma~\ref{lemma: lambda continuous}, the function $\lambda^*(0, d) := \underset{z\, \in\, Z}{\max} \big\{ \|z + 0\| : z+0 \in \mathbb{R}^+ d \big\}$ is continuous in $d \in \mathbb{S}$.
Then, we define the continuous function $z(d) := \lambda^*(0, d) d \in Z$ for $d \in \mathbb{S}$. Note that $z(d) = \arg\underset{z\, \in\, Z}{\max} \big\{\|z\| : z \in \mathbb{R}^+d \big\} = \big(x_N^*, y_N^*\big)(d)$, so these functions are continuous.
\end{proof}
\section{Illustration}\label{sec:example}
We will now illustrate the Maximax Minimax Quotient Theorem on a simple example.
We consider the polygon $X$ with vertices $x_1 = (0,-0.5)$ and $x_2 = (0,1)$ in $\mathbb{R}^2$ and the polygon $Y$ with vertices $(\pm 1, \pm 2)$ and $(\pm 3, 0)$, as represented on Figure~\ref{fig:sets X Y}.
\begin{figure}[htbp!]
\centering
\includegraphics[scale=0.5]{Sets_X_Y.eps}
\caption{Illustration of polygons $X$ and $Y$.}
\label{fig:sets X Y}
\end{figure}
Since $-X \subset Y^\circ$, $\dim X = 1$, $x_2 \neq 0$ and $\dim Y = 2$, the assumptions of the Maximax Minimax Quotient Theorem are satisfied. To illustrate the proof of the theorem, for all $d \in \mathbb{S}$ we define the angle $\beta := \widehat{x_2, d}$ positively oriented clockwise. We also enumerate the vertices in the clockwise direction and we note that $v_2 = v_\pi$ and $v_5 = v_{2\pi}$ as defined in Lemma~\ref{lemma: v_pi and v_2pi}.
Then, we compute $r_{X,Y}$ for $\beta \in [0, 2\pi)$ as shown on Figure~\ref{fig:r_XY}. The red spikes denote when the ray $d(\beta)$ hits a vertex of $Y$.
\begin{figure}[htbp!]
\centering
\includegraphics[scale=0.45]{r_XY.eps}
\caption{Graph of $r_{X,Y}$ as a function of $\beta$.}
\label{fig:r_XY}
\end{figure}
As demonstrated by the Maximax Minimax Quotient Theorem, $r_{X,Y}$ has two local maxima achieved at $\beta = 0$ and $\beta = \pi$. These two values are different because polygon $X$ is not symmetric. Note also that the Maximax Minimax Quotient Theorem does not state that the maximum is \emph{only} reached when $\beta \in \{0, \pi\}$.
Indeed, as shown in Figure~\ref{fig:r_XY} and established in Lemma~\ref{lemma:r(d) cst on faces}, $r_{X,Y}$ is constant on the faces of $\partial Y$. Thus, the two local maxima are achieved on the faces $[v_1, v_6]$ and $[v_3, v_4]$.
As proven in Lemma~\ref{lemma: crossing v_pi} and in Lemma~\ref{lemma: beta > pi}, $r_{X,Y}$ reaches a local minimum during the crossing of the vertices $v_\pi$ and $v_{2\pi}$.
A video illustrating the Maximax Minimax Quotient Theorem on a different polytope can be found following the link \href{https://www.youtube.com/watch?v=rjKzHyDJX40}{\underline{here}} or in the footnote\footnote{\url{https://www.youtube.com/watch?v=rjKzHyDJX40}}.
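For reproducibility, here is a minimal numerical sketch (ours; the variable names and the fine sampling of $X$ are implementation choices, not part of the theorem) that traces $r_{X,Y}$ as a function of $\beta$ for this example. For each direction $d$ it computes the farthest intersection of the ray $\mathbb{R}^+ d$ with $x + Y$ from the half-plane description of $Y$, and then takes the quotient of the maximum and the minimum over $x \in X$.
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

# Hexagon Y with vertices (+-1, +-2), (+-3, 0), listed counterclockwise.
V = np.array([[3, 0], [1, 2], [-1, 2], [-3, 0], [-1, -2], [1, -2]], float)
E = np.roll(V, -1, axis=0) - V              # edge vectors
A = np.stack([E[:, 1], -E[:, 0]], axis=1)   # outward normals (ccw order)
b = np.einsum('ij,ij->i', A, V)             # Y = {z : A z <= b}

def lam_max(x, d):
    # max{lam >= 0 : lam*d - x in Y}; equals ||x+y|| since ||d|| = 1.
    num, den = b + A @ x, A @ d
    pos = den > 1e-12
    return np.min(num[pos] / den[pos])

# Sample the segment X (the lemmas show the extremizers lie in
# {x_1, x_2}; sampling is a simple numerical stand-in).
xs = [np.array([0.0, t]) for t in np.linspace(-0.5, 1.0, 101)]
betas = np.linspace(0.0, 2 * np.pi, 1440, endpoint=False)
r = []
for beta in betas:
    d = np.array([np.sin(beta), np.cos(beta)])  # clockwise angle from x_2
    vals = [lam_max(x, d) for x in xs]
    r.append(max(vals) / min(vals))             # quotient of maximax and minimax

plt.plot(betas, r)
plt.xlabel('beta'); plt.ylabel('r_XY')
plt.show()
\end{verbatim}
The local maxima at $\beta = 0$ and $\beta = \pi$, the plateaus on the faces, and the local minima at the crossings of $v_\pi$ and $v_{2\pi}$ are all visible in the output.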
\section{Conclusion}
In this paper we considered an optimization problem arising from optimal control and pertaining to both fractional programming and max-min programming.
We first justified the existence of the Maximax Minimax Quotient. Then, relying on numerous geometrical arguments and on the continuity of two maxima functions we were able to establish the Maximax Minimax Quotient Theorem.
This result provides an analytical solution to the maximization of a ratio of a maximum and a minimax over two polytopes.
We illustrated our theorem and its proof on a simple example in $\mathbb{R}^2$.
This work fills the theoretical gap left in \citep{SIAM_CT}, and, thanks to our less restrictive assumptions, it also opens the way to a more general framework than that of \citep{SIAM_CT}.
A possible avenue for future work on this theorem is to study the case where $\dim X > 1$.
\bibliographystyle{abbrv}
| {
"timestamp": "2021-11-19T02:22:45",
"yymm": "2104",
"arxiv_id": "2104.15025",
"language": "en",
"url": "https://arxiv.org/abs/2104.15025",
"abstract": "We present an optimization problem emerging from optimal control theory and situated at the intersection of fractional programming and linear max-min programming on polytopes. A naïve solution would require solving four nested, possibly nonlinear, optimization problems. Instead, relying on numerous geometric arguments we determine an analytical solution to this problem. In the course of proving our main theorem we also establish another optimization result stating that the minimum of a specific minimax optimization is located at a vertex of the constraint set.",
"subjects": "Optimization and Control (math.OC)",
"title": "The Maximax Minimax Quotient Theorem",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9877587272306607,
"lm_q2_score": 0.8006919925839875,
"lm_q1q2_score": 0.7908905034985411
} |
https://arxiv.org/abs/1701.02200 | Bounding a global red-blue proportion using local conditions | We study the following local-to-global phenomenon: Let $B$ and $R$ be two finite sets of (blue and red) points in the Euclidean plane $\mathbb{R}^2$. Suppose that in each "neighborhood" of a red point, the number of blue points is at least as large as the number of red points. We show that in this case the total number of blue points is at least one fifth of the total number of red points. We also show that this bound is optimal and we generalize the result to arbitrary dimension and arbitrary norm using results from Minkowski arrangements. | \section{Introduction}
Consider the following scenario in wireless networks. Suppose we have $n$ clients and $m$ antennas, where both are represented as points in the plane (see Figure \ref{fig:hypothesis}). Each client has a wireless device that can communicate with the antennas. Assume also that each client is associated with some disk centered at the client's location whose radius represents how far in the plane their device can communicate. Suppose also that some communication protocol requires that in each of the clients' disks, the number of antennas is at least some fixed proportion $\lambda>0$ of the number of clients in the disk. Our question is: does such a local requirement imply a global lower bound on the number of antennas in terms of the number of clients? In this paper we answer this question and provide exact bounds. Let us formulate the problem more precisely.
\begin{figure}
\centering
\includegraphics[width=0.40\textwidth]{exampleHyp.pdf}
\caption{In each device range (each disk) there are at least as many antennas (black dots) as devices (white dots), so the hypothesis holds for $\lambda=1$.}
\label{fig:hypothesis}
\end{figure}
Let $B$ and $R=\{p_1,\ldots,p_n\}$ be two finite sets in $\Re^2$.
Let $\mathcal{D} = \{D_1,\ldots,D_n\}$ be a set of Euclidean disks centered at the red
points,
i.e., the center of $D_i$ is $p_i$. Let $\{\rho_1,\ldots,\rho_n\}$ be the radii
of the disks in $\mathcal{D}$.
\begin{theorem}\label{thm:euclideanplane}
Assume that for each $i$ we have $\cardin{D_i \cap B} \geq \cardin{D_i \cap
R}$. Then
$\cardin{B} \geq \frac{n}{5}$.
Furthermore, the multiplicative constant $\frac{1}{5}$ cannot be improved.
\end{theorem}
Such a local-to-global ratio phenomenon was shown to be useful in a more combinatorial setting. Pach et al. \cite{Pach2015} solved a conjecture of Richter and Thomassen \cite{Richter1995} on the total number of ``crossings'' that a family of pairwise intersecting curves in the plane in general position can have. Lemma 1 of their paper is a first step in the proof, and it consists of a local-to-global phenomenon as described above.
We will obtain Theorem \ref{thm:euclideanplane} from a more general result. In order to state it,
we introduce some terminology.
Let $K$ be an origin-symmetric convex body in $\Re^d$, that is, the unit ball of
a norm.
A \emph{strict Minkowski arrangement} is a family $\mathcal{D}=\{K_1=p_1+\rho_1K,\ldots,
K_n=p_n+\rho_nK\}$ of homothets of $K$, where $p_i\in\Re^d$ and $\rho_i>0$, such
that no member of the family contains the center of another member. An
\emph{intersecting family} is a family of sets that all share at least one
element.
We denote the \emph{maximum cardinality of an intersecting strict Minkowski
arrangement} of homothets of $K$ by $M(K)$. It is known that $M(K)$ exists for every $K$ and $M(K)\leq 3^d$ (see, e.g., Lemma~21 of \cite{NPS16}).
On the other hand (somewhat surprisingly), there is an origin-symmetric convex body
$K$ in $\Re^d$ such that $M(K)=\Omega\left(\sqrt{7}^d\right)$, \cites{T98,NPS16}. For more on Minkowski arrangements
see, e.g., \cites{FL94}.
We need the following auxiliary lemma.
\begin{lemma}\label{lemma:minkowski}
Let $K$ be an origin-symmetric convex body in $\Re^d$. Let $R=\{p_1,\ldots,p_n\}$ be a set of points in $\Re^d$ and let
$\mathcal{D}=\{K_1=p_1+\rho_1K,\ldots, K_n=p_n+\rho_nK\}$ be a family of homothets of
$K$. Then there exists a subfamily $\mathcal{D}' \subset \mathcal{D}$ that covers $R$ and forms a strict Minkowski arrangement.
Moreover, $\mathcal{D}^\prime$ can be found using a greedy algorithm.
\end{lemma}
As a corollary, we will obtain the following theorem.
\begin{theorem}\label{thm:minkowskispecific}
Let $K$ be an origin-symmetric convex body in $\Re^d$. Let $R=\{p_1,\ldots,p_n\}$ be a set of points in $\Re^d$ and let
$\mathcal{D}=\{K_1=p_1+\rho_1K,\ldots, K_n=p_n+\rho_nK\}$ be a family of homothets of
$K$ where $\rho_1,\ldots,\rho_n>0$. Let $B$ be another set of points
in $\Re^d$, and assume that, for some $\lambda>0$, we have
\begin{equation}
\frac{\cardin{B\cap K_i}}{\cardin{R\cap K_i}} \geq \lambda,
\end{equation}
for all $i\in[n]$.
Then $\frac{\cardin{B}}{\cardin{R}} \geq \frac{\lambda}{3^d}$.
\end{theorem}
In Theorem \ref{thm:euclideanplane} the convex body $K$ is a Euclidean unit disk in the plane. Another case of special interest is when the convex body $K$ is a unit cube and thus it induces the $\ell_\infty$ norm. In this situation we get a sharper and optimal inequality.
\begin{theorem}
\label{thm:cubes}
If $K$ is the unit cube in $\mathbb{R}^d$, then the conclusion in Theorem \ref{thm:minkowskispecific} can be strengthened to $\frac{|B|}{|R|}\geq \frac{\lambda}{2^d}$. Furthermore, the multiplicative constant $\frac{1}{2^d}$ cannot be improved.
\end{theorem}
In the results above, the points $p_i$ play the role of the centers of the sets of the Minkowski arrangement. One might ask if this restriction is essential. As a final result, we give a general construction to show that it is.
\begin{theorem}\label{thm:badexample}
Let $K$ be any convex body in the plane and $\epsilon,\lambda$ any positive real numbers. There exist sets of points $R=\{p_1,\ldots,p_n\}$ and $B$ in the plane such that $|B|<\epsilon n$ and that for each $i$ there is a translate $K_i$ of $K$ that contains $p_i$ for which $|B\cap K_i|\geq \lambda |R\cap K_i|$.
\end{theorem}
In particular, even if each red point is contained in a unit disk with many blue points, the global blue to red ratio can be as small as desired. This is a possibly counter-intuitive fact in view of Theorem \ref{thm:euclideanplane}.
\section{Proofs}
\begin{proof}[Proof of Lemma~\ref{lemma:minkowski}]
We construct a subfamily $\mathcal{D}^{\prime}$ of $\mathcal{D}$ with the property that no
member of $\mathcal{D}^{\prime}$ contains the center of any member of $\mathcal{D}^{\prime}$,
and $\bigcup \mathcal{D}^{\prime}$ covers the red points, $R$.
Assume without loss of generality that the labels of the points in $R$ are
sorted in non-increasing order of the homothety ratio, that
is, $\rho_1 \geq \cdots \geq \rho_n$. See Figure \ref{fig:cover} for an example.
\begin{figure}
\centering
\includegraphics[width=0.40\textwidth]{cover.pdf}
\caption{The centers of the disks are labeled in decreasing order of corresponding radii. The shaded disks cover the white points and no shaded disk contains the center of another.}
\label{fig:cover}
\end{figure}
We construct $\mathcal{D}^{\prime}$ in a greedy manner as follows: Add $K_1$ to
$\mathcal{D}^{\prime}$. Among
all red points that are not already covered by $\mathcal{D}^{\prime}$ pick a point $p_j$ whose
corresponding homothet $K_j$ has maximum homothety ratio $\rho_j$. Add $K_j$ to $\mathcal{D}^{\prime}$
and repeat until all red points are covered by $\mathcal{D}^{\prime}$. Note that the
homothets in $\mathcal{D}^{\prime}$ are not necessarily disjoint.
Clearly, $R\subset \bigcup \mathcal{D}^{\prime}$. Now we show that no member of $\mathcal{D}^{\prime}$
contains the center of another. Suppose to the contrary that $K_i$ contains the center
of $K_j$. If $i<j$, then $\rho_i\geq \rho_j$, so $K_i$ was chosen first, contradicting the fact that $p_j$ was chosen among the points not covered by previously chosen homothets. If $i>j$, then $K_j$ also contains the center of $K_i$, and we get a similar contradiction.
This finishes the proof of Lemma~\ref{lemma:minkowski}.
\end{proof}
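The greedy construction above is straightforward to implement. The following Python sketch is our own illustration, shown for the Euclidean norm; for a general origin-symmetric $K$ one would replace the Euclidean distance by the gauge of $K$.
\begin{verbatim}
import numpy as np

def greedy_cover(points, radii):
    # Select disks D_i = B(p_i, rho_i), scanning in non-increasing
    # order of radius and skipping already-covered centers, so that
    # the chosen disks cover all points and no chosen disk contains
    # the center of another (a strict Minkowski arrangement).
    covered = np.zeros(len(points), dtype=bool)
    chosen = []
    for i in np.argsort(-radii):
        if covered[i]:
            continue
        chosen.append(i)
        dists = np.linalg.norm(points - points[i], axis=1)
        covered |= dists <= radii[i]
    return chosen

rng = np.random.default_rng(0)
P = rng.uniform(-1, 1, size=(20, 2))      # red points
rho = rng.uniform(0.1, 0.5, size=20)      # homothety ratios
print(greedy_cover(P, rho))
\end{verbatim}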
\begin{proof}[Proof of Theorem~\ref{thm:minkowskispecific}]
By Lemma~\ref{lemma:minkowski}, there exists a subfamily $\mathcal{D}' \subset \mathcal{D}$ that covers $R$ and forms a strict Minkowski arrangement.
Namely, $\bigcup \mathcal{D}^{\prime}$ covers $R$, and, since $\mathcal{D}^{\prime}$ is a strict Minkowski arrangement, by the definition of $M(K)$ no point of $B$ is contained in more than $M(K)$ members of $\mathcal{D}^{\prime}$.
In particular, it follows that
$$
\cardin{R} \leq \sum_{K \in \mathcal{D}'}\cardin{R\cap K} \leq \sum_{K \in \mathcal{D}'}\frac{\cardin{B \cap K}}{\lambda}\leq \frac{M(K)}{\lambda}\cardin{B}
$$
so $$\frac{\cardin{B}}{\cardin{R}} \geq \frac{\lambda}{M(K)} \geq \frac{\lambda}{3^d}.$$ This completes the proof.
\end{proof}
\begin{lemma}\label{lem:euclideanMbound}
Let $K$ be the Euclidean unit disk centered at the origin. Then $M(K)=5$.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lem:euclideanMbound}]
Five unit disks centered at the vertices of a unit-radius regular pentagon show that $M(K)\geq 5$. See Figure \ref{fig:optimal}a.
\begin{figure}
\centering
\includegraphics[width=0.30\textwidth]{optimalH.pdf}
\caption{Optimal Minkowski arrangements in the plane for a) Euclidean disks, b) axis-parallel squares.}
\label{fig:optimal}
\end{figure}
To prove the other direction, suppose that
there is a point $b$ in the plane that is contained in $6$ Euclidean disks of a
strict Minkowski arrangement. Then, by the pigeonhole principle, there are two
centers of those disks, say $p$ and $q$, such that the angle
$\sphericalangle(pbq)$ is at most $60^\circ$.
Assume without loss of generality that $pb \geq qb$.
By the law of cosines,
\begin{equation*}
pq^2 = pb^2 + qb^2 - 2\, pb \cdot qb \cos \sphericalangle(pbq) \leq pb^2 + qb^2 - pb \cdot qb \leq pb^2,
\end{equation*}
so $pq \leq pb$. Since the disk centered at $p$ contains $b$, its radius is at least $pb$, and hence this disk contains $q$, a contradiction. This completes the proof.
\end{proof}
\begin{lemma}\label{lem:cubebound}
Let $K$ be the unit cube of $\mathbb{R}^d$ centered at the origin. Then $M(K)=2^d$.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lem:cubebound}] Let $d$ be a positive integer and $e_1,e_2,\ldots,e_d$ the canonical basis of $\mathbb{R}^d$. Consider all the cubes of radius $1$ centered at the points of the form $\pm e_1 \pm e_2 \pm \ldots \pm e_d$. This family shows that $M(K)\geq 2^d$. See Figure \ref{fig:optimal}b for an example in the plane.
Now we show the other direction. Consider the $2^d$ closed regions of $\mathbb{R}^d$ bounded by the hyperplanes $x_i=0$, $i=1,2,\ldots,d$, and suppose on the contrary that we have an example with $2^d+1$ cubes or more that contain the origin. By the pigeonhole principle there is a region with at least two cube centers $u$ and $v$. By applying a symmetry of the cube (reflecting some coordinates) we may assume that it is the region of vectors with non-negative entries. We may also assume $\delta:=\|u\|_\infty \geq \|v\|_\infty$.
Since the $d$-cube centered at $u$ contains the origin, its radius must be at least $\delta$. We claim that this cube contains $v$. Indeed, each of the entries of $u$ and $v$ is in the interval $[0,\delta]$, so each of the entries of $u-v$ is in $[-\delta,\delta]$. Then $\|u-v\|_\infty \leq \delta$, as claimed. This contradiction finishes the proof.
\end{proof}
Theorem~\ref{thm:euclideanplane} clearly follows from combining the proof of
Theorem~\ref{thm:minkowskispecific} (with $\lambda=1$) and
Lemma~\ref{lem:euclideanMbound}. The result is sharp because we have equality when $R$ is the set of vertices of a regular pentagon with center $p$ and $B=\{p\}$. Similarly, Theorem \ref{thm:cubes} and its optimality follow from Lemma \ref{lem:cubebound}.
\begin{rem}
Lemma~\ref{lem:euclideanMbound} can be generalized to arbitrary dimension. This
implies
that Theorem~\ref{thm:euclideanplane} can be generalized to arbitrary dimension
almost verbatim.
\end{rem}
\begin{proof}[Proof of Theorem~\ref{thm:badexample}] Let $K$ be any convex body in the plane. We construct sets $R$ and $B$ as follows. Let $\ell$ be a tangent line of $K$ which intersects $K$ at exactly one point $t$. Let $I$ be a non-degenerate closed line segment contained in $K$ and parallel to $\ell$. Let $J$ be the (closed) segment that is the locus of the point $t$ as $K$ varies through all its translates in the direction of $\ell$ that contain $I$. See Figure \ref{fig:intervals}.
\begin{figure}
\centering
\includegraphics[width=0.40\textwidth]{example.pdf}
\caption{Construction of example without local-to-global phenomenon.}
\label{fig:intervals}
\end{figure}
We construct $R$ by taking any $n$ points from $J$ and we construct $B$ by taking any $m$ points from $I$. For any point in $R$ there is a translate of $K$ that contains exactly one point of $R$ and $m$ points of $B$, which makes the local $B$ to $R$ ratio equal to $m$. But globally we can make the ratio $\frac{m}{n}$ arbitrarily small.
\end{proof}
\section*{Acknowledgements}
M. Nasz\'odi acknowledges the support of the J\'anos Bolyai Research Scholarship of the Hungarian Academy of Sciences, and
the National Research, Development, and Innovation Office, NKFIH Grant PD-104744, as well as the support of the Swiss National Science
Foundation grants 200020-144531 and 200020-162884.
L. Martinez-Sandoval's research was partially carried out during the author's visit at EPFL. The project leading to this application has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme grant No. 678765 and from the Israel Science Foundation grant No. 1452/15.
S. Smorodinsky's research was partially supported by Grant 635/16 from the Israel Science Foundation. A part of this research was carried out
during the author's visit at EPFL, supported by Swiss National Science Foundation grants 200020-162884 and 200021-165977.
\bibliographystyle{plain}
| {
"timestamp": "2017-04-21T02:05:47",
"yymm": "1701",
"arxiv_id": "1701.02200",
"language": "en",
"url": "https://arxiv.org/abs/1701.02200",
"abstract": "We study the following local-to-global phenomenon: Let $B$ and $R$ be two finite sets of (blue and red) points in the Euclidean plane $\\mathbb{R}^2$. Suppose that in each \"neighborhood\" of a red point, the number of blue points is at least as large as the number of red points. We show that in this case the total number of blue points is at least one fifth of the total number of red points. We also show that this bound is optimal and we generalize the result to arbitrary dimension and arbitrary norm using results from Minkowski arrangements.",
"subjects": "Computational Geometry (cs.CG)",
"title": "Bounding a global red-blue proportion using local conditions",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9877587272306606,
"lm_q2_score": 0.8006919925839875,
"lm_q1q2_score": 0.7908905034985411
} |
https://arxiv.org/abs/1809.03926 | Constructive regularization of the random matrix norm | We show a simple local norm regularization algorithm that works with high probability. Namely, we prove that if the entries of a $n \times n$ matrix $A$ are i.i.d. symmetrically distributed and have finite second moment, it is enough to zero out a small fraction of the rows and columns of $A$ with largest $L_2$ norms in order to bring the operator norm of $A$ to the almost optimal order $O(\sqrt{\log \log n \cdot n})$. As a corollary, we also obtain a constructive procedure to find a small submatrix of $A$ that one can zero out to achieve the same goal. This work is a natural continuation of our recent work with R. Vershynin, where we have shown that the norm of $A$ can be reduced to the optimal order $O(\sqrt{n})$ by zeroing out just a small submatrix of $A$, but did not provide a constructive procedure to find this small submatrix. Our current approach extends the norm regularization techniques developed for the graph adjacency (Bernoulli) matrices in the works of Feige and Ofek, and Le, Levina and Vershynin to the considerably broader class of matrices. | \section{Introduction} \label{introduction}
What should we call an \emph{optimal} order of an operator norm of a random $n \times n$ matrix? If we consider a matrix $A$ with independent standard Gaussian entries, then by the classical Bai-Yin law (see, for example, \cite{Tao})
$$
\|A\|/\sqrt{n} \to 2 \quad \text{ almost surely, }
$$
as the dimension $n \to \infty$. Moreover, the $2\sqrt{n}$ asymptotic holds for more general classes of matrices. By \cite{Bai-Yin-Kri}, if the entries of $A$ have zero mean and bounded fourth moment, then
$$
\|A\| = (2+o(1)) \sqrt{n}
$$
with high probability. If we want an explicit (non-asymptotic) probability estimate valid for all large enough $n$, an application of Bernstein's inequality (see, for example, in \cite{V, V-HDP}) gives
$$\mathbb{P}\{\|A\| \le t \sqrt{n}\} \ge 1 - e^{-c_0t^2n} \quad \text{ for } t \ge C_0$$
for the matrices with i.i.d. subgaussian entries. Here, $c_0, C_0 > 0$ are absolute constants. The non-asymptotic extensions to more general distributions are also available, see \cite{Seginer, Latala, BvH, vH}.
Also, note that the order $\sqrt{n}$ is the best we can generally hope for. Indeed, if the entries of $A$ have variance $C$, then the typical magnitude of the Euclidean norm of a row of $A$ is $ \sim \sqrt{n}$, and the operator norm of $A$ cannot be smaller than that. So, it is natural to assume $O(\sqrt{n})$ as the ``ideal order'' of the operator norm of an $n \times n$ i.i.d. random matrix.
However, if we do not assume that the matrix entries have four finite moments, we cannot expect the ideal order $O(\sqrt{n})$: a weak fourth moment is necessary for the convergence in probability of $\|A\|/\sqrt{n}$ as $n$ grows to infinity (see \cite{Silv}). Moreover, for matrices whose entries have only two finite moments, an explicit family of examples constructed in \cite{LS} shows that $A$ can have $\|A\| \sim n^{\alpha}$ for any $\alpha \le 1$ with substantial probability.
\bigskip
This motivates the following questions: what are the obstructions in the structure of $A$ that make its operator norm too large? Under what conditions and how can we regularize the matrix restoring the optimal $O(\sqrt{n})$ norm with high probability? Clearly, interesting regularization would be the one that does not change $A$ too much, for example, that changes only a small fraction of the entries of $A$. We call such regularization \emph{local}.
The first question was answered in our previous work with R. Vershynin (\cite{ReV}). We have shown that one can enforce the norm bound $\|A\| \sim \sqrt{n}$ by modifying the entries in a small submatrix of $A$ if and only if the i.i.d. entries of $A$ have zero mean and finite variance. The proof strategy was to construct a way to regularize the $\|\cdot\|_{\infty \to 2}$ norm of $A$, and to apply a form of the Grothendieck-Pietsch theorem (see \cite[Proposition 15.11]{LT}) to claim that some additional small correction regularizes the operator norm $\|A\|$. This last step made it impossible to find the submatrix explicitly.
\bigskip
In the current work we give an (almost optimal) answer to the remaining constructiveness question, namely: \emph{when local regularization is possible, how can one bring the norm of $A$ to near-optimal order by an explicit small change?} The main result of the paper is
\begin{theorem}[Constructive regularization] \label{main}
Let $A$ be a random $n~\times~n$ matrix with i.i.d. entries $A_{ij}$ having symmetric distribution such that $\E A_{ij}^2 = 1$. Then for any $\varepsilon \in (0, 1/6]$ and $r \ge 1$, with probability $1 - n^{0.1-r}$ the following holds: if we replace with zeros at most $\varepsilon n$ rows and $\varepsilon n$ columns with the largest $L_2$-norms (as vectors in $\mathbb{R}^n$), then the resulting matrix $\tilde{A}$ will have a well-bounded operator norm
\begin{equation}\label{main norm estimate}
\|\tilde A\| \le C r\sqrt{c_{\varepsilon} n \cdot \ln \ln n}.
\end{equation}
Here $c_{\varepsilon} =(\ln \varepsilon^{-1})/\varepsilon$ and $C > 0$ is a sufficiently large absolute constant.
\end{theorem}
\begin{remark}\label{regularization by row norm}
Typically, all the rows and columns of the matrix $\tilde{A}$ have $L_2$-norms bounded by $O(\sqrt{c_{\varepsilon} n})$. One way to check this is via the non-constructive regularization result proved in~\cite{ReV}. Indeed, with probability $1 - 7\exp(- \varepsilon n/12)$, removing some $\varepsilon n \times \varepsilon n$ sub-matrix of $A$, we get a matrix $\bar A$ such that $\|\bar A\| \lesssim \sqrt{c_{\varepsilon} n}$ (\cite[Theorem~1]{ReV}). It implies that all the rows and columns of $\bar A$ have well-bounded $L_2$-norms (of order at most $\sqrt{c_{\varepsilon} n}$). Since all but $\varepsilon n$ rows and $\varepsilon n$ columns of $A$ coincide with those of $\bar A$, there can be at most $\varepsilon n$ rows and columns in $A$ having larger $L_2$-norms. Thus, the regularization described in the statement of Theorem~\ref{main} zeros them all out.
Moreover, the proof of Theorem~\ref{main} holds without changes if we define $\tilde A$ as the result of zeroing out all rows and columns having $L_2$-norm bigger than $C\sqrt{c_{\varepsilon}n}$. As we just discussed, with high probability this is an even more delicate change of the matrix $A$.
\end{remark}
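The procedure of Theorem~\ref{main} is simple to implement. A minimal Python sketch follows (ours; the heavy-tailed test distribution and the function name are illustrative choices, not taken from the paper).
\begin{verbatim}
import numpy as np

def regularize(A, eps):
    # Zero out the ceil(eps*n) rows and columns of A with the
    # largest L2-norms, as in the theorem above.
    A = A.copy()
    n = A.shape[0]
    k = int(np.ceil(eps * n))
    rows = np.argsort(-np.linalg.norm(A, axis=1))[:k]
    cols = np.argsort(-np.linalg.norm(A, axis=0))[:k]
    A[rows, :] = 0.0
    A[:, cols] = 0.0
    return A

# Symmetric entries with finite variance but no fourth moment:
rng = np.random.default_rng(1)
n = 500
A = rng.standard_t(df=2.5, size=(n, n)) / np.sqrt(2.5 / 0.5)  # unit variance
print(np.linalg.norm(A, 2), np.linalg.norm(regularize(A, 1/6), 2))
\end{verbatim}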
The regularization procedures discussed above (in Theorem~\ref{main} and Remark~\ref{regularization by row norm}) are local, as they change only a small fraction of the matrix entries. However, they still change more than $\varepsilon n \times \varepsilon n$ submatrix as promised by \cite[Theorem~1.1]{ReV}. As a corollary of Theorem~\ref{main}, we also obtain a polynomial algorithm that regularizes the norm of $A$ with high probability by zeroing out its \emph{small submatrix}.
This algorithm addresses separately subsets of matrix entries having similar magnitude. We define these subsets via order statistics of i.i.d. samples $A_{ij}$: let $\hat A_1, \ldots, \hat A_{n^2}$ be the non-increasing rearrangement of the entries $A_{ij}$ (in sense of absolute values, namely, $|\hat A_1| \ge \ldots \ge |\hat A_{n^2}|$). Then,
\begin{equation} \label{aal}
\mathcal{A}_l := \{\hat A_{ \lceil 2^{l-1} n \varepsilon + 1 \rceil }, \ldots, \hat A_{ \lceil 2^{l} n \varepsilon \rceil }\} \quad \text{ for any } l \in \mathbb{Z}_{\ge 0}. \end{equation}
We are ready to state submatrix regularization algorithm:
\vspace*{0.2cm}
\begin{center}
\begin{tabular}{l}
\hline
\textbf{Algorithm 1: Local norm regularization} \\ \hline
\textbf{Input}: matrix $A = (A_{ij})_{i,j = 1}^n$, constants $\varepsilon, c_{\varepsilon} > 0$, positive integer $l_{\max}$, \\
\;\;\;\;\;\;\;\;\;\;\; disjoint entry subsets $\mathcal{A}_l$ defined by \eqref{aal} for $l \le l_{\max}$\\
\textbf{Output}: $\tilde A$ - $n \times n$ matrix, regularized version of $A$ \\
\hline
\textbf{1.} Zero out $\lceil n \varepsilon/2 \rceil$ entries $A_{ij}$ with the largest absolute values;\\
\textbf{2.} For $l = 0, \ldots, l_{\max}$ find column index subset $J_l$ in the following way:\\
\;\;\; \textbf{2a.} For $j \in [n]$ define $e_j^{row}(\mathcal{A}_l) := |\{i: A_{ij} \in \mathcal{A}_l \}|$;\\
\;\;\; \textbf{2b.} For every $i, j \in [n]$ define the weight \\
\;\;\; \;\;\; \; $
W^l_{ij} :=
\begin{cases}
1, &\text{if } e_j^{row}(\mathcal{A}_l) \le c_\varepsilon np_l\text{ or } A_{ij} \notin \mathcal{A}_l, \\
c_\varepsilon np_l/e_j^{row}(\mathcal{A}_l), &\text{otherwise,}
\end{cases}
$ \\
\;\;\; \;\;\; \; where we denoted $p_l = 2^{l}\varepsilon/n$;
\\
\;\;\; \textbf{2c.} Then, define $J_l := \{j: \prod_{i=1}^n W^l_{ij} \le 0.1\}$; \\
\textbf{3.} Find subset $\hat J$ of $n\varepsilon/4$ indices corresponding to the columns of $A$ \\
\;\;\; with the largest $L_2$-norms, define $J := (\cup_l J_l) \cup \hat J$; \\
\textbf{4.} Repeat Steps 2-3 for $A^T$ to find row subset $I := (\cup_l I_l) \cup \hat I$; \\
\textbf{5.} Zero out all the entries of $A$ in the product subset $I \times J$ to get $\tilde A$. \\
\hline
\end{tabular}
\end{center}
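To make the steps above concrete, here is a Python sketch of Algorithm~1 (ours; helper names and the handling of ties in the order statistics are implementation choices, and no attempt is made to certify that $|I|, |J| \le \varepsilon n$, which the corollary below guarantees only with high probability).
\begin{verbatim}
import numpy as np

def algorithm_one(A, eps=1/6):
    A = A.astype(float).copy()
    n = A.shape[0]
    c_eps = np.log(1.0 / eps) / eps
    l_max = max(0, int(np.log2(np.log(n) / np.log(eps ** -4))))

    # Order statistics of the entries: rank 1 = largest |A_ij|.
    order = np.argsort(-np.abs(A), axis=None)
    rank = np.empty(n * n, dtype=int)
    rank[order] = np.arange(1, n * n + 1)
    rank = rank.reshape(n, n)

    # Step 1: zero out the ceil(n*eps/2) largest entries.
    A[rank <= np.ceil(n * eps / 2)] = 0.0

    def bad_columns(M, rk):
        # Steps 2-3, column-wise; called on (A.T, rank.T) for Step 4.
        J = set()
        for l in range(l_max + 1):
            # Class A_l: ranks in (2^{l-1} n eps, 2^l n eps].
            mask = ((rk > np.ceil(2 ** (l - 1) * n * eps)) &
                    (rk <= np.ceil(2 ** l * n * eps)))
            p_l = 2 ** l * eps / n
            e = mask.sum(axis=0).astype(float)     # e_j^{row}(A_l)
            thr = c_eps * n * p_l
            w = np.ones(n)                         # column weights
            over = e > thr
            # Product over i of W_ij^l: only A_l entries of an
            # over-full column contribute factors thr/e_j < 1.
            w[over] = (thr / e[over]) ** e[over]
            J |= set(np.flatnonzero(w <= 0.1))     # Step 2c
        k = int(np.ceil(n * eps / 4))              # Step 3
        J |= set(np.argsort(-np.linalg.norm(M, axis=0))[:k])
        return sorted(J)

    J = bad_columns(A, rank)        # column subset J (Steps 2-3)
    I = bad_columns(A.T, rank.T)    # row subset I (Step 4)
    A[np.ix_(I, J)] = 0.0           # Step 5
    return A

rng = np.random.default_rng(0)
A = rng.standard_t(2.5, size=(300, 300)) / np.sqrt(5.0)
A_reg = algorithm_one(A)
\end{verbatim}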
If the matrix $A$ is taken from the same model as in Theorem~\ref{main}, the regularization provided by Algorithm~1 finds an $\varepsilon n \times \varepsilon n$ submatrix $I \times J$ that one can replace with zeros to get a matrix $\tilde A$ with well-bounded norm. This is proved in the following
\begin{corollary}[Constructive regularization, submatrix version] \label{subblock regularization}
Let $A$ be a random $n~\times~n$ matrix with i.i.d. entries $A_{ij}$ having symmetric distribution such that $\E A_{ij}^2 = 1$. Let $r \ge 1$, $\varepsilon \in (0, 1/6]$, $c_{\varepsilon} =(\ln \varepsilon^{-1})/\varepsilon$ and $l_{\max} = \lfloor\log_2 (\ln n/\ln \varepsilon^{-4})\rfloor$. The subsets $\mathcal{A}_l$ are defined by \eqref{aal} for $l = 0, \ldots, l_{\max}$.
Suppose that the matrix $\tilde A$ is constructed from $A$ by Algorithm~1. Then with probability $1 - n^{0.1-r}$ the matrix $\tilde A$ differs from $A$ on at most an $\varepsilon n \times \varepsilon n$ submatrix, and
$$
\|\tilde A\| \le C r^{3/2} \sqrt{c_{\varepsilon} n \cdot \ln \ln n},
$$
where $C > 0$ is a sufficiently large absolute constant.
\end{corollary}
The fact that the sub-matrix regularization algorithm is more involved than the procedure of Theorem~\ref{main} is somewhat natural. Zeroing out a small submatrix must still bring the $L_2$-norms of all rows and columns to the order $O(\sqrt{n})$. Since the majority of rows and columns stays untouched in such a regularization, essentially one needs to find the most ``dense'' part of the matrix.
The procedure of assigning weights to the matrix entries row-wise, multiplying them to set column weights, and then thresholding the columns with low weights is a delicate way to do so. This weight construction was originally used in \cite{ReT, ReV} for matrices with i.i.d. scaled Bernoulli entries. Here we employ the same construction to regularize the entries at every ``level'' independently (by definition, the $k$-th ``level'' contains the entries of $A$ that belong to the $2^{-k}$-quantile of the distribution of $A_{ij}^2$). Additionally, to make the algorithm distribution-oblivious, we estimate quantiles by the order statistics of the matrix entries (since a random matrix naturally contains $n^2$ samples of the distribution $\xi \sim A_{ij}$). The idea to estimate quantiles of some distribution by the order statistics of a set of samples is both natural and well-known in the statistics literature (see, e.g. \cite{ZZ, OrderStatistics}).
\subsection{Notations and structure of the paper}
We use the following standard notations throughout the paper. Positive absolute constants are denoted $C, C_1, c, c_1$, etc. Their values
may be different from line to line. We often write $a \lesssim b$ to indicate that $a \le C b$ for some absolute constant $C$.
The discrete interval $\{1,2,\ldots,n\}$ is denoted by $[n]$. Given a finite set $S$, by $|S|$ we denote its cardinality. The standard inner product in $\mathbb{R}^n$ shall be denoted by $\langle\cdot,\cdot\rangle$. Given $p\in[1,\infty]$, $\|\cdot\|_p$ is the standard $\ell_p^n$-norm in $\mathbb{R}^n$.
Given a matrix $A$, $\|\cdot\|$ denotes the operator $\ell_2 \to \ell_2$ norm of the matrix:
$$
\|A\| := \max_{x \in S^{n-1}} \|Ax\|_2.
$$
We write $row_1(A), \ldots, row_n(A) \in \mathbb{R}^m$ to denote the rows of any $m \times n$ matrix $A$ and $col_1(A), \ldots, col_m(A) \in \mathbb{R}^n$ to denote its columns. We are going to use sparsity of the matrices in the proof. We denote by $e_i^{row}(A)$ the number of non-zero entries in the $i$-th row of the matrix $A$, also $e_i^{col}(A)$ denotes the number of non-zero entries in the $i$-th column of $A$.
The rest of the paper is structured as follows. The proof of Theorem~\ref{main} is based on the previously known regularization results developed for Bernoulli random matrices (mainly in the works of Feige and Ofek \cite{FO}, and Le, Levina and Vershynin \cite{LV}). In Section~\ref{bernoulli_section} we review some results specific to the Bernoulli matrices and briefly explain how they will be used later in the text. In Section~\ref{middle section} we show how to extend the Bernoulli techniques to more general class of matrices and prove central Proposition~\ref{middle entries}. In Section~\ref{proof section} we combine these techniques to conclude the proof of Theorem~\ref{main}. In Section~\ref{corollary section} we prove Corollary~\ref{subblock regularization}, and the last Section~\ref{discussion section} contains discussion of the results and related open questions.
\subsection*{Acknowledgement}
The author is grateful to Roman Vershynin for the suggestion to look at the work of Feige and Ofek, helpful and encouraging discussions, as well as comments related to the presentation of the paper. The author is also grateful to Konstantin Tikhomirov for mentioning the idea to estimate quantiles of the entries distribution from their order statistics, which made Algorithm~1 more elegant.
\section{Auxiliary results for Bernoulli random matrices}\label{bernoulli_section}
The general idea of the proof of Theorem~\ref{main} is to split the entries of the matrix $A$ into subsets of entries having similar absolute values, and to bound them from above by properly scaled $0$-$1$ Bernoulli variables. Then we use some known regularization results that hold for Bernoulli random matrices for each subset separately.
The goal of this section is to review several useful results related to the regularization of the norms of Bernoulli matrices.
\subsection{Regularization of the norms of Bernoulli random matrices}
Consider an $n \times n$ Bernoulli matrix $B$ with independent $0$-$1$ entries such that $\mathbb{P}\{B_{ij} = 1\} = p$. Since the second moment of its entries is $\E B_{ij}^2 = p$, from the facts discussed in the beginning of Section~\ref{introduction} one would expect an ideal operator norm $\|B\| \sim \sqrt{np}$.
This is exactly what happens with high probability when the success probability $p$ is large enough ($np \gtrsim \ln n$; see, e.g. \cite{FO}). If $np \ll \ln n$, the norm can stay larger than optimal (\cite{KS}). However, it is known that the regularization procedure described in Theorem~\ref{main} works in the case of Bernoulli matrices, and moreover, results in the optimal norm order $\|\tilde B \| \sim \sqrt{np}$. Since all non-zero entries of $B$ have the same size, the $L_2$-norm description can be simplified in terms of the number of non-zero entries in each row or column.
Namely, Feige and Ofek proved in \cite{FO} that if we zero out all rows and columns that contain more than $C np$ non-zero entries, then the resulting matrix satisfies $\|\tilde B \| \sim \sqrt{np}$. This result was improved by Le, Levina and Vershynin. In \cite{LV}, the authors demonstrate that it is enough to zero out any part of the rows and columns with too many non-zeros, or reweigh them in any way such that $e^{row}_i \le Cnp$ and $e^{col}_i \le Cnp$ for all $i \in [n]$, to obtain a resulting matrix with $\|\tilde B \| \sim \sqrt{np}$.
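For intuition, the Feige--Ofek step amounts to a degree truncation; a minimal sketch (ours, with an arbitrary illustrative constant $C$) is:
\begin{verbatim}
import numpy as np

def degree_truncate(B, p, C=5.0):
    # Zero out every row and column of the 0-1 matrix B with more
    # than C*n*p non-zero entries (the Feige-Ofek truncation).
    B = B.copy()
    n = B.shape[0]
    heavy_r = B.sum(axis=1) > C * n * p
    heavy_c = B.sum(axis=0) > C * n * p
    B[heavy_r, :] = 0
    B[:, heavy_c] = 0
    return B

rng = np.random.default_rng(2)
n, p = 1000, 2.0 / 1000                  # sparse regime np = 2
B = (rng.random((n, n)) < p).astype(float)
print(np.linalg.norm(B, 2), np.linalg.norm(degree_truncate(B, p), 2),
      np.sqrt(n * p))
\end{verbatim}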
\subsection{Regularization and the quadratic form}
So, zeroing out all rows and columns that contain more than $C np$ non-zero entries regularizes the quadratic form
\begin{equation} \label{normsum}
\|B\| = \sup_{u, v \in S^{n-1}} \big|\sum_{ij} B_{ij} u_i v_j\big|
\end{equation}
to the optimal order $\sqrt{np}$.
We will need the following Lemma~\ref{tail_conds}, addressing the part of the sum \eqref{normsum} over the indices $i,j$ such that $|u_iv_j| \ge \sqrt{p/n}$. It was first proved by Feige and Ofek in \cite{FO} and later appears in \cite{CRV}. We give a short sketch of its proof here for the sake of completeness and to fix common notation. Let us also emphasize that even though for the regularization procedure introduced in \cite{FO} it is crucial to zero out a \emph{product} subset of the entries, in the framework of Lemma~\ref{tail_conds} it is possible to zero out \emph{any} subset of the entries.
\begin{lemma} \label{tail_conds}
Let $B$ be an $n \times n$ Bernoulli matrix with independent $0$-$1$ entries such that $\mathbb{P}\{B_{ij} = 1\} = p$. Let $r \ge 1$. Let $\mathcal{B} \subset [n] \times [n]$ be an index subset such that if we zero out all $B_{ij}$ with $(i,j) \notin \mathcal{B}$, then every row and column of $B$ has at most $C_0rpn$ non-zero entries. Then with probability $1 - n^{-r}$
$$
\sup_{u, v \in S^{n-1}} \sum_{{\begin{subarray}{c}(i,j) \in \mathcal{B}:\\
|u_iv_j| \ge \sqrt{p/n}\end{subarray} }} B_{ij} |u_i v_j| \le Cr \sqrt{np},
$$
where $C$ is a large enough absolute constant.
\end{lemma}
The proof of Lemma~\ref{tail_conds} is based on a technical Lemma~\ref{tail_conds_sq_form} stated below. The proof of Lemma~\ref{tail_conds_sq_form} is completely deterministic and can be found in \cite{FO, CRV}.
\begin{lemma}\label{tail_conds_sq_form}\cite[Lemma 21]{CRV}
Let $B$ be an $n \times n$ matrix with $0$-$1$ elements. Let $p > 0$ be such that every row and column of $B$ contains at most $C_0np$ ones. For index subsets $S, T \subset [n]$ define
$$e(S,T):= \sum_{i \in S, j \in T} B_{ij}$$
(i.e. number of non-zero elements in the submatrix spanned by $S \times T$). Suppose that for any $S, T \subset [n]$ one of the following conditions holds:
\begin{enumerate}
\item[(A)] $ e(S, T) \le C_1 |S||T| p $, or
\item[(B)] $ e(S, T) \cdot \log\left(\frac{e(S, T)}{ |S||T| p}\right) \le C_2 |T| \log\left(\frac{n}{|T|}\right),$
\end{enumerate}
with some constants $C_1$ and $C_2$ independent from $S, T$ and $n$.
Then for any $u, v \in S^{n-1}$
$$ \sum_{i,j: |u_i v_j| \ge \sqrt{p/n}} B_{ij} |u_iv_j| \le C \sqrt{n p},$$
where the constant $C = \max\{ 16, 3C_0, 32C_1, 32C_2\}$.
\end{lemma}
\begin{proof}(of Lemma~\ref{tail_conds}) In view of Lemma~\ref{tail_conds_sq_form}, it is enough to show that with probability $1 - n^{-r}$, for every $S, T \subset [n]$, the quantity
$$e_{\mathcal{B}}(S,T):= \sum_{(i,j) \in \mathcal{B} \cap (S\times T)} B_{ij}$$
satisfies one of the conditions $(A)$ and $(B)$. Without loss of generality let us assume that $|T| \ge |S|$.
If $|T| \ge n/e$, we have
$
e_{\mathcal{B}}(S, T) \le |S|\cdot C_0rpn \le C_0rpe|T||S|.
$
Hence, condition $(A)$ holds with $C_1 = C_0re$.
If both $|S|, |T| < n/e$, then $\mathbb{P}\{\mathcal{E}\} \le n^{-r}$ for the event
$$
\mathcal{E} := \big\{\exists S,T: |S|, |T| < n/e, \, e_{\mathcal{B}} (S,T) > l_{S,T} p |S| |T|\big\}
$$
where $l_{S,T}$ is the number defined by
$$
l_{S,T} \ln l_{S,T} := \ln \frac{n}{|T|} \cdot \frac{3(r+6) |T|}{p|S||T|}.
$$
Indeed, the probability estimate for $\mathcal{E}$ follows from Chernoff's inequality applied to the sum of independent variables $e(S,T) \ge e_{\mathcal{B}}(S,T)$, combined with the Stirling formula estimating the number of such sets $S$ and $T$, and the fact that the function $f(x) = (x/n)^x$ is monotonically decreasing on $[1, n/e]$ (see \cite[Section~2.2.5]{FO} for the computation details).
Thus, with probability $1 - n^{-r}$, for any $S, T$ , such that $|S|, |T| < n/e$, condition $(B)$ holds:
\begin{align*}
e_{\mathcal{B}}(S, T) \cdot \ln\left(\frac{e_{\mathcal{B}}(S, T)}{ |S||T| p}\right) &\le l_{S,T} p |S| |T|\cdot \ln l_{S,T} \\
&\le 3(r+6) |T| \ln \frac{n}{|T|}.
\end{align*}
This concludes the proof of Lemma~\ref{tail_conds} with $C = \max\{ 32C_0re, 100(r+6)\}$.
\end{proof}
\subsection{Decomposition of Bernoulli matrices}\label{bernoulli section}
The idea is to apply the approach developed for Bernoulli matrices to the truncations of the entries of $A$ having absolute values on the same ``level'', and then to sum over these ``levels''. However, there is an obstacle: there is no simple way to see which rows and columns in the Bernoulli ``levels'' have too many non-zeros. Even if we know that the rows and columns of the matrix $A$ are well-bounded, some of the ``levels'' might be too large if the others are small enough.
To address this issue without making the regularization procedure more complex, we are going to use an additional structural decomposition for Bernoulli random matrices, first shown in the work of Le, Levina and Vershynin~\cite{LV}. The next proposition is a direct corollary of \cite[Theorem~2.6]{LV}:
\begin{proposition}[Decomposition lemma]\label{decomposition lemma}
Let $B$ be an $n \times n$ Bernoulli matrix with independent $0$-$1$ entries such that $\mathbb{P}\{B_{ij} = 1\} = p$. Then, for any $n \ge 4$, $p \ge 0$ and $r \ge 1$, with probability at least $1 - 3n^{-r}$ all entries of $B$ can be divided into three disjoint classes $[n] \times [n] = \mathcal{B}_1 \sqcup \mathcal{B}_2 \sqcup \mathcal{B}_3$, such that
\begin{itemize}
\item $e^{row}_{i}(\mathcal{B}_1) \le C_1 r^3np$ and $e^{col}_{i}(\mathcal{B}_1) \le C_1 r^3np$
\item $e^{row}_{i}(\mathcal{B}_2) \le C_2r$
\item $e^{col}_{i}(\mathcal{B}_3) \le C_2r,$
\end{itemize}
where $e^{col/row}_{i}(\mathcal{B})$ is the number of non-zero elements in the $i$-th row or column of $B$ belonging to the class $\mathcal{B}$, and $C_1$, $C_2$ are absolute constants.
\end{proposition}
\begin{remark}\label{r_order}
Following the same method as was employed in the proof of \cite[Theorem~2.6]{LV}, one can check that Proposition~\ref{decomposition lemma} actually holds with linear (instead of cubic) dependence on $r$, namely $e^{row}_{i}(\mathcal{B}_1) \le C_1 rnp$ and $e^{col}_{i}(\mathcal{B}_1) \le C_1 rnp$.
\end{remark}
\section{From Bernoulli to general matrices}\label{middle section}
The goal of this section is to prove Proposition~\ref{middle entries}, that provides a way to generalize the regularization results known for Bernoulli random matrices to the general case:
\begin{proposition}\label{middle entries}
Suppose $A$ is a random $n \times n$ matrix with i.i.d. symmetric entries $A_{ij}$ with $\E A_{ij}^2 = 1$. Let $\tilde A$ be the matrix resulting from zeroing out some subsets of the rows and columns of $A$, in any way such that
\begin{equation}
\|row_{i}(\tilde{A})\|^2_2 \le c_{\varepsilon} n \text{ and } \|col_{i}(\tilde{A})\|^2_2 \le c_{\varepsilon} n
\end{equation}
for all $i = 1, \ldots, n$. Let
$$
\tilde M := \tilde A \cdot {\mathds 1}_{\{|\tilde A_{ij}| \in (2^{k_0}, 2^{k_1}]\}}, \text{ and } k_1 - k_0 =: \kappa.
$$
Then with probability at least $1 - 10\kappa n^{-r}$ we have
$$
\|\tilde M\| \le C r \sqrt{c_\varepsilon n \kappa}.
$$
Here $c_\varepsilon$ is any positive quantity depending only on $\varepsilon$, and $C > 0$ is an absolute constant.
\end{proposition}
Let us first collect several auxiliary lemmas that will be used in the proof of Proposition~\ref{middle entries}.
\begin{lemma}\label{light_members}
Consider an $n \times n$ random matrix $M$ with independent symmetric entries and $\E M^2_{ij} \le 1$. Consider two vectors $u = (u_i)_{i = 1}^n$ and $v = (v_j)_{j =1}^n$ such that $u, v \in S^{n-1}$. Denote the event
$$ \mathcal{M}^{light}_{ij} = \{|M_{ij}| |u_iv_j| \le 2/\sqrt{n}\}$$
and let $Q \subset [n] \times [n]$ be an index subset.
Then for any constant $C \ge 3$
$$ | \sum_{i j} u_i M_{ij} {\mathds 1}_{\{(i,j) \in Q\}} {\mathds 1}_{\mathcal{M}^{light}_{ij}} v_j | \le C\sqrt{n} $$
with probability at least $1 - 2\exp(-Cn/2)$.
\end{lemma}
\begin{proof}
Let $R_{ij} := M_{ij}{\mathds 1}_{\{(i,j) \in Q\}} {\mathds 1}_{\mathcal{M}^{light}_{ij}}$. Note that $R_{ij}$ are centered due to the symmetric distribution of $M_{ij}$, and they are independent as $M_{ij}$ are. So we can apply Bernstein's inequality for bounded distributions (see, for example, \cite[Theorem 2.8.4]{V-HDP}) to bound the sum:
$$
\mathbb{P}\{ | \sum_{ij} u_i R_{ij} v_j| \ge t\} \le 2 \exp\left( - \frac{t^2/2}{\sigma^2 + Kt/3}\right),
$$
where
$$K = \max_{i,j} |u_i R_{ij} v_j| \le 2/\sqrt{n} \quad \text{ and } \quad \sigma^2 = \sum_{ij} \E(u_i R_{ij} v_j)^2.$$
Note that $ \E R^2_{ij} \le \E M^2_{ij}$, as $R^2_{ij} \le M^2_{ij}$ almost surely, and $\E M^2_{ij} \le 1$. So,
$$
\sigma^2 = \sum_{ij} u^2_i \E R^2_{ij} v^2_j \le \sum_{i,j} u_i^2 v_j^2 = 1,
$$
as $\sum_i u_i^2 = \sum_j v_j^2 = 1$. So, taking $t = C\sqrt{n}$, we obtain
\begin{align*}
\mathbb{P}\{ | \sum_{(i,j)} u_i M_{ij} {\mathds 1}_{\{(i,j) \in Q\}} &{\mathds 1}_{\mathcal{M}^{light}_{ij}} v_j | \ge C \sqrt{n}\} \le 2 \exp(-Cn/2)
\end{align*}
for any $C \ge 3$. This proves the lemma.
\end{proof}
The following lemma is a version of \cite[Lemma~3.3]{LV}.
\begin{lemma}\label{l2const}
For any matrix $Q$ and vectors $u,v \in S^{n-1}$ with nonnegative coordinates (as will be the case in our application), we have
$$
\sum_{ij} Q_{ij} u_{i} v_{j} \le \ \max_j \|col_j(Q)\|_2 \cdot (\max_{i} e_i^{row}(Q) )^{1/2}.
$$
\end{lemma}
\begin{proof}
Indeed,
\begin{align*}
\sum_{ij} Q_{ij} u_{i} v_{j}
&\le \sum_j v_j (\sum_{i: Q_{ij} \ne 0} Q_{ij} u_i) \\
&\le \sum_j v_j (\sum_{i: Q_{ij} \ne 0} Q^2_{ij})^{1/2} (\sum_{i: Q_{ij} \ne 0} u^2_i)^{1/2} \quad (*) \\
&\le \max_j \|col_j(Q)\|_2 \sum_j v_j (\sum_{i: Q_{ij} \ne 0} u^2_i)^{1/2}\\
&\le \max_j \|col_j(Q)\|_2 (\sum_j v^2_j)^{1/2} (\sum_j \sum_{i: Q_{ij} \ne 0} u^2_i)^{1/2} \quad (*) \\
&\le \max_j \|col_j(Q)\|_2 \cdot 1\cdot (\sum_i u^2_i\sum_{j: Q_{ij} \ne 0} 1)^{1/2} \quad \text{(since $\|v\|_2 = 1$)} \\
&\le \max_j \|col_j(Q)\|_2 (\sum_i u^2_ie_i^{row}(Q))^{1/2}\\
&\le \max_j \|col_j(Q)\|_2 \cdot (\max_{i} e_i^{row}(Q) )^{1/2}. \quad \text{(since $\|u\|_2 = 1$)}
\end{align*}
Steps (*) hold by the Cauchy-Schwarz inequality. Lemma~\ref{l2const} is proved.
\end{proof}
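The inequality is easy to test numerically; the following minimal Python check (with nonnegative test vectors, as in our application) is an illustration only, with all parameters chosen arbitrarily.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 200
Q = rng.standard_normal((n, n)) * (rng.random((n, n)) < 0.05)  # sparse Q
u = np.abs(rng.standard_normal(n)); u /= np.linalg.norm(u)
v = np.abs(rng.standard_normal(n)); v /= np.linalg.norm(v)

lhs = u @ Q @ v                                   # sum_{ij} Q_ij u_i v_j
col_norm = np.linalg.norm(Q, axis=0).max()        # max_j ||col_j(Q)||_2
row_nnz = (Q != 0).sum(axis=1).max()              # max_i e_i^{row}(Q)
print(lhs, "<=", col_norm * np.sqrt(row_nnz))
\end{verbatim}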
\bigskip
In the proof of Proposition~\ref{middle entries} we are going to use a standard splitting ``by size" of a non-negative random variable. Let $X = 0$ or $X \in (2^{k_0}, 2^{k_1}]$ almost surely. Then, clearly,
$$
X \le \sum_{k = k_0+1}^{k_1} 2^k {\mathds 1}_{\{X \in (2^{k-1}, 2^k]\}}.
$$
If additionally $\E X^2 \le 1$ and we denote
\begin{equation}\label{pk}
p_k := \mathbb{P}\{X \in (2^{k-1}, 2^{k}]\}
\end{equation}
(in our application, $X = |M_{ij}|$ for the matrix $M$ defined below), then the following estimate holds:
\begin{equation}\label{pksumestimate}
\sum p_k 2^{2k} \le 4 \sum p_k 2^{2(k-1)} \le 4\E X^2 \le 4.
\end{equation}
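As a quick empirical illustration of \eqref{pksumestimate}, one can estimate the level probabilities $p_k$ from a heavy-tailed sample; the snippet below is a sanity check under arbitrary choices (a $t$-distribution with three degrees of freedom, normalized to unit empirical second moment).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
X = np.abs(rng.standard_t(df=3, size=10**6))   # heavy tails, finite second moment
X /= np.sqrt(np.mean(X**2))                    # normalize so that E X^2 = 1

ks = np.arange(-19, 21)                        # dyadic levels (2^{k-1}, 2^k]
pk = np.array([np.mean((X > 2.0**(k - 1)) & (X <= 2.0**k)) for k in ks])
print(np.sum(pk * 4.0**ks), "<= 4")            # the estimate (pksumestimate)
\end{verbatim}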
\bigskip
Now we are ready to prove Proposition~\ref{middle entries}.
\subsection{Proof of Proposition~\ref{middle entries}}
\textbf{Step 1. Net approximation.}
Let $\mathcal{N}$ be a $1/2$-net on $S^{n-1}$ with cardinality $|\mathcal{N}| \le 5^n$ (the existence of such a net is a standard fact that can be found, e.g., in \cite{V-HDP}). We will use a simple net approximation of the norm (see, e.g., \cite[Lemma~4.4.1]{V-HDP}), namely,
$$\|\tilde M \| \le 4 \max_{u, v \in \mathcal{N}} \langle \tilde Mu, v\rangle \le 4 \max_{u, v \in \mathcal{N}} |\sum_{ij} \tilde M_{ij}u_i v_j|.$$
We will split the sum into two parts and bound each of them separately, based on the absolute value of the element.
Let $M := A \cdot {\mathds 1}_{\{|A_{ij}| \in (2^{k_0}, 2^{k_1}]\}}$. For any fixed $u, v \in \mathcal{N}$ and every $i, j \in [n]$ we define the event
$$\mathcal{M}^{light}_{ij} = \{|M_{ij}||u_iv_j| \le 2/\sqrt{n}\}.$$
Then,
\begin{align*}
\max_{u, v \in \mathcal{N}} &|\sum_{i,j} \tilde M_{ij}u_i v_j| \\
&\le \max_{u, v \in \mathcal{N}} |\sum_{i,j} \tilde M_{ij} ({\mathds 1}_{\mathcal{M}^{light}_{ij}} + {\mathds 1}_{(\mathcal{M}^{light}_{ij})^c}) u_i v_j|\\
&\le \max_{u, v \in \mathcal{N}} |\sum_{i,j} \tilde M_{ij}{\mathds 1}_{\mathcal{M}^{light}_{ij}} u_i v_j| + \max_{u, v \in \mathcal{N}} |\sum_{ij} \tilde M_{ij}{\mathds 1}_{(\mathcal{M}^{light}_{ij})^c}u_i v_j|.
\end{align*}
\bigskip
\textbf{Step 2. Light members.}
By Lemma~\ref{light_members}, for any fixed $u, v \in S^{n-1}$ and a fixed subset of indices $Q$ (where $Q^c$ is the set of entries in the deleted rows and columns), the estimate
\begin{equation}\label{est_light}
| \sum_{i,j} u_i M_{ij} {\mathds 1}_{\{(i,j) \in Q\}} {\mathds 1}_{\mathcal{M}^{light}_{ij}} v_j | \le 12\sqrt{n}
\end{equation}
fails with probability at most $2\exp(-6n)$. Now, taking the union bound over $5^n$ choices for $u \in \mathcal{N}$, as many choices for $v \in \mathcal{N}$, and $2^{2n}$ choices for the row and column subset $Q^c$, we obtain that
\begin{equation}\label{eq: light_part}
\mathbb{P} \{ | \sum_{i,j} u_i \tilde M_{ij} {\mathds 1}_{\mathcal{M}^{light}_{ij}} v_j |\le 12\sqrt{n} \text{ for all } u, v \in \mathcal{N}\} \ge 1 - 2\exp(-n).
\end{equation}
\bigskip
\textbf{Step 3. Other members.}
The second sum can be roughly bounded by the sum of absolute values of its members:
\begin{align*}
|\sum_{i,j} \tilde M_{ij}&{\mathds 1}_{(\mathcal{M}^{light}_{ij})^c}u_i v_j| \\
&\le \sum_{i,j} |\tilde M_{ij}|{\mathds 1}_{(\mathcal{M}^{light}_{ij})^c}|u_i v_j| \\
&\le \sum_{i,j}\big[\sum_{k = k_0+1}^{k_1}2^k {\mathds 1}_{\{|\tilde M_{ij}| \in (2^{k-1}, 2^k]\}}\big] {\mathds 1}_{\{|M_{ij}||u_iv_j| \ge 2/\sqrt{n}\}}|u_i v_j|
\end{align*}
Note that whenever ${\mathds 1}_{\{|\tilde M_{ij}| \in (2^{k-1}, 2^k]\}} = 1$ we also have $|M_{ij}| \le 2^k$. Indeed, $|M_{ij}| > 2^k$ implies either $|\tilde M_{ij}| > 2^k$ or $|\tilde M_{ij}| = 0$; in either case, $|\tilde M_{ij}| \notin (2^{k-1}, 2^k]$. So, the last expression is bounded above by
$$
\sum_{i,j}\sum_{k = k_0+1}^{k_1}2^k {\mathds 1}_{\{|\tilde M_{ij}| \in (2^{k-1}, 2^k]\}} {\mathds 1}_{\{2^k|u_iv_j| \ge 2/\sqrt{n}\}}|u_i v_j|. $$
Since $\E M_{ij}^2 \le \E A_{ij}^2 = 1$, from \eqref{pksumestimate} we have $2^{1 - k} \ge \sqrt{p_k}$ for any $k$, where $p_k$ is the probability of the $k$-th level (defined in \eqref{pk}).
As a result, we obtain
\begin{align}\label{second term}
|\sum_{i,j} \tilde M_{ij}&{\mathds 1}_{(\mathcal{M}^{light}_{ij})^c} u_i v_j| \nonumber\\
&\le \sum_{k = k_0+1}^{k_1} 2^k \sum_{i,j:|u_iv_j| \ge \sqrt{p_k/n}} {\mathds 1}_{\{|\tilde M_{ij}| \in (2^{k-1}, 2^k]\}} |u_i v_j|.
\end{align}
\bigskip
\textbf{Step 4. Bernoulli matrices.}
For each ``size'' $k = k_0+1, \ldots, k_1$ let us define an $n \times n$ matrix $B^k$ with independent Bernoulli entries
$$B^k_{ij}:= {\mathds 1}_{\{|M_{ij}| \in (2^{k-1}, 2^k]\}}, \quad \E B^k_{ij} = p_k.
$$
By the Decomposition Lemma~\ref{decomposition lemma} (and Remark~\ref{r_order}), with probability at least $1 - 3\kappa n^{-r}$ (after a union bound over the $\kappa$ levels), the entries of every $B^k$ can be assigned to one of three disjoint classes: $\mathcal{B}_1^k$, where all rows and columns have at most $C_1r p_k n$ non-zero entries; $\mathcal{B}_2^k$, where all the rows have at most $C_2 r$ non-zero entries; and $\mathcal{B}_3^k$, where all the columns have at most $C_2 r$ non-zero entries. We are going to further split the sum \eqref{second term} into three sums containing the elements of these three classes, and estimate each of them separately.
\begin{itemize}
\item[$\mathcal{B}_1:$]
The part with the entries from $\mathcal{B}_1^k$ satisfies the conditions of Lemma~\ref{tail_conds}.
For any $k = k_0 + 1, \ldots, k_1$
$$
\sum_{\begin{subarray}{c}(i,j) \in \mathcal{B}_1^k:\\
|u_iv_j| \ge \sqrt{p_k/n}\end{subarray} } {\mathds 1}_{\{|\tilde M_{ij}| \in (2^{k-1}, 2^k]\}} |u_i v_j| \le \sum_{{\begin{subarray}{c}(i,j) \in \mathcal{B}_1^k:\\
|u_iv_j| \ge \sqrt{p_k/n}\end{subarray} }} B_{ij}^k |u_i v_j|.
$$
By Lemma~\ref{tail_conds}, this sum is bounded by $C r\sqrt{p_k n}$ with probability at least $1 - n^{-r} $. So, for any $u, v \in S^{n-1}$
$$
\sum_{k = k_0+1}^{k_1} 2^k \sum_{{\begin{subarray}{c}(i,j) \in \mathcal{B}_1^k:\\
|u_iv_j| \ge \sqrt{p_k/n}\end{subarray} }} B_{ij}^k |u_i v_j| \le Cr\sqrt{n} \sum_{k = k_0+1}^{k_1} 2^k \sqrt{p_k}.
$$
Then, by the Cauchy-Schwarz inequality and estimate \eqref{pksumestimate},
$$
\sum_{k = k_0+1}^{k_1} 2^k \sqrt{p_k} \le \sqrt{\sum_{k = k_0+1}^{k_1} 2^{2k} p_k} \sqrt{\sum_{k = k_0+1}^{k_1} 1} \le 2\sqrt{\kappa}.
$$
\item[$\mathcal{B}_2:$]
The part with the entries from $\mathcal{B}_2^k$ can be estimated by Lemma~\ref{l2const}. We have that
$$
\sum_{k = k_0+1}^{k_1} 2^k \sum_{{\begin{subarray}{c}(i,j) \in \mathcal{B}_2^k:\\
|u_iv_j| \ge \sqrt{p_k/n}\end{subarray} }} {\mathds 1}_{\{|\tilde M_{ij}| \in (2^{k-1}, 2^k]\}} |u_i v_j| \le \sum_{i,j} Q_{ij} |u_i v_j|,
$$
where
$$
Q_{ij} := \sum_{k = k_0+1}^{k_1} 2^k {\mathds 1}_{\{(i,j) \in \mathcal{B}_2^k\}} {\mathds 1}_{\{|\tilde M_{ij}| \in (2^{k-1}, 2^k]\}}.
$$
Note that for every fixed $i = 1, \ldots, n$, the number of non-zero entries $Q_{ij}$ in the row $i$ is at most $C_2r \kappa$. Also, $|Q_{ij}| \le 2 |\tilde M_{ij}| \le 2 |\tilde A_{ij}|$ almost surely, so the maximum $L_2$-norm of a column of $Q$ is at most $2\sqrt{c_{\varepsilon} n}$. By Lemma~\ref{l2const}, this implies that for any $u, v \in S^{n-1}$
$$\sum_{i,j} Q_{ij} |u_i v_j| \le 2\sqrt{C_2 r \kappa c_{\varepsilon} n}.$$
\item[$\mathcal{B}_3:$] The part with the entries from $\mathcal{B}_3^k$ can be estimated in the same way as $\mathcal{B}_2^k$, repeating the argument for $A^T$.
\end{itemize}
\textbf{Step 5. Conclusion.}
Now we can combine the estimates obtained for the light members~\eqref{eq: light_part} and for all three parts of the non-light sum, to get that
$$
\|\tilde M \| \le 4\left(12 \sqrt{n} + C r \sqrt{\kappa n} + 4\sqrt{C_2 r \kappa c_{\varepsilon} n}\right) \lesssim r \sqrt{c_{\varepsilon} \kappa n }
$$
with probability at least $1 - 2 e^{-n} - 3\kappa n^{-r} - 6 n^{-r} \ge 1 - 10 \kappa n^{-r}$ for all $n$ large enough. Proposition~\ref{middle entries} is proved.
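As an informal numerical illustration of the scaling in Proposition~\ref{middle entries}, one can sample a heavy-tailed matrix, keep a fixed window of $\kappa$ dyadic levels, and compare the spectral norm with $\sqrt{n\kappa}$. The sketch below skips the regularization step (for typical draws at these sizes the rows and columns are already balanced), so it is only a plausibility check, not a verification of the proposition; all parameters are arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
k0, k1 = 2, 5                                   # kappa = 3 dyadic levels
for n in (200, 400, 800):
    A = rng.standard_t(df=2.5, size=(n, n))     # symmetric, two finite moments
    M = A * ((np.abs(A) > 2.0**k0) & (np.abs(A) <= 2.0**k1))
    print(n, np.linalg.norm(M, 2), np.sqrt(n * (k1 - k0)))
\end{verbatim}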
\section{Conclusions: proof of Theorem~\ref{main} and further directions} \label{proof section}
In this section we conclude the proof of Theorem~\ref{main}. As we have seen in the previous section, splitting the entries of the matrix $A$ into $\kappa$ ``levels'' of similar absolute value produces an extra $\sqrt{\kappa}$ factor in the norm estimate. Hence, we want to keep the number of levels as small as possible. We are going to show that this number can be as small as $C\ln \ln n$, where $n$ is the size of the matrix. The reason is that we only need to consider the ``average'' entries of the matrix, those with absolute values between $O(\sqrt{n / \ln n})$ and $O(\sqrt{n})$. The ``large'' entries will all be replaced by zeros in the regularization, and the restriction to the ``small'' entries forms a matrix with the optimal norm (no regularization is needed). One way to check this is to apply the following result of Bandeira and Van Handel:
\begin{theorem}\label{theor: bvh}\cite[Lemma 21]{CRV}
Let $X$ be an $n \times n$ matrix whose entries $X_{ij}$ are independent centered random variables. For any $\varepsilon \in (0, 1/2]$ there exists a constant $c_{\varepsilon}$ such that for every $t \ge 0$
$$
\mathbb{P}\{\|X\| \ge (1 + \varepsilon) 6\sigma + t \} \le n \exp( - t^2 / c_{\varepsilon} \sigma_*^2),
$$
where $\sigma$ is the maximum expected row and column norm:
$$
\sigma^2 := \max(\sigma^2_1, \sigma^2_2), \text{ where } \quad \sigma^2_1 = \max_i \sum_j \E (X_{ij}^2), \quad \sigma^2_2 = \max_j \sum_i \E (X_{ij}^2);
$$
and $\sigma_*$ is the maximum absolute value of an entry:
$$
\sigma_* := \max_{ij} \|X_{ij}\|_{\infty}.
$$
\end{theorem}
\begin{lemma}\label{small entries}
Suppose $S$ is a random $n \times n$ matrix with i.i.d. mean zero entries $S_{ij}$, such that $\E S_{ij}^2 \le 1$ and $|S_{ij}| < \bar c\sqrt{n}/\sqrt{\ln n}$. Let $r \ge 1$.
If $\bar c < c$, then with probability at least $1 - n^{-r}$
$$
\|S\| \le 13r \sqrt{n}.
$$
Here $c$ is a small enough absolute constant.
\end{lemma}
\begin{proof}
The proof follows directly from Theorem~\ref{theor: bvh} with $t = r\sqrt{n}$ and $\varepsilon = 1/2$. It is enough to take $c = 1/(11c_{1/2})$, where $c_{1/2}$ is the constant from the statement of Theorem~\ref{theor: bvh} corresponding to $\varepsilon = 1/2$.
\end{proof}
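A quick numerical check of Lemma~\ref{small entries} follows, with clipping in place of truncation (clipping also keeps the entries symmetric, mean zero, bounded, and with second moment at most one); the constant $0.1$ standing in for $\bar c$ is an arbitrary illustrative choice.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
n = 1000
cap = 0.1 * np.sqrt(n / np.log(n))              # bar-c = 0.1, illustrative
S = np.clip(rng.standard_t(df=2.5, size=(n, n)), -cap, cap)
print(np.linalg.norm(S, 2), "vs", 13 * np.sqrt(n))
\end{verbatim}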
Now we are ready to prove Theorem~\ref{main}.
\subsection{Proof of Theorem~\ref{main}}
Let us decompose $A$ into a sum of three $n \times n$ matrices
\begin{equation}\label{general split}
A := S + M + L,
\end{equation}
where $S$ contains the entries of $A$ that satisfy $|A_{ij}| \le 2^{k_0}$, the matrix $M$ contains the entries for which $2^{k_0} < |A_{ij}| \le 2^{k_1}$,
and $L$ contains large entries, satisfying $|A_{ij}| > 2^{k_1}$. Here,
$$k_0 := \left\lfloor \frac{1}{2} \log_2 \frac{c_1 n}{\ln n} \right\rfloor, \quad k_1:= \left\lceil \frac{1}{2}\log_2 (C_2 c_\varepsilon n)\right\rceil, \, \text{ where } c_\varepsilon = (\ln \varepsilon^{-1})/\varepsilon
$$
and $c_1, C_2 > 0$ are absolute constants.
Note that $S$, $M$ and $L$ inherit essential properties of the matrix $A$. First, they also have i.i.d. entries (since they are obtained by independent individual truncations from the i.i.d. elements $A_{ij}$). Due to the symmetric distribution of the entries of $A$, the entries of $S$, $M$ and $L$ have mean zero. Also, their second moment is bounded from above by $\E A_{ij}^2 = 1$.
Note that all the entries in $S$ satisfy $|A_{ij}| \le \sqrt{c_1 n/\ln n}$. Thus, as long as we choose the constant $c_1$ small enough to satisfy the condition of Lemma~\ref{small entries}, the norm of $S$ can be estimated as
\begin{equation}\label{norm1}
\mathbb{P} \{\|S\| > 13 r\sqrt{n}\} < n^{-r}.
\end{equation}
Clearly, replacing any subset of rows and columns by zeros can only decrease the norm of $S$.
By Remark~\ref{regularization by row norm}, with probability at least $ 1 - 7 \exp(-\varepsilon n/12)$, all rows and columns of $\tilde{A}$ have bounded norms: for $i = 1, \ldots, n$
$$\|row_i(\tilde{A})\|_2 \le C \sqrt{c_\varepsilon n} \text{ and } \|col_i(\tilde{A})\|_2 \le C \sqrt{c_\varepsilon n}.$$
In particular, it implies that all the entries of $\tilde A$ have absolute values bounded by $C \sqrt{c_\varepsilon n}$. So, by taking the constant $C_2 \ge C^2$, we ensure that $L$ is empty after the regularization.
Proposition~\ref{middle entries} estimates the norm of $M$ after the regularization (zeroing out rows and columns with large norms):
\begin{equation}\label{norm2}
\mathbb{P} \{\|\tilde A \cdot {\mathds 1}_{\{|\tilde A_{ij}| \in (2^{k_0}, 2^{k_1}]\}}\| > C c_{\varepsilon} r \sqrt{n \kappa} \} \le 10n^{-r}\kappa.
\end{equation}
By definition,
$$
\kappa := k_1 - k_0 \le \frac{1}{2} \left[\log_2 (C_2 c_\varepsilon n) - \log_2 \frac{c_1n }{\ln n} + 2\right] \le \log_2 \ln n
$$
for all large enough $n$.
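The growth of $\kappa$ is easy to tabulate; in the Python sketch below the absolute constants $c_1$ and $C_2$ are set to one purely for illustration.
\begin{verbatim}
import numpy as np

eps = 0.1
c_eps = np.log(1 / eps) / eps
c1 = C2 = 1.0                                   # illustrative absolute constants
for n in (10**3, 10**6, 10**9):
    k0 = int(np.floor(0.5 * np.log2(c1 * n / np.log(n))))
    k1 = int(np.ceil(0.5 * np.log2(C2 * c_eps * n)))
    print(n, k1 - k0, np.log2(np.log(n)))       # kappa vs log_2 ln n
\end{verbatim}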
Using the triangle inequality to combine the norm estimates~\eqref{norm1} and \eqref{norm2}, we get $\| \tilde A\| \lesssim c_{\varepsilon} r \sqrt{n \cdot \ln \ln n }$
with probability at least
$$
1- n^{-r} - 7e^{-\varepsilon n/12} - 10n^{-r}\ln\ln n \ge 1 - n^{-r +0.1}
$$
for all $n$ large enough. This concludes the proof of Theorem~\ref{main}.
\begin{remark}\label{regularization by row norm_s}
Note that the only place in the argument where it matters which entry subset we replace by zeros is Step~2 of the proof of Proposition~\ref{middle entries}. To have a manageable union bound estimate, we need to be sure that the number of options for the potential subset to be deleted is of order $\exp(Cn)$ (so, $\exp(\ln 2 \cdot n^2)$ options for a general entry subset would be too many).
Hence, also recalling Remark~\ref{regularization by row norm}, we emphasize that the norm estimate \eqref{main norm estimate} holds with probability $1 - n^{0.1- r}$ as long as we make the $L_2$-norms of all rows and columns bounded by $C\sqrt{c_\varepsilon n}$ by zeroing out any product subset of the entries of $A$.
\end{remark}
\section{Proof of Corollary~\ref{subblock regularization}}\label{corollary section}
The general idea of the proof of Corollary~\ref{subblock regularization} is to show that after the regularization procedure all rows and columns of the matrix have well-bounded $L_2$-norms, and then to apply Theorem~\ref{main}.
Originally, the core part of the algorithm (Step 2) was presented for Bernoulli random matrices in \cite{ReV, ReT}. We will use the following version of \cite[Lemma~5.1]{ReV} (based on the ideas developed in \cite[Proposition~2.1]{ReT}):
\begin{lemma}\label{constructive column cut}
Let $B$ be an $n \times n$ matrix with independent $0$-$1$ valued entries, $\E B_{ij} \le p$. Then for any $ L \ge 10$, with probability at least $1 - \exp(-n \exp(-L pn))$
the following holds. If we define
$$
W_{ij} :=
\begin{cases}
1, &\text{if } e_i^{row}(B) \le L np \text{ or } B_{ij} = 0, \\
L np/e_i^{row}(B) , &\text{otherwise.}
\end{cases}
$$
and $V_j := \prod_{i = 1}^n W_{ij}$, and $J := \{j: V_j < 0.1\}$, then
$$
|J| \le n \exp(-L n p) \quad \text{ and } \, \sum_{j \in J^c} B_{ij} \le 10Lnp, \text{ for any } i \in [n] .
$$
\end{lemma}
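The weights $W$, the products $V_j$, and the exceptional set $J$ are straightforward to compute; the Python sketch below follows our reading of the lemma, where the relevant count for the entry $(i,j)$ is that of its own row, $e_i^{row}(B)$.
\begin{verbatim}
import numpy as np

def column_cut(B, p, L=10.0):
    # W_{ij} = 1 if row i is light or B_{ij} = 0, else L n p / e_i^{row}(B)
    n = B.shape[0]
    row_cnt = B.sum(axis=1).astype(float)            # e_i^{row}(B)
    ratio = np.minimum(1.0, L * n * p / np.maximum(row_cnt, 1.0))
    W = np.where(B == 1, ratio[:, None], 1.0)
    V = W.prod(axis=0)                               # V_j = prod_i W_{ij}
    return np.flatnonzero(V < 0.1)                   # the exceptional set J
\end{verbatim}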
In order to pass from the Bernoulli case to the general distribution case we are going to use a version of the ``level truncation'' idea once again. Note that here we need the probabilities of the levels $p_l$ to be both not too large (for the joint cardinality estimate) and not too small (for the union bound over the probabilities). This motivates the idea to group ``similar size'' entries $A_{ij}$ not by the absolute value $|A_{ij}|$, but by common $2^{-l}$-quantiles of the distribution of $A_{ij}^2$.
\begin{remark}
A version of Corollary~\ref{subblock regularization} can be proved where one defines the sets $\mathcal{A}_l$ to contain all $A_{ij}$ such that $A_{ij}^2 \in (q_{l-1}, q_l]$, where $q_l$ is the $2^{-l}$-quantile of the distribution of $A_{ij}^2$, namely,
\begin{equation}\label{quant}
q_l := \inf\{t: \mathbb{P} \{A^2_{ij} > t\} \le 2^{-l}\}.
\end{equation}
The proof of this version is actually almost identical to the one presented below and gives smaller absolute constants. However, an additional requirement \emph{to know the quantiles} of the distribution of the entries in order to regularize the norm of the matrix seems undesirable. So we are going to prove the distribution-oblivious version, as presented by Algorithm~1.
\end{remark}
The next lemma shows that the order statistics used in the statement of Corollary~\ref{subblock regularization} approximate quantiles of the distribution of $A_{ij}^2$.
\begin{lemma} [Order statistics as approximate quantiles]
Let $\hat A_1, \ldots, \hat A_{n^2}$ be all the entries of the $n \times n$ random matrix $A$ arranged in non-increasing order of absolute value, and let $q_k$ be the $2^{-k}$-quantiles of the distribution of $A_{ij}^2$ defined by \eqref{quant}.
Then with probability at least $1 - 4\exp(-n^2 2^{-k_1 - 2})$ for all $k = 1, \ldots, k_1$
\begin{equation}\label{approx quantile estimate}
q_{k-2} \le \hat A^2_{ \lceil n^2 2^{1-k}\rceil } \le q_k.
\end{equation}
\end{lemma}
\begin{proof}
A direct application of Chernoff's inequality shows that for any $k$
$$
\mathbb{P} \{ \nu_1 > \lceil n^2 2^{1-k} \rceil\} \le \exp(-n^2 2^{-k}/4),
$$where $\nu_1$ is a number of entries $A_{ij}$ such that $A_{ij}^2 > q_k$.
Another application of Chernoff's inequality lower bound shows that
$$
\mathbb{P} \{ \nu_2 < \lceil n^2 2^{1-k}\rceil \} \le \exp(-n^2 2^{-k}/4),
$$where $\nu_2$ is a number of entries $A_{ij}$ such that $A_{ij}^2 > q_{k-2}$.
Then, with probability at least $1 - 2 \exp(-n^2 2^{-k}/4)$ the order statistic $\hat A^2_{\lceil n^2 2^{1-k}\rceil}$ is at least $q_{k-2}$ and at most $q_k$. Taking the union bound over $k$, equation \eqref{approx quantile estimate} holds with probability at least $1 - 4 \exp(-n^2 2^{-k_1}/4)$ for all $k = 1, \ldots, k_1$.
\end{proof}
\begin{remark}\label{quantile estimation remark}
We will use $k_1 = \lceil \log_2 (8 n/\varepsilon)\rceil$. An easy computation using \eqref{approx quantile estimate} shows that
$$q_{k_1 - l - 3} \le \hat A^2_{\lceil2^{4 + l - k_1} n^2 \rceil} \le \hat A^2_{\lceil 2^l n \varepsilon\rceil} \le \hat A^2_{\lceil2^{3 + l - k_1} n^2 \rceil} \le q_{k_1- l - 2}$$
for all $l = 0, \ldots, l_{\max}$ with probability $1 - 4 \exp(-n\varepsilon/4)$. Then, for $\mathcal{A}_l$ as defined in \eqref{aal} and all $l \le l_{\max}$,
\begin{equation}\label{plest}
\mathbb{P}\{A_{ij} \in \mathcal{A}_l\} \le 2^{3 + l - k_1} \le 2^{l}\varepsilon/n.
\end{equation}
\end{remark}
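The approximation \eqref{approx quantile estimate} can be eyeballed on simulated data; the snippet below compares the order statistics with empirical quantiles (standing in for the population quantiles $q_k$), under arbitrary illustrative choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
n = 500
A2 = rng.standard_t(df=2.5, size=(n, n))**2
A2_sorted = np.sort(A2, axis=None)[::-1]        # \hat A^2 in non-increasing order
for k in (2, 4, 6, 8):
    idx = int(np.ceil(n**2 * 2.0**(1 - k)))     # index from the lemma
    q_k = np.quantile(A2, 1 - 2.0**(-k))        # empirical proxy for q_k
    print(k, A2_sorted[idx - 1], q_k)
\end{verbatim}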
Now we are ready to prove Corollary~\ref{subblock regularization}.
\subsection{Proof of Corollary~\ref{subblock regularization}}
Let $k_1 = \lceil \log_2 (8 n/\varepsilon)\rceil$, and let $q_{k_1}$ be the corresponding $2^{-k_1}$-quantile of the distribution of $A_{ij}^2$ (as defined by~\eqref{quant}). It is easy to check by Chernoff's inequality that the total number of entries of $A$ such that $A_{ij}^2 \ge q_{k_1}$ is at most $\varepsilon n/2$ with probability at least $1 - e^{-\varepsilon n/4}$. So, all these ``large'' entries will be replaced by zeros at Step 1 of the regularization Algorithm~1.
To prove Corollary~\ref{subblock regularization}, it is enough to show that:
\begin{enumerate}
\item Algorithm~1 makes all rows and columns of the truncated matrix $A \cdot {\mathds 1}_{\{A_{ij}^2 < q_{k_1}\}}$ have norms bounded by $C\sqrt{c_\varepsilon n}$. Then, in view of Remark~\ref{regularization by row norm_s}, we can apply Theorem~\ref{main} to conclude the desired norm estimate.
\item The cardinalities of the exceptional index subsets satisfy $|I|, |J| \le \varepsilon n/2$ with high probability. Then the regularization procedure is indeed \emph{local}.
\end{enumerate}
The matrix $\bar A := A \cdot {\mathds 1}_{\{A_{ij}^2 < q_{k_1}\}}$ naturally decomposes into a union of $l_{\max}$ ``levels'' with the entries coming from the sets $\mathcal{A}_l$, and ``the leftover'' part that contains the $A_{ij}$ such that $A^2_{ij} < \hat A^2_{\lceil n \varepsilon 2^{l_{\max}} \rceil}$. So,
$$
\bar A_{ij} = A_{ij} {\mathds 1}_{\{A_{ij} \in \cup \mathcal{A}_l\}} + A_{ij} {\mathds 1}_{\{|A_{ij}| < \hat A_{\lceil n \varepsilon 2^{l_{\max}} \rceil}\}} =: A^{Large} + A^{Small}.
$$
All the rows and columns of $A^{Small}$ have $L_2$-norms at most $C \sqrt{n r c_\varepsilon}$ with probability at least $1 - n^{-r}$ (without any regularization). This follows from an application of Bernstein's inequality (e.g., \cite[Theorem 2.8.4]{V-HDP}) to a sum of independent centered entries bounded by $\sqrt{Cn/\ln n}$. Indeed, we just need to check the boundedness condition. Recall that $l_{\max} = \lfloor\log_2 (\ln n/\ln \varepsilon^{-4})\rfloor$. By the definition of the quantiles $q_k$ and Markov's inequality,
$$
\mathbb{P}\{A_{ij}^2 \ge q_{k_1 - 2 - l_{\max}} \} \ge 2^{-k_1 + 1} \frac{\ln n}{ \ln \varepsilon^{-4}} \ge \mathbb{P}\{ A_{ij}^2 \ge \frac{32 c_\varepsilon n}{\ln n}\}.
$$
Hence, the entries of $A^{Small}$ can be bounded from above:
$$
\hat A^2_{\lceil n \varepsilon 2^{l_{\max}} \rceil} \le q_{k_1 - 2 - l_{\max}} \lesssim \frac{32 c_\varepsilon n}{\ln n}.
$$
\bigskip
For the part $A^{Large}$ we use Lemma~\ref{constructive column cut} applied to the $n\times n$ matrices with i.i.d. entries $B^l_{ij} = {\mathds 1}_{\{A_{ij} \in \mathcal{A}_l\}}$ for each $l = 0, \ldots, l_{\max}$, with $L = 2c_{\varepsilon}$ and $p_l = 2^{l}\varepsilon/n$ (which is a valid choice by Remark~\ref{quantile estimation remark}). From the union bound we conclude that the statement of Lemma~\ref{constructive column cut} holds for all $l \le l_{\max}$ with probability at least
$$
1 - \sum_{l \le l_{\max}} \exp(-n \exp(-2c_{\varepsilon} n 2^{l}\varepsilon/n) )\ge 1 - \exp(-n^{0.5}).
$$
Recall that $\bar J = \cup_{l} J_l$ is the union of all exceptional column index subsets found for all matrices $A\cdot {\mathds 1}_{\{A_{ij} \in \mathcal{A}_l\}}$ with $l = 0, \ldots, l_{\max}$. Note that by the definition of quantiles and second moment condition,
\begin{equation}\label{secmom}
\sum_{s=0}^{\infty} q_s 2^{-s-1} \le \E A_{ij}^2 \le 1.
\end{equation}
By Lemma~\ref{constructive column cut} we can estimate for every $i \in [n]$
\begin{align*}
\|row_i(A^{Large}_{[n] \times \bar J^c})\|_2^2 &\le \sum_{l \le l_{\max}} q_{k_1 - l - 2} 20 c_\varepsilon n p_l \\
&\le \sum_{l \le l_{\max}} q_{k_1 - l - 2} 20 c_\varepsilon n \frac{2^{l - k_1} \varepsilon}{n} 2^{k_1} \le 160 c_{\varepsilon} n,
\end{align*}
since $2^{k_1} \le 16 n/\varepsilon$; in the last step we used \eqref{secmom} with $s = k_1 - l - 2$.
Then, by the triangle inequality for the $L_2$-norms applied to the rows of $A_{[n] \times \bar J^c}^{Large}$ and $A^{Small}$, the row boundedness condition is satisfied for $\bar A_{[n] \times \bar J^c}$. Next, at Step 3 we add the set $\hat J$ of columns with the largest $L_2$-norms. The same argument as in Remark~\ref{regularization by row norm} shows that with probability at least $1- n^{-r}$ there are no columns with norm larger than $C \sqrt{c_\varepsilon n}$ outside the set $\hat J$. So, the matrix $\tilde A_{[n] \times J^c}$ has all row and column norms well bounded (recall that $J := \bar J \cup \hat J$). Then, by Theorem~\ref{main}, with probability at least $1 - C (\ln \ln n)n^{- r}$
\begin{equation}\label{1}
\|\tilde A_{[n] \times J^c}\| \lesssim r^{3/2} \sqrt{c_{\varepsilon} n \ln\ln n}.
\end{equation}
Repeating the same argument for the transpose, we have that
\begin{equation}\label{2}
\|\tilde A_{I^c \times J}\| \le \|\tilde A_{I^c \times [n]}\| \lesssim r^{3/2} \sqrt{c_{\varepsilon} n \ln\ln n}.
\end{equation}
Now we can combine \eqref{1} and \eqref{2} by triangle inequality for the operator norm to conclude the desired norm estimate for $\tilde A$ on the intersection of good events, namely, with probability
$$1 - e^{-\varepsilon n/4} - n^{-r} - \exp(-n^{0.5}) - 2C (\ln \ln n)n^{- r} \ge 1 - n^{0.1-r}.$$
Finally, let us check that the regularization is local. Again by Lemma~\ref{constructive column cut}, the total number of exceptional columns
$$
|\bar J| = |\bigcup_l J_l| \le \sum_{l \le l_{\max}} n \exp(-2c_\varepsilon n 2^{l}\varepsilon/n) \le n \varepsilon/4,
$$
since we are summing a geometric progression and $l \ge 0$. Since the same argument holds for the cardinality of $\bar I$, we conclude that with high probability Algorithm~1 makes changes only within an $\varepsilon n \times \varepsilon n$ submatrix of $A$. This concludes the proof of Corollary~\ref{subblock regularization}.
\section{Discussion} \label{discussion section}
\subsection*{Regularization by the individual corrections of entries.}
Do we actually need to look at the rows and columns of $A$? A simpler and very intuitive idea would be to regularize the norm of $A$ just by zeroing out a few large entries of $A$. However, this approach does not work when the entries have only two finite moments: for an efficient local regularization, one has to account for the mutual positions of the entries in the matrix, not only for their absolute values.
Only when the entries $A_{ij}$ have \emph{more} than two finite moments does the truncation idea work; in that case it is not hard to derive the corresponding result from known bounds on random matrices such as \cite{vH, Seginer, Auff} (see also the discussion in \cite[Section~1.4]{ReV}).
In the two-moment case, individual correction of the entries can guarantee a bound with a bigger additional factor of $\ln n$ in the norm. It can be derived from known general bounds on random matrices, such as the matrix Bernstein inequality (\cite{Tropp}). One would apply the matrix Bernstein inequality to the entries truncated at level $\sqrt{n}$ to get that $\| \tilde A \| \le \varepsilon^{-1/2} \sqrt{n} \cdot \ln n$.
We consider Theorem~\ref{main} more advantageous than the individual-corrections approach not only because we are able to bring the norm closer to the optimal order $\sqrt{n}$, but also because it gives more adequate information about the obstructions to the ideal norm bound. Namely, these are not only the entries that are too large, but also the rows and columns that accumulate too many entries (all of which are, potentially, of average size).
\subsection*{Symmetry assumption}
The assumption that the entries of $A$ have a symmetric distribution does not look natural and potentially can be avoided. We need it in the current argument to keep zero mean after various truncations by absolute value (in \eqref{general split} and also in \eqref{est_light}). The standard symmetrization techniques (see \cite[Lemma~6.3]{LT}) would not work in this case, since we combine the convex norm function with truncation (zeroing out of a product subset), which is not convex.
\subsection*{Dependence on $n$}
Another potential improvement concerns the extra $\sqrt{\ln \ln n}$ factor over the optimal $n$-order $\|\tilde{A} \| \sim \sqrt{n}$. The reason for its appearance in our proof is that we consider the restrictions of $A$ to the discretization ``levels'' independently, and independently estimate their norms. The second moment assumption gives us that $\sum 2^{2k} p_k \sim 1$. However, the best we can hope for the norm of one ``level'' (after proper regularization) is $2^k \sqrt{n p_k}$ (since this is the expected $L_2$-norm of a restricted row). Thus, we end up summing the square roots of a converging series, $\sum (2^{2k} p_k)^{1/2}$, which for some distributions is as large as the square root of the number of summands ($\ln \ln n$ in our case).
It would be desirable to remove the extra $\sqrt{\ln \ln n}$ term and the symmetric distribution assumption, proving something like the following:
\begin{conjecture}
Consider an $n \times n$ random matrix $A$ with i.i.d. mean zero entries such that $\E A_{ij}^2 = 1$. Let $\tilde{A}$ be the matrix obtained from $A$ by zeroing out all rows and columns such that
\begin{equation}\label{deletion}
\|row_i(A)\|_m \ge C\E \|row_i(A)\|_m, \quad
\|col_i(A)\|_m \ge C\E \|col_i(A)\|_m
\end{equation}
for some $L_m$-norm to be specified (e.g. $m = 2$). Then with probability $1 - o(1)$ the operator norm satisfies $\|\tilde{A}\| \le C' \sqrt{n}$.
\end{conjecture}
Note that this result would be somewhat similar to the estimate proved by Seginer (\cite{Seginer}): in expectation, the norm of a matrix with i.i.d. entries is bounded by the largest norm of its rows and columns. However, note that after cutting the ``heavy'' rows and columns we lose the independence of the entries in the resulting matrix. And in general, the question of norm regularization is not equivalent to the other interesting question about sufficient conditions on the distribution of the entries that ensure the optimal order of the operator norm.
| {
"timestamp": "2018-09-12T02:12:43",
"yymm": "1809",
"arxiv_id": "1809.03926",
"language": "en",
"url": "https://arxiv.org/abs/1809.03926",
"abstract": "We show a simple local norm regularization algorithm that works with high probability. Namely, we prove that if the entries of a $n \\times n$ matrix $A$ are i.i.d. symmetrically distributed and have finite second moment, it is enough to zero out a small fraction of the rows and columns of $A$ with largest $L_2$ norms in order to bring the operator norm of $A$ to the almost optimal order $O(\\sqrt{\\log \\log n \\cdot n})$. As a corollary, we also obtain a constructive procedure to find a small submatrix of $A$ that one can zero out to achieve the same goal. This work is a natural continuation of our recent work with R. Vershynin, where we have shown that the norm of $A$ can be reduced to the optimal order $O(\\sqrt{n})$ by zeroing out just a small submatrix of $A$, but did not provide a constructive procedure to find this small submatrix. Our current approach extends the norm regularization techniques developed for the graph adjacency (Bernoulli) matrices in the works of Feige and Ofek, and Le, Levina and Vershynin to the considerably broader class of matrices.",
"subjects": "Probability (math.PR)",
"title": "Constructive regularization of the random matrix norm",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9907319866190856,
"lm_q2_score": 0.798186775339273,
"lm_q1q2_score": 0.7907891696249596
} |
https://arxiv.org/abs/1912.01607 | Moments of Student's t-distribution: A Unified Approach | In this note, we derive the closed form formulae for moments of Student's t-distribution in the one dimensional case as well as in higher dimensions through a unified probability framework. Interestingly, the closed form expressions for the moments of Student's t-distribution can be written in terms of the familiar Gamma function, Kummer's confluent hypergeometric function, and the hypergeometric function. | \section{Introduction}
In probability and statistics,
the location (e.g., mean), spread (e.g., standard deviation),
skewness, and kurtosis play an important role in the modeling of random processes. One often
uses the mean and standard deviation to construct confidence
intervals or conduct hypothesis testing, and
significant skewness or kurtosis of a data set indicates deviations from normality. Moreover, moment matching algorithms are among the most widely used fitting procedures in practice. As a result, it is important
to be able to find the moments of a given distribution. In his popular note, \cite{winkelbauer2012moments}
gave closed form formulae for the moments as well as the absolute moments of a normal distribution $N(\mu,\sigma^2)$.
The obtained results are elegant and have been well received. Recently, \cite{ogasawara2020unified}
provided unified, non-recursive formulae for the moments of the normal distribution with strip truncation.
Given the close relationship between the normal and Student's t-distributions, a natural question arises: Can we derive
similar formulae for the family
of Student's t-distributions?
To the best of the authors' knowledge, no such set of formulae exists for (generalized) Student's t-distributions.
The purpose of this note
is to provide a complete set of closed form formulae
for raw moments, central moments, absolute moments, and central absolute moments
for (generalized) Student's t-distributions in the one-dimensional and $n$-dimensional cases.
In particular, the formulae given in \eqref{musigmamoment} - \eqref{musigmaabscentralmoment} and Proposition \ref{proposition3} are new in the literature.
In this sense, we unify existing results and provide extensions to higher dimensions, within a common probabilistic framework.
\section{Student's t-distribution: One dimensional case}
Recall the probability density function (pdf)
of a standard Student's t-distribution with $\nu>0$ degrees of freedom, denoted by $ St(t| 0, 1,\nu)$, is given by
\begin{equation}
St(t| 0, 1,\nu)=\displaystyle\frac{\Gamma(\frac{\nu+1}{2})}{\Gamma(\frac{\nu}{2})}\frac{1}{\sqrt{\nu\pi}}
\left( 1+\frac{t^2}{\nu} \right)^{-\frac{\nu+1}{2}},\quad -\infty<t<\infty,
\end{equation}
where the Gamma function is defined as
$$\Gamma(z)=\displaystyle\int_0^{\infty} t^{z-1}e^{-t}dt.$$
\noindent More generally, we have
\begin{equation}\label{t-genversion}
St(t| \mu, \sigma,\nu)=\displaystyle\frac{\Gamma(\frac{\nu+1}{2})}{\Gamma(\frac{\nu}{2})}
\left(\frac{\sigma}{{\nu\pi}}\right)^{\frac{1}{2}}
\left( 1+\frac{\sigma}{\nu}\left({t-\mu} \right)^2 \right)^{-\frac{\nu+1}{2}},\quad -\infty<t<\infty,
\end{equation}
where $\mu\in(-\infty,\infty)$ is the location, $\sigma>0$
determines the scale,
and $\nu=1,2,3,\ldots$
represents the degrees of freedom.
When $\nu=1$, the pdf in \eqref{t-genversion}
reduces
to the pdf of $\text{Cauchy}(\mu,\sigma)$, while
the pdf in \eqref{t-genversion}
converges to the pdf of
the normal $N(\mu,\sigma^{-1})$ as $\nu\to\infty$.
While the tails of the normal distribution decay at an exponential rate,
the Student's t-distribution is heavy-tailed, with a polynomial decay rate. Because of this, the Student's t-distribution has been widely adopted
in robust data analysis including (non) linear regression \citep{lange1989robust}, sample selection models \citep{marchenko2012heckman}, and
linear mixed effect models \citep{pinheiro2001efficient}. It is also among the most widely applied distributions for financial risk modeling, see \cite{QRM15}, \cite{Shaw06},\cite{kwon2020distribution}. The reader is
invited to refer to \cite{kotz2004multivariate} for more.
The mean and variance
of a Student's t-distribution
are well known and can be found in
closed form by using the properties
of the Gamma function. However, for higher order raw or central moments, the calculation quickly
becomes tedious.
For later use,
we denote the probability density function of a Gamma distribution with parameters
$\alpha>0,\beta>0$ by
$$
\text{Gamma}(x|\alpha,\beta)
=\displaystyle\frac{\beta^{\alpha}}{\Gamma(\alpha)} x^{\alpha-1}e^{-\beta x},\quad x\in (0,\infty).
$$
Similarly,
the probability density function of a normal distribution
$X\sim N(\mu,\sigma^2)$ is denoted by
$$
N(x|\mu,\sigma^2)=\displaystyle\frac{1}{\sqrt{2\pi}\sigma}
\exp\left(-\displaystyle\frac{(x-\mu)^2}{2\sigma^2} \right), \quad x\in(-\infty,+\infty).
$$
We will also require two common special functions.
Kummer's confluent hypergeometric function is defined by
$$
K(\alpha,\gamma; z)\equiv {}_1F_1(\alpha,\gamma; z)=\displaystyle\sum_{n=0}^{\infty}\frac{\alpha^{\overline{n}}z^n}{\gamma^{\overline{n}}n!}.
$$
The hypergeometric function is defined by
$$
{}_2F_1(a,b,c; z)=\sum_{n=0}^{\infty}\frac{a^{\overline n} b^{\overline n}}{c^{\overline n}}
\cdot\frac{z^n}{n!},
$$
where
\begin{equation*}
a^{\overline n}=\frac{\Gamma(a+n)}{\Gamma(a)}=
\left\{
\begin{array}{ll}
1 & n=0,\\
a(a+1)\ldots(a+n-1) &n>0.
\end{array}
\right.
\end{equation*}
The following result is surprisingly simple and will be very useful in later derivations. It represents the Student's t-distribution as a Gamma scale mixture of normal distributions. See, for example, \cite{bishop2006pattern}.
\begin{lem}\label{mixtureRepre}
Let $T\,|\,\lambda$ be normally distributed with mean $\mu$ and variance $1/(\sigma\lambda)$.
For $\nu>0$, let $\lambda\sim \text{Gamma}(\nu/2,\nu/2)$.
Then the marginal distribution of $T$ is the Student's t-distribution $St(t|\mu,\sigma,\nu)$.
\end{lem}
\textbf{Proof:}
As the proof is very concise, we reproduce it here for the reader's convenience.
We have
\begin{align*}
&\displaystyle\int_0^{\infty}N(t|\mu,\frac{1}{\sigma\lambda})\text{Gamma}(\lambda|\frac{\nu}{2},\frac{\nu}{2})d\lambda \\
&=\displaystyle\int_0^{\infty}\frac{\sqrt{\sigma\lambda}}{\sqrt{2\pi}}e^{-\frac{\sigma\lambda}{2}(t-\mu)^2}\frac{\nu^{\nu/2}}{2^{\nu/2}\Gamma(\nu/2)}\lambda^{\nu/2-1} e^{-\frac{\nu}{2}\lambda}d\lambda\\
&=\frac{\sqrt{\sigma}}{\sqrt{2\pi}}\frac{\nu^{\nu/2}}{2^{\nu/2}\Gamma(\nu/2)}
\frac{\Gamma(\frac{\nu+1}{2})}{(\frac{\nu}{2}+\frac{\sigma}{2}(t-\mu)^2)^{\frac{\nu+1}{2}}}
\displaystyle\int_0^{\infty}\text{Gamma}(\lambda|\frac{\nu+1}{2},\frac{\nu}{2}+\frac{\sigma}{2}(t-\mu)^2)d\lambda\\
&=\frac{\sqrt{\sigma}}{\sqrt{2\pi}}\frac{\nu^{\nu/2}}{2^{\nu/2}\Gamma(\nu/2)}
\frac{\Gamma(\frac{\nu+1}{2})}{(\frac{\nu}{2}+\frac{\sigma}{2}(t-\mu)^2)^{\frac{\nu+1}{2}}}\\
&=St(t|\mu,\sigma,\nu).
\end{align*}
This completes the proof of the Lemma.
\qed
\\
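Lemma~\ref{mixtureRepre} is easy to confirm by simulation. In the Python sketch below, note that our $\sigma$ plays the role of an inverse squared scale, so the matching \texttt{scipy} parametrization uses \texttt{scale} $= 1/\sqrt{\sigma}$; all numerical values are illustrative.
\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma, nu, N = 1.5, 2.0, 5.0, 10**6
lam = rng.gamma(shape=nu/2, scale=2/nu, size=N)    # lambda ~ Gamma(nu/2, nu/2)
T = rng.normal(mu, 1/np.sqrt(sigma*lam))           # T | lambda ~ N(mu, 1/(sigma lambda))

grid = np.linspace(-6, 9, 7)
print(stats.t.cdf(grid, df=nu, loc=mu, scale=1/np.sqrt(sigma)))
print(np.searchsorted(np.sort(T), grid) / N)       # empirical cdf of the mixture
\end{verbatim}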
\indent The following results are well known:
\begin{thm}\label{momentsNormal}
We have
\begin{enumerate}
\item If $X\sim N(0,\sigma^2)$
then
$$
\mathbb E(X^m)=
\left\{
\begin{array}{cl}
0, &\mbox{if} \quad m=2k+1,\\
\displaystyle\frac{\sigma^m m!}{2^{m/2}(m/2)!}, &\mbox{if}\quad m=2k.
\end{array}
\right.
$$
\item If $X\sim\text{Gamma}(\alpha,\beta)$, then $\mathbb E(X^{k})=\frac{\beta^{-k} \Gamma(k+\alpha)}{\Gamma(\alpha)}$ for $k>-\alpha$.
\end{enumerate}
\end{thm}
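Both formulae can be verified numerically; a minimal check using \texttt{scipy} (with arbitrary illustrative parameter values):
\begin{verbatim}
import numpy as np
from scipy import stats
from scipy.special import gamma as G, factorial

sigma, m = 1.7, 6                                    # even m
print(stats.norm.moment(m, loc=0, scale=sigma),
      sigma**m * factorial(m) / (2**(m/2) * factorial(m//2)))

alpha, beta, k = 2.5, 1.3, -1                        # negative k is fine if k > -alpha
print(stats.gamma.expect(lambda x: x**k, args=(alpha,), scale=1/beta),
      beta**(-k) * G(k + alpha) / G(alpha))
\end{verbatim}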
With this and Lemma \ref{mixtureRepre} above, we are able to find
moments of Student's t-distribution. More specifically, we have the following comprehensive theorem in one dimension.
\begin{thm}
For $k\in \mathbb N_+$, $0< k < \nu$, the following results hold:
\begin{enumerate}
\item For $ T\sim St(t| 0, 1,\nu)$, the raw and absolute moments satisfy
\begin{equation}
\label{zeroonemoment}
\mathbb{E}(T^k) =
\left\{
\begin{array}{ll}
\frac{\Gamma(\frac{k+1}{2})}{\sqrt{\pi}} \cdot \frac{\nu^{k/2}}{\prod_{i=1}^{k/2} (\frac{\nu}{2}-i)}
,& k \text{ even},\\
0, & k \text{ odd}
\end{array}
\right.
\end{equation}
\begin{equation}
\label{zerooneabsmoment}
\mathbb{E}(|T|^k)=
\frac{\nu^{k/2}\Gamma((k+1)/2)\Gamma((\nu-k)/2) }{\sqrt{\pi}\Gamma(\nu/2)}.
\end{equation}
\item If $ T\sim St(t| \mu, \sigma,\nu)$, the raw moments satisfy
\begin{equation}
\label{musigmamoment}
\mathbb{E}(T^k)= \left\{
\begin{array}{ll}
(\nu/\sigma)^{k/2}
\displaystyle\frac{\Gamma(\frac{k+1}{2})}{\sqrt{\pi}}\frac{\Gamma(\frac{\nu}{2}-\frac{k}{2})}{\Gamma(\frac{\nu}{2})} {}_2F_1(-\frac{k}{2},\frac{\nu}{2}-\frac{k}{2},\frac{1}{2};-\frac{\mu^2\sigma}{\nu}),
& k \text{ even},\\
2\mu(\nu/\sigma)^{(k-1)/2}\frac{\Gamma(\frac{k}{2}+1)}{\sqrt{\pi}} \frac{\Gamma(\frac{\nu}{2}-\frac{k-1}{2})}{\Gamma(\frac{\nu}{2})} {}_2F_1(\frac{1-k}{2},\frac{\nu}{2}-\frac{k-1}{2},\frac{3}{2};-\frac{\mu^2\sigma}{\nu}), & k \text{ odd}.
\end{array}
\right.
\end{equation}
\begin{equation}
\label{musigmacentralmoment}
\mathbb E((T-\mu)^k)=\frac{(1+(-1)^k)}{2}(\nu/\sigma)^{k/2}\displaystyle\frac{\Gamma(\frac{k+1}{2})}{\sqrt{\pi}}\frac{\Gamma(\frac{\nu-k}{2})}{\Gamma(\frac{\nu}{2})}.
\end{equation}
\item If $ T\sim St(t| \mu, \sigma,\nu)$, the absolute moments satisfy
\begin{equation}
\label{musigmaabsmoment}
\mathbb E(|T|^k)=(\nu/\sigma)^{k/2}
\displaystyle\frac{\Gamma(\frac{k+1}{2})}{\sqrt{\pi}}\frac{\Gamma(\frac{\nu}{2}-\frac{k}{2})}{\Gamma(\frac{\nu}{2})} {}_2F_1(-\frac{k}{2},\frac{\nu}{2}-\frac{k}{2},\frac{1}{2};-\frac{\mu^2\sigma}{\nu}).
\end{equation}
\begin{equation}
\label{musigmaabscentralmoment}
\mathbb E(|T-\mu|^k)=\displaystyle(\nu/\sigma)^{k/2}\frac{\Gamma(\frac{k+1}{2})}{\sqrt{\pi}}\frac{\Gamma(\frac{\nu-k}{2})}{\Gamma(\frac{\nu}{2})}.
\end{equation}
\end{enumerate}
In general, the moments are undefined when $k\geq \nu$.
\end{thm}
\textbf{Proof:}
First assume that $T\sim St(t|0,1,\nu)$; we will find
$\mathbb{E}(|T|^k)$. The proof for $\mathbb{E}(T^k)$ follows from similar ideas in combination with the result obtained in Theorem \ref{momentsNormal}. From equation (17) in \cite{winkelbauer2012moments}, we have
\begin{align*}
\mathbb{E}(|T|^k|\lambda) &= \int \limits_{-\infty}^\infty |t|^k \text{ N}(t|0,\tfrac{1}{\lambda}) \text{ } dt =\frac{1}{\lambda^{k/2}}2^{k/2}
\displaystyle\frac{\Gamma(\frac{k+1}{2})}{\sqrt{\pi}}
K\left( -\frac{k}{2},\frac{1}{2};0 \right).
\end{align*}
Hence we have
\begin{align*}
\mathbb{E}(|T|^k)&=\mathbb E( \mathbb{E}(|T|^k|\lambda) )\\
&=\int \limits_{0}^\infty \frac{1}{\lambda^{k/2}}2^{k/2}
\displaystyle\frac{\Gamma(\frac{k+1}{2})}{\sqrt{\pi}}
K\left( -\frac{k}{2},\frac{1}{2};0 \right)
\cdot \frac{\nu^{\nu/2}}{2^{\nu/2} \Gamma(\tfrac{\nu}{2})} \lambda^{\nu/2-1} \exp \Big( -
\frac{\nu}{2} \lambda \Big) \text{ } d\lambda\\
&= 2^{k/2}
\displaystyle\frac{\Gamma(\frac{k+1}{2})}{\sqrt{\pi}}
K\left( -\frac{k}{2},\frac{1}{2};0 \right)
\cdot \frac{\nu^{\nu/2}}{2^{\nu/2} \Gamma(\tfrac{\nu}{2})} \int \limits_{0}^\infty \lambda^{\nu/2-1-k/2} \exp \Big( -
\frac{\nu}{2} \lambda \Big) \text{ } d\lambda\\
&= 2^{k/2}
\displaystyle\frac{\Gamma(\frac{k+1}{2})}{\sqrt{\pi}}
K\left( -\frac{k}{2},\frac{1}{2};0 \right)
\cdot \frac{\nu^{\nu/2}}{2^{\nu/2} \Gamma(\tfrac{\nu}{2})}
\cdot\frac{\Gamma(\frac{\nu-k}{2})}{(\nu/2)^{\frac{\nu-k}{2}}}
\int \limits_{0}^\infty \frac{(\nu/2)^{\frac{\nu-k}{2}}}{\Gamma(\frac{\nu-k}{2})}\lambda^{(\nu-k)/2-1} \exp \Big( -
\frac{\nu}{2} \lambda \Big) \text{ } d\lambda\\
&=2^{k/2}
\displaystyle\frac{\Gamma(\frac{k+1}{2})}{\sqrt{\pi}}
K\left( -\frac{k}{2},\frac{1}{2};0 \right)
\cdot \frac{\nu^{\nu/2}}{2^{\nu/2} \Gamma(\tfrac{\nu}{2})}
\cdot\frac{\Gamma(\frac{\nu-k}{2})}{(\nu/2)^{\frac{\nu-k}{2}}}\\
&=\frac{\nu^{k/2}\Gamma((k+1)/2)\Gamma((\nu-k)/2) }{\sqrt{\pi}\Gamma(\nu/2)},
\end{align*}
where we have used the fact that $K\left( -\frac{k}{2},\frac{1}{2};0 \right)=1$. \\
\indent Next, assume that $T\sim St(t|\mu,\sigma,\nu)$. Using the following facts (obtained in \cite{winkelbauer2012moments})
\begin{align*}
\mathbb{E}((T-\mu)^k|\lambda) &= \int \limits_{-\infty}^\infty (t-\mu)^k \text{ N}(t|\mu,\tfrac{1}{\lambda \sigma}) \text{ } dt =(1+(-1)^k)\frac{1}{\lambda^{k/2}}2^{k/2-1}\sigma^{-k/2}\displaystyle\frac{\Gamma(\frac{k+1}{2})}{\sqrt{\pi}},
\end{align*}
and
\begin{align*}
\mathbb{E}(|T-\mu|^k|\lambda) &= \int \limits_{-\infty}^\infty |t-\mu|^k \text{ N}(t|\mu,\tfrac{1}{\lambda \sigma}) \text{ } dt =\frac{\sigma^{-k/2}}{\lambda^{k/2}}2^{k/2}\displaystyle\frac{\Gamma(\frac{k+1}{2})}{\sqrt{\pi}},
\end{align*}
the derivations for $\mathbb{E}((T-\mu)^k)$ and $\mathbb{E}(|T-\mu|^k)$ follow similarly.\\
\indent For the raw absolute moment of $T\sim St(t|\mu,\sigma,\nu)$, again from equation (17) in \cite{winkelbauer2012moments}, we have
\begin{align*}
\mathbb{E}(|T|^k|\lambda) &= \int \limits_{-\infty}^\infty |t|^k \text{ N}(t|\mu,\tfrac{1}{\lambda\sigma}) \text{ } dt =\frac{1}{\lambda^{k/2}}2^{k/2}\sigma^{-k/2}
\displaystyle\frac{\Gamma(\frac{k+1}{2})}{\sqrt{\pi}}
K\left( -\frac{k}{2},\frac{1}{2};-\frac{\mu^2}{2}\sigma\lambda \right).
\end{align*}
Hence, using Part 2) of Theorem \ref{momentsNormal}, we have, for $k<\nu$,
\begin{align*}
\mathbb{E}(|T|^k)&=\mathbb E( \mathbb{E}(|T|^k|\lambda) )\\
&=\int \frac{1}{\lambda^{k/2}}2^{k/2}\sigma^{-k/2}
\displaystyle\frac{\Gamma(\frac{k+1}{2})}{\sqrt{\pi}}
K\left( -\frac{k}{2},\frac{1}{2};-\frac{\mu^2}{2}\sigma\lambda \right)\text{Gamma}(\lambda|\frac{\nu}{2},\frac{\nu}{2})d\lambda\\
&=\sigma^{-k/2}2^{k/2}
\displaystyle\frac{\Gamma(\frac{k+1}{2})}{\sqrt{\pi}}
\sum_{n=0}^{\infty}\frac{(-k/2)^{\overline n}}{(1/2)^{\overline n}}\frac{(-\mu^2/2)^n\sigma^n }{n!} \int \lambda^{n-k/2}\text{Gamma}(\lambda|\frac{\nu}{2},\frac{\nu}{2})d\lambda\\
&=\sigma^{-k/2}2^{k/2}
\displaystyle\frac{\Gamma(\frac{k+1}{2})}{\sqrt{\pi}}
\sum_{n=0}^{\infty}\frac{(-k/2)^{\overline n}}{(1/2)^{\overline n}}\frac{(-\mu^2/2)^n\sigma^n }{n!} (\nu/2)^{-n+k/2}\frac{\Gamma(n-k/2+\nu/2)}{\Gamma(\nu/2)}\\
&=\sigma^{-k/2}2^{k/2}
\displaystyle\frac{\Gamma(\frac{k+1}{2})}{\sqrt{\pi}} \frac{\Gamma(\frac{\nu}{2}-\frac{k}{2})}{\Gamma(\frac{\nu}{2})}
\sum_{n=0}^{\infty}\frac{(-k/2)^{\overline n}}{(1/2)^{\overline n}}\frac{(-\mu^2/2)^n\sigma^n }{n!} (\nu/2)^{-n+k/2}(\frac{\nu}{2}-\frac{k}{2})^{\overline n}\\
&=\sigma^{-k/2}2^{k/2}(\nu/2)^{k/2}
\displaystyle\frac{\Gamma(\frac{k+1}{2})}{\sqrt{\pi}} \frac{\Gamma(\frac{\nu}{2}-\frac{k}{2})}{\Gamma(\frac{\nu}{2})}
\sum_{n=0}^{\infty}\frac{(-k/2)^{\overline n}}{(1/2)^{\overline n}}\frac{(-\mu^2/2)^n\sigma^n }{n!} (\nu/2)^{-n}(\frac{\nu}{2}-\frac{k}{2})^{\overline n}\\
&= (\nu/\sigma)^{k/2}
\displaystyle\frac{\Gamma(\frac{k+1}{2})}{\sqrt{\pi}}\frac{\Gamma(\frac{\nu}{2}-\frac{k}{2})}{\Gamma(\frac{\nu}{2})} {}_2F_1(-\frac{k}{2},\frac{\nu}{2}-\frac{k}{2},\frac{1}{2};-\frac{\mu^2\sigma}{\nu}).
\end{align*}
Lastly, from equation (12) in \cite{winkelbauer2012moments}, we have
\begin{equation*}
\mathbb{E}(T^k|\lambda)= \int \limits_{-\infty}^\infty t^k \text{ N}(t|\mu,\tfrac{1}{\lambda \sigma}) \text{ } dt
=\left\{
\begin{array}{ll}
\sigma^{-k/2}2^{k/2}\frac{\Gamma(\frac{k+1}{2})}{\sqrt{\pi}}\frac{1}{\lambda^{k/2}}K(-\frac{k}{2},\frac{1}{2};
-\frac{\mu^2}{2}\sigma\lambda) & k \text{ even},\\
\mu\sigma^{-(k-1)/2}2^{(k+1)/2}\frac{\Gamma(\frac{k}{2}+1)}{\sqrt{\pi}}\frac{1}{\lambda^{(k-1)/2}}K(\frac{1-k}{2},\frac{3}{2};
-\frac{\mu^2}{2}\sigma\lambda) &\text{ } k \text{ odd}.
\end{array}
\right.
\end{equation*}
Similar to the calculation above, we have
\begin{equation*}
\mathbb{E}(T^k)= \left\{
\begin{array}{ll}
(\nu/\sigma)^{k/2}
\displaystyle\frac{\Gamma(\frac{k+1}{2})}{\sqrt{\pi}}\frac{\Gamma(\frac{\nu}{2}-\frac{k}{2})}{\Gamma(\frac{\nu}{2})} {}_2F_1(-\frac{k}{2},\frac{\nu}{2}-\frac{k}{2},\frac{1}{2};-\frac{\mu^2\sigma}{\nu}),
& k \text{ even},\\
2\mu(\nu/\sigma)^{(k-1)/2}\frac{\Gamma(\frac{k}{2}+1)}{\sqrt{\pi}} \frac{\Gamma(\frac{\nu}{2}-\frac{k-1}{2})}{\Gamma(\frac{\nu}{2})} {}_2F_1(\frac{1-k}{2},\frac{\nu}{2}-\frac{k-1}{2},\frac{3}{2};-\frac{\mu^2\sigma}{\nu}) & k \text{ odd}.
\end{array}
\right.
\end{equation*}
This completes the proof of the theorem.
\qed
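All the formulae of the theorem can be checked against numerical integration. A minimal sketch for \eqref{musigmamoment} with an even $k$ (parameter values arbitrary; recall \texttt{scale} $= 1/\sqrt{\sigma}$):
\begin{verbatim}
import numpy as np
from scipy import stats
from scipy.special import gamma as G, hyp2f1

mu, sigma, nu, k = 0.7, 2.0, 9.0, 4                 # k even, k < nu
closed = ((nu/sigma)**(k/2) * G((k+1)/2) / np.sqrt(np.pi)
          * G(nu/2 - k/2) / G(nu/2)
          * hyp2f1(-k/2, nu/2 - k/2, 1/2, -mu**2 * sigma / nu))
numeric = stats.t.expect(lambda t: t**k, args=(nu,), loc=mu,
                         scale=1/np.sqrt(sigma))
print(closed, numeric)
\end{verbatim}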
\begin{rmk}
\text{}
\begin{enumerate}
\item The formulae given in \eqref{musigmamoment} - \eqref{musigmaabscentralmoment} are new in the literature. When $ T\sim St(t| 0, 1,\nu)$, $\mathbb E(T^k)$ is well known. Moreover, one can directly use
the definition to find $\mathbb E(|T|^k)$
through the class of Beta functions defined in Section 6.2 of \cite{abramowitz1948handbook} and arrive
at the same formula.
However, this direct approach no longer works for expectations of the form
$\mathbb E(|T|^{k})$ and $\mathbb E(T^{k})$ when $ T\sim St(t| \mu, \sigma,\nu)$, or for the higher dimensional moments
considered in Section \ref{HigherDimcase}. Also, \eqref{musigmamoment} clearly reduces to \eqref{zeroonemoment} and \eqref{musigmaabsmoment} to \eqref{zerooneabsmoment} when $\mu=0$ and $\sigma=1$, which shows the consistency of the method.
\item If $ T\sim St(t| \mu, \sigma,\nu)$, then once $\mathbb E((T-\mu)^i)$, $0\leq i\leq k$, have been computed, we can use them to compute
$\mathbb E(T^k)$ for $k<\nu$ via the expansion
$$
\mathbb E(T^k)=\mathbb E((T-\mu+\mu)^k)=\sum_{i=0}^{k}\mu^{k-i}{k\choose i}\mathbb E((T-\mu)^i).
$$
\end{enumerate}
\end{rmk}
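The expansion in item 2 is immediate to implement; the following Python sketch reconstructs the raw moment of the previous check from the central moments \eqref{musigmacentralmoment} (same illustrative parameters as above).
\begin{verbatim}
import numpy as np
from scipy.special import gamma as G, comb

mu, sigma, nu, k = 0.7, 2.0, 9.0, 4

def central(i):            # E (T - mu)^i from (musigmacentralmoment)
    if i % 2:
        return 0.0
    return (nu/sigma)**(i/2) * G((i+1)/2) / np.sqrt(np.pi) * G((nu-i)/2) / G(nu/2)

raw = sum(mu**(k-i) * comb(k, i) * central(i) for i in range(k + 1))
print(raw)                 # agrees with the closed form for E T^4 above
\end{verbatim}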
\section{Higher dimensional case}\label{HigherDimcase}
Now we consider the case $n\geq 2$. Denote $\bm t=(t_1,t_2,\ldots,t_n)\in\mathbb R^{n}$,
and denote the pdf of an $n$-dimensional normal random vector by
$$
N(\bm x|\bm\mu, \Sigma)
=\frac{1}{(2\pi)^{n/2}|\bm\Sigma|^{\frac{1}{2}}}
e^{-\frac{1}{2}(\bm x-\bm \mu)^T \bm\Sigma^{-1}(\bm x-\bm \mu)},
$$
where $|\bm\Sigma|$ is the determinant of $\bm\Sigma$. In analogy with Lemma \ref{mixtureRepre},
the probability density of the $n$-dimensional Student's t-distribution is given by
\begin{equation}\label{densitythroughnormal}
St(\bm t| \bm \mu, \bm \Sigma,\nu)
=\displaystyle\int_0^{\infty} N(\bm t|\bm \mu, (\eta\bm\Sigma)^{-1})\text{Gamma}(\eta|\frac{\nu}{2}
, \frac{\nu}{2})d\eta,
\end{equation}
where $\bm\mu$ is called the location, $\bm\Sigma$ is the scale matrix, and $\nu$
is the degrees of freedom parameter. It can be shown that the pdf of the $n$-dimensional multivariate Student's t-distribution
is given by
\begin{equation}
\label{n-t-dist-pdf}
St(\bm t| \bm\mu, \bm\Sigma,\nu)=\displaystyle
\frac{\Gamma(\frac{\nu+n}{2})}{\Gamma(\frac{\nu}{2})}
\frac{|\bm\Sigma|^{\frac{1}{2}}}{(\nu\pi)^{\frac{n}{2}}}
\left(1+\frac{1}{\nu}(\bm t-\bm\mu)^T\bm\Sigma(\bm t -\bm\mu) \right)^{-\frac{\nu+n}{2}}.
\end{equation}
Note that in the standardized case of
$\bm \mu=\bm 0$ and $\bm\Sigma =\bm I$, the representation in \eqref{densitythroughnormal} is reduced to
\begin{equation}\label{densitythroughnormal1}
St(\bm t| \bm 0, \bm I,\nu)
=\displaystyle\int_0^{\infty} N(\bm t|\bm 0, \frac{1}{\eta}\bm I)\text{Gamma}(\eta|\frac{\nu}{2}
, \frac{\nu}{2})d\eta.
\end{equation}
Let $\bm T=(T_1,T_2,\ldots,T_n)$, and $\bm k=(k_1,k_2,\ldots,k_n)$ with $0\leq k_i\in\mathbb N$. The $\bm k$ moment of $\bm T$
is defined as
$$
\mathbb E(\bm T^{\bm k})=\bm\int t_1^{k_1}t_2^{k_2}\ldots t_n^{k_n} \cdot St(\bm t| \bm \mu, \bm \Sigma,\nu)dt_1\ldots dt_n.
$$
Similarly,
$$
\mathbb E( |\bm T|^{\bm k})=\bm\int |t_1|^{k_1}|t_2|^{k_2}\ldots |t_n|^{k_n}\cdot St(\bm t| \bm \mu, \bm \Sigma,\nu)dt_1\ldots dt_n.
$$
To the best of the authors' knowledge, the following is new:
\begin{thm}
\label{proposition3}
For $\sum k_i< \nu$, we have
\begin{enumerate}
\item If $\bm T\sim St(\bm t| \bm 0, \bm I,\nu)$ then
\begin{itemize}
\item The raw moments satisfy
$$
\mathbb E(\bm T^{\bm k})=\left\{
\begin{array}{ll}
0, &\mbox{if} \quad \text{at least one} \text{ } k_i\text{ is odd},\\
\displaystyle \displaystyle\nu^{\frac{\sum k_i}{2}}\frac{ \Gamma(\frac{\nu-\sum k_i}{2})}{
\Gamma(\frac{\nu}{2})} \frac{ \prod (k_i)!}{2^{(\sum k_i)}\prod(k_i/2)!}, &\mbox{if}\quad \text{all } k_i \text{ are even}.
\end{array}
\right.
$$
\item The absolute moments satisfy
$$\mathbb E(|\bm T|^{\bm k})=\displaystyle\nu^{\frac{\sum k_i}{2}}\frac{ \Gamma(\frac{\nu-\sum k_i}{2})}{
\Gamma(\frac{\nu}{2})}\prod\frac{\Gamma(\frac{k_i+1}{2})}{\sqrt{\pi}}.$$
\end{itemize}
\item If $\bm T\sim St(\bm t| \bm \mu, \bm \Sigma,\nu)$, let $\bm\Sigma^{-1}=(\overline{\sigma}_{ij})$ and let $\bm e_i=(0,\ldots,1,\ldots,0)$ be the $i$-th unit vector of $\mathbb R^n$. Then we have the following recursive
formula to compute the moments of $\bm T$:
\begin{align*}
\mathbb E(\bm T^{\bm k+\bm e_i})
=\mu_i\mathbb E(\bm T^{\bm k})
+\frac{\nu}{2}\frac{\Gamma(\frac{\nu}{2}-1)}{\Gamma(\frac{\nu}{2})}
\sum_{j=1}^{n}\overline{\sigma}_{ij}k_j\mathbb E(\bm T^{\bm k-\bm e_j}).
\end{align*}
\end{enumerate}
\end{thm}
\textbf{Proof}:\\
For 1), first from \eqref{densitythroughnormal1}, we have
\begin{align*}
\mathbb E(\bm T^{\bm k})=\int_0^{\infty}\mathbb E(\bm X^{\bm k}|\bm 0,\frac{1}{t}\bm I)\text{Gamma}(t|\frac{\nu}{2},\frac{\nu}{2})dt,
\end{align*}
where $\mathbb{E}(\bm X^{\bm k}|\bm 0,\frac{1}{t}\bm I)$ is the $\bm k$ moment of a $N(\bm 0,\frac{1}{t}\bm I)$.
Using Theorem \ref{momentsNormal}, we have
$$
\mathbb{E}(\bm X^{\bm k}|\bm 0,\frac{1}{t}\bm I)=
\prod_{i=1}^{n} \mathbb{E}( X_i^{k_i}|0,\frac{1}{t})=
\left\{
\begin{array}{cl}
0, &\mbox{if} \quad \text{at least one } k_i\text{ is odd},\\
\displaystyle\frac{t^{-\sum k_i/2} \prod (k_i)!}{2^{(\sum k_i)/2}\prod(k_i/2)!}, &\mbox{if} \quad \text{all } k_i\text{ are even}.
\end{array}
\right.
$$
As a result,
\begin{align*}
\mathbb E(\bm T^{\bm k})&=
\left\{
\begin{array}{ll}
0, &\mbox{if} \quad \text{at least one } k_i\text{ is odd},\\
\displaystyle\frac{ \prod (k_i)!}{2^{(\sum k_i)/2}\prod(k_i/2)!}\int_0^{\infty} t^{-\sum k_i/2}\text{Gamma}(t|\frac{\nu}{2},\frac{\nu}{2})dt, &\mbox{if} \quad \text{all } k_i\text{ are even}.
\end{array}
\right.\\
&=\left\{
\begin{array}{ll}
0, &\mbox{if} \quad \text{at least one } k_i \text{ is odd},\\
\displaystyle \displaystyle\nu^{\frac{\sum k_i}{2}}\frac{ \Gamma(\frac{\nu-\sum k_i}{2})}{
\Gamma(\frac{\nu}{2})} \frac{ \prod (k_i)!}{2^{(\sum k_i)}\prod(k_i/2)!}, &\mbox{if} \quad \text{all } k_i \text{ are even}.
\end{array}
\right.
\end{align*}
\noindent Similarly, we have
\begin{align*}
\mathbb E(|\bm T|^{\bm k})=\int_0^{\infty}\mathbb E(|X_1|^{k_1}|X_2|^{k_2}\ldots |X_n|^{k_n}|\bm 0,\frac{1}{t}\bm I)\text{Gamma}(t|\frac{\nu}{2},\frac{\nu}{2})dt
\end{align*}
where
\begin{align*}
\mathbb E(|X_1|^{k_1}|X_2|^{k_2}\ldots |X_n|^{k_n}|\bm 0,\frac{1}{t}\bm I)=
\prod_{i=1}^{n} \mathbb{E}( |X_i|^{k_i}|0,\frac{1}{t})=\prod \frac{1}{t^{k_i/2}}2^{k_i/2}
\displaystyle\frac{\Gamma(\frac{k_i+1}{2})}{\sqrt{\pi}}.
\end{align*}
Therefore,
\begin{align*}
\mathbb E( |\bm T|^{\bm k})&=2^{\sum k_i/2}
\displaystyle\prod\frac{\Gamma(\frac{k_i+1}{2})}{\sqrt{\pi}}\int_0^{\infty}t^{-\sum k_i/2}\text{Gamma}(t|\frac{\nu}{2},\frac{\nu}{2})dt
\\
&=\displaystyle\nu^{\frac{\sum k_i}{2}}\frac{ \Gamma(\frac{\nu-\sum k_i}{2})}{
\Gamma(\frac{\nu}{2})}\prod\frac{\Gamma(\frac{k_i+1}{2})}{\sqrt{\pi}}\quad \text{if}\quad \sum k_i< \nu.
\end{align*}
For 2), from \eqref{densitythroughnormal}
\begin{align}\label{recursiveEq}
\mathbb E(\bm T^{\bm k})=\int_0^{\infty}\mathbb E(\bm X^{\bm k}|\bm\mu,\frac{1}{t}\bm \Sigma^{-1})\text{Gamma}(t|\frac{\nu}{2},\frac{\nu}{2})dt,
\end{align}
where $\mathbb E(\bm X^{\bm k})\equiv \mathbb E(\bm X^{\bm k}|\bm\mu,\frac{1}{t}\bm \Sigma^{-1})$
is the $\bm k$ moment of $N(\bm\mu,\frac{1}{t}\bm \Sigma^{-1})$.
Recall the pdf of $N(\bm \mu,\frac{1}{t}\Sigma^{-1})$ is given by $$
N(\bm x|\bm\mu, \frac{1}{t}\bm\Sigma^{-1})
=\frac{1}{(2\pi)^{n/2}|\frac{1}{t}\bm\Sigma^{-1}|^{\frac{1}{2}}}
e^{-\frac{1}{2}(\bm x-\bm \mu)^T t\bm\Sigma(\bm x-\bm \mu)}.
$$
Similar to Theorem 1 in \cite{kan2017moments}, we have
\begin{align*}
-\frac{\partial N(\bm x|\bm\mu, \frac{1}{t}\bm\Sigma^{-1})}{\partial\bm x}
=t\bm\Sigma(\bm x-\bm \mu)N(\bm x|\bm\mu, \frac{1}{t}\bm\Sigma^{-1}).
\end{align*}
Hence
\begin{align*}
-\bm\int\bm x^{\bm k}\frac{\partial N(\bm x|\bm\mu, \frac{1}{t}\bm\Sigma^{-1})}{\partial\bm x}d\bm x
=\bm\int \bm x^{\bm k} t\bm\Sigma(\bm x-\bm \mu)N(\bm x|\bm\mu, \frac{1}{t}\bm\Sigma^{-1})d\bm x.
\end{align*}
By integration by parts, we arrive at
\begin{align*}
\bm\int k_j\bm x^{\bm k-\bm e_j}N(\bm x|\bm\mu, \frac{1}{t}\bm\Sigma^{-1})d\bm x
=\bm\int \bm x^{\bm k} t\bm\Sigma(\bm x-\bm \mu)N(\bm x|\bm\mu, \frac{1}{t}\bm\Sigma^{-1})d\bm x.
\end{align*}
Or equivalently
\begin{align*}
\bm\int \bm x^{\bm k} (\bm x-\bm \mu)N(\bm x|\bm\mu, \frac{1}{t}\bm\Sigma^{-1})d\bm x=
\frac{1}{t}\bm\Sigma^{-1}\bm\int k_j\bm x^{\bm k-\bm e_j}N(\bm x|\bm\mu, \frac{1}{t}\bm\Sigma^{-1})d\bm x.
\end{align*}
This in turn implies that
$$
\mathbb E(\bm X^{\bm k+\bm e_i})
=\mu_i\mathbb E(\bm X^{\bm k})
+\frac{1}{t}\sum_{j=1}^{n}\overline{\sigma}_{ij}k_j\mathbb E(\bm X^{\bm k-\bm e_j}).
$$
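This is the standard recursion for the raw moments of a normal distribution, with $\overline\sigma_{ij}$ the $(i,j)$ entry of $\bm\Sigma^{-1}$, so that $V=\frac{1}{t}\bm\Sigma^{-1}$ is the covariance matrix. As an aside, a minimal memoized transcription of this recursion (a Python sketch; the function name is ours) reads:
\begin{verbatim}
from functools import lru_cache

def gaussian_moment(mu, V, k):
    """E(X^k) for X ~ N(mu, V), via the recursion above (a sketch)."""
    n = len(mu)

    @lru_cache(maxsize=None)
    def E(kk):
        if all(v == 0 for v in kk):
            return 1.0                                # E(X^0) = 1
        i = next(j for j in range(n) if kk[j] > 0)    # write kk = km + e_i
        km = list(kk); km[i] -= 1; km = tuple(km)
        val = mu[i] * E(km)
        for j in range(n):
            if km[j] > 0:
                kmm = list(km); kmm[j] -= 1
                val += V[i][j] * km[j] * E(tuple(kmm))
        return val

    return E(tuple(k))

# gaussian_moment([0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], (4, 0)) -> 3.0
\end{verbatim}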
Plugging this into the equation \eqref{recursiveEq}, we have the following recursive equation
\begin{align*}
\mathbb E(\bm T^{\bm k+\bm e_i})
&=\mu_i\mathbb E(\bm T^{\bm k})
+\sum_{j=1}^{n}\overline{\sigma}_{ij}k_j\mathbb E(\bm T^{\bm k-\bm e_j})\int_0^{\infty}\frac{1}{t}\text{Gamma}(t|\frac{\nu}{2},\frac{\nu}{2})dt\\
&=\mu_i\mathbb E(\bm T^{\bm k})
+\frac{\nu}{2}\frac{\Gamma(\frac{\nu}{2}-1)}{\Gamma(\frac{\nu}{2})}
\sum_{j=1}^{n}\overline{\sigma}_{ij}k_j\mathbb E(\bm T^{\bm k-\bm e_j}).
\end{align*}
This completes the proof of the theorem.
\qed
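As a quick check of the recursion, taking $n=1$, $\mu_1=0$, $\overline\sigma_{11}=1$, and $\bm k=(1)$ gives $\mathbb E(T^{2})=\frac{\nu}{2}\frac{\Gamma(\frac{\nu}{2}-1)}{\Gamma(\frac{\nu}{2})}=\frac{\nu}{\nu-2}$, the familiar variance of the univariate Student's $t$-distribution for $\nu>2$.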
\indent Lastly, for $\bm a=(a_1,a_2,\ldots,a_n)$ and $\bm b=(b_1,b_2,\ldots,b_n)\in\mathbb R^n$, let
$\bm a_{(j)}$ be the vector obtained from $\bm a$ by deleting the $j$th element of $\bm a$.
For $\bm\Sigma=(\sigma_{ij})$, let $\sigma_i^2=\sigma_{ii}$ and $\bm\Sigma_{i,(j)}$
stand for the $i$th row of $\bm\Sigma$ with its $j$th element removed.
Analogously, let $\bm\Sigma_{(i),(j)}$
stand for the matrix $\bm\Sigma$ with $i$th row and $j$th column removed.
\\
\indent Consider the following truncated
$\bm k$ moment
$$
F^n_{\bm k}(\mathbf{a},\mathbf{b}; \bm\mu,\bm\Sigma,\nu)
=\bm\int_{\bm a}^{\bm b} \bm t^{\bm k} St(\bm t| \bm \mu, \bm \Lambda,\nu)d\bm t.
$$
We have
\begin{equation}\label{7e1}
\begin{aligned}
F^n_{\bm k}(\mathbf{a},\mathbf{b}; \bm\mu,\bm\Sigma,\nu) =\int_0^{\infty}\mathbb E\left[\boldsymbol{1}_{\{\mathbf{a}\leq \mathbf{X}\leq\mathbf{b}\}}\bm X^{\bm k}\Big|\bm\mu,\frac{1}{t}\bm \Sigma^{-1}\right]\text{Gamma}(t|\frac{\nu}{2},\frac{\nu}{2})dt.
\end{aligned}
\end{equation}
Using Theorem 1 in \cite{kan2017moments}, we have for $n>1$
\begin{align}
\mathbb{E} (X_{\bm k+\bm e_i}^{n};\bm a, \bm b,\bm \mu,\frac{1}{t}\bm\Sigma^{-1})&:=\mathbb{E}\left[\boldsymbol{1}_{\{\mathbf{a}\leq \mathbf{X}\leq\mathbf{b}\}}\bm X^{\bm k+\mathbf{e}_i}\Big|\bm\mu,\frac{1}{t}\bm \Sigma^{-1}\right]\nonumber\\
&=\mu_i\mathbb{E}\left[\boldsymbol{1}_{\{\mathbf{a}\leq \mathbf{X}\leq\mathbf{b}\}}\bm X^{\bm k}\Big|\bm\mu,\frac{1}{t}\bm \Sigma^{-1}\right]
+\frac1t \mathbf{e}_i^\top{\boldsymbol\Sigma}^{-1}\mathbf{c}_\mathbf{k},
\end{align}
\noindent and $\mathbf{c}_\mathbf{k}$ satisfies
\begin{align*}
\mathbf{c}_{\mathbf{k},j}
&=k_j\mathbb{E} (X_{\bm k-\bm e_j}^{n};\bm a, \bm b,\bm \mu,\frac{1}{t}\bm\Sigma^{-1})
+a_j^{k_j}N(a_j|\mu_j,\sigma_j^2)
\mathbb{E} (X_{\bm k_{(j)}}^{n-1};\bm a_{(j)}, \bm b_{(j)},\widehat{\bm \mu}_j^{\bm a},\frac{1}{t}\widehat{\bm\Sigma}^{-1})\\
&-b_j^{k_j}N(b_j|\mu_j,\sigma_j^2)
\mathbb{E} (X_{\bm k_{(j)}}^{n-1};\bm a_{(j)}, \bm b_{(j)},\widehat{\bm \mu}_j^{\bm b},\frac{1}{t}\widehat{\bm\Sigma}^{-1})
\end{align*}
with
\begin{equation}
\left\{
\begin{array}{l}
\widehat{\bm \mu}_j^{\bm a}=\bm \mu_{(j)}+\bm\Sigma^{-1}_{(j),j}\frac{a_j-\mu_j}{\overline\sigma^2_j},\\
\widehat{\bm \mu}_j^{\bm b}=\bm \mu_{(j)}+\bm\Sigma^{-1}_{(j),j}\frac{b_j-\mu_j}{\overline\sigma^2_j},\\
\widehat{\bm\Sigma}^{-1}=\bm\Sigma^{-1}_{(j),(j)}-\frac{1}{\overline\sigma^2_j}\bm\Sigma^{-1}_{(j),j}\bm \Sigma^{-1}_{j,(j)}.
\end{array}
\right.
\end{equation}
Thus, we have the following recursive formula
$$
F^n_{\bm k+\mathbf{e}_i}(\mathbf{a},\mathbf{b}; \bm\mu,\bm\Sigma,\nu)
=\mu_i F^n_{\bm k}(\mathbf{a},\mathbf{b}; \bm\mu,\bm\Sigma,\nu)
+\frac{\nu}{\nu-2} \mathbf{e}_i^\top{\boldsymbol\Sigma}^{-1}\mathbf{d}_\mathbf{k},
$$
where
\begin{align*}
\mathbf{d}_{\mathbf{k},j}=&k_j F^n_{\bm k-\mathbf{e}_j}(\mathbf{a},\mathbf{b}; \bm\mu,\bm\Sigma,\nu)
+a_j^{k_j}N(a_j|\mu_j,\sigma_j^2)
F^{n-1}_{\bm k_{(j)}}(\mathbf{a}_{(j)},\mathbf{b}_{(j)}; \widehat{\boldsymbol\mu}^\mathbf{a}_j,\widehat{\bm\Sigma},\nu)\\
&-b_j^{k_j}N(b_j|\mu_j,\sigma_j^2)
F^{n-1}_{\bm k_{(j)}}(\mathbf{a}_{(j)},\mathbf{b}_{(j)}; \widehat{\boldsymbol\mu}^{\mathbf{b}}_j,\widehat{\bm\Sigma},\nu).
\end{align*}
Note that, by convention, the first, second, and third terms in the expression for $\mathbf{d}_{\mathbf{k},j}$ are taken to equal $0$ when $k_j=0$, $a_j=-\infty$, and $b_j=\infty$, respectively.
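In one dimension, an implementation of this recursive formula can be checked against direct numerical integration. The following sketch (assuming SciPy; the function name is ours) computes the truncated moment by quadrature and can serve as ground truth for such tests.
\begin{verbatim}
import numpy as np
from scipy import integrate, stats

def trunc_t_moment(k, a, b, nu, mu=0.0, sigma=1.0):
    """k-th truncated raw moment of a location-scale Student t (a sketch)."""
    pdf = lambda s: stats.t.pdf((s - mu) / sigma, nu) / sigma
    val, _ = integrate.quad(lambda s: s ** k * pdf(s), a, b)
    return val

# trunc_t_moment(0, -np.inf, np.inf, 5) -> 1.0
# trunc_t_moment(2, -np.inf, np.inf, 5) -> 5/3 (the variance for nu = 5)
\end{verbatim}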
\section{Conclusion}
We derive closed-form formulae for the raw moments, absolute moments, and central moments of Student's $t$-distribution with arbitrary degrees of freedom. We provide results in one and in $n$ dimensions, which unify and extend the existing literature on Student's $t$-distribution.
It would be interesting to investigate tail quantile approximations or asymptotic tail properties
of higher (generalized) Student's $t$-distributions, as done in \cite{schluter2012tail} and \cite{finner2008asymptotic}.
We leave this as a project for future study.
\newpage
\bibliographystyle{agsm}
| {
"timestamp": "2021-03-29T02:09:24",
"yymm": "1912",
"arxiv_id": "1912.01607",
"language": "en",
"url": "https://arxiv.org/abs/1912.01607",
"abstract": "In this note, we derive the closed form formulae for moments of Student's t-distribution in the one dimensional case as well as in higher dimensions through a unified probability framework. Interestingly, the closed form expressions for the moments of Student's t-distribution can be written in terms of the familiar Gamma function, Kummer's confluent hypergeometric function, and the hypergeometric function.",
"subjects": "Probability (math.PR); Statistics Theory (math.ST)",
"title": "Moments of Student's t-distribution: A Unified Approach",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9845754474655618,
"lm_q2_score": 0.8031737987125612,
"lm_q1q2_score": 0.790785202260035
} |
https://arxiv.org/abs/1710.06916 | Switch Functions | We define a switch function to be a function from an interval to $\{1,-1\}$ with a finite number of sign changes. (Special cases are the Walsh functions.) By a topological argument, we prove that, given $n$ real-valued functions, $f_1, \dots, f_n$, in $L^1[0,1]$, there exists a switch function, $\sigma$, with at most $n$ sign changes that is simultaneously orthogonal to all of them in the sense that $\int_0^1 \sigma(t)f_i(t)dt=0$, for all $i = 1, \dots , n$. Moreover, we prove that, for each $\lambda \in (-1,1)$, there exists a unique switch function, $\sigma$, with $n$ switches such that $\int_0^1 \sigma(t) p(t) dt = \lambda \int_0^1 p(t)dt$ for every real polynomial $p$ of degree at most $n-1$. We also prove the same statement holds for every real even polynomial of degree at most $2n-2$. Furthermore, for each of these latter results, we write down, in terms of $\lambda$ and $n$, a degree $n$ polynomial whose roots are the switch points of $\sigma$; we are thereby able to compute these switch functions. | \section{Introduction}
In this paper, we provide a positive answer to the first (existence) part of a question raised by one of us \cite[end of Sec.~4]{HallTrigEllipt} and also an answer to the second (computation) part of the question in some special cases.
\begin{definition}
Given a real interval $[a,b]$, we call $\sigma:[a,b]\to \{1, -1\}$ a \emph{switch function}. We call the points at which it changes sign its \emph{switch points} and sometimes refer to its sign changes as its \emph{switches}.
\end{definition}
The functions studied by Walsh in \cite{Walsh} are examples of switch functions.
\begin{definition}
Given functions $f,\sigma:[a,b]\to\mathbb{R}$, we define the pairing
\begin{equation}
\label{InnProd}\langle f,\sigma\rangle = \int_a^b \sigma(t)f(t)\,dt
\end{equation}
and we say that $\sigma$ and $f$ are \emph{orthogonal} if $\langle f,\sigma\rangle=0$.
\end{definition}
The following questions were posed in \cite[p.~543]{HallTrigEllipt}: Under what circumstances does a switch function with at most $n$ switches exist that is orthogonal to $n$ given functions, and how can such a function be computed?
To begin, we consider what class of functions to pair with switch functions. The most obvious choice would be continuous functions, but we can do a little better than that. We need functions for which the inner products with switch functions are well defined. For such a function, $f$, there exists a switch function, $\sigma$ (essentially $\sgn f$) such that $f\sigma = \abs f$, and hence
\[
\langle f,\sigma\rangle = \int_a^b \abs{f(t)}dt ,
\]
which is the $L^1$ norm of $f$, is finite. Indeed, this is the supremum of inner products of $f$ with switch functions. So, we should require $f\in L^1[a,b]$. (Functions will always be $\mathbb{R}$-valued, unless otherwise specified.)
Because switch functions are paired with $L^1$ functions, it is natural to treat the switch functions as a subset of the dual of $L^1[a,b]$, which is $L^\infty[a,b]$ with the weak${}^*$\ topology determined by the pairing with $L^1[a,b]$.
\begin{definition}
The weak${}^*$\ topology on $L^\infty[a,b]$ is the weakest topology such that the pairing with any $f\in L^1[a,b]$ is a continuous map, $\langle f,\;\cdot\;\rangle : L^\infty[a,b]\to\mathbb{R}$.
\end{definition}
Note that $L^\infty[a,b]$ isn't really a set of functions. Its elements are equivalence classes of functions that differ on sets of measure $0$. This is good, because we really don't care whether a switch function is equal to $1$ or $-1$ at a given switch point.
Without loss of generality, we can take the domain interval to be $[0,1]$.
Note that a switch function $\sigma\in L^\infty[0,1]$ is determined, up to an overall sign, by its switch points. This parametrization will be very important here.
\begin{definition}
Denote $\mathbf x = (x_1,\dots,x_n)\in\mathbb{R}^n$.
For any $n\in\mathbb{N}$, the \emph{standard $n$-simplex} is
\[
\Delta^n \eqdef \{\mathbf x\in\mathbb{R}^n \mid 0\leq x_1\leq x_2\leq\dots\leq x_n\leq 1\} .
\]
For convenience, we let $x_0\eqdef0$ and $x_{n+1}\eqdef1$.
\[
\partial\Delta^n \eqdef \{\mathbf x\in\Delta^n \mid x_m=x_{m+1} \text{ for some }0\leq m\leq n\}
\]
\end{definition}
\begin{definition}
For any $\mathbf x\in\Delta^n$, let $\Sigma_n(\mathbf x)\in L^\infty[0,1]$ be (the equivalence class of) the function given by
\[
\Sigma_n(\mathbf x)(t) = (-1)^m \quad \text{ for }\quad x_m \leq t < x_{m+1}
\]
for any $0\leq m\leq n$, and $\Sigma_n(\mathbf x)(1)=(-1)^n$.
In this way, $\Sigma_n : \Delta^n \to L^\infty[0,1]$.
\end{definition}
\begin{definition}
$D_n \eqdef \Sigma_n(\Delta^n) \subset L^\infty[0,1]$ with the weak${}^*$\ topology. $\partial D_n \eqdef \Sigma_n(\partial \Delta^n) \subset D_n$.
\end{definition}
\begin{remark}
$D_n\smallsetminus \partial D_n \subset L^\infty[0,1]$ is the set of switch functions with precisely $n$ switches that take the value $+1$ for all small enough $t\in [0,1]$. $\partial D_n =D_{n-1}\cup (-D_{n-1})\subset L^\infty[0,1]$ is the set of switch functions with at most $n-1$ switches.
\end{remark}
With these definitions, we can formulate the main questions more precisely.
\begin{question}
\label{Main Question}
Given $n$ functions, $f_1,\dots,f_n\in L^1[0,1]$ (or, equivalently, an $n$-dimensional subspace), does there exist $\sigma\in D_n$ such that $\langle f_i,\sigma\rangle=0$ for $i=1,\dots,n$? Can such a $\sigma$ be computed? Is it unique?
\end{question}
Theorem~\ref{Main theorem} will show that $\sigma$ does always exist. In the later sections, we will compute it for some classes of functions.
Given a single function, $f\in L^1[0,1]$, it is clear that there exists $\sigma\in D_1$ orthogonal to $f$, because if we set
\begin{equation}
\label{IndInt}
F(x)=\int_0^x f(t)dt
\end{equation}
then $F$ is continuous and so there exists \emph{at least} one $x_1$ such that $F(x_1)={\tfrac{1}{2}}F(1)$. This serves as the switch point, so $\sigma=\Sigma_1(x_1)$ (and $-\Sigma_1(x_1)$) is a switch function orthogonal to $f$.
This is a special case of the Intermediate Value Theorem, so Theorem~\ref{Main theorem} can be seen as a generalization of the Intermediate Value Theorem.
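Computationally, this one-switch case is a routine root-finding problem. A minimal sketch (assuming SciPy and, for concreteness, a continuous integrand; the function name is ours):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def single_switch_point(f):
    """Find x1 in [0,1] with F(x1) = F(1)/2, so Sigma_1(x1) is orthogonal to f."""
    F = lambda x: quad(f, 0.0, x)[0]
    half = 0.5 * F(1.0)
    return brentq(lambda x: F(x) - half, 0.0, 1.0)

x1 = single_switch_point(np.cos)   # solves sin(x1) = sin(1)/2, x1 ~ 0.434
\end{verbatim}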
Given $n$ functions, $f_i$, $i= 1, \dots, n$, with integrals $F_i$, $\Sigma_n(\mathbf x)$ is an orthogonal switch function if $0\le x_1\le x_2 \le \dots \le x_n \le 1$ and
\begin{equation}
\label{SwitchEq}
F_i(x_n)-F_i(x_{n-1}) + \dots + (-1)^{n-1}F_i(x_1)=\tfrac{1}{2} F_i(1),
\end{equation}
for $1\le i\le n$.
\section{Two Switches}
When $n=2$, the equations \eqref{SwitchEq} specialise to
\begin{equation*}
F_i(x_2)-F_i(x_1)=\tfrac{1}{2} F_i(1),
\end{equation*}
for $i=1,2$. There is an easy solution in this case when one of the functions, say $f_1$, is positive (i.e.~$f_1(x) > 0$ for almost all $x\in [0,1]$). By rescaling, we can assume without loss of generality that $F_1(1)=1$.
Define $a$ by $F_1(a)=\tfrac{1}{2}$ and $\phi:[0,a]\to\mathbb{R}$ by the equation
\begin{equation}
\label{f1pos}
F_1(x+\phi(x))= F_1(x) + \tfrac{1}{2} .
\end{equation}
Note that $a+\phi(a)=1$. If $F_2(a)$ is $\tfrac{1}{2} F_2(1)$ then we only need one switch point. Otherwise, the equation
\begin{equation}
\label{ExcessDeficit}
F_2(x+\phi(x)) - F_2(x) = \tfrac{1}{2} F_2(1)
\end{equation}
errs by excess or deficit at $x=0$ and the opposite at $x=a$, and therefore, by continuity, there must be a value, $x=x_1$ for which it holds, whereupon we may take the switch points to be $x_1$ and $x_2=x_1+\phi(x_1)$. (When $F_2(a)$ is $\tfrac{1}{2} F_2(1)$, we may, if we wish, think that there is a second switch point at $x=0$ or at $x=1$.)
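The construction just given is easy to carry out numerically. A sketch (assuming SciPy, a strictly positive $f_1$ normalised so that $F_1(1)=1$, and continuous integrands; the function name is ours):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def two_switch_points(f1, f2):
    F1 = lambda x: quad(f1, 0.0, x)[0]
    F2 = lambda x: quad(f2, 0.0, x)[0]
    # phi(x): the unique y - x with F1(y) = F1(x) + 1/2 (F1 strictly increasing)
    phi = lambda x: brentq(lambda y: F1(y) - F1(x) - 0.5, x, 1.0) - x
    a = brentq(lambda x: F1(x) - 0.5, 0.0, 1.0)           # F1(a) = 1/2
    g = lambda x: F2(x + phi(x)) - F2(x) - 0.5 * F2(1.0)  # excess/deficit
    x1 = brentq(g, 0.0, a)                                # sign change on [0, a]
    return x1, x1 + phi(x1)

print(two_switch_points(lambda t: 1.0, lambda t: t))      # (0.25, 0.75)
\end{verbatim}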
This argument permits an extension. Given an integer $k \ge 3$, we can define $\phi$ by $F_1(x+\phi(x))= F_1(x) + \frac1k$, together with points $a_1, a_2, \dots, a_{k-1}$ ($a_1=\phi(0)$, $a_2=a_1 + \phi(a_1)$, \dots, $a_{k-1}=a_{k-2} + \phi(a_{k-2})$) such that $F_1(a_j)=j/k$, ($1\le j < k$). We put $a_0=0$, $a_k=1$ ($=a_{k-1} +\phi(a_{k-1})$) and look at the integrals of $f_2$ over the intervals $[a_{j-1}, a_j]$. Either these are all equal to $F_2(1)/k$ (in which case \eqref{kExDef} below holds with $x$ equal to any of the $a_j$, $j=1, \dots , {k-1}$) or there exists an adjacent pair of such integrals, one in excess and the other in deficit, and so there exists an $x$ such that
\begin{equation}
\label{kExDef}
F_2(x+\phi(x)) - F_2(x) = \frac{1}{k} F_2(1) .
\end{equation}
Thus we may solve the simultaneous equations
\begin{equation}
\label{1overk}
F_i(x_2)-F_i(x_1)=\frac{1}{k} F_i(1), \quad i=1,2.
\end{equation}
We then have that, for any $\lambda$ of the form
\begin{equation}
\label{lambda}
\lambda=1-\tfrac{2}{k},
\end{equation}
where $k\in\mathbb{N}$, and for any pair of functions $f_1, f_2 \in L^1[0,1]$, one of which is positive, there exists a $0\leq x_1\leq x_2\leq 1$ such that $\sigma=\Sigma_2(x_1,x_2)$ satisfies
\begin{equation}
\label{ChiLambda}
\int_0^1 \sigma(x)f_i(x)dx= \lambda\int_0^1f_i(x)dx,
\end{equation}
for $i=1,2$. In other words, $\sigma-\lambda$ is orthogonal to each $f_i$.
This suggests a more general question.
\begin{question}
\label{Generalized Question}
Given $-1\leq\lambda\leq1$ and $n$ functions, $f_1,\dots,f_n\in L^1[0,1]$, does there exist $\sigma\in D_n$, such that $\langle f_i,\sigma-\lambda\rangle=0$ for $i=1,\dots n$? Can such a $\sigma$ be computed? Is it unique?
\end{question}
For now, we return to Question~\ref{Main Question} (i.e., $\lambda=0$) for $n=2$. We can drop the assumption that one of the two functions, $f_1, f_2$, is positive with the following topological argument which, in the sequel, we will generalize to the case of $n$ functions.
Recall that the simplex $\Delta^2$ is just the triangle in $\mathbb{R}^2$ with vertices $(0,0)$, $(0,1)$, and $(1,1)$;
\begin{figure}
\centering
\includegraphics[clip, scale=0.7]{triangletrim}
\caption{ \label{fig1} The triangle $\Delta^2$, which, as explained in the text, can be taken to represent the set of switch functions $D_2$ provided one thinks of points on the hypotenuse as identified. For the significance of the labels A and B and of the arrows, see the caption to Figure~\ref{fig2}.}
\end{figure}
this is homeomorphic to the closed unit disc, $B^2$. The map $\Sigma_2:\Delta^2\to D_2$ is almost injective. Regarded as an element of $L^\infty[0,1]$, a function that switches twice at the same point is the same as a function that doesn't switch at all, so
\begin{equation*}
\Sigma_2(x,x)=1 \quad \forall x\in [0,1],
\end{equation*}
but any other switch function in $D_2$ comes from a unique point in $\Delta^2$. This means that $D_2$ is topologically the quotient of $\Delta^2$ by its hypotenuse, the edge from $(0,0)$ to $(1,1)$. This is also homeomorphic to $B^2$.
The upper edge of $\Delta^2$ is the set $\{(x,1)\mid x\in[0,1]\}$. For $0<x<1$, $\Sigma_2(x,1)$ is the switch function with a single switch point at $x$ and equal to $+1$ on $[0,x)$. At the ends, this reduces to the constant functions $\Sigma_2(0,1)=-1$ and $\Sigma_2(1,1)=+1$ with no switch points.
The vertical edge of $\Delta^2$ is the set $\{(0,x)\mid x\in[0,1]\}$. For $0<x<1$, $\Sigma_2(0,x)$ is the switch function with a single switch point at $x$ but equal to $-1$ on $[0,x)$.
When $\mathbf x\in\Delta^2$ is in the interior, $\Sigma_2(\mathbf x)$ is the unique switch function in $D_2$ with switch points $x_1$ and $x_2$. When $\mathbf x\in\Delta^2$ is on the boundary, $\Sigma_2(\mathbf x)$ has one (or no) switch point, and there is another function in $D_2$ with the same switch point. To be precise, $\Sigma_2(0,x)=-\Sigma_2(x,1)$.
Analogously, any point $\mathbf z\in S^1\subset B^2$, on the unit circle, which is the boundary of the unit disc, has an antipodal point $-\mathbf z\in S^1$. In fact, we will see that there exists a (not unique) homeomorphism $\varphi_2:B^2\to D_2$ such that for $\mathbf z\in S^1$, $\varphi_2(-\mathbf z) = -\varphi_2(\mathbf z)$.
For two given functions, $f_1, f_2\in L^1[0,1]$, we now consider the map, $\Psi: D_2 \rightarrow \mathbb{R}^2$
\begin{equation}
\label{Phi}
\Psi(\sigma) \eqdef \left(\int_0^1 \sigma(t)f_1(t)dt, \int_0^1\sigma(t)f_2(t)dt\right) = \left(\langle f_1,\sigma\rangle,\langle f_2,\sigma\rangle\right).
\end{equation}
This is continuous, by definition of the weak${}^*$\ topology, and for $\sigma\in\partial D_2$, $\Psi(-\sigma)=-\Psi(\sigma)$. Precomposing $\Psi$ with the homeomorphism, $\varphi_2:B^2\to D_2$ we thus have a continuous map $\Psi\circ \varphi_2:B^2\to\mathbb{R}^2$ that maps antipodal points to antipodal points.
Two simple examples are provided by (a) $f_1(x)=1, f_2(x)=x$ and (b) $f_1(x)=\cos \pi x, f_2(x)=\sin \pi x$. For both of these examples, it is not difficult to compute the map $\Psi$ and to verify that $\Psi$ is a homeomorphism onto its image. (See Lem.~\ref{Hom lemma}.)
In fact, as indicated in Figure~\ref{fig2}(a), in the first example, the range is the region bounded by the curves $x \mapsto (y_1(x), y_2(x))$ and $x\mapsto (-y_1(x), -y_2(x))$ where
\begin{equation*}
(y_1(x), y_2(x)) = \Psi(\Sigma_2(0,x)) = (1-2x, \tfrac{1}{2} - x^2)
\end{equation*}
is the image of the point $(0,x)$ on the vertical edge of the triangle, $\Delta^2$, pictured in Figure~\ref{fig1}.
Similarly, in our second example, the range of $\Psi$ is the disc bounded by the circle of radius $2/\pi$ centred on the origin. The image of the point $(0,x)$ along the vertical edge of $\Delta^2$ is
\begin{equation*}
(y_1(x), y_2(x)) = \Psi(\Sigma_2(0,x)) = \frac{2}{\pi}(-\sin \pi x, \cos \pi x)
\end{equation*}
on the left semicircle. Likewise, the image of the point $(x,1)$ on the horizontal edge of $\Delta^2$ is $(-y_1(x), -y_2(x))$ on the right semicircle.
\begin{lem}
\label{Hom lemma}
There exists a homeomorphism $\varphi_2:B^2\to D_2$ such that for $\mathbf z\in S^1$, $\varphi_2(-\mathbf z)=-\varphi_2(\mathbf z)$.
\end{lem}
\begin{proof}
For case (b) above, the Jacobian determinant of $\Psi\circ\Sigma_2 : \Delta^2\to\mathbb{R}^2$ is easily computed to be $4\sin[\pi(x_2-x_1)]$, which does not vanish on the interior of $\Delta^2$; therefore $D_2\smallsetminus\partial D_2$ maps homeomorphically to its image. Explicit calculation shows that $\partial D_2$ maps homeomorphically to the circle of radius $\frac{2}{\pi}$. This shows that $\Psi$ maps $D_2$ homeomorphically to the disc of radius $\frac2\pi$.
After rescaling by $\frac\pi2$, this gives an explicit homeomorphism from $D_2$ to $B^2$. Its inverse is a suitable choice of $\varphi_2$.
\end{proof}
Of course, this is far from unique.
\begin{prop}
\label{Preliminary case}
Given $f_1,f_2\in L^1[0,1]$, there exists $\sigma\in D_2$ orthogonal to $f_1$ and $f_2$.
\end{prop}
\begin{proof}
This condition is equivalent to $\Psi(\sigma)=(0,0)$. Suppose that no such $\sigma$ exists.
Any map from $S^1$ to $\mathbb{R}^2\smallsetminus \{(0,0)\}$ has a well-defined, homotopy-invariant winding number (around the origin). For $s\in[0,1]$, define $\alpha_s:S^1\to \mathbb{R}^2\smallsetminus \{(0,0)\}$ by
\[
\alpha_s(\mathbf z) = (\Psi\circ \varphi_2)(s\mathbf z) .
\]
Because $\alpha_0$ has image a single point, its winding number is $0$.
Because $\alpha_s$ is continuous in $s$, each $\alpha_s$ must have the same winding number: $0$.
On the other hand, $\alpha_1$ respects antipodes in the sense that $\alpha_1(-\mathbf z)=-\alpha_1(\mathbf z)$ for any $\mathbf z\in S^1$. This implies that $\alpha_1$ has an odd winding number. This is a contradiction.
\end{proof}
This proof used the homeomorphism $\varphi_2:B^2\to D_2$. For general $n$, it is clear that $\Delta^n\cong B^n$, and $\Sigma_n:\Delta^n\to D_n$ is a homeomorphism on the interior, so that $D_n$ can be constructed as a quotient of $\Delta^n$ by identifications along the boundary. But, although it seems to us likely that $D_n$ is homeomorphic to the $n$-ball $B^n$, it seems to become increasingly difficult to prove this as $n$ increases and we have not been able to prove it for general $n$.
However, it turns out that homeomorphism is not really necessary for this proof. The proof of Proposition~\ref{Preliminary case} uses algebraic topology, and algebraic topology invariants are not just homeomorphism invariant, they are homotopy invariant \cite{AlgTop}. So, we really only needed $\varphi_2$ to be a homotopy equivalence $\varphi_2:(B^2,S^1)\to(D_2,\partial D_2)$ that respects antipodes.
Generalizing this will lead to our main existence result, Theorem \ref{Main theorem}.
Returning to Question~\ref{Generalized Question}, despite the above result for $\lambda$ of the form $1-2/k$, existence does not always hold. In fact, if we make the definition:
\begin{definition}
Let $\mathcal F_n$ be the set of $\lambda$ in the interval $(-1,1)$ such that for some $f_1,\dots,f_n\in L^1[0,1]$ there \textbf{does not} exist $\sigma\in D_n$ with $\langle f_i,\sigma-\lambda\rangle = 0$ for all $i=1,\dots,n$.
\end{definition}
then we have
\begin{prop}
\label{Dense}
$\mathcal F_2$ is dense in $[-1,1]$.
\end{prop}
\begin{proof}
For $k=1,2,3,\dots$ and $m=1,2,\dots,k$, let
\[
\lambda = \frac{4k-8m+1}{4k+1} .
\]
Consider the two functions, $f_1(t)=1$ and
\[
f_2(t) = \frac{(4k+1)\pi}{2} \cos\left(\frac{4k+1}{2}\pi t\right) .
\]
These have been chosen so that $\int_0^1 f_i(t)dt =1$ for $i=1,2$.
The conditions that $0 = \langle f_i,\Sigma_2(x_1,x_2)-\lambda\rangle$ are explicitly
\[
x_2-x_1 = \frac{1-\lambda}2 = \frac{4m}{4k+1}
\]
and
\begin{align*}
\frac{1-\lambda}2 &= \sin\left(\frac{4k+1}{2}\pi x_2\right) - \sin\left(\frac{4k+1}{2}\pi x_1\right) \\
&= 2 \sin\left(\frac{4k+1}{4}\pi [x_2-x_1]\right) \cos\left(\frac{4k+1}{4}\pi [x_2+x_1]\right) .
\end{align*}
By the first condition,
\[
\sin\left(\frac{4k+1}{4}\pi [x_2-x_1]\right) = \sin \pi m = 0 ,
\]
but $1-\lambda\neq0$. Therefore, for this choice of $\lambda$ and functions, there does not exist any $\sigma\in D_2$ such that $\langle f_i,\sigma-\lambda\rangle =0$ for $i=1,2$, so $\lambda\in \mathcal F_2$.
Clearly, the set
\[
\left\{\tfrac{4k-8m+1}{4k+1} \mid k,m\in\mathbb{N},\ 1\leq m\leq k\right\} \subset \mathcal F_2
\]
is dense in the interval $[-1,1]$, and therefore $\mathcal F_2$ is as well.
\end{proof}
\begin{figure}
\centering
\includegraphics[clip, scale=0.7]{1ximagetrim}
\caption{\label{fig2} The image of the map $\Psi: D_2\rightarrow {\mathbb R}^2$ for (a) $f_1(x)=1, f_2(x)=x$; (b) $f_1(x) = \cos \pi x, f_2(x)=\sin \pi x$. A and B denote the images under $\Psi$ of the points labelled A and B on the triangle, $\Delta^2$ of Figure~\ref{fig1} (regarded as a representation of $D_2$ by identifying the points on its hypotenuse). The arrows indicate the path traversed by the image of $\Psi$ when the argument traverses the path on the boundary of $\Delta^2$ indicated by the arrows in Figure~\ref{fig1}. The dashed lines in (a) and (b) are the images of the parametrized curves $[-1,1]\rightarrow \mathbb{R}^2$ defined by Equation \eqref{slope}.}
\end{figure}
On the other hand, existence and uniqueness do hold in Examples (a) and (b) above in the case $n=2$: In each of these examples, the line:
\begin{equation}
\label{slope}
\lambda \mapsto \left(\lambda\int_0^1 f_1(t)dt, \ \ \lambda\int_0^1 f_2(t)dt\right),
\end{equation}
$\lambda \in [-1,1]$, represented in Figures 2(a) and 2(b) by the indicated dashed lines, clearly lies entirely in the range of $\Psi$. Hence, for each of these examples, \eqref{ChiLambda} holds for all $\lambda \in [-1,1]$. In general, if the image of $\Psi$ is star shaped (as in these examples) then existence will hold for all such $\lambda$.
In both of these examples, $\Psi$ is a homeomorphism to its image -- and thus injective. Injectivity implies uniqueness of the switch function.
\begin{figure}
\centering
\includegraphics[scale=.7]{Image2}
\caption{The outline of the image of $\Psi$ in the $k=1$ case. The dashed curve is the part of the image of $\partial D_2$ in the interior of the image of $\Psi$. The dashed line segment is parametrized by \eqref{slope}, as in the previous examples, and the marked point is $(-\frac35,-\frac35)$.\label{Counterexample}}
\end{figure}
Figure~\ref{Counterexample} shows the image of $\Psi$ in the $k=1$ case in the proof of Proposition~\ref{Dense}, i.e., $f_1(t)=1$ and
\[
f_2(t) = \frac{5\pi}{2} \cos\left(\frac{5\pi t}2\right) .
\]
In this case, $\Psi$ is far from injective. It twists and folds $D_2$. The image is not star shaped, and the dashed line segment is not entirely contained in the image. In particular, the point $(-\frac35,-\frac35)$ is outside of the image and this shows that these functions give a counterexample for Question~\ref{Generalized Question} with $\lambda=-\frac35$. Note that in contrast to Figures~\ref{fig2}a and \ref{fig2}b, the image $\Psi(D_2)$ is not invariant under the antipodal map, but in all cases the image of the boundary, $\Psi(\partial D_2)$ is invariant.
Finally, our topological proof of Proposition~\ref{Preliminary case} is no help in actually computing the desired switch function, but we will discuss examples below in which the switch function can be computed explicitly.
\section{An Existence Theorem}
We will now provide a positive answer to the existence part of Question~\ref{Main Question}.
\label{Existence Section}
\begin{lem}
The map $\Sigma_n : \Delta^n\to L^\infty[0,1]$ is weak${}^*$-continuous.
\end{lem}
\begin{proof}
Recall that weak${}^*$\ continuity means precisely that if $\Sigma_n: \Delta^n\to L^\infty[0,1]$ is composed with the pairing with any $f\in L^1[0,1]$, then the result is continuous.
Firstly, observe that
\[
F(s) \eqdef \int_0^s f(t) dt
\]
defines a continuous function, $F:[0,1]\to\mathbb{R}$. For any $\mathbf x\in\Delta^n$, the pairing of $\Sigma_n(\mathbf x)$ with $f$ is
\[
\langle f,\Sigma_n(\mathbf x)\rangle = (-1)^n F(1) - \sum_{m=1}^n 2(-1)^m F(x_m) ,
\]
which is manifestly continuous.
\end{proof}
Note that $\Sigma_n$ is injective, except on the subset
\[
K_n \eqdef \{\mathbf x\in\Delta^n \mid x_m=x_{m+1} \text{ for some }1\leq m\leq n-1\} \,.
\]
\begin{lem}
\label{Good subset}
\[
\Sigma_n : \Delta^n\smallsetminus K_n \to D_n\smallsetminus D_{n-2}
\]
is a homeomorphism.
\end{lem}
\begin{proof}
It is easy to see that this is a bijection. We just need to check that the inverse is continuous by showing that the image of an open set is open. It is sufficient to check this for an arbitrarily small neighborhood of any point.
For any $\mathbf y\in\Delta^n$ and $\varepsilon>0$,
\[
\mathcal{O}_{\mathbf y,\varepsilon} \eqdef \left\{\mathbf x\in \Delta^n \bigm| \left|x_m-y_m\right|<\varepsilon\ \forall m=1,\dots,n\right\}
\]
is an open neighborhood of $\mathbf y$.
Suppose that $\mathbf y \in \Delta^n \smallsetminus \partial\Delta^n$, i.e., $y_m<y_{m+1}$ for $m=0,\dots,n$. If $\varepsilon>0$ and $y_{m+1}-y_m\geq 2\varepsilon$ for $m=0,\dots,n$, then $\mathcal{O}_{y,\varepsilon} \subset \Delta^n \smallsetminus K_n$ and
\[
\Sigma_n(\mathcal{O}_{\mathbf y,\varepsilon}) = \bigcap_{m=1}^n \left\{\sigma\in D_n \bigm| \Abs{\left<\chi_{[y_m-\varepsilon,y_m+\varepsilon]},\sigma\right>} < 2\varepsilon\right\}
\]
where $\chi_{[y_m-\varepsilon,y_m+\varepsilon]}$ is the function equal to $1$ on that interval and $0$ elsewhere.
By the definition of the weak${}^*$\ topology, the pairing with $\chi_{[y_m-\varepsilon,y_m+\varepsilon]}$ is continuous. The inverse image of the open interval $(-2\varepsilon,2\varepsilon)$ is therefore open, and a finite intersection of open sets is open.
This shows that $\Sigma_n(\mathcal{O}_{\mathbf y,\varepsilon})$ is open.
This argument must be modified slightly to extend to any $\mathbf y\in \Delta^n\smallsetminus K_n$. If $y_1=0$ then we use the set
\[
\left\{\sigma\in D_n \bigm| \Abs{\left<\chi_{[0,\varepsilon]},\sigma\right>}<\varepsilon\right\}
\]
in the first place, and don't require $y_1-y_0\geq 2\varepsilon$.
If $y_n=1$ then we use
\[
\left\{\sigma\in D_n \bigm| \Abs{\left<\chi_{[1-\varepsilon,1]},\sigma\right>}<\varepsilon\right\}
\]
in the last place, and don't require $y_{n+1}-y_n\geq 2\varepsilon$.
\end{proof}
\begin{definition}
$B^n\subset \mathbb{R}^n$ is the closed unit ball. $S^{n-1}\subset B^n$ is the unit sphere.
\end{definition}
\begin{lem}
For all $n\geq0$, there exists a homotopy equivalence $\varphi_n : (B^n,S^{n-1}) \to (D_n,\partial D_n)$, such that for any $\mathbf z\in S^{n-1}$,
\beq
\label{antipode}
\varphi_n(-\mathbf z)=-\varphi_n(\mathbf z) .
\eeq
\end{lem}
\begin{proof}
We will prove this by induction. The base cases $n=0$ and $n=1$ follow immediately if we define the maps by $\varphi_0(0)=+1$ and
\[
\varphi_1(z) = \Sigma_1\left(\tfrac12[z+1]\right) .
\]
Now, let $n\geq2$ and suppose that the proposition is true up to $n-1$.
Note that $\partial D_n = D_{n-1} \cup (-D_{n-1})$ and the intersection is precisely $\partial D_{n-1}$. Define a map $\psi_n : S^{n-1} \to \partial D_n$ by
\[
\psi_n(\mathbf z,\pm\sqrt{1-\lVert \mathbf z\rVert^2}) = \pm \varphi_{n-1}(\pm \mathbf z)
\]
for $\mathbf z\in B^{n-1}$. This is well defined, because for $\mathbf z\in S^{n-2}$, eq.~\eqref{antipode} shows that the $+$ and $-$ formulae agree.
The induction hypothesis that $\varphi_{n-1}:(B^{n-1},S^{n-2})\to (D_{n-1},\partial D_{n-1})$ is a homotopy equivalence implies that $\psi_n:S^{n-1}\to \partial D_n$ is a homotopy equivalence.
Note that $K_n\subset \partial \Delta^n$ is the union of a proper subset of closed faces. It is therefore compact and contractible.
The image $\Sigma_n(K_n)= D_{n-2}$ is compact. By the induction hypothesis, this is homotopy equivalent to $B^{n-2}$, so it is also contractible.
By Lemma~\ref{Good subset}, $\Sigma_n$ gives a homeomorphism from the quotient $\partial \Delta^n/K_n$ to $D_n/D_{n-2}$. This gives an isomorphism in homology $H_{n-1}(\partial \Delta^n/K_n)\to H_{n-1}(D_n/D_{n-2})$. By excision, this is equivalent to an isomorphism $H_{n-1}(\partial \Delta^n,K_n)\to H_{n-1}(D_n,D_{n-2})$. Because $K_n$ and $D_{n-2}$ are contractible (and using the exact sequence) this is equivalent to an isomorphism $H_{n-1}(\partial\Delta^n)\to H_{n-1}(\partial D_n)$.
Now, since $\partial \Delta^n\cong S^{n-1}$ and we have shown that $\partial D_n\simeq S^{n-1}$, this is equivalent to a map between $n-1$ dimensional spheres. Such maps are classified up to homotopy by their action on $H_{n-1}$, therefore $\Sigma_n: \partial\Delta^n\to \partial D_n$ is a homotopy equivalence.
Because $\Sigma_n:\Delta^n\to D_n$ is a homeomorphism on the interior, $D_n$ can be constructed by attaching an $n$-cell to $\partial D_n$ by $\Sigma_n: \partial\Delta^n\to \partial D_n$. That is, $D_n$ is homeomorphic to the quotient of the disjoint union $\Delta^n\cup \partial D_n$ by the equivalence relation generated by $x\sim \Sigma_n(x)$ for all $x\in \partial \Delta^n$. Because this attaching map is a homotopy equivalence, this proves that there is a homotopy equivalence $(D_n,\partial D_n)\simeq (B^n,S^{n-1})$.
Finally, we want to choose a specific homotopy equivalence $\varphi_n : (B^n,S^{n-1}) \to (D_n,\partial D_n)$ whose restriction to $S^{n-1}$ is the $\psi_n$ constructed above. This is equivalent to a homotopy rel $S^{n-2}$ between $\varphi_{n-1}$ and the map defined by $-\varphi_{n-1}(-\mathbf z)$. This exists because $(B^n,S^{n-1})$ is $(n-1)$-connected.
\end{proof}
\begin{thm}
\label{Main theorem}
If $f_1,\dots,f_n\in L^1[0,1]$, then there exists a function in $D_n$ that is orthogonal to all of $f_1,\dots, f_n$.
\end{thm}
\begin{proof}
Denote $\mathbf f\eqdef(f_1,\dots,f_n)\in L^1([0,1],\mathbb{R}^n)$. Define
\[
\Phi :B^n\to \mathbb{R}^n
\]
by $\Phi(\mathbf z) \eqdef \langle \mathbf f,\varphi_n(\mathbf z)\rangle$. We need to prove that $0$ is in the image of $\Phi$.
We will prove this by contradiction, so suppose that $\Phi(B^n)\subset \mathbb{R}^n_*$, where $\mathbb{R}^n_*$ is the set of nonzero vectors. Because $B^n$ is contractible, $\Phi : S^{n-1}\to \mathbb{R}^n_*$ is homotopic to a constant map. This means that $\pi\circ \Phi : S^{n-1}\to S^{n-1}$ (where $\pi:\mathbb{R}^n_*\to S^{n-1}$, $\pi(\mathbf z) = \mathbf z/\Norm{\mathbf z}$) has degree $0$.
On the other hand, for $\mathbf z\in S^{n-1}$, $\Phi(-\mathbf z)=-\Phi(\mathbf z)$. This is the hypothesis of Theorem 3 in \cite{Whitt}, which states that the degree of $\pi\circ \Phi : S^{n-1}\to S^{n-1}$ must then be odd.
This is a contradiction, therefore for some $\mathbf z\in B^n$,
\[
0 = \Phi(\mathbf z) = \langle \mathbf f,\varphi_n(\mathbf z)\rangle .
\]
This $\varphi_n(\mathbf z)\in D_n \subset L^\infty[0,1]$ is the desired function.
\end{proof}
\section{The Polynomial Case}
\label{Sect:Poly}
Now, we return to the more general Question~\ref{Generalized Question} but in the specific example where the $n$ functions are $1, x, x^2, \dots, x^{n-1}$ (or any other linearly independent set of linear combinations of these functions). For this example, we will show existence and uniqueness and also compute the switch function.
For any $\lambda\in(-1,1)$, we wish to find a switch function, $\sigma$, with no more than $n$ switch points, such that
\[
0 = \langle p,\sigma-\lambda\rangle
\]
for any polynomial, $p$, of degree $\leq n-1$. Note that if $\sigma=\Sigma_n(\mathbf x)$, then this condition is equivalent to the system of equations,
\begin{equation}
\label{thetasumlow}
x_n^k-x_{n-1}^k+x_{n-2}^k- \dots + (-1)^{n-1}x_1^k=\theta, \quad (1 \le k \le n) ,
\end{equation}
in which $\theta=(1+(-1)^{n-1}\lambda)/2 \in (0,1)$.
We first give a simple, explicit solution to the computation part of Question~\ref{Main Question} (the case $\theta=\frac12$):
\begin{prop}
\[
x_j = \cos^2\left(\frac{[n+1-j]\pi}{2n+2}\right)
\]
is a solution of eq.~\eqref{thetasumlow} for $\theta=\frac12$ ($\lambda=0$) hence
\[
0 = \langle p,\Sigma_n(\mathbf x)\rangle
\]
for any polynomial, $p$, of degree $\leq n-1$.
\end{prop}
\begin{proof}
First notice that for $k\in\Z$,
\[
\sum_{j=1}^n (-1)^{j-1}\cos\left(\frac{jk\pi}{n+1}\right) = \operatorname{Re}\frac{e^{ik\pi/(n+1)}-(-1)^{k+n}}{1+e^{ik\pi/(n+1)}}
= \begin{cases}
0 & k+n \text{ even}\\
1 & k+n\text{ odd},
\end{cases}
\]
because, when $k+n$ is even, we have
\[
\operatorname{Re} i \tan\left(\frac{k\pi}{2n+2}\right) = 0 .
\]
Next, by the binomial theorem,
\[
\cos^{2k}t = 2^{-2k}\sum_{m=-k}^{k}\binom{2k}{k-m}e^{2imt}
=2^{-2k}\sum_{m=-k}^{k}\binom{2k}{k-m}\cos 2mt ,
\]
and by putting $t=0,\frac\pi2$ in this identity, we see that
\[
1 = 2^{-2k}\sum_{m=-k}^{k}\binom{2k}{k-m}
\]
and
\[
0 = 2^{-2k}\sum_{m=-k}^{k}\binom{2k}{k-m} (-1)^m ,
\]
hence
\[
2^{-2k}\sum_{m\;\mathrm{odd}}\binom{2k}{k-m} = 2^{-2k}\sum_{m\;\mathrm{even}} \binom{2k}{k-m} = \frac12 .
\]
We need to compute
\begin{multline*}
x_n^k-x_{n-1}^k+x_{n-2}^k- \dots + (-1)^{n-1}x_1^k
= \sum_{j=1}^n (-1)^{n-j} x_j^k \\
= \sum_{j=1}^n (-1)^{n-j} \cos^{2k}\left(\frac{[n+1-j]\pi}{2n+2}\right)
= \sum_{j=1}^n (-1)^{j-1} \cos^{2k}\left(\frac{j\pi}{2n+2}\right)\\
= \sum_{j=1}^n (-1)^{j-1}2^{-2k}\sum_{m=-k}^{k}\binom{2k}{k-m}\cos \left(\frac{mj\pi}{n+1}\right)
\end{multline*}
By the first identity, this equals
\[
2^{-2k}\sum_{m+n\;\mathrm{odd}} \binom{2k}{k-m} = \frac12 ,
\]
which verifies eq.~\eqref{thetasumlow} in this case.
\end{proof}
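The proposition is easy to test numerically; a sketch (assuming NumPy):
\begin{verbatim}
import numpy as np

n = 6
j = np.arange(1, n + 1)
x = np.cos((n + 1 - j) * np.pi / (2 * n + 2)) ** 2
for k in range(1, n + 1):
    print(k, np.sum((-1.0) ** (n - j) * x ** k))   # each sum equals 0.5
\end{verbatim}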
For Question~\ref{Generalized Question} (general $\theta$) our strategy is to compute a polynomial whose roots are $x_1,\dots,x_n$.
\subsection{Preliminary remarks}
It is helpful to consider first how one can solve the system of equations
\begin{equation}
\label{thetasumplus}
\xi_n^k+\xi_{n-1}^k+\xi_{n-2}^k+ \dots + \xi_1^k=\theta, \quad (1 \le k \le n)
\end{equation}
which resemble equations \eqref{thetasumlow} except that, in place of the alternating signs, all the signs are positive. To do this, we may seek an order-$n$ polynomial, $P(x)$ such that $\xi_1 \dots \xi_n$ solve \eqref{thetasumplus} if and only if they are its roots. We can assume that this is monic, so that
\[
P(x) = \prod_{j=1}^n (x-\xi_j) .
\]
\begin{definition}
For any polynomial, $Q$, of degree $m$ the \emph{reciprocal polynomial} is
\begin{equation*}
Q^*(x) \eqdef x^mQ(1/x)
\end{equation*}
so that the coefficients of $Q^*$ are those of $Q$ in reverse order.
\end{definition}
We use this notation throughout this section.
With this notation, and using the Taylor expansion of the natural logarithm,
\begin{align*}
P^*(x)&=\prod_{j=1}^n(1-\xi_j x) = \exp\left\{\sum_{j=1}^n\log(1-\xi_j x)\right\} \\
&= \exp(-s_1 x-\tfrac{1}{2} s_2x^2 - \tfrac{1}{3}s_3x^3 - \dots)
\end{align*}
where
\begin{equation*}
s_k=\xi_1^k+\xi_2^k+\dots +\xi_n^k.
\end{equation*}
Equations \eqref{thetasumplus} state that $s_k=\theta$ for $k=1,\dots,n$, so
\begin{align}
P^*(x)&=\exp\left\{-\theta x - \tfrac12\theta x^2 - \dots - \tfrac1n\theta x^n + O(x^{n+1})\right\} \nonumber \\
\label{plus}
&=(1-x)^\theta + O(x^{n+1})\\
&=1-\theta x + \binom{\theta}{2}x^2-\binom{\theta}{3}x^3 + \dots + (-1)^{n}\binom{\theta}{n}x^n + O(x^{n+1}) . \nonumber
\end{align}
Because $P^*$ is a polynomial of degree $n$, it is clear that the $O(x^{n+1})$ term in the last line must actually vanish. So $P(x)$ is determined uniquely to be
\begin{equation*}
P(x)=x^n-\theta x^{n-1} + \binom{\theta}{2}x^{n-2}-\binom{\theta}{3}x^{n-3} + \dots + (-1)^{n}\binom{\theta}{n}
\end{equation*}
and we may conclude that \eqref{thetasumplus} has a solution $(\xi_1, \dots \xi_n)$ -- unique up to permutations -- consisting of the roots of this $P(x)$.
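For instance, when $n=1$ this gives $P(x)=x-\theta$, whose single root $\xi_1=\theta$ evidently satisfies \eqref{thetasumplus}.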
\subsection{Construction}
To adapt the method explained above to Equations~\eqref{thetasumlow} we must deal with the alternating signs.
\begin{lem}
\label{Signs}
Suppose that $0<\theta<1$ and $\delta_i=\pm1$ for $i=1,\dots,n$. If $0 \le x_1 \le x_2 \le \dots \le x_n \le 1$ satisfy
\begin{equation}
\label{altdelta}
\delta_n x_n^k + \delta_{n-1} x_{n-1}^k + \dots + \delta_1x_1^k=\theta \qquad (1 \le k \le n)
\end{equation}
then $\delta_i=(-1)^{n-i}$ and $0<x_1<x_2<\dots<x_n<1$.
\end{lem}
\begin{proof}
Define $x_0=0$, $x_{n+1}=1$, and $\delta_{n+1}=-1$.
Note that eqs.~\eqref{altdelta} mean precisely that for any polynomial, $D$, of degree $\leq n-1$,
\[
\sum_{i=1}^n \delta_i x_i D(x_i) = \theta D(1) .
\]
For some polynomial $D$ (of any degree) and a set $T\subset\{1,\dots,n\}$, consider the sum
\[
\sum_{i\in T} \delta_i x_i D(x_i) .
\]
Starting from $T=\{1,\dots,n\}$, we can simplify this set in two ways without changing the sum:
\begin{itemize}
\item
If $i\in T$ and $x_i=0$, then remove $i$ from $T$ (because the $i$ term contributes nothing to the sum).
\item
If $i,j\in T$ with $x_i=x_j$ and $\delta_i\neq\delta_j$ then remove $i$ and $j$ from $T$ (because the $i$ and $j$ terms cancel).
\end{itemize}
Apply these steps iteratively until we have a set $T$ such that for all $i,j\in T$, $x_i\neq0$ and $x_i=x_j\implies\delta_i=\delta_j$, and
\[
\sum_{i\in T} \delta_i x_i D(x_i) = \sum_{i=1}^{n} \delta_i x_i D(x_i) .
\]
Now consider 2 possible cases.
\emph{Case 1: $x_j=\delta_j=1$ for some $j\in T$.} Assume, without loss of generality, that $j=n$, so that $n\in T$.
For each consecutive $i,j\in T$ such that $x_i<x_j$ and $\delta_i\neq\delta_j$, choose a number in the open interval $(x_i,x_j)$. Construct a polynomial $D$ with these numbers as its (simple) roots and choose the overall sign so that $D(1)>0$. In this way, we have $\delta_i D(x_i)>0$ for all $i\in T$, so
\begin{align*}
0 &< (1-\theta)D(1) + \sum_{i\in T,\, i\neq n} \delta_i x_i D(x_i) =-\theta D(1) + \sum_{i\in T} \delta_i x_i D(x_i) \\
&\quad= -\theta D(1) + \sum_{i=1}^n \delta_i x_i D(x_i) .
\end{align*}
\emph{Case 2: Otherwise.}
For each consecutive $i,j\in T\cup\{n+1\}$ such that $x_i<x_j$ and $\delta_i\neq\delta_j$, choose a number in the open interval $(x_i,x_j)$. Construct a polynomial $D$ with these numbers as its (simple) roots and choose the overall sign so that $D(1)<0$. In this way, we have $\delta_i D(x_i)>0$ for all $i\in T\cup\{n+1\}$, so
\[
0 < -\theta D(1) + \sum_{i\in T} \delta_i x_i D(x_i) = -\theta D(1) + \sum_{i=1}^n \delta_i x_i D(x_i) .
\]
In either case, if $\deg D\leq n-1$, then the last expression is $0$, which is a contradiction. Therefore $\deg D=n$, and this requires $T=\{1,\dots,n\}$, with $x_i<x_{i+1}$ and $\delta_i\neq \delta_{i+1}$ for all $i=1,\dots,n$.
\end{proof}
This is equivalent to:
\begin{cor}
\label{Plusminus}
If $0<\theta<1$, and
\begin{equation}
\label{thetasumm}
\xi_1^k+\xi_2^k+ \dots + \xi_{n-m}^k - (\eta_1^k + \eta_2^k + \dots + \eta_m^k)=\theta, \qquad (1 \le k \le n)
\end{equation}
such that $\xi_i,\eta_i\in[0,1]$ for all $i$, then $m=\lfloor\frac{n}2\rfloor$ and the sets of $\xi_i$'s and $\eta_i$'s interleave in the sense that if they are labelled in increasing order then
\begin{equation}
\label{interleave}
\begin{split}
0<\eta_1 < \xi_1 < \eta_2 < \xi_2 < \dots < \eta_m < \xi_m<1 \qquad& (n=2m) \\
0<\xi_1 < \eta_1 < \xi_2 < \eta_2 < \dots < \eta_m < \xi_{m+1}<1 \qquad& (n=2m+1).
\end{split}
\end{equation}
\end{cor}
We will also need a generalization of these results later for the proof of Theorem~\ref{Jacobi}:
\begin{cor}
\label{Sequence}
The same result applies if we replace the condition $1 \le k \le n$ in \eqref{altdelta} and \eqref{thetasumm} by the condition that $k-1\in\mathcal I$ for some $\mathcal I\subset\mathbb{N}$ of size $\Abs{\mathcal I}=n$.
\end{cor}
\begin{proof}
All we need to generalize the proof is to show that for any $m\leq n-1$ positive real numbers, there exists a polynomial $D$, of degree $m$, whose exponents are all in $\mathcal I$, and whose positive roots are simple and are precisely the given numbers.
To construct such a polynomial, consider an arbitrary monic polynomial that uses the first $m+1$ numbers in $\mathcal I$ as its exponents. The condition on the roots gives a system of linear equations for the $m$ undetermined coefficients. This has a unique solution, because of the linear independence of different powers of $x$. Descartes' rule of signs shows that the number of positive roots of $D$ is at most the number of sign changes in the coefficients of $D$, which is at most $m$; therefore there can be no other positive roots.
\end{proof}
In view of Corollary~\ref{Plusminus}, the problem of finding solutions to eqs.~\eqref{thetasumlow} that satisfy the condition $0 \le x_1 \le x_2 \le \dots \le x_n \le 1$ is immediately solved once one finds a solution to the equations \eqref{thetasumm} for which all the $\xi_i$, $i=1 \dots n-m$ and the $\eta_j, j=1\dots m$ lie in $[0,1]$.
\begin{lem}
\label{Roots}
Let $P^+$ and $P^-$ be monic polynomials of degrees $n-m$ and $m$.
The roots of $P^+$ and $P^-$ will be a solution to eqs.~\eqref{thetasumm} if and only if
\begin{equation}
\label{PplusoverPminus}
\frac{{P^+}^*(x)}{{P^-}^*(x)}=(1-x)^\theta + O(x^{n+1}).
\end{equation}
\end{lem}
(We remark that ${P^+}^*(x)$ and ${P^-}^*(x)$ are what are called Pad\'e approximants \cite{Pade} to $(1-x)^\theta$.)
\begin{proof}
Write the factorizations of these polynomials as
\[
P^+(x)=\prod_{i=1}^{n-m} (x-\xi_i) \qquad\text{and}\qquad P^-(x) = \prod_{j=1}^m (x-\eta_j).
\]
Generalizing our preliminary example, this means that
\begin{align*}
\frac{{P^+}^*(x)}{{P^-}^*(x)} &= \frac{\prod_{i=1}^{n-m} (1-\xi_ix)}{\prod_{j=1}^m (1-\eta_jx)}
= \exp\left\{\sum_{i=1}^{n-m} \log (1-\xi_ix) - \sum_{j=1}^m \log(1-\eta_jx)\right\} \\
&= \exp\left\{-\sum_{k=1}^n\frac1k s_k x^k\right\} + O(x^{n+1}) ,
\end{align*}
where
\[
s_k \eqdef \xi_1^k+\xi_2^k+ \dots + \xi_{n-m}^k - (\eta_1^k + \eta_2^k + \dots + \eta_m^k) .
\]
On the other hand,
\[
(1-x)^\theta = \exp\left\{-\sum_{k=1}^n\frac1k \theta x^k\right\} + O(x^{n+1}) ,
\]
so $s_k=\theta$ for $k=1,\dots,n$ (which is eqs.~\eqref{thetasumm}) if and only if eq.~\eqref{PplusoverPminus} holds.
\end{proof}
\begin{definition}
\begin{equation}
\label{P}
P_m(x,\theta)= \sum_{i=0}^m (-1)^i \frac{m! (2m-i)!}{(2m)! (m-i)!} \binom{m+\theta}{i} x^{m-i}
\end{equation}
\begin{equation}
\label{Q}
Q_{m+1}(x,\theta)= \sum_{i=0}^{m+1} (-1)^i \frac{(m+1)!(2m+1-i)!}{(2m+1)!(m+1-i)!}\binom{m+\theta}{i} x^{m+1-i}
\end{equation}
\begin{equation}
\label{R}
R_m(x,\theta)= \sum_{i=0}^{m} (-1)^i \frac{m!(2m+1-i)!}{(2m+1)!(m-i)!}\binom{m+1+\theta}{i} x^{m-i} .
\end{equation}
\end{definition}
\begin{thm}
\label{Explicit}
For $\lambda\in(-1,1)$ and $n\in\mathbb{N}$,
\[
\sigma=\Sigma_n(\mathbf x)
\]
satisfies
\[
0 = \langle p,\sigma-\lambda\rangle
\]
for any polynomial of degree $\leq n-1$ if and only if the $x_i$ are the roots (in order) of the polynomials of $x$:
\begin{equation*}
P_m(x,\theta)\text{ and }P_m(x,-\theta), \text{ for }n=2m\text{ and }\theta=\tfrac{1-\lambda}2,
\end{equation*}
\begin{equation*}
Q_{m+1}(x,\theta)\text{ and }R_m(x,-\theta),\text{ for } n=2m+1\text{ and }\theta=\tfrac{1+\lambda}2.
\end{equation*}
In each case $x_1, x_3,\dots$ are the roots of the first polynomial.
\end{thm}
\begin{proof}
Consider the case $n=2m$. By Lemma~\ref{Roots} and Corollary~\ref{Plusminus}, it is sufficient to show that $P^\pm(x)=P_m(x,\pm\theta)$ satisfy eq.~\eqref{PplusoverPminus} and have all of their roots between $0$ and $1$.
From eq.~\eqref{P} we see that
\begin{equation}
\label{hyperpoly}
P_m^*(x,\theta) = 1 - \frac{m}{2m}\cdot\frac{m+\theta}{1!}x+\frac{m(m-1)}{2m(2m-1)}\binom{m+\theta}{2}x^2-\dots + (-1)^m\binom{2m}{m}^{-1}\binom{m+\theta}{m}x^m,
\end{equation}
which is a hypergeometric function, $P_m^*(x,\theta) = {}_2F_1(-m,-m-\theta;-2m;x)$. In this case, the power series terminates at the $m$'th term, so that, although $c=-2m$ is a negative integer, the denominators do not vanish.
The roots of the polynomial \eqref{hyperpoly} all lie in $(1,\infty)$; the roots of the reciprocal polynomial, $P_m(x,\theta)$, are the reciprocals of these, and hence, as required, lie in $(0,1)$. We refer to \cite[Theorem 2.3 (iii)]{Zeros1}, in which the parameter $k$ is seen to be $0$. To cope with $P_m^*(x,-\theta)$, we employ \cite[Theorem 2.3 (ii)]{Zeros1} (see also \cite{Zeros2}), where the parameter, again denoted by $k$, equals $m$, to see that all roots lie in $(0,1)$.
In general, the hypergeometric function
\begin{equation}
\label{hyper}
{}_2F_1(a,b;c;x)=1+\frac{ab}{1!c}x+\frac{a(a+1)b(b+1)}{2!c(c+1)}x^2 + \dots
\end{equation}
satisfies the Gauss differential equation
\begin{equation*}
x(1-x)F''+\{c-(1+a+b)x\}F'-abF=0.
\end{equation*}
The function $W(x)=(1-x)^{-\theta}P_m^*(x,\theta)$ satisfies the differential equation
\begin{equation}
\label{Wode}
x(1-x)W''+\{-2m-(1-2m+\theta)x\}W'-m(m-\theta)W=0
\end{equation}
which is a Gauss equation with parameters $-m+\theta, -m, -2m$. The indicial equation for \eqref{Wode} has roots $0,1+2m$. Since $P_m^*(x,\theta)$ is a polynomial and $0<\theta < 1$, $W$ does not involve logarithms and has an infinite, convergent power series expansion comprising two parts: $P_m^*(x,-\theta)$ and then a series whose first term involves $x^{2m+1}$. We refer to
\cite[p.~286]{WW} to find that, with a suitable constant $c_m(\theta)$, we have
\begin{equation*}
(1-x)^{-\theta}P_m^*(x,\theta)=P_m^*(x,-\theta) + c_m(\theta)x^{2m+1}{}_2F_1(m+1,m+1+\theta,2m+2;x),
\end{equation*}
which satisfies \eqref{PplusoverPminus} as we wished to show.
We do not need the value of $c_m(\theta)$ but remark that this may be deduced from \cite[p.\ 299, Ex.~18.]{WW}. For suitable values of the parameters, this gives an asymptotic formula for the hypergeometric function as $x\rightarrow 1$.
The odd $n$ case is similar and is left to the reader.
\end{proof}
Another way of expressing this is that
\[
\sigma(x) = \sgn\left\{P_m(x,\theta)P_m(x,-\theta)\right\}
\]
and so on. Although this formula gives the value $0$ at some points, it still defines a switch function within $L^\infty[0,1]$.
For example, let $n=5, \theta=1/3$. From \eqref{Q} and \eqref{R},
\begin{equation*}
Q_3(x,1/3)=x^3-\frac{7}{5}x^2+\frac{7}{15}x-\frac{7}{405}, \quad R_2(x,-1/3)=x^2-\frac{16}{15}x+\frac{2}{9}.
\end{equation*}
Therefore, $x_1 \doteq .0422244245$, $x_2 \doteq .2838895075$, $x_3 \doteq .4518343712$, $x_4 \doteq .7827771591$,
$x_5 \doteq .9050412043$.
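These switch points are easily reproduced numerically; a sketch (assuming NumPy):
\begin{verbatim}
import numpy as np

q = np.sort(np.roots([1.0, -7/5, 7/15, -7/405]).real)   # x1, x3, x5
r = np.sort(np.roots([1.0, -16/15, 2/9]).real)          # x2, x4
x = np.sort(np.concatenate([q, r]))
for k in range(1, 6):
    s = sum((-1.0) ** (5 - j) * x[j - 1] ** k for j in range(1, 6))
    print(k, s)   # each alternating sum equals theta = 1/3
\end{verbatim}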
\subsection{Uniqueness}
We now answer the uniqueness part of Questions~\ref{Main Question} and \ref{Generalized Question} in this polynomial case.
\begin{thm}
\label{Uniqueness}
For $\lambda\in(-1,1)$ and $n\in\mathbb{N}$, the switch function $\sigma$ described in Theorem~\ref{Explicit} is the \emph{unique} $\sigma\in D_n$ such that $0 = \langle p,\sigma-\lambda\rangle$ for any polynomial $p$ of degree $\leq n-1$.
\end{thm}
\begin{proof}
We need to show that for $\theta\in(0,1)$, the solution of eqs.~\eqref{thetasumlow} with $\mathbf x\in\Delta^n$ is unique. Any solution determines polynomials $P^\pm$ satisfying eq.~\eqref{PplusoverPminus}, and the polynomials determine an ordered solution uniquely. It is therefore sufficient to show uniqueness for the solution of eq.~\eqref{PplusoverPminus}.
To prove the uniqueness of $P^\pm$, write
\begin{gather*}
{P^+}^*(x) = 1 - a_1x + a_2 x^2 - \dots + (-1)^{n-m} a_{n-m}x^{n-m}, \\
{P^-}^*(x) = 1 - b_1x + b_2 x^2 - \dots + (-1)^m b_mx^m .
\end{gather*}
Cross multiplying in \eqref{PplusoverPminus} and equating coefficients of powers of $-x$ up to $(-x)^n$, we easily find that \eqref{PplusoverPminus} is equivalent to the block matrix equation
\begin{equation}
\label{block}
\begin{pmatrix} I & -T \\ 0 & -K \end{pmatrix}\begin{pmatrix} A \\ B \end{pmatrix} = C,
\end{equation}
where $I$ is the $(n-m)\times (n-m)$ identity matrix, $0$ the $m\times (n-m)$ zero matrix, $A$, $B$, and $C$ are the $(n-m)\times 1$, $m\times 1$, and $n\times 1$ matrices
\begin{equation*}
A=\begin{pmatrix} a_1\\a_2\\ \vdots \\a_{n-m}\end{pmatrix}, \quad B=\begin{pmatrix} b_1\\b_2\\ \vdots \\ b_m\end{pmatrix}; \quad C=\begin{pmatrix} c_1\\ c_2 \\ \vdots \\ c_n\end{pmatrix},
\end{equation*}
where
\begin{equation*}
c_i=\binom{\theta}{i}
\end{equation*}
is the coefficient of $(-x)^i$ in the binomial expansion, \eqref{plus} of $(1-x)^\theta$;
$T$ is the $(n-m)\times m$ matrix, given, when $n=2m$, by
\begin{equation*}
T = \begin{pmatrix} 1 & 0 & 0 & \dots & 0 & 0 \\
c_1 & 1 & 0 & \dots & 0 & 0 \\c_2 & c_1 & 1 & \dots & 0 & 0\\
\vdots & \vdots & \vdots& \ddots & \vdots & \vdots \\
c_{m-2} & c_{m-3} & c_{m-4} &\dots & 1 & 0\\
c_{m-1} & c_{m-2} & c_{m-3} & \dots & c_1 & 1
\end{pmatrix},
\end{equation*}
and, when $n=2m+1$, by
\begin{equation*}
T = \begin{pmatrix}
1 & 0 & 0 & \dots & 0 & 0 \\
c_1 & 1 & 0 & \dots & 0 & 0 \\c_2 & c_1 & 1 & \dots & 0 & 0\\
\vdots & \vdots & \vdots& \ddots & \vdots & \vdots \\
c_{m-2} & c_{m-3} & c_{m-4} &\dots & 1 & 0\\
c_{m-1} & c_{m-2} & c_{m-3} & \dots & c_1 & 1\\
c_m & c_{m-1} & c_{m-2} &\dots & c_2 & c_1 \end{pmatrix},
\end{equation*}
while (both when $n$ is even and when $n$ is odd) $K$ is the $m \times m$ matrix,
\begin{equation}
\label{K}
K = \begin{pmatrix} c_{n-m} & c_{n-m-1} & \dots & c_{n-2m+1} \\c_{n-m+1} & c_{n-m} & \dots &
c_{n -2m+2} \\ \vdots & \vdots & \ddots & \vdots \\ c_{n-1} & c_{n-2} & \dots & c_{n-m}\end{pmatrix}.
\end{equation}
Clearly, the determinant of the $2\times 2$ block matrix in \eqref{block} is $(-1)^m \det K$. We show in the appendix (Propositions \ref{DetKeven} and \ref{DetKodd}) that this is never zero for $\theta\in(0,1)$. So the matrix equation \eqref{block} has a unique solution and therefore the polynomials, $P^+$ and $P^-$ are determined uniquely by eq.~\eqref{PplusoverPminus}.
\end{proof}
The reader may wonder why we proved existence by verifying that the polynomials given in the statement of Theorem~\ref{Explicit} satisfy $\eqref{thetasumlow}$ rather than by deriving these polynomials from those equations, and they may further wonder how we came to know that these were the right polynomials to try. To address these questions, we remark that in principle it must, of course, be possible to solve \eqref{block} and thereby derive the explicit formulae \eqref{P} for $P_m(x, \theta)$ and $P_m(x,-\theta)$ in the case $n$ is even and \eqref{Q} and \eqref{R} for $Q_{m+1}(x, \theta)$ and $R_m(x,\theta)$ in the case $n$ is odd, for $P^+$ and $P^-$, as given in the statement of Theorem~\ref{Explicit}. However, in practice, such a direct derivation of eqs.~\eqref{P}, \eqref{Q}, and \eqref{R} seems difficult. It is possible, though, to solve them for small $n$ and thereby to be able to guess the form of these polynomials for all $n$. Alternatively, one can guess them after directly solving the equation \eqref{thetasumlow} for small $n$ and this is what we did.
\section{The Even Polynomial Case}
\label{Sect:Even}
Now we consider Question~\ref{Generalized Question} for the functions $1,x^2,x^4,\dots, x^{2n-2}$. We will show existence by computing the switch function, and we shall prove its uniqueness.
Recall that the degree $n$ Jacobi polynomial \cite{Freud} for parameters $\theta$ and $-\theta$ is
\begin{equation}
\label{Jac}
\begin{split}
J_n(\theta, -\theta; x) &= (-1)^n\left(\frac{1-x}{1+x}\right)^\theta\frac{n!}{(2n)!}\frac{d^n}{dx^n}\{ (1+x)^{n+\theta}(1-x)^{n-\theta}\}\\
&=x^n-\theta x^{n-1} -\frac{(n-1)(n-2\theta^2)}{2(2n-1)}x^{n-2}+\dots .
\end{split}
\end{equation}
This polynomial has $n$ distinct (non-zero) roots, $\zeta_i$ on $(-1,1)$, being one of a sequence of orthogonal polynomials on this interval.
\begin{thm}
\label{Jacobi}
Let $n\in\mathbb{N}$, $-1\leq\lambda\leq1$, and again $\theta=(1+(-1)^{n-1}\lambda)/2$.
A switch function $\sigma\in D_n$ satisfying
\[
0=\langle p,\sigma-\lambda\rangle
\]
for any even polynomial, $p$, of degree $\leq2n-2$ is given by $\sigma=\Sigma_n(\mathbf x)$, where $x_i=\Abs{\zeta_i}$, and $\zeta_i$ are the roots of the Jacobi polynomial $J_n(\theta,-\theta;x)$ ordered by absolute value.
\end{thm}
\begin{proof}
Note that if $\sigma=\Sigma_n(\mathbf x)$, then this condition on $\sigma$ is equivalent to the system of equations,
\begin{equation}
\label{thetasumhigh}
x_1^{2k-1}-x_2^{2k-1} + x_3^{2k-1} - \dots + (-1)^{n-1}x_n^{2k-1}=\theta, \qquad (1 \le k \le n) .
\end{equation}
Consider the system of equations
\begin{equation}
\label{positive}
\theta = \sum_{i=1}^n \zeta_i^{2k-1}
\end{equation}
for $1\leq k \leq n$. This is the same as eq.~\eqref{thetasumhigh}, but without negative signs. Because the exponents are all odd, this is equivalent to
\[
\theta = \sum_{i=1}^n (\sgn \zeta_i)\Abs{\zeta_i}^{2k-1} .
\]
If we require $-1\leq \zeta_i\leq 1$ and label in order of increasing $\Abs{\zeta_i}$, then Corollary~\ref{Sequence} shows that $\sgn\zeta_i = (-1)^{n-i}$. Therefore any solution of eqs.~\eqref{positive} with $-1\leq\zeta_i\leq1$ gives a solution of eqs.~\eqref{thetasumhigh} by $x_i=\Abs{\zeta_i}$.
We can easily adapt the method explained in the preliminary remarks to see that $\{\zeta_1, \zeta_2, \dots, \zeta_n\}$ will be a solution to eqs.~\eqref{positive} if and only if these are the roots of an order $n$ polynomial, $P$, such that
\begin{equation}
\label{even}
P^*(x) = \exp(-s_1 x - \tfrac{1}{2} s_2x^2 - \tfrac{1}{3}s_3 x^3 -\tfrac{1}{4} s_4 x^4 - \dots )
\text{ where }
s_{2k-1} = \theta \text{ for } 1 \le k \le n .
\end{equation}
Equation \eqref{even} is equivalent to requiring that $(1-x)^{-\theta}P^*(x)$ is an even function of $x$ up to and including order $x^{2n}$.
Now let $P(x)= J_n(\theta,-\theta;x)$. We need to show that for $0<\theta<1$,
\[
G(x) \eqdef (1-x)^{-\theta}P^*(x)
\]
is even up to order $x^{2n}$.
We first notice that the Jacobi polynomial, $P(x)=J_n(\theta, -\theta;x)$ satisfies the differential equation
\begin{equation}
\label{Jdiff}
(1-x^2)P'' + (2\theta-2x)P' + n(n+1)P=0
\end{equation}
From \eqref{Jdiff}, we deduce that the reciprocal polynomial, $P^*$, satisfies
\begin{equation*}
x(1-x^2){P^*}''+2[(n-1)x^2+\theta x-n]{P^*}'-n[(n-1)x+2\theta]P^*=0 ,
\end{equation*}
and $G$ satisfies
\begin{equation*}
x(1-x^2)G''+[-2n-(2\theta - 2n + 2)x^2]G'-(\theta-n)(\theta-n+1)x G=0.
\end{equation*}
This is a hypergeometric equation with independent variable $x^2$ and parameters $a=(\theta-n)/2$, $b=(\theta-n+1)/2$ and $c=-n+\tfrac{1}{2}$. Therefore \cite{WW} $G$ may be written in the form
\begin{equation}
\label{Geq}
G(x)={}_2F_1\left(\tfrac{\theta-n}{2}, \tfrac{\theta-n+1}{2}, -n+\tfrac{1}{2};x^2\right)+Cx^{2n+1}{}_2F_1\left(\tfrac{\theta+n+1}{2}, \tfrac{\theta+n+2}{2}, n+\tfrac{3}{2}; x^2\right)
\end{equation}
in which $C$ is a constant. We refer to \cite[p.~299, Ex.~18]{WW} as in the proof of Theorem~\ref{Explicit}. We see from \eqref{Geq} that $G(x)$ is indeed even to order $x^{2n}$: the first hypergeometric term is a function of $x^2$ alone, while the second contributes only at orders $x^{2n+1}$ and higher.
Since the roots of $J_n(\theta, -\theta; x)$ are in the interval $(-1,1)$, we may conclude that when their absolute values are labelled in increasing order, they will solve eqs.~\eqref{thetasumhigh} and be the switch points for the desired switch function.
\end{proof}
For example, let $n=5$, $\theta=1/3$ as before. The Jacobi polynomial in \eqref{Jac} is
\begin{equation*}
x^5- \frac{1}{3}x^4-\frac{86}{81}x^3+\frac{62}{243}x^2+\frac{157}{729}x-\frac{143}{6561}
\end{equation*}
and the roots are $\zeta_1\doteq.0948419$, $\zeta_2\doteq-.4571986$, $\zeta_3\doteq.6167796$, $\zeta_4\doteq-.8641519$,
$\zeta_5\doteq.9430623$. The reader might care to try the resulting $x_i$ in \eqref{thetasumhigh}.
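For $k=1$, for instance, the left-hand side of \eqref{thetasumhigh} is
\begin{equation*}
x_1-x_2+x_3-x_4+x_5 \doteq .0948419-.4571986+.6167796-.8641519+.9430623 \doteq .3333333,
\end{equation*}
which is $\theta = 1/3$; equivalently, this is $\sum_{i}\zeta_i$, the negative of the coefficient of $x^4$, as it must be.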
\subsection{Uniqueness}
\begin{thm}
\label{Uniqueness2}
For all $n\in \mathbb{N}$ and $-1<\lambda<1$, the switch function $\sigma\in D_n$ such that $\sigma-\lambda$ is orthogonal to all even polynomials of degree $\leq 2n-2$ is unique.
\end{thm}
\begin{proof}
Uniqueness of $\sigma$ means uniqueness of the switch points. The roots of $P$ are $\zeta_i=(-1)^{n-i}x_i$, therefore the monic polynomial $P$ is uniquely determined by the switch points. It is therefore sufficient to show that $P$ is uniquely determined by eq.~\eqref{even}.
Write $P^*(x) = 1+a_1x+a_2x^2 + \dots + a_{n}x^{n}$ and expand $(1-x)^{-\theta}P^*(x)$. We require the coefficients of odd powers of $x$ up to $x^{2n-1}$ to vanish. This is equivalent to the matrix equation:
\begin{equation}
\label{BAV}
BA=-V
\end{equation}
where, when $n$ is even,
$B$, $A$, and $V$ are the $n \times n$, $n\times 1$ and $n\times 1$ matrices
\begin{equation}
\label{Ba}
B=\begin{pmatrix}
1 & 0 & 0 & 0 & \dots & 0 \\
b_2 & b_1 & 1 & 0 & \dots & 0 \\
b_4 & b_3 & b_2 & b_1 & \dots & 0\\
\vdots & \vdots & \vdots &\vdots & \vdots & \vdots\\
b_{n-2} & b_{n-3} & b_{n-4} & b_{n-5} & \dots & 0 \\
b_n & b_{n-1} & b_{n-2} & b_{n-3} & \dots & b_1\\
b_{n+2} & b_{n+1} & b_n & b_{n-1} & \dots & b_3\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
b_{2n-2} & b_{2n-3} & b_{2n-4} & b_{2n-5} & \dots & b_{n-1}
\end{pmatrix}, \quad
A=\begin{pmatrix} a_1\\a_2\\ a_3 \\ a_4 \\ \vdots \\a_n\end{pmatrix}, \quad V=\begin{pmatrix} b_1\\b_3\\ b_5 \\ \vdots \\ b_{n-1} \\ b_{n+1} \\ b_{n+3} \\ \vdots \\ b_{2n-1}\end{pmatrix},
\end{equation}
and when $n$ is odd, $B$, $A$, and $V$ take the form
\begin{equation}
\label{Bb}
B=\begin{pmatrix}
1 & 0 & 0 & 0 & \dots & 0 \\
b_2 & b_1 & 1 & 0 & \dots & 0 \\
b_4 & b_3 & b_2 & b_1 & \dots & 0\\
\vdots & \vdots & \vdots &\vdots & \vdots & \vdots\\
b_{n-3} & b_{n-4} & b_{n-5} & b_{n-6} & \dots & 0 \\
b_{n-1} & b_{n-2} & b_{n-3} & b_{n-4} & \dots &1\\
b_{n+1} & b_{n} & b_{n-1} & b_{n-2} & \dots & b_2\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
b_{2n-2} & b_{2n-3} & b_{2n-4} & b_{2n-5} & \dots & b_{n-1}
\end{pmatrix}, \quad
A=\begin{pmatrix} a_1\\a_2\\ a_3 \\ a_4 \\ \vdots \\ a_n\end{pmatrix}, \quad V=\begin{pmatrix} b_1\\b_3\\ b_5 \\ \vdots \\ b_{n-2} \\ b_{n} \\ b_{n+2} \\ \vdots \\ b_{2n-1}\end{pmatrix},
\end{equation}
where
\begin{equation*}
b_m= \frac{\theta(\theta +1) \dots (\theta +m-1)}{m!}.
\end{equation*}
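Note that the $b_m$ are precisely the coefficients of the binomial series
\begin{equation*}
(1-x)^{-\theta} = \sum_{m\geq0} b_m x^m, \qquad b_0 = 1,
\end{equation*}
so that \eqref{BAV} is just the vanishing of the odd coefficients of $(1-x)^{-\theta}P^*(x)$ written out. For $n=2$, for example, \eqref{BAV} reduces to $a_1 = -b_1$ and $b_2a_1 + b_1a_2 = -b_3$, which determine $a_1$ and $a_2$ uniquely since $b_1 = \theta \neq 0$.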
We show in the appendix (Prop.~\ref{DetB}) that $\det B$ never vanishes for $\theta\in (0,1)$ whereupon the matrix equation \eqref{BAV} will have a unique solution and therefore there will be a unique polynomial $P$ that determines $\sigma$.
\end{proof}
\section{Sines}
Consider the functions $f_k(t) = \sin \frac{k\pi}{2}t$ for $k=1,\dots,n$. The problem of finding switch functions in this case reduces to the polynomial problem by a simple change of variables.
Recall that the Chebyshev polynomial $T_k$ is a degree $k$ polynomial satisfying
\[
T_k(\cos x) = \cos kx .
\]
Suppose that $\sigma$ is a switch function satisfying $0=\langle f_k,\sigma-\lambda\rangle$. The change of variables $s = \cos\frac\pi2t$ gives $T_k(s) = \cos \frac{k\pi}2t$, and so
\begin{align*}
0 &= \int_0^1 \left(\sigma(t)-\lambda\right) \sin \tfrac{k\pi}2t\,dt \\
&= -\frac2{k\pi} \int_0^1 \left(\sigma(t)-\lambda\right) \frac{d}{dt}\cos \tfrac{k\pi}2t\,dt\\
&= \frac2{k\pi} \int_0^1\left(\sigma(\tfrac2\pi\cos^{-1} s) - \lambda\right) T'_k(s)\,ds .
\end{align*}
Now, $T_k'$ is of degree $k-1$, and as $k$ ranges over $1,\dots,n$ these derivatives form a basis of the polynomials of degree at most $n-1$. Therefore, the switch function
\[
s\mapsto \sigma(\tfrac2\pi\cos^{-1} s)
\]
satisfies our problem for polynomials of degree up to $n-1$.
Note, however, that the order of positive and negative values has been reversed.
So, suppose that $(x_1,\dots,x_n)$ is the solution for the polynomial problem with parameter $(-1)^n\lambda$, and let $y_j=\frac2\pi\cos^{-1}x_{n+1-j}$. The corresponding switch functions are related by
\[
\Sigma_n(\mathbf y)(t) = (-1)^n \Sigma_n(\mathbf x)(\cos\tfrac\pi2t) ,
\]
so $\mathbf y$ is a solution of the sine problem with parameter $\lambda$.
\subsection*{Acknowledgments}
BSK thanks Michael M.\ Kay for very helpful conversations.
| {
"timestamp": "2018-04-16T02:03:06",
"yymm": "1710",
"arxiv_id": "1710.06916",
"language": "en",
"url": "https://arxiv.org/abs/1710.06916",
"abstract": "We define a switch function to be a function from an interval to $\\{1,-1\\}$ with a finite number of sign changes. (Special cases are the Walsh functions.) By a topological argument, we prove that, given $n$ real-valued functions, $f_1, \\dots, f_n$, in $L^1[0,1]$, there exists a switch function, $\\sigma$, with at most $n$ sign changes that is simultaneously orthogonal to all of them in the sense that $\\int_0^1 \\sigma(t)f_i(t)dt=0$, for all $i = 1, \\dots , n$.Moreover, we prove that, for each $\\lambda \\in (-1,1)$, there exists a unique switch function, $\\sigma$, with $n$ switches such that $\\int_0^1 \\sigma(t) p(t) dt = \\lambda \\int_0^1 p(t)dt$ for every real polynomial $p$ of degree at most $n-1$. We also prove the same statement holds for every real even polynomial of degree at most $2n-2$. Furthermore, for each of these latter results, we write down, in terms of $\\lambda$ and $n$, a degree $n$ polynomial whose roots are the switch points of $\\sigma$; we are thereby able to compute these switch functions.",
"subjects": "Classical Analysis and ODEs (math.CA); Algebraic Topology (math.AT)",
"title": "Switch Functions",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9875683469514966,
"lm_q2_score": 0.8006920044739461,
"lm_q1q2_score": 0.7907380792756152
} |
https://arxiv.org/abs/1101.4740 | Minimal area ellipses in the hyperbolic plane | We present uniqueness results for enclosing ellipses of minimal area in the hyperbolic plane. Uniqueness can be guaranteed if the minimizers are sought among all ellipses with prescribed axes or center. In the general case, we present a sufficient and easily verifiable criterion on the enclosed set that ensures uniqueness. | \section{Introduction and statement of the main result}
\label{sec:introduction}
By a well-known theorem of convex geometry, a full-dimensional,
compact subset $F$ of the Euclidean plane can be enclosed by a unique
ellipse $C$ of minimal area. We share the general belief that this is
an important but easy result. Therefore, it is not surprising that
recently much more general uniqueness results were obtained
\cite{gruber08:_john_type,%
schroecker08:_uniqueness_results_ellipsoids,%
weber10:_davis_convexity_theorem}. These articles also contain more
complete references to the relevant literature.
The situation in the elliptic plane is different: As of today,
uniqueness can only be guaranteed for ``sufficiently small and round
sets $F$''. The precise statement can be found in
\cite[Theorem~8]{weber10:_minimal_area_conics}. Its proof requires
some non-trivial calculations. It is still an open question whether
any compact subset of the elliptic plane possesses a unique enclosing
conic or not.
In this article we consider uniqueness of the minimal area ellipse in
the hyperbolic plane. The algebraic equivalence of elliptic and
hyperbolic geometries suggests to imitate the proof of
\cite[Theorem~8]{weber10:_minimal_area_conics}. Indeed, this is
possible to a large extent, but not completely. The outcome of this
research is similar to the elliptic case. Uniqueness can be
guaranteed if some conditions on the axis lengths of enclosing
ellipses of minimal area are met. If this is not possible, we can
make neither a positive nor a negative uniqueness statement. Our main
result is
\begin{theorem}
\label{th:1}
Consider a compact and full-dimensional subset $F$ of the hyperbolic
plane. The enclosing ellipse of minimal area to $F$ is unique if the
following conditions are met:
\begin{itemize}
\item There exist positive numbers $\varrho$, $\mathrm{R}$ such that
the semi-axis lengths of the (a priori not necessarily unique)
minimal ellipses are in the closed interval
$[\varrho,\mathrm{R}]$.
\item The values $\nu_1 = \coth^2\!\mathrm{R}$ and $\nu_2 =
\coth^2\!\varrho$ satisfy the inequality
\begin{equation}
\label{eq:1}
H(\nu_1,\nu_2) := -13\nu_1^2 + 5\nu_1\nu_2 - 3\nu_1 + 7\nu_2 + 4 \le 0.
\end{equation}
\end{itemize}
\end{theorem}
The minimal enclosing ellipse $C_{\min}$ to the convex hull $F$ of a
finite point set is depicted in Figure~\ref{fig:center-and-axes}. The
drawing refers to the Cayley-Klein model of the hyperbolic plane which
will be introduced in Section~\ref{sec:hyperboloid-model}.
\begin{figure}
\centering
\includegraphics{img/hyp-graph}
\caption{The curves $H(\nu_1,\nu_2) = 0$ and $h_2(\nu_1,\nu_2) = 0$.}
\label{fig:hyp-graph}
\end{figure}
Figure~\ref{fig:hyp-graph} depicts the curve
$H(\nu_1,\nu_2) = 0$. The shaded area contains admissible values
$\nu_1$, $\nu_2$. The meaning of the remaining elements will be
explained later in the text.
Condition \eqref{eq:1} is certainly fulfilled for $\nu_1 = \nu_2$ and
$\nu_1 \to \infty$. Thus, Theorem~\ref{th:1} informally states that
minimal enclosing ellipses are unique if they are sufficiently small
and round. Of course, the Theorem should be accompanied by an easily
verifiable criterion that ensures suitably shaped minimal ellipses:
\begin{proposition}
\label{prop:1}
Consider a compact and full-dimensional subset $F$ of the hyperbolic
plane and denote its (hyperbolic) convex hull by $\conv{F}$. Assume
$\conv{F}$ admits an inscribed circle of radius $\varrho$ and a
circumscribed ellipse of area $S$. Denote by $\mathrm{R}$ the major
semi-axis length of an ellipse of area $S$ and minor semi-axis
length $\varrho$. Then the minimal area ellipse of $F$ has semi-axis
lengths in the interval $[\varrho,\mathrm{R}]$.
\end{proposition}
We omit the obvious proof of this proposition. Together with
Theorem~\ref{th:1}, it leads to the following sufficient test for the
uniqueness of the minimal enclosing ellipse to a given set~$F$:
\begin{enumerate}
\item Find a (large) inscribed circle to $\conv{F}$ and denote its
radius by~$\varrho$.
\item Find a (small) circumscribed ellipse to $\conv{F}$ and denote
its area by~$S$.
\item Compute the unique value $\mathrm{R}$ such that an ellipse with
semi-axis lengths $\varrho$ and $\mathrm{R}$ has area $S$. By
construction, $[\varrho,\mathrm{R}]$ is not empty.
\item The minimal area ellipse is unique, if $\varrho$ and
$\mathrm{R}$ satisfy the inequality \eqref{eq:1}.
\end{enumerate}
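For instance, in the borderline case $\mathrm{R} = \varrho$ we have $\nu_1 = \nu_2 = \nu = \coth^2\!\varrho$, and inequality \eqref{eq:1} reduces to
\begin{equation*}
H(\nu,\nu) = -8\nu^2 + 4\nu + 4 = -4(2\nu+1)(\nu-1) \le 0,
\end{equation*}
which holds for every $\nu \ge 1$. This is the computation behind the earlier remark that condition \eqref{eq:1} is certainly fulfilled for $\nu_1 = \nu_2$.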
We will show later that if a point $(\nu_1,\nu_2)$ satisfies
\eqref{eq:1} then the same is true for every admissible point
$(\nu'_1,\nu'_2)$ with $\nu'_1 \ge \nu_1$ and $\nu'_2 \le \nu_2$. This
means that the chances for an affirmative uniqueness statement increase
with a large value of the radius~$\varrho$ and a small value of the
area $S$, that is, with the quality of the input obtained from the
first and the second step.
The basic ideas and the initial calculations in our proof of
Theorem~\ref{th:1} are more or less identical to the proof of
\cite[Theorem~8]{weber10:_minimal_area_conics}. The minor differences
pertain to occasional changes in sign and the use of the hyperbolic
functions $\cosh$, $\sinh$, etc. instead of their spherical
counterparts $\cos$, $\sin$, etc. The major differences are in the
final estimates. Given the similarities between the elliptic and
hyperbolic case, we consider a rather terse presentation
appropriate. Yet, we will try to work out the crucial junction points
and the major differences.
In Section~\ref{sec:hyperboloid-model} we settle our notation and
introduce the hyperboloid model of the hyperbolic plane, where our
calculations take place. In Section~\ref{sec:area-ellipses} we provide
a formula for the area of ellipses in the hyperbolic plane, which is
probably hard to find elsewhere. In Section~\ref{sec:area-convex}, we
prove a fundamental convexity result for the area function. By
standard arguments, it yields uniqueness of the minimal ellipse among
all ellipses with prescribed axes or center. The proof of
Theorem~\ref{th:1} is given in Section~\ref{sec:uniqueness}. Its main
ingredient is Lemma~\ref{lem:half-turn}, the Half-Turn Lemma. The
merely technical parts of its proof are moved to the appendix.
\section{The hyperboloid model of hyperbolic geometry}
\label{sec:hyperboloid-model}
In \cite{weber10:_minimal_area_conics} we used the spherical model of
the elliptic plane for investigating uniqueness of minimal area
conics. It is obtained from the geometry of the unit sphere
$\mathbb{S}^2$ of Euclidean three-space by identifying antipodal
points. By analogy, our calculations in this article refer to the
spherical model (or ``hyperboloid model'') of the hyperbolic plane
which is obtained in similar fashion from the geometry of the sphere
of squared radius $-1$ in Minkowski three space $\mathbb{R}^3_1$. An
elementary introduction to this model is given in
\cite{reynold93:_hyperbolic_geometry}.
Minkowski three-space $\mathbb{R}^3_1$ is the metric space over $\mathbb{R}^3$
where the metric is induced by the indefinite inner product
\begin{equation*}
\mIP{x}{y} = -x_0y_0 + x_1y_1 + x_2y_2.
\end{equation*}
The locus of the spherical model of the hyperbolic plane is the sphere
$\mathbb{S}^2_1$, defined as
\begin{equation*}
\mathbb{S}^2_1 = \{x \in \mathbb{R}^3_1\colon \mNorm{x}^2 = -x_0^2 + x_1^2 + x_2^2 = -1\}.
\end{equation*}
In a Euclidean interpretation, it is a hyperboloid of two sheets. We
use $\mathbb{S}^2_1$ as a model of the hyperbolic plane $\mathbb{H}^2$. The following
concepts are taken from \cite{reynold93:_hyperbolic_geometry}:
\begin{itemize}
\item The points of $\mathbb{H}^2$ are the points of $\mathbb{S}^2_1$ with antipodal
points $x$ and $-x$ identified.
\item The lines of $\mathbb{H}^2$ are the intersections of $\mathbb{S}^2_1$ with planes
through the origin $0$.
\item The hyperbolic distance between two points $x$, $y \in \mathbb{S}^2_1$ is
defined by
\begin{equation*}
\dist(x,y) = \arccosh(-\mIP{x}{y}).
\end{equation*}
\item The hyperbolic angle between two straight lines $K$ and $L$ is
defined by
\begin{equation*}
\sphericalangle(k,l) = \arccos\frac{\mIP{k}{l}}{\mNorm{k} \cdot
\mNorm{l}}
\end{equation*}
where $k$ and $l$ are two arbitrary tangent vectors of $K$ and $L$,
respectively.
\end{itemize}
Note that this model of $\mathbb{H}^2$ is closely related to the well-known
bundle model and also the Cayley-Klein model of the hyperbolic
plane. The bundle model is obtained by connecting points and lines
from the spherical model with the origin $0$ of $\mathbb{R}^3_1$; the
Cayley-Klein model is obtained by intersecting the bundle model with
the plane $x_0 = 1$. Its points are the inner points of the circle
\begin{equation*}
K\colon x_0 = 1,\ x_1^2 + x_2^2 = 1.
\end{equation*}
We will occasionally use the Cayley-Klein model for the purpose of
visualization but it is also convenient for defining center and axes
of an ellipse $C$ in the hyperbolic plane.
The conics in the spherical model of $\mathbb{H}^2$ are the intersections of
$\mathbb{S}^2_1$ with quadratic cones centered at $0$. In the Cayley-Klein model,
hyperbolic ellipses are conics that lie in the interior of $K$. The
ellipse center is the unique vertex $c$ of the common polar triangle
$P$ of $C$ and $K$. It is indeed a center in the elementary sense, as it
halves the (hyperbolic) distance between the ellipse points on any line
incident with~$c$. The axes of $C$ are the two sides of $P$ through
$c$. Degenerate polar triangles characterize the circles among the
ellipses. Their center is still well-defined but the axes are
undetermined so that any line through $c$ can be addressed as axis.
Figure~\ref{fig:center-and-axes} displays a hyperbolic ellipse, its
center and axes in the Cayley-Klein model.
\begin{figure}
\centering
\includegraphics{img/center-and-axes}
\caption{Center $c$ and axes of a hyperbolic ellipse $C$; minimal
ellipse $C_{\min}$ to the convex hull $F$ of a finite point set}
\label{fig:center-and-axes}
\end{figure}
\section{The area of ellipses}
\label{sec:area-ellipses}
The hyperbolic plane $\mathbb{H}^2$ can be parametrized as
\begin{equation}
\label{eq:2}
\mathbb{H}^2\colon Y(\theta,\varphi) =
\begin{pmatrix}
\cosh\theta\\
\sinh\theta \sin\varphi\\
\sinh\theta \cos\varphi
\end{pmatrix},
\quad \theta \in [0, \infty), \varphi \in [-\pi, \pi).
\end{equation}
A conic $C$, defined as the intersection of this point set with a
quadratic cone whose vertex is in the origin, can be described as
\begin{equation*}
C = \{x \in \mathbb{H}^2 \colon x^\mathrm{T} \cdot M \cdot x = 0\},
\end{equation*}
where $M \in \mathbb{R}^{3 \times 3}$ is an indefinite symmetric matrix of
full rank. A vector $x$ is called (Minkowski) eigenvector of $M$ with
(Minkowski) eigenvalue $\lambda$ if
\begin{equation}
\label{eq:3}
M \cdot x = \lambda I \cdot x.
\quad\text{where}\quad
I = \diag(-1, 1, 1).
\end{equation}
By $e(M) = (\nu_0,\nu_1,\nu_2)$ we denote the vector of eigenvalues of
$M$, arranged in ascending order. We will only consider the case where
$M$ describes an ellipse. In this case $M$ can be normalized such that
$e(M) = (1, \nu_1, \nu_2)$ and $1 < \nu_1 \le \nu_2$.
A point $x$ is contained in the ellipse $C$ if it satisfies $x^\mathrm{T}
\cdot M \cdot x < 0$ and $M$ is in normal form. After a suitable
(Minkowski) rotation of $\mathbb{S}^2_1$ we may assume that the ellipse is
described by the diagonal matrix
\begin{equation}
\label{eq:4}
M = \diag(-1, \nu_1, \nu_2).
\end{equation}
Referring to the parametrization \eqref{eq:2}, points inside $C$
belong to parameter values $(\theta, \varphi)$ related by
\begin{equation*}
\theta < \theta^\star =
\arccosh \sqrt{\frac{\nu_1 \sin^2\!\varphi + \nu_2 \cos^2\!\varphi}
{\nu_1 \sin^2\!\varphi + \nu_2 \cos^2\!\varphi - 1}}.
\end{equation*}
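Indeed, inserting the parametrization \eqref{eq:2} into $x^\mathrm{T} \cdot M \cdot x < 0$ with $M$ as in \eqref{eq:4} gives
\begin{equation*}
-\cosh^2\!\theta + (\nu_1 \sin^2\!\varphi + \nu_2 \cos^2\!\varphi)\sinh^2\!\theta < 0,
\end{equation*}
and solving for $\cosh^2\!\theta$ by means of $\sinh^2\!\theta = \cosh^2\!\theta - 1$ yields the stated bound on $\theta$.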
By integrating the area element
\begin{equation*}
\sqrt{
\Bigl\langle \dpd{Y}{\theta}, \dpd{Y}{\theta} \Bigr\rangle
\cdot
\Bigl\langle \dpd{Y}{\varphi}, \dpd{Y}{\varphi} \Bigr\rangle
-
\Bigl\langle \dpd{Y}{\theta}, \dpd{Y}{\varphi} \Bigr\rangle^2
} \dif\theta \wedge \dif\varphi =
\sinh\theta \dif\theta \wedge \dif\varphi
\end{equation*}
of \eqref{eq:2} (see for example Proposition~5.2 of
\cite{callahan00:_geometry_spacetime}) we obtain the area of the conic
$C$ as
\begin{equation}
\label{eq:5}
\begin{aligned}
\area(C) &= \area(\nu_1, \nu_2)
= \int_{-\pi}^\pi \int_0^{\theta^\star} \sinh\theta \dif\theta \dif\varphi \\
&= \int_{-\pi}^\pi (\cosh\theta^\star - 1) \dif\varphi
= \int_{-\pi}^\pi \sqrt{\frac{\nu_1\sin^2\varphi + \nu_2\cos^2\varphi}
{\nu_1\sin^2\varphi + \nu_2\cos^2\varphi - 1}} \dif \varphi - 2 \pi.
\end{aligned}
\end{equation}
This is valid as long as $M$ is normalized such that $e(M) =
(1,\nu_1,\nu_2)$. If $M$ is not normalized and has ordered eigenvalues
$e(M) = (\nu_0, \nu_1, \nu_2)$, the area formula becomes
\begin{equation}
\label{eq:6}
\area(\nu_0,\nu_1,\nu_2) =
\int_{-\pi}^\pi \sqrt{
\frac{\nu_1\sin^2\!\varphi + \nu_2\cos^2\!\varphi}
{\nu_1\sin^2\!\varphi + \nu_2\cos^2\!\varphi - \nu_0}
}\dif\varphi - 2\pi.
\end{equation}
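As a plausibility check, consider a circle of hyperbolic radius $r$, that is, $\nu_1 = \nu_2 = \nu = \coth^2\!r$ in \eqref{eq:5}. The integrand is then constant and
\begin{equation*}
\area(\nu,\nu) = 2\pi\Bigl(\sqrt{\frac{\nu}{\nu-1}} - 1\Bigr) = 2\pi(\cosh r - 1),
\end{equation*}
the well-known area of a hyperbolic disc of radius $r$. This also motivates the quantities $\nu_1 = \coth^2\!\mathrm{R}$ and $\nu_2 = \coth^2\!\varrho$ appearing in Theorem~\ref{th:1}.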
\section{Convexity of the area function.}
\label{sec:area-convex}
Convexity of the area function \eqref{eq:5} is already the key
property for uniqueness of the minimal area ellipse among concentric
or co-axial ellipses. Recall that only values $\nu_1$, $\nu_2 > 1$
are admissible.
\begin{lemma}
\label{lem:1}
The area function~\eqref{eq:5} is strictly convex for $\nu_1$,
$\nu_2 > 1$.
\end{lemma}
\begin{proof}
We prove that the Hessian matrix of~\eqref{eq:5} is positive
definite, that is, all its principal minors are positive. The upper
left entry equals
\begin{equation}
\label{eq:7}
\dpd[2]{\area}{\nu_1} = \frac{1}{4} \int_{-\pi}^\pi J \sin^4\!\varphi \dif \varphi,
\end{equation}
where
\begin{equation*}
J = \frac{4\nu_1\sin^2\!\varphi + 4\nu_2\cos^2\!\varphi - 1}
{(\nu_1\sin^2\!\varphi + \nu_2\cos^2\!\varphi)^{3/2}
(\nu_1\sin^2\!\varphi + \nu_2\cos^2\!\varphi - 1)^{5/2}}.
\end{equation*}
Clearly, $J$ is positive for admissible values of $\nu_1$ and
$\nu_2$. Therefore, \eqref{eq:7} is positive as well. The
determinant of the Hessian matrix is
\begin{multline}
\label{eq:8}
\dpd[2]{\area}{\nu_1} \dpd[2]{\area}{\nu_2} - \Bigl(
\dmd{\area}{}{\nu_1}{}{\nu_2}{} \Bigr)^2 \\
=\frac{1}{16} \int_{-\pi}^\pi J \sin^4\!\varphi \dif \varphi \cdot
\int_{-\pi}^\pi J \cos^4\!\varphi \dif \varphi -
\frac{1}{16}
\Bigl(
\int_{-\pi}^\pi J \sin^2\!\varphi \cos^2\!\varphi \dif \varphi
\Bigr)^2.
\end{multline}
Because $\sqrt{J}\sin^2\!\varphi$ and $\sqrt{J}\cos^2\!\varphi$ are
not proportional we can apply the strict Schwarz inequality and find
\begin{equation*}
\sqrt{\int_{-\pi}^\pi (\sqrt{J} \sin^2\!\varphi)^2 \dif \varphi}
\cdot
\sqrt{\int_{-\pi}^\pi (\sqrt{J} \cos^2\!\varphi)^2 \dif \varphi}
>
\int_{-\pi}^\pi J \sin^2\!\varphi \cos^2\!\varphi \dif \varphi.
\end{equation*}
Thus, \eqref{eq:8} is positive and $\area(\nu_1,\nu_2)$ is indeed a
strictly convex function.
\end{proof}
Now, two uniqueness results follow from standard arguments (see
\cite{schroecker08:_uniqueness_results_ellipsoids,%
weber10:_davis_convexity_theorem} and in particular
\cite{weber10:_minimal_area_conics}).
\begin{theorem}
\label{th:2}
Let $F$ be a compact and full-dimensional subset of the hyperbolic
plane. Among all ellipses with two given axes that contain $F$ there
exists exactly one with minimal area.
\end{theorem}
\begin{theorem}
\label{th:3}
Let $F$ be a compact and full-dimensional subset of the hyperbolic
plane. Among all ellipses with given center that contain $F$ there
exists exactly one with minimal area.
\end{theorem}
We give a quick outline of the proofs of Theorem~\ref{th:2} and
\ref{th:3}, mainly because this gives us the opportunity to introduce
an important concept that will be required later.
\begin{definition}[in-between ellipse]
\label{def:in-between-ellipse}
Let $C_0$ and $C_1$ be two ellipses
\begin{equation*}
C_i = \{x \in \mathbb{H}^2 \colon x^\mathrm{T} \cdot M_i \cdot x = 0\}, \quad i=0,1
\end{equation*}
where the matrices $M_i$ are indefinite and have Minkowski
eigenvalues $\nu_{i,0} = 1$ and $\nu_{i,1}$, $\nu_{i,2} > 1$. For
$\lambda \in (0,1)$, the \emph{in-between ellipse} $C_\lambda$ of
$C_0$ and $C_1$ is defined as
\begin{equation*}
C_\lambda = \{x \in \mathbb{H}^2 \colon x^\mathrm{T} \cdot M_\lambda \cdot x = 0\},
\end{equation*}
where
\begin{equation*}
M_\lambda = (1-\lambda) M_0 + \lambda M_1.
\end{equation*}
We also write $C_\lambda = (1-\lambda) C_0 + \lambda C_1$.
\end{definition}
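In the concentric, co-axial case the construction is transparent: if $M_0 = \diag(-1,\nu_{0,1},\nu_{0,2})$ and $M_1 = \diag(-1,\nu_{1,1},\nu_{1,2})$, then
\begin{equation*}
M_\lambda = \diag\bigl(-1,\,(1-\lambda)\nu_{0,1}+\lambda\nu_{1,1},\,(1-\lambda)\nu_{0,2}+\lambda\nu_{1,2}\bigr),
\end{equation*}
so the eigenvalues interpolate linearly and, provided the two eigenvalue pairs differ, $\area(C_\lambda)$ is strictly convex in $\lambda$ directly by Lemma~\ref{lem:1}.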
It is obvious that $C_\lambda$ contains the common interior of $C_0$
and $C_1$ and is an ellipse if this interior is not empty. Moreover,
it follows from Lemma~\ref{lem:1} and the strict version of Davis'
convexity theorem \cite{davis57:_convex_functions,%
lewis96:_convex_analysis} that $\area(C_\lambda)$ is a strictly
convex function of $\lambda$. More detailed arguments can be found in
\cite{weber10:_davis_convexity_theorem,%
weber10:_minimal_area_conics}. The important fact to remember is
that two enclosing conics $C_0$ and $C_1$ of the same area give rise
to an in-between conic $C_\lambda$ of lesser area. Thus, the
assumption of two minimal area conics leads to a contradiction. Note
that convexity of $\area(C_\lambda)$ in the general (non-concentric)
case is not implied by Davis' convexity theorem.
\section{Uniqueness in the general case.}
\label{sec:uniqueness}
Now we come to the proof of Theorem~\ref{th:1}, the general uniqueness
result. As usual, existence follows from compactness arguments. The
basic ideas and initial steps in the proof of uniqueness are not
different from the proof of Theorem~8 in
\cite{weber10:_minimal_area_conics}. We give an outline:
\begin{itemize}
\item Assume existence of two minimal enclosing ellipses $C_0$
and~$C_1$.
\item Find the unique (hyperbolic) half-turn $\eta$ (an involutory
  hyperbolic rotation) such that $C^\star_1 = \eta(C_1)$ and
$C^\star_1$ is concentric with~$C_0$.
\item Define in-between ellipses $C_\lambda = (1-\lambda) C_0 +
\lambda C_1$ and $C^\star_\lambda = (1-\lambda) C_0 + \lambda
C^\star_1$ according to Definition~\ref{def:in-between-ellipse}.
\item Show that there exists $\varepsilon > 0$ such that
$\area(C_\lambda) < \area(C^\star_\lambda)$ for $0 < \lambda <
\varepsilon$. Because of $\area(C^\star_\lambda) \le \area(C_0) =
\area(C_1)$ (with equality iff $C_1^\star = C_0$) this contradicts
the assumed minimality of $C_0$~and~$C_1$.
\end{itemize}
Existence of $\varepsilon$ in the last step of this program can be
proved by showing the inequality
\begin{equation}
\label{eq:9}
\dpd{\area(C^\star_\lambda)}{\lambda}\Big|_{\lambda=0}
<
\dpd{\area(C_\lambda)}{\lambda}\Big|_{\lambda=0}.
\end{equation}
The advantage of this approach is that both sides of \eqref{eq:9} can
be readily computed from the normalized equations that describe $C_0$,
$C_1$, and $C^\star_1$. In particular, the cubic problem of
calculating the eigenvalues of the matrices describing $C_\lambda$ or
$C^\star_\lambda$ is avoided.
In order to follow the outline of the proof of Theorem~\ref{th:1} we
have to compute the ellipses $C_0$, $C_1$ and $C^\star_1$ in a
sufficiently general way. By Theorem~\ref{th:3}, the centers of $C_0$
and $C_1$ can be assumed to be different. Thus, there exists a unique
mid-point $r$ of their respective centers $c_0$ and $c_1$. Define
$C^\star_1$ as the ellipse obtained by applying the half-turn with
center $r$ to $C_1$. The ellipses $C_0$, $C_1$, and $C^\star_1$ are
described by matrices $M_0$, $M_1$, and $M^\star_1$ with respective
eigenvalues
\begin{equation*}
e(M_0) = (1,\nu_{0,1},\nu_{0,2}),
\quad
e(M_1) = e(M^\star_1) = (1,\nu_{1,1},\nu_{1,2}).
\end{equation*}
We would like to make some admissible assumptions on these
eigenvalues. Because of $\area(C_0) = \area(C_1)$, we have
\begin{equation}
\label{eq:10}
1 < \nu_{0,1} \le \nu_{1,1} \le \nu_{1,2} \le \nu_{0,2}.
\end{equation}
If $\nu_{0,1} = \nu_{1,1}$ or $\nu_{1,2} = \nu_{0,2}$, \eqref{eq:10}
holds with equality throughout and both ellipses are actually
congruent circles. In this case a simple construction produces a
smaller enclosing circle (Figure~\ref{fig:two-circles}): Denote the
two intersection points of $C_0$ and $C_1$ by $s_0$ and $s_1$. By
elementary hyperbolic geometry, the circle $S$ over the diameter
$s_0$, $s_1$ is smaller than $C_0$ and $C_1$ and it contains the
common interior of $C_0$ and~$C_1$.
\begin{figure}
\centering
\includegraphics{img/two-circles}
\caption{The case of two circles}
\label{fig:two-circles}
\end{figure}
Thus, the case of two congruent circles can be excluded and we may
assume that the eigenvalues of $M_0$ and $M_1$ are ordered
according to
\begin{equation}
\label{eq:11}
1 < \nu_{0,1} < \nu_{1,1} \le \nu_{1,2} < \nu_{0,2}.
\end{equation}
Now we compute the derivative of the area function \eqref{eq:6} with
respect to $\lambda$. For that purpose, we assume that $C_0$ is given
by the normal form~\eqref{eq:4} and $C_1$ is obtained from an ellipse
in this normal form by a hyperbolic rotation, that is,
\begin{equation}
\label{eq:12}
M_0 = \diag(-1, \nu_{0,1}, \nu_{0,2}),
\quad
M_1 = (Q^{-1})^\mathrm{T} \cdot \diag(-1, \nu_{1,1}, \nu_{1,2}) \cdot Q^{-1}
\end{equation}
with the hyperbolic rotation matrix
\begin{equation}
\label{eq:13}
\begin{gathered}
Q =
\begin{pmatrix}
q_0^2 + q_1^2 + q_2^2 + q_3^2 & 2(q_0 q_3 + q_1 q_2) & 2(q_1 q_3 - q_0 q_2) \\
2(q_0 q_3 - q_1 q_2) & q_0^2 - q_1^2 - q_2^2 + q_3^2 & 2(q_0 q_1 - q_2 q_3) \\
2(-q_0 q_2 - q_1 q_3) & 2(-q_0 q_1 - q_2 q_3) & q_0^2 - q_1^2 + q_2^2 - q_3^2
\end{pmatrix},\\
q_0^2 + q_1^2 - q_2^2 - q_3^2 = 1.
\end{gathered}
\end{equation}
Up to the irrelevant sign of $q_1$ and an index shift, this is
precisely Equation~(5) of \cite{oezdemir06:_minkowski_rotations}. The
rotation angle $-2\xi$ is given by $q_0 = \cos\xi$, the axis direction
is $(-q_1, q_2, q_3)^\mathrm{T}$. A hyperbolic half-turn is obtained by
substituting $q_0 = 0$ into \eqref{eq:13}. In this case, $Q \cdot Q$
indeed equals the unit matrix $\diag(1,1,1)$.
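For instance, the upper left entry of $Q \cdot Q$ for $q_0 = 0$ equals
\begin{equation*}
(q_1^2+q_2^2+q_3^2)^2 - 4q_1^2q_2^2 - 4q_1^2q_3^2 = (q_1^2-q_2^2-q_3^2)^2 = 1
\end{equation*}
by the normalization in \eqref{eq:13}; the remaining entries can be checked in the same way.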
The matrix $M_\lambda$ to $C_\lambda$ is computed according to
Definition~\ref{def:in-between-ellipse}. Its ordered eigenvalues
$(\nu_0, \nu_1, \nu_2)$ are functions of $\lambda$. In the vicinity of
$\lambda = 0$ we have $\nu_0(\lambda) > 0$ and $1 < \nu_1(\lambda) <
\nu_2(\lambda)$. These eigenvalues are implicitly defined as roots of
the characteristic polynomial $P(\lambda, \nu(\lambda)) =
\det(M_\lambda - \nu I)$ of $M_\lambda$ where $I$ is the matrix
defined in \eqref{eq:3}. For $\lambda = 0$ we know the values of
these roots:
\begin{equation*}
\nu_0(0) = 1,\quad
\nu_1(0) = \nu_{0,1},\quad
\nu_2(0) = \nu_{0,2}.
\end{equation*}
By implicit derivation we have
\begin{equation*}
\dod{\nu_i}{\lambda}(0) =
-\frac{\pd{P}{\lambda}(0, \nu_i(0))}{\pd{P}{\nu}(0, \nu_i(0))},
\quad i = 0,1,2.
\end{equation*}
Furthermore, we can compute the partial derivatives
\begin{equation*}
\dpd{\area(\nu_0, \nu_1, \nu_2)}{\nu_i}, \quad i = 0,1,2
\end{equation*}
of \eqref{eq:6}. Using the chain rule
\begin{equation*}
\dpd{\area(C_\lambda)}{\lambda}\Big|_{\lambda=0}
= \dpd{\area}{\nu_0} \dpd{\nu_0}{\lambda}\Big|_{\lambda=0}
+ \dpd{\area}{\nu_1} \dpd{\nu_1}{\lambda}\Big|_{\lambda=0}
+ \dpd{\area}{\nu_2} \dpd{\nu_2}{\lambda}\Big|_{\lambda=0},
\end{equation*}
we find
\begin{equation}
\label{eq:14}
\dpd{\area(C_\lambda)}{\lambda}\Big|_{\lambda=0} =
-\frac{1}{2}\int_{-\pi}^\pi \frac{D}{N} \dif\varphi
\end{equation}
where
\begin{equation*}
\begin{aligned}
D =\mbox{} &((q_{1,2}^2 \nu_{1,1} + q_{1,3}^2 \nu_{1,2} - q_{1,1}^2)\nu_{0,1} +
q_{2,2}^2 \nu_{1,1} + q_{2,3}^2 \nu_{1,2} - q_{2,1}^2) \sin^2\varphi + \\
&((q_{1,2}^2 \nu_{1,1} + q_{1,3}^2 \nu_{1,2} - q_{1,1}^2)\nu_{0,2} +
q_{3,2}^2 \nu_{1,1} + q_{3,3}^2 \nu_{1,2} - q_{3,1}^2) \cos^2\varphi,\\
N =\mbox{}& (\nu_{0,1} \sin^2 \varphi + \nu_{0,2} \cos^2 \varphi - 1)^{3/2}
(\nu_{0,1} \sin^2 \varphi + \nu_{0,2} \cos^2 \varphi)^{1/2}
\end{aligned}
\end{equation*}
and $q_{i,j}$ are the entries of the matrix~\eqref{eq:13}. It will be
convenient to write \eqref{eq:14} in terms of the first and second
complete elliptic integrals
\begin{equation}
\label{eq:15}
K(z) = \int_0^1 \frac{1}{\sqrt{1-t^2} \sqrt{1-z^2t^2}} \dif t
\quad\text{and}\quad
E(z) = \int_0^1 \frac{\sqrt{1-z^2t^2}}{\sqrt{1-t^2}} \dif t.
\end{equation}
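Comparing the integrands in \eqref{eq:15}, we note that
\begin{equation*}
\frac{\sqrt{1-z^2t^2}}{\sqrt{1-t^2}} < \frac{1}{\sqrt{1-t^2}\sqrt{1-z^2t^2}}
\quad\text{for } t, z \in (0,1),
\end{equation*}
whence $E(z) < K(z)$ for all $z \in (0,1)$, a fact we will use repeatedly.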
Since we will evaluate them only at
\begin{equation*}
f = \sqrt{\frac{\nu_{0,2}-\nu_{0,1}}{(\nu_{0,2} - 1) \nu_{0,1}}},
\end{equation*}
we use the abbreviations $\bar{E} := E(f)$ and $\bar{K} := K(f)$. By
\eqref{eq:11}, $f$ is always real and between 0 and 1. Substituting
\begin{equation*}
\varphi = \arcsin\sqrt{\frac{\nu_{0,2}-x}{\nu_{0,2}-\nu_{0,1}}},
\end{equation*}
and noting that $2(\nu_{0,2}-x)(x-\nu_{0,1}) = 2(\nu_{0,1} -
\nu_{0,2}) \cos\varphi \sin\varphi$ we can express the derivative of
the area function in terms of $\bar{E}$ and $\bar{K}$:
\begin{multline}
\label{eq:16}
\dpd{\area(C_\lambda)}{\lambda} \Big|_{\lambda=0} =
\frac{2}
{\sqrt{(\nu_{0,2}-1)\nu_{0,1}}(\nu_{0,2}-\nu_{0,1})(\nu_{0,1}-1)}\\
(
\mathrm{A} (q_{1,2}^2 \nu_{1,1} + q_{1,3}^2 \nu_{1,2} - q_{1,1}^2) +
\mathrm{B} (q_{2,2}^2 \nu_{1,1} + q_{2,3}^2 \nu_{1,2} - q_{2,1}^2) +
\Gamma (q_{3,2}^2 \nu_{1,1} + q_{3,3}^2 \nu_{1,2} - q_{3,1}^2)
)
\end{multline}
where
\begin{equation}
\label{eq:17}
\begin{gathered}
\mathrm{A} = -\nu_{0,1} (\nu_{0,2}-\nu_{0,1}) \bar{E},
\quad
\mathrm{B} = \nu_{0,2} (\nu_{0,1}-1) \bar{K} - \nu_{0,1} (\nu_{0,2}-1) \bar{E},\\
\Gamma = \nu_{0,1} (\nu_{0,1}-1) (\bar{E}-\bar{K}).
\end{gathered}
\end{equation}
Having computed \eqref{eq:16}, the preparatory work for the final
(big) step in the proof of Theorem~\ref{th:1} is completed. We
formulate the last step as a lemma:
\begin{lemma}[Hyperbolic Half-Turn Lemma]
\label{lem:half-turn}
Consider three ellipses $C_0$, $C_1$, $C^\star_1$ of equal
area. Assume that
\begin{itemize}
\item $C_0$ and $C^\star_1$ are concentric,
\item $C_1$ is obtained from $C^\star_1$ by a half-turn,
\item the eigenvalues $\nu_{i,1}$, $\nu_{i,2}$ of the normalized
matrix $M_i$ to $C_i$ ($i=0,1$) satisfy ~\eqref{eq:11}, and
\item $H(\nu_{i,1}, \nu_{i,2}) \le 0$ where $H$ is defined in
Equation~\eqref{eq:1}.
\end{itemize}
Then the area of $C_\lambda = (1-\lambda)C_0 + \lambda C_1$ is
smaller than the area of $C^\star_\lambda = (1-\lambda)C_0 + \lambda
C^\star_1$, at least in the vicinity of $\lambda = 0$.
\end{lemma}
In order to prove Lemma~\ref{lem:half-turn}, we compare the
derivatives of the areas of $C_\lambda$ and $C^\star_\lambda$ with
respect to $\lambda$ at $\lambda = 0$. The ellipse $C^\star_1$ can be
obtained from an ellipse in normal form~\eqref{eq:4} by a rotation
about $(1, 0, 0)^\mathrm{T}$ through $\zeta$. We can compute the matrix $M_1$
as in~\eqref{eq:12} by substituting
\begin{equation*}
q_0 = \cos\tfrac{\zeta}{2},
\quad
q_1 = -\sin\tfrac{\zeta}{2},
\quad
q_2 = q_3 = 0
\end{equation*}
into the matrix~\eqref{eq:13}. Plugging this into
Equation~\eqref{eq:16} yields
\begin{equation*}
\frac{1}{2}\dpd{\area(C^\star_\lambda)}{\lambda} \Big|_{\lambda=0} =
\frac{D^\star_1}{N^\star_1}
\end{equation*}
where
\begin{equation*}
\begin{aligned}
D^\star_1 &= -\mathrm{A} +
(\mathrm{B}\cos^2\zeta+\Gamma\sin^2\zeta)\nu_{1,1} +
(\mathrm{B}\sin^2\zeta+\Gamma\cos^2\zeta)\nu_{1,2},\\
N^\star_1 &= \sqrt{(\nu_{0,2}-1) \nu_{0,1}}(\nu_{0,2} - \nu_{0,1})(\nu_{0,1} - 1),
\end{aligned}
\end{equation*}
and $\mathrm{A}$, $\mathrm{B}$, $\Gamma$ are as in~\eqref{eq:17}.
The ellipse $C_1$ is obtained by a half-turn from $C^\star_1$ about
the rotation axis defined by the unit vector
$r=(r_1,r_2,r_3)^\mathrm{T}$. The matrix $Q$ in~\eqref{eq:13} is the product
of the rotation matrix about $(1, 0, 0)^\mathrm{T}$ through $\zeta$ and a
half-turn rotation matrix about the unit vector $r$. The latter is
obtained by substituting
\begin{equation*}
q_0 = 0,\quad
q_1 = -r_1,\quad
q_2 = r_2,\quad
q_3 = r_3
\end{equation*}
into Equation~\eqref{eq:13}. Plugging the entries of the product
matrix into \eqref{eq:16} yields
\begin{equation*}
\frac{1}{2} \dpd{\area(C_\lambda)}{\lambda} \Big|_{\lambda=0} \equiv \frac{D_1}{N_1}
\mod (r_1^2-r_2^2-r_3^2-1)
\end{equation*}
where $N_1 = N_1^\star$ and
\begin{equation*}
\begin{aligned}
D_1 =&\phantom{\mbox{}+\mbox{}}4r_2r_3((2\mathrm{A}+\mathrm{B}+\Gamma)r_1^2+(\mathrm{B}-\Gamma)r_2^2-(\mathrm{B}-\Gamma)r_3^2)(\nu_{1,1}-\nu_{1,2})\sin\zeta\cos\zeta\\
&+ (4(\mathrm{A}+\mathrm{B})r_1^2r_2^2-4(\mathrm{A}+\Gamma)r_1^2r_3^2-8(\mathrm{B}-\Gamma)r_2^2 r_3^2+\mathrm{B}-\Gamma)(\nu_{1,1}-\nu_{1,2})\cos^2\zeta\\
&+ 4(\mathrm{A}+\mathrm{B})r_1^2r_2^2(\nu_{1,2}-1)+4(\mathrm{A}+\Gamma)r_1^2r_3^2(\nu_{1,1}-1)\\
&+ 4(\mathrm{B}-\Gamma)r_2^2r_3^2(\nu_{1,1}-\nu_{1,2})+\Gamma\nu_{1,1}+\mathrm{B}\nu_{1,2}-\mathrm{A}.
\end{aligned}
\end{equation*}
Now we are going to prove the inequality $D_1 - D^\star_1 < 0$ for
$\zeta \in [0,\frac{\pi}{2}]$. We substitute $\zeta = 2\arctan t$ into
its left-hand side and obtain a rational expression in $t$. Clearing
the positive denominator $(1+t^2)^2$, we are left with a polynomial
$P(t)$ of degree four whose negativity on $[0,1]$ has to be shown. To
do this, we write $P(t)$ with respect to the Bernstein basis as
\begin{equation}
\label{eq:18}
P(t) = \sum_{i=0}^4 p_i B^4_i(t)
\quad\text{where}\quad
B^4_i(t) = \binom{4}{i}(1-t)^{4-i}t^i
\end{equation}
and show non-positivity of the coefficients $p_1$, $p_2$, $p_3$ and
negativity of the coefficients $p_0$ and $p_4$. After a
straightforward basis transformation and reducing modulo
$r_1^2-r_2^2-r_3^2-1$ we find
\begin{equation*}
p_0 = 4(
(\nu_{1,1}-1)(\mathrm{A}+\mathrm{B})r_1^2r_2^2 +
(\nu_{1,2}-1)(\mathrm{A}+\Gamma)r_1^2r_3^2 +
(\nu_{1,2}-\nu_{1,1})(\mathrm{B}-\Gamma)r_2^2r_3^2
),
\end{equation*}
\begin{multline*}
p_1 =
4 (
(\nu_{1,1}-1)(\mathrm{A}+\mathrm{B})r_1^2r_2^2
+ (\nu_{1,2}-1)(\mathrm{A}+\Gamma)r_1^2r_3^2
+ (\nu_{1,2}-\nu_{1,1})(\mathrm{B}-\Gamma)r_2^2r_3^2
)\\
+2r_2r_3(\nu_{1,2}-\nu_{1,1})(
-(2\mathrm{A}+\mathrm{B}+\Gamma)r_1^2
-(\mathrm{B}-\Gamma)r_2^2
+(\mathrm{B}-\Gamma)r_3^2
),
\end{multline*}
\begin{multline*}
3p_2 =
8(\nu_{1,1}+\nu_{1,2}-2)(
(\mathrm{A}+\mathrm{B})r_1^2r_2^2
+(\mathrm{A}+\Gamma)r_1^2r_3^2
)\\
+12r_2r_3(\nu_{1,2}-\nu_{1,1})(
-(2\mathrm{A}+\mathrm{B}+\Gamma)r_1^2
-(\mathrm{B}-\Gamma)r_2^2
+(\mathrm{B}-\Gamma)r_3^2
),
\end{multline*}
\begin{multline*}
p_3 =
8(
(\nu_{1,2}-1)(\mathrm{A}+\mathrm{B})r_1^2r_2^2
+(\nu_{1,1}-1)(\mathrm{A}+\Gamma)r_1^2r_3^2
-(\nu_{1,2}-\nu_{1,1})(\mathrm{B}-\Gamma)r_2^2r_3^2
)\\
+4r_2r_3(\nu_{1,2}-\nu_{1,1})(
-(2\mathrm{A}+\mathrm{B}+\Gamma)r_1^2
-(\mathrm{B}-\Gamma)r_2^2
+(\mathrm{B}-\Gamma)r_3^2
),
\end{multline*}
\begin{equation*}
p_4 =
16(
(\nu_{1,2}-1) (\mathrm{A}+\mathrm{B})r_1^2r_2^2
+ (\nu_{1,1}-1)(\mathrm{A}+\Gamma)r_1^2r_3^2
- (\nu_{1,2}-\nu_{1,1})(\mathrm{B}-\Gamma)r_2^2r_3^2
).
\end{equation*}
Recall now Equation~\eqref{eq:11} ($1 < \nu_{0,1} < \nu_{1,1} \le
\nu_{1,2} < \nu_{0,2}$) and $r_1^2 - r_2^2 - r_3^2 = 1$ and observe
that
\begin{itemize}
\item $\mathrm{A} < \mathrm{B} < \Gamma$; this is proved in Lemma~\ref{lem:2} and
Lemma~\ref{lem:3} in the appendix.
\item $\Gamma < 0$; this follows from $\bar{E} < \bar{K}$ and $\nu_{0,1} >
1$.
\end{itemize}
Under these conditions, the negativity of $p_0$ is clear except when
$r_2 = r_3 = 0$. But this is the concentric case $C_1 = C^\star_1$ and
need not be considered. The non-positivity of the coefficients $p_1$,
$p_2$, $p_3$, and the negativity of $p_4$ is shown in
Lemmas~\ref{lem:6} and \ref{lem:5} below. This concludes the proof of
the Half-Turn Lemma and, thus, also the proof of Theorem~\ref{th:1}.
\begin{example}
\label{ex:1}
We use the prerequisites of Theorem~\ref{th:1} on the radii $\varrho$ and
$\mathrm{R}$ in Lemma~\ref{lem:6}. But one might wonder whether the
Half-Turn Lemma remains true without these assumptions. The answer
to this question is negative. We can provide an example where the
polynomial $P(t)$ attains positive values on $(0,1)$.
Substituting $r_1^2 = r_2^2 + r_3^2 + 1$, the coefficient $p_1$ can
be written as
\begin{equation*}
\begin{aligned}
p_1 &= \nu_{1,2}(
4r_2^2r_3(\mathrm{A}+\mathrm{B})(r_3-r_2)
+4r_3^2(\mathrm{A}+\Gamma)(r_3^2-r_2r_3+1)
-2(2\mathrm{A}+\mathrm{B}+\Gamma)r_2r_3
) \\
&+\nu_{1,1}(
4(\mathrm{A}+\mathrm{B})r_2^2(r_2^2+r_2r_3+1)
+4(\mathrm{A}+\Gamma)r_2r_3^2(r_2+r_3)
+2(2\mathrm{A}+\mathrm{B}+\Gamma)r_2r_3
) \\
&-4(1+r_2^2+r_3^2)(r_2^2(\mathrm{A}+\mathrm{B})+r_3^2(\mathrm{A}+\Gamma)).
\end{aligned}
\end{equation*}
Assuming $r_2$, $r_3 > 0$ we see that
\begin{itemize}
\item the coefficient of $\nu_{1,1}$ is always negative and
\item it is possible to choose $\nu_{0,1}$, $\nu_{0,2}$, $r_2$, and
$r_3$ so that the coefficient of $\nu_{1,2}$ is positive.
\end{itemize}
Consequently $p_1$ can be made positive for large $\nu_{1,2}$. The
choice
\begin{equation*}
\nu_{0,1} = \nu_{1,1} = 1.1,\quad
\nu_{0,2} = \nu_{1,2} = 90,\quad
r_2 = 0.9 \cos(0.3),\quad
r_3 = 0.9 \sin(0.3),
\end{equation*}
accomplishes this and even makes $P(t)$ attain positive values for
$t \in (0,1)$ (the zeros are $t \approx 0.1272$ and $t \approx
0.1389$). Note that this does not imply
\begin{equation*}
\pd{\area(C_\lambda)}{\lambda}\Big|_{\lambda = 0} > 0
\end{equation*}
and, thus, constitutes no counter-example to the statement that the
area of $C_\lambda$ is smaller than the area of $C_0$ and $C_1$. We
are not aware of such a counter-example.
\end{example}
\section{Conclusion and future research}
\label{sec:conclusion}
We proved uniqueness results for minimal enclosing ellipses in the
hyperbolic plane. The general result (Theorem~\ref{th:1}) involves
rather cumbersome but straightforward calculations. The differences
to the elliptic case are mainly in the final estimates for the
coefficients of the polynomial $P(t)$ in \eqref{eq:18} and can be
found in the appendix.
It is apparent that Theorem~\ref{th:1} leaves room for improvements.
Pushing back the frontier dictated by the inequality \eqref{eq:1} in
Theorem~\ref{th:1} would be nice.  Substantial steps towards answering the
question whether the minimal area ellipse to all compact and
full-dimensional sets $F$ in the hyperbolic plane is unique or not
would be great.
Note that there is a subtle difference to the situation in the
elliptic plane. In \cite{weber10:_minimal_area_conics}, we presented
an example from which we inferred that uniqueness in the elliptic
plane cannot be proved by means of our construction of in-between
conics. In the hyperbolic plane, we are not aware of such a
configuration. Example~\ref{ex:1} only shows that the estimate of the
derivative of the area function is insufficient. Thus, there is a
certain hope that a general uniqueness result can be proved by means
of our construction.
Since uniqueness or non-uniqueness of minimal enclosing ellipses in
the elliptic and hyperbolic plane remains a difficult topic, one might
try to aim at a weaker result and consider only ``typical'' (in the
sense of Baire categories, see \cite{gruber85:_results_baire,%
gruber93:_baire_categories}) convex sets~$\conv{F}$.
We would also like to mention that \cite{weber10:_minimal_area_conics}
and this article are the only results on extremal quadrics in
non-Euclidean geometries that we are aware of. We can conceive
numerous possibilities for generalizations. They pertain to the
dimension of the surrounding space, the type of the quadric and
enclosed set (for example minimal enclosing hyperbolas to line sets as
in \cite{schroecker07:_minim_hyper}) the measure for the quadric's
size (volume, surface area etc.), and the replacement of ``minimal
enclosing'' by ``maximal inscribed'' quadrics.
The attentive reader will have noticed that our method of proving
Theorem~\ref{th:1} can be adapted to these generalizations. Having defined an
``in-between'' quadric $Q_\lambda$ by means of a suitable matrix
convex combination, it might be infeasible to compute the size
$Q_\lambda$ in a form that allows further processing. But, provided
the size function's derivatives with respect to the matrix eigenvalues
can be computed, it is, at least in principle, possible to obtain an
explicit formula for the derivative of the size of $Q_\lambda$ for
$\lambda = 0$. Its negativity has to be shown so that the uniqueness
problem is made accessible to numerous tools and techniques related to
inequalities.
In the Euclidean setting, the mere uniqueness result is less important
than John's characterization of it via his famous decomposition of the
identity. The original reference is the old paper
\cite{john48:_studies_and_essays}. But, following
\cite{ball92:_ellipsoids_max_vol}, many contemporary authors
considered this topic \cite{gruber05:_arithmetic_proof,%
gruber08:_john_type,%
lutwak05:_john_ellipsoids,%
bastero02:_johns_decomposition,%
gordon04:_johns_decomposition}. Elliptic and hyperbolic versions of
John's characterization seem to be a worthy topic of future research.
\section*{Appendix. Proofs of auxiliary results}
\begin{lemma}
\label{lem:2}
For $\mathrm{A}$ and $\mathrm{B}$ as in~\eqref{eq:17} we have $\mathrm{A} < \mathrm{B}$.
\end{lemma}
\begin{proof}
We show that $\mathrm{A}-\mathrm{B} < 0$. By \eqref{eq:17} we have
\begin{equation*}
\mathrm{A} - \mathrm{B} = (\nu_{0,1}-1) (\nu_{0,1} \bar{E} - \nu_{0,2} \bar{K}).
\end{equation*}
This is negative because of $\bar{E} < \bar{K}$ and $1 < \nu_{0,1} <
\nu_{0,2}$.
\end{proof}
\begin{lemma}
\label{lem:3}
For $\mathrm{B}$ and $\Gamma$ as in~\eqref{eq:17} we have $\mathrm{B} < \Gamma$.
\end{lemma}
\begin{proof}
We let $\Delta = \mathrm{B} - \Gamma$ and view $\Delta$ as a function of $\nu_{0,1}$
and $\nu_{0,2}$. Its negativity for $1 < \nu_{0,1} < \nu_{0,2}$
follows from three facts:
\begin{itemize}
\item $\Delta = 0$ for $\nu_{0,1} = \nu_{0,2}$ (this is obvious because
in this case we have $\bar{E} = \bar{K}$),
\item $\pd{\Delta}{\nu_{0,2}} = 0$ for $\nu_{0,1} = \nu_{0,2}$, and
\item $\Delta$ is concave in $\nu_{0,2}$ for $1 < \nu_{0,1} <
\nu_{0,2}$.
\end{itemize}
We compute the first partial derivative of $\Delta$ with respect to
$\nu_{0,2}$:
\begin{equation*}
\pd{\Delta}{\nu_{0,2}} =
\frac{(2\nu_{0,2}(1-\nu_{0,2})+\nu_{0,1}-1)\nu_{0,1}\bar{E}}
{2\nu_{0,2}(\nu_{0,2}-1)} +
\frac{(2\nu_{0,2}-1)(\nu_{0,1}-1)\bar{K}}
{2(\nu_{0,2}-1)}.
\end{equation*}
It vanishes for $\nu_{0,1} = \nu_{0,2}$. The second partial
derivative of $\Delta$ with respect to $\nu_{0,2}$ equals
\begin{equation*}
\dpd[2]{\Delta}{\nu_{0,2}} =
\frac{(\nu_{0,1}-1)}{4\nu_{0,2}^2(\nu_{0,2}-1)^2} J_1
\quad\text{where}\quad
J_1 = \nu_{0,2}(\nu_{0,1}-1)\bar{K}-\nu_{0,1}(5\nu_{0,2}-2)\bar{E}.
\end{equation*}
We have to show that it is negative. The factor before $J_1$ is
positive. To see the negativity of $J_1$ itself we write it in the
integral form (see~\eqref{eq:15})
\begin{equation*}
J_1 = \int_0^1 \frac{J_2}{\sqrt{1-t^2} \sqrt{1-f^2 t^2}} \dif t,
\end{equation*}
where
\begin{equation*}
J_2 =
\nu_{0,2}(\nu_{0,1}-1) - \nu_{0,1} (5\nu_{0,2}-2)
\Bigl(
1 - t^2 \frac{\nu_{0,2}-\nu_{0,1}}{\nu_{0,1} (\nu_{0,2}-1)}
\Bigr).
\end{equation*}
The term $J_2$ is linear in $t^2$. For $t=0$ it equals $2\nu_{0,1}
(1-2\nu_{0,2}) - \nu_{0,2} < 0$ and for $t=1$ it equals $-\nu_{0,2}
(\nu_{0,1}-1) (4\nu_{0,2}-1)/(\nu_{0,2}-1) < 0$. Thus, $J_2 < 0$ for
$t \in [0,1]$. This implies $J_1 < 0$ and we see that $\Delta$ is
indeed concave for $1 < \nu_{0,1} < \nu_{0,2}$.
\end{proof}
We will deduce non-positivity of the Bernstein coefficients $p_1$,
\ldots, $p_3$ from the inequality \eqref{eq:1} and the additional
inequalities
\begin{align}
h_1(\nu_1,\nu_2) &:= \nu_2 - 5\nu_1 + 4 \le 0,\label{eq:19}\\
h_2(\nu_1,\nu_2) &:= -5\nu_1^2+\nu_1\nu_2+\nu_1+\nu_2+2 \le 0,\label{eq:20}\\
h_3(\nu_1,\nu_2) &:= \nu_2^2 -5\nu_1\nu_2-2\nu_1+4\nu_2+2 \le 0,\label{eq:21}\\
h_4(\nu_1,\nu_2) &:= 5\nu_2^2 - 13\nu_1\nu_2 - 2\nu_1 + 6\nu_2 + 4 \le 0,\label{eq:22}\\
h_5(\nu_1,\nu_2) &:= -5\nu_1^2+\nu_1\nu_2-\nu_1+3\nu_2+2 \le 0,\label{eq:23}\\
h_6(\nu_1,\nu_2) &:= \nu_2^2-5\nu_1\nu_2+2\nu_2+2 \le 0,\label{eq:24}
\end{align}
which are all simple consequences of \eqref{eq:1}. We state this in
Lemma~\ref{lem:9}, below.
The assumptions of Theorem~\ref{th:1} guarantee that these
inequalities are fulfilled for $\nu_1 = \nu_{0,1}$, $\nu_2 =
\nu_{0,2}$. Thus, we only have to show that the inequalities are
satisfied on the set
\begin{equation*}
U := \{(\nu_1,\nu_2) \mid 1 < \nu_1 < \nu_2\}.
\end{equation*}
\begin{lemma}
\label{lem:9}
{\normalfont (a)} If a point $(\nu_1,\nu_2) \in U$ satisfies the
inequality \eqref{eq:1}, it also satisfies the inequalities
\eqref{eq:19}--\eqref{eq:24}.\par
{\normalfont (b)} If a point $(\nu_1^\star, \nu_2^\star)$ satisfies
the inequalities \eqref{eq:1}, and \eqref{eq:19}--\eqref{eq:24}, the
same is true for all points $(\nu_1,\nu_2) \in U$ with $\nu_1 \ge
\nu_1^\star$ and $\nu_2 \le \nu_2^\star$.
\end{lemma}
\begin{proof}
It is an elementary exercise to verify that the curves defined by
the implicit equations $H$ and $h_j$ over the closure of $U$ are the
graphs of strictly monotone increasing functions $F(\nu_1)$ and
$f_j(\nu_1)$,
\begin{equation*}
F, f_j\colon [1,\infty) \to [1,\infty),
\end{equation*}
for $j \in \{1,\ldots,6\}$. This implies assertion (b).
The functions $f_j-F$ are strictly monotone increasing as well.
This and the observation $F(1) = f_j(1) = 1$ imply assertion~(a).
\end{proof}
Figure~\ref{fig:hyp-graph} displays the hyperbolas $H(\nu_1,\nu_2) =
0$ and, as an example, $h_2(\nu_1,\nu_2) = 0$ together with the line
$\nu_1 = \nu_2$. The remaining curves are depicted in light-gray. The
region $U$ is dotted.
\begin{lemma}
\label{lem:6}
The coefficients $p_1$, $p_2$, and $p_3$ are not positive.
\end{lemma}
\begin{proof}
We substitute $r_1^2 = 1 + r_2^2 + r_3^2$ into $p_1$ and observe
that $p_1 = 0$ and $\pd{p_1}{r_2} = 0$ if $r_2 = r_3 = 0$. The
lemma's claim holds true if we can show that the $r_2$-parameter
lines of $p_1$, viewed as a function of $r_2$ and $r_3$, are
strictly concave, that is,
\begin{multline}
\label{eq:25}
\dpd[2]{p_1}{r_2} = 8(\mathrm{A}+\mathrm{B})(\nu_{1,1}-1)(6 r_2^2+1)\\
+8((\mathrm{A}+\mathrm{B})\nu_{1,2} +(\mathrm{A}+\Gamma)\nu_{1,1} -(2\mathrm{A}+\mathrm{B}+\Gamma))r_3^2
-24(\nu_{1,2}-\nu_{1,1})(\mathrm{A}+\mathrm{B}) r_2 r_3 < 0.
\end{multline}
The coefficient of $r_2r_3$ is positive, the remaining terms are
negative. By the inequality of arithmetic and geometric means we
have $r_2r_3 \le (r_2^2+r_3^2)/2$. We insert this into \eqref{eq:25}
to obtain
\begin{multline}
\label{eq:26}
\dpd[2]{p_1}{r_2} \le
4(\mathrm{A}+\mathrm{B})(3(-\nu_{1,2}+5\nu_{1,1}-4)r_2^2 + 2(\nu_{1,1}-1))\\
+ 4(-(\mathrm{A}+\mathrm{B})\nu_{1,2} +(5\mathrm{A}+3\mathrm{B}+2\Gamma)\nu_{1,1}-2(2\mathrm{A}+\mathrm{B}+\Gamma)) r_3^2 < 0.
\end{multline}
The first term is negative if $\nu_{1,2} -5\nu_{1,1} + 4 \le
0$. This is implied by $\nu_{0,2} - 5\nu_{0,1} + 4 \le 0$ and thus
follows from \eqref{eq:19}. In the second term the coefficient of
$r_3^2$ needs closer investigation. We want to show its
negativity. By \eqref{eq:11} we have
\begin{multline}
\label{eq:27}
(5\mathrm{A}+3\mathrm{B}+2\Gamma)\nu_{1,1} - (\mathrm{A}+\mathrm{B})\nu_{1,2} - 2(2\mathrm{A}+\mathrm{B}+\Gamma) \leq \\
(5\mathrm{A}+3\mathrm{B}+2\Gamma)\nu_{0,1} - (\mathrm{A}+\mathrm{B})\nu_{0,2} - 2(2\mathrm{A}+\mathrm{B}+\Gamma).
\end{multline}
Using \eqref{eq:15} and \eqref{eq:17}, we write the term on the
right in its integral form:
\begin{multline}
\label{eq:28}
(5\mathrm{A}+3\mathrm{B}+2\Gamma)\nu_{0,1} - (\mathrm{A}+\mathrm{B})\nu_{0,2} -2(2\mathrm{A}+\mathrm{B}+\Gamma)\\
= (\nu_{0,2}-\nu_{0,1}) \int_0^1 \frac{J_3}{\sqrt{1-t^2}\sqrt{1-f^2 t^2}} \dif t
\end{multline}
where
\begin{equation*}
J_3 =
-\frac{\nu_{0,2}-\nu_{0,1}}{\nu_{0,2}-1}(2\nu_{0,2}-7\nu_{0,1}+5)t^2
+\nu_{0,2}(\nu_{0,1}+1)+\nu_{0,1}(1-5\nu_{0,1})+2.
\end{equation*}
We see that $J_3$ is linear in $t^2$. For $t=0$ and $t=1$ it attains
the respective values
\begin{align}
J_3\big\vert_{t=0} & = -5\nu_{0,1}^2 + \nu_{0,1}\nu_{0,2} + \nu_{0,1} + \nu_{0,2} + 2,\label{eq:29} \\
J_3\big\vert_{t=1} & = \frac{\nu_{0,1}-1}{\nu_{0,2}-1}(\nu_{0,2}^2-5\nu_{0,1}\nu_{0,2}-2\nu_{0,1}+4\nu_{0,2}+2).\label{eq:30}
\end{align}
The right-hand side of \eqref{eq:29} is not positive by
\eqref{eq:20}. The right-hand side of \eqref{eq:30} is not positive
by \eqref{eq:21}. We conclude that the integrand $J_3$ is not
positive for $t \in [0,1]$ and the same is true for
$\pd[2]{p_1}{r_2}$. Hence, the coefficient $p_1$ as a function of
$r_2$ is concave with its maximum, $p_1 = 0$, attained at $r_2 = r_3
= 0$. Thus, $p_1$ is not positive.
The proofs of non-positivity of $p_2$ and $p_3$ run along exactly
the same lines. We only provide the relevant formulas and reduce
the explanatory text between them to a minimum.
Equations~\eqref{eq:25} and \eqref{eq:26} become
\begin{equation*}
\begin{aligned}
3\dpd[2]{p_2}{r_2} &= 16(\mathrm{A}+\mathrm{B})(\nu_{1,2}+\nu_{1,1}-2)(6r_2^2+1) +\\
& \qquad 16(2\mathrm{A}+\mathrm{B}+\Gamma)(\nu_{1,2}+\nu_{1,1}-2)r_3^2 - 144(\mathrm{A}+\mathrm{B})(\nu_{1,2}-\nu_{1,1})r_2r_3\\
&\leq 8(\mathrm{A}+\mathrm{B})(3(\nu_{1,2}+7\nu_{1,1}-8)r_2^2 +2(\nu_{1,2}+\nu_{1,1}-2)) +\\
& \qquad 8(-(5\mathrm{A}+7\mathrm{B}-2\Gamma)\nu_{1,2} + (13\mathrm{A}+11\mathrm{B}+2\Gamma)\nu_{1,1} - 4(2\mathrm{A}+\mathrm{B}+\Gamma))r_3^2.
\end{aligned}
\end{equation*}
Instead of \eqref{eq:27} and \eqref{eq:28} we have
\begin{equation*}
\begin{gathered}
-(5\mathrm{A}+7\mathrm{B}-2\Gamma)\nu_{1,2} + (13\mathrm{A}+11\mathrm{B}+2\Gamma)\nu_{1,1} - 4(2\mathrm{A}+\mathrm{B}+\Gamma) \le \\
-(5\mathrm{A}+7\mathrm{B}-2\Gamma)\nu_{0,2} + (13\mathrm{A}+11\mathrm{B}+2\Gamma)\nu_{0,1} - 4(2\mathrm{A}+\mathrm{B}+\Gamma) = \\
(\nu_{0,2}-\nu_{0,1}) \int_0^1 \frac{J_4}{\sqrt{1-t^2}\sqrt{1-f^2 t^2}} \dif t
\end{gathered}
\end{equation*}
where
\begin{equation*}
J_4 = -\frac{\nu_{0,2}-\nu_{0,1}}{\nu_{0,2}-1}(3(4\nu_{0,2}-5\nu_{0,1}+1)t^2
+ \nu_{0,2}(5\nu_{0,1}+7)+\nu_{0,1}(-13\nu_{0,1}-3)+4).
\end{equation*}
The non-positivity of $p_2$ follows from
\begin{align*}
J_4\big\vert_{t=0} & = -13\nu_{0,1}^2+5\nu_{0,1}\nu_{0,2}-3\nu_{0,1}+7\nu_{0,2}+4,\\
J_4\big\vert_{t=1} & = \frac{\nu_{0,1}-1}{\nu_{0,2}-1}(5\nu_{0,2}^2-13\nu_{0,1}\nu_{0,2}-2\nu_{0,1}+6\nu_{0,2}+4),
\end{align*}
\eqref{eq:1} and \eqref{eq:22}.
As to the coefficient $p_3$, Equations~\eqref{eq:25} and
\eqref{eq:26} are replaced by
\begin{equation*}
\begin{aligned}
\dpd[2]{p_3}{r_2} & = 16(\nu_{1,2}-1)(\mathrm{A}+\mathrm{B})(6r_2^2+1) + \\
& 16((\mathrm{A}+\Gamma)\nu_{1,2} + (\mathrm{A}+\mathrm{B})\nu_{1,1} - (2\mathrm{A}+\mathrm{B}+\Gamma))r_3^2 -
48(\nu_{1,2}-\nu_{1,1})(\mathrm{A}+\mathrm{B})r_2r_3 \\
& \leq 8(\mathrm{A}+\mathrm{B})(3(3\nu_{1,2}+\nu_{1,1}-4)r_2^2 +2(\nu_{1,2}-1)) + \\
& \qquad 8(-(\mathrm{A}+3\mathrm{B}-2\Gamma)\nu_{1,2} +5(\mathrm{A}+\mathrm{B})\nu_{1,1} -2(2\mathrm{A}+\mathrm{B}+\Gamma))r_3^2
\end{aligned}
\end{equation*}
and \eqref{eq:27} and \eqref{eq:28} by
\begin{equation*}
\begin{gathered}
-(\mathrm{A}+3\mathrm{B}-2\Gamma)\nu_{1,2} +5(\mathrm{A}+\mathrm{B})\nu_{1,1} -2(2\mathrm{A}+\mathrm{B}+\Gamma) \le \\
-(\mathrm{A}+3\mathrm{B}-2\Gamma)\nu_{0,2} +5(\mathrm{A}+\mathrm{B})\nu_{0,1} -2(2\mathrm{A}+\mathrm{B}+\Gamma) = \\
(\nu_{0,2}-\nu_{0,1}) \int_0^1 \frac{J_5}{\sqrt{1-t^2}\sqrt{1-f^2t^2}} \dif t
\end{gathered}
\end{equation*}
where
\begin{equation*}
J_5 =
-\frac{\nu_{0,2}-\nu_{0,1}}{\nu_{0,2}-1}((4\nu_{0,2}-5\nu_{0,1}+1)t^2
+\nu_{0,2}(\nu_{0,1}+3) -\nu_{0,1}(5\nu_{0,1}+1) +2).
\end{equation*}
The non-positivity of $p_3$ follows from
\begin{align*}
J_5\big\vert_{t=0}& = -5\nu_{0,1}^2+\nu_{0,1}\nu_{0,2}-\nu_{0,1}+3\nu_{0,2}+2,\\
J_5\big\vert_{t=1}& = \frac{\nu_{0,1}-1}{\nu_{0,2}-1}(\nu_{0,2}^2 -5\nu_{0,1}\nu_{0,2}+2\nu_{0,2}+2)
\end{align*}
and \eqref{eq:23} and \eqref{eq:24}.
\end{proof}
The negativity of the only remaining Bernstein coefficient can be
shown directly without resorting to Lemma~\ref{lem:9}:
\begin{lemma}
\label{lem:5}
The coefficient $p_4$ is negative.
\end{lemma}
\begin{proof}
We can write
\begin{equation}
\label{eq:31}
\frac{p_4}{16} =
(\nu_{1,2}-1)r_2^2((\mathrm{A}+\mathrm{B}) r_1^2 - (\mathrm{B}-\Gamma)r_3^2)
+ (\nu_{1,1}-1)r_3^2((\mathrm{A}+\Gamma) r_1^2 + (\mathrm{B}-\Gamma)r_2^2).
\end{equation}
The proof is finished, if we can show that the coefficients of
$(\nu_{1,2}-1) r_2^2$ and $(\nu_{1,1}-1) r_3^2$ in \eqref{eq:31} are
negative. For the coefficient of $(\nu_{1,2}-1)r_2^2$ we argue as
follows: $\mathrm{A} < \mathrm{B} < \Gamma < 0$ implies $\mathrm{A}+\mathrm{B} < \mathrm{A} < \mathrm{B} <
\mathrm{B}-\Gamma$ and $r_1^2 - r_2^2 - r_3^2 = 1$ implies $r_1^2 >
r_3^2$. Thus, $(\mathrm{A}+\mathrm{B})r_1^2-(\mathrm{B}-\Gamma)r_3^2 < 0$. The negativity
of the coefficient of $(\nu_{1,1}-1)r_3^2$ follows from $\mathrm{A} + \Gamma <
0$ and $\mathrm{B}-\Gamma < 0$.
\end{proof}
\section*{Acknowledgments}
The authors gratefully acknowledge support of this research by the
Austrian Science Foundation FWF under grant P21032 (Uniqueness Results
for Extremal Quadrics).
\bibliographystyle{plainnat}
| {
"timestamp": "2011-07-07T02:01:35",
"yymm": "1101",
"arxiv_id": "1101.4740",
"language": "en",
"url": "https://arxiv.org/abs/1101.4740",
"abstract": "We present uniqueness results for enclosing ellipses of minimal area in the hyperbolic plane. Uniqueness can be guaranteed if the minimizers are sought among all ellipses with prescribed axes or center. In the general case, we present a sufficient and easily verifiable criterion on the enclosed set that ensures uniqueness.",
"subjects": "Metric Geometry (math.MG)",
"title": "Minimal area ellipses in the hyperbolic plane",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9875683498785867,
"lm_q2_score": 0.8006920020959543,
"lm_q1q2_score": 0.7907380792708834
} |
https://arxiv.org/abs/2201.12932 | A new proof of the description of the convex hull of space curves with totally positive torsion | We give new proofs of the description of convex hulls of space curves $\gamma : [a,b] \to \mathbb{R}^{d}$ having totally positive torsion. These are curves such that all the leading principal minors of the $d\times d$ matrix $(\gamma', \gamma'', \ldots, \gamma^{(d)})$ are positive. In particular, we recover a parametric representation of the boundary of the convex hull, different formulas for its surface area and for the volume of the convex hull, and the solution to a general moment problem corresponding to $\gamma$. | \section{Introduction and a summary of main results}
The convex hull of a set $K \subset \mathbb{R}^{d}$ is defined as
\begin{align*}
\mathrm{conv}(K) = \left\{ \sum_{j=1}^{m} \lambda_{j}x_{j}\, :\, x_{j} \in K, \; \lambda_{j} \geq 0, \; \sum_{j=1}^{m} \lambda_{j}=1, \; j=1, \ldots, m, \; \text{for some}\; m\geq 1\right\}.
\end{align*}
Describing the convex hull of a given set $K$ is a basic problem in mathematics. By imposing additional geometric structure on $K$ one may hope to give a {\em simpler} description of $\mathrm{conv}(K)$. Perhaps a good starting point is when $K$ is a space curve, which is the topic of our paper.
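As a purely numerical illustration (ours, not used in any argument below), convex hulls of sampled curves can be computed with standard software; the helper name \texttt{sampled\_hull} is an assumption of this sketch.
\begin{verbatim}
# Numerical illustration (ours): approximate the convex hull of a
# sampled space curve with scipy's qhull wrapper.
import numpy as np
from scipy.spatial import ConvexHull

def sampled_hull(gamma, a=0.0, b=1.0, num=400):
    """Convex hull of the points gamma(t_i) on a uniform grid of [a, b]."""
    t = np.linspace(a, b, num)
    return ConvexHull(np.array([gamma(s) for s in t]))

# moment curve in R^3; it has totally positive torsion (see below)
hull = sampled_hull(lambda t: (t, t**2, t**3))
print(hull.volume)  # close to 1/180, the value the volume formulas below give
\end{verbatim}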
Let $[a,b]$ be an interval in $\mathbb{R}$, and let $\gamma_{1}(t), \ldots, \gamma_{n+1}(t)$ be real valued functions on $[a,b]$. We start with two main questions which are ultimately related to each other.
\begin{question}\label{que2} Describe the boundary of the convex hull of $\gamma([a,b])$, where
$$
\gamma(t)=(\gamma_{1}(t), \ldots, \gamma_{n+1}(t)), \quad t \in [a,b].
$$
\end{question}
The next question, known as the {\em general moment problem} \cite{Kem1968, Karlin1, krn}, is a certain probabilistic reformulation of Question~\ref{que2}.
\begin{question}\label{que1} Find
\begin{align}
M^{\mathrm{sup}}(x_{1}, \ldots, x_{n}) &\stackrel{\mathrm{def}}{=} \sup \; \{ \mathbb{E}\gamma_{n+1}(Y) \, :\, \mathbb{E}\gamma_{1}(Y)=x_{1}, \ldots , \mathbb{E} \gamma_{n}(Y)=x_{n}\}, \label{sup1}\\
M^{\mathrm{inf}}(x_{1}, \ldots, x_{n}) &\stackrel{\mathrm{def}}{=} \inf \; \{ \mathbb{E}\gamma_{n+1}(Y) \, :\, \mathbb{E}\gamma_{1}(Y)=x_{1}, \ldots , \mathbb{E} \gamma_{n}(Y)=x_{n}\}, \label{inf1}
\end{align}
where the supremum or infimum is taken over all random variables $Y$ with values in $[a,b]$ such that $\gamma_{j}(Y)$ are measurable for all $j$, $1\leq j \leq n+1$.
\end{question}
The answers to both of these questions are given in terms of {\em lower and upper principal representations} in two remarkable monographs \cite{krn, Karlin1} (see also the brief survey \cite{pin01}) under the assumption (A1), which says that the sequences $(1, \gamma_{1}(t), \ldots, \gamma_{n}(t))$ and $(1, \gamma_{1}(t), \ldots, \gamma_{n+1}(t))$ are $T_{+}$-systems on $[a,b]$; we refer the reader to Subsection~\ref{markovs} for more details.
In this paper we give a new self-contained geometric approach to both of these questions for a subclass of (A1), namely curves with so-called {\em totally positive torsion}.
\begin{definition}
A curve $\gamma \in C^{n+1}((a,b), \mathbb{R}^{n+1}) \cap C([a,b], \mathbb{R}^{n+1})$ is said to have totally positive torsion if all the leading principal minors of the matrix
\begin{align}\label{mm22}
(\gamma'(t), \gamma''(t), \ldots, \gamma^{(n+1)}(t))
\end{align}
are positive for all $t \in (a,b)$.
\end{definition}
Perhaps an instructive example to keep in mind is $\gamma(t)=(t, t^{2}, \ldots, t^{n}, \gamma_{n+1}(t))$ where the total positivity of the torsion on $(a,b)$ is the same as $\gamma_{n+1}^{(n+1)}(t)>0$ on $(a,b)$.
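This criterion is easy to test symbolically. The following sketch (our illustration; the function name \texttt{leading\_minors} is ours) computes the leading principal minors of the matrix (\ref{mm22}) with \texttt{sympy}.
\begin{verbatim}
# Symbolic check (ours): leading principal minors of
# (gamma', gamma'', ..., gamma^{(n+1)}).
import sympy as sp

t = sp.symbols('t')

def leading_minors(gamma):
    n1 = len(gamma)  # n + 1
    cols = [[sp.diff(g, t, k) for g in gamma] for k in range(1, n1 + 1)]
    M = sp.Matrix(cols).T  # column k holds gamma^{(k)}
    return [sp.factor(M[:k, :k].det()) for k in range(1, n1 + 1)]

# gamma(t) = (t, t^2, t^3): minors are 1, 2, 12 -- all positive.
print(leading_minors([t, t**2, t**3]))
# gamma(t) = (t, t^2, exp(t)): last minor is 2*exp(t) > 0, matching the
# criterion gamma_{n+1}^{(n+1)} > 0 mentioned in the text.
print(leading_minors([t, t**2, sp.exp(t)]))
\end{verbatim}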
In fact the only property that will be needed from the principal minors of the matrix (\ref{mm22}) is that they are non-vanishing. Indeed, we can consider an invertible linear image of $\gamma$, namely a new curve $t \mapsto (\varepsilon_{1}\gamma_{1}(t), \ldots, \varepsilon_{n+1} \gamma_{n+1}(t))$ with an appropriate choice of signs $\varepsilon_{j} = \pm 1$ and reduce the study of the convex hulls to the curves with totally positive torsion (an invertible linear transformation $T$ maps convex hull of a set $K$ to the convex hull of the image $T(K)$).
In Section~\ref{istoria} we provide an overview of the literature on results related to Questions 1 and 2. Section~\ref{glavnaya} is devoted to the statements of the main results of the paper, and Section~\ref{damtkiceba} contains the proofs. Here we give a short summary of the theorems that we recover in this paper and that were previously known in \cite{krn, Karlin1}. The results we state hold in $\mathbb{R}^{n+1}$ for all $n\geq 1$, and all space curves $\gamma : [a,b] \to \mathbb{R}^{n+1}$ with totally positive torsion. Set $\bar{\gamma}(t) \stackrel{\mathrm{def}}{=} (\gamma_{1}(t), \ldots, \gamma_{n}(t))$, and let us denote by $\mathrm{conv}(\gamma([a,b]))$ the convex hull of the image of $[a,b]$ under the map $\gamma$.
\subsection*{Summary of the results:}
\begin{itemize}
\item[(1)] The boundary of the convex hull of $\gamma([a,b])$ will be given in parametric form.
\item[(2)] An explicit diffeomorphism will be constructed between the interior of simplices and the interior of the convex hull of $\gamma([a,b])$.
\item[(3)] Formulas for the surface area of the boundary of the convex hull of $\gamma([a,b])$ will be obtained, Corollary~\ref{area1}, and different formulas for the volume of the convex hull will be presented, Corollary~\ref{provolume}.
\item[(4)] Any single affine hyperplane intersects the space curve $\gamma :[a,b] \to \mathbb{R}^{n+1}$ in at most $n+1$ points. The minimal number $k$ of points required to represent any point $x \in \mathrm{conv}(\gamma([a,b]))$ as a convex combination of $k$ points of $\gamma([a,b])$ is at most $\lfloor \frac{n+3}{2}\rfloor$. Moreover, $k = \lfloor \frac{n+3}{2}\rfloor$ for any interior point of $\mathrm{conv}(\gamma([a,b]))$.
\item[(5)]
Parametric representations will be given for the functions $M^{\sup}$ and $M^{\inf}$. The parametric forms obtained change depending on whether $n$ is even or odd; a concrete instance for $n=2$ is worked out right after this summary.
\textup{(i)} If $n$ is even then
\begin{align*}
&M^{\sup}\left(\lambda_{0} \bar{\gamma}(b)+\sum_{j=1}^{\frac{n}{2}} \lambda_{j} \bar{\gamma}(x_{j}) \right) = \lambda_{0} \gamma_{n+1}(b)+\sum_{j=1}^{\frac{n}{2}}\lambda_{j} \gamma_{n+1}(x_{j}),\\
&M^{\inf}\left(\lambda_{0} \bar{\gamma}(a)+\sum_{j=1}^{\frac{n}{2}} \lambda_{j} \bar{\gamma}(x_{j}) \right) = \lambda_{0} \gamma_{n+1}(a)+\sum_{j=1}^{\frac{n}{2}}\lambda_{j} \gamma_{n+1}(x_{j}),
\end{align*}
for all $\lambda_{0}, \lambda_{j} \in [0,1], x_{j} \in [a,b]$, $j=1, \ldots, \frac{n}{2}$ with $\sum_{0\leq k \leq \frac{n}{2}} \lambda_{k}=1$.
\textup{(ii)} If $n$ is odd then
\begin{align*}
&M^{\sup}\left(\lambda_{0}\bar{\gamma}(a)+\lambda_{1}\bar{\gamma}(b)+\sum_{j=2}^{\frac{n+1}{2}} \lambda_{j} \bar{\gamma}(x_{j}) \right) = \lambda_{0} \gamma_{n+1}(a)+\lambda_{1} \gamma_{n+1}(b)+\sum_{j=2}^{\frac{n+1}{2}}\lambda_{j} \gamma_{n+1}(x_{j}), \\
&M^{\inf}\left(\sum_{j=1}^{\frac{n+1}{2}} \beta_{j} \bar{\gamma}(x_{j}) \right) = \sum_{j=1}^{\frac{n+1}{2}}\beta_{j} \gamma_{n+1}(x_{j}),
\end{align*}
for all $\lambda_{0}, \lambda_{j}, \beta_{j} \in [0,1], x_{j} \in [a,b]$, $j=1, \ldots, \frac{n+1}{2}$ with $\sum_{0\leq j \leq \frac{n+1}{2}} \lambda_{j}=\sum_{1\leq j \leq \frac{n+1}{2}}\beta_{j}=1$.
\item[(6)] Explicit random variables $Y$ will be constructed which attain the supremum and infimum in (\ref{sup1}) and (\ref{inf1}), respectively, for each given $x = (x_{1}, \ldots, x_{n})$ from the domain of definition of $M^{\sup}$ and $M^{\inf}$.
\end{itemize}
We will also see that
\begin{align*}
\partial\, \mathrm{conv}(\gamma([a,b]))=\{(x,M^{\mathrm{sup}}(x)), x \in \mathrm{conv}(\bar{\gamma}([a,b]))\} \cup \{(x,M^{\mathrm{inf}}(x)), x \in \mathrm{conv}(\bar{\gamma}([a,b]))\},
\end{align*}
i.e., the {\em upper hull} of $\mathrm{conv}(\gamma([a,b]))$ coincides with the graph of $M^{\sup}$, and the lower hull with the graph of $M^{\inf}$. Besides this summary, we also recover several results previously known to Karlin--Sharpley \cite{Karlin2} for {\em moment curves} using our techniques (see Corollary~\ref{nobel2}). In Proposition~\ref{sensitive}, we also show that the results obtained in this paper are sensitive to the assumption that the curve has totally positive torsion.
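For a concrete instance of the formulas in item (5), take $n=2$ and the moment curve $\gamma(t)=(t,t^{2},t^{3})$ on $[0,1]$. Eliminating the parameters in (i) gives the closed forms $M^{\inf}(x_{1},x_{2})=x_{2}^{2}/x_{1}$ and $M^{\sup}(x_{1},x_{2})=x_{2}-(x_{1}-x_{2})^{2}/(1-x_{1})$, and the following Monte Carlo sketch (ours; the helper names are assumptions of the sketch) checks the resulting sandwich $M^{\inf}\leq \mathbb{E}Y^{3}\leq M^{\sup}$ on random discrete measures.
\begin{verbatim}
# Monte Carlo sanity check (ours) of item (5)(i) for gamma(t)=(t,t^2,t^3).
import numpy as np

rng = np.random.default_rng(0)

def B_inf(x1, x2):   # lower hull: (1-l)*gamma(0) + l*gamma(x)
    return x2**2 / x1

def B_sup(x1, x2):   # upper hull: (1-l)*gamma(1) + l*gamma(x)
    return x2 - (x1 - x2)**2 / (1.0 - x1)

for _ in range(10_000):
    y = rng.uniform(0.0, 1.0, size=rng.integers(1, 6))  # atoms of Y
    w = rng.dirichlet(np.ones(len(y)))                   # probabilities
    x1, x2, x3 = (w * y).sum(), (w * y**2).sum(), (w * y**3).sum()
    assert B_inf(x1, x2) - 1e-12 <= x3 <= B_sup(x1, x2) + 1e-12
print("sandwich B_inf <= E Y^3 <= B_sup verified on random measures")
\end{verbatim}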
\subsection{What is known about Questions 1 and 2? } \label{istoria}
In what follows we set $x \stackrel{\mathrm{def}}{=}(x_{1}, \ldots, x_{n}) \in \mathbb{R}^{n}$, and $\mathbb{E} \bar{\gamma}(Y) \stackrel{\mathrm{def}}{=} (\mathbb{E}\gamma_{1}(Y), \ldots, \mathbb{E}\gamma_{n}(Y))$.
We remark that both $M^{\mathrm{\sup}}$ and $M^{\mathrm{\inf}}$ depend on $n \geq 1$, $x \in \mathbb{R}^{n}$, $[a,b] \subset \mathbb{R}$, and $\gamma$. We recall the basic fact that the convex hull of a compact set is compact. For simplicity we shall use the symbol $M$ for $M^{\mathrm{sup}}(x)$.
There is a series of results describing $M$ for particular choices of $\gamma$. A common goal is to have a parametric representation for it. However, as soon as $n$ is large it becomes difficult to find a parametric representation for $M$ in such generality.
\subsubsection{Convex envelopes and Carath\'eodory number}
Under some mild assumptions on $\gamma$ (continuity of $\gamma$ on $[a,b]$ is sufficient, see \cite{Kem1968, Rog1958}), $M$ is defined on $\mathrm{conv}(\bar{\gamma}([a,b]))$. Moreover, for any $x \in \mathrm{conv}(\bar{\gamma}([a,b]))$, $M(x)$ is the solution of the {\em dual problem}
\begin{align}\label{dual}
M(x) = \inf_{d_{0} \in \mathbb{R}, d \in \mathbb{R}^{n}} \{ d_{0}+ \langle d,x \rangle \;\; \text{such that}\;\; d_{0}+ \langle d, \bar{\gamma}(t) \rangle \geq \gamma_{n+1}(t)\; \text{for all} \; t \in [a,b]\},
\end{align}
where $\langle a,b\rangle$ denotes the dot product in $\mathbb{R}^{n}$. Thus $M$ is the minimal concave function defined on $ \mathrm{conv}(\bar{\gamma}([a,b]))$ with the obstacle condition $M(\bar{\gamma}(t)) \geq \gamma_{n+1}(t)$ for all $t \in [a,b]$. So the graph $(x,M(x))$, $x \in \mathrm{conv}(\bar{\gamma}([a,b]))$, belongs to the boundary of $\mathrm{conv} (\gamma([a,b]))$. Carath\'eodory's theorem says that $(x,M(x))$ is a convex combination of at most $n+2$ points from $\gamma([a,b])$. However, due to the fact that $(x, M(x)) \in \partial\, \mathrm{conv} (\gamma([a,b]))$, one can see that $n+1$ points suffice by considering any affine hyperplane $H$ supporting $\mathrm{conv} (\gamma([a,b]))$ at $(x,M(x))$. Since $\gamma([a,b])$ lies on one side of $H$, it follows that the points whose convex combination is $(x,M(x))$ must lie in $H$, and we can apply Carath\'eodory's theorem to $H \cap \gamma([a,b])$ in the $n$-dimensional affine space $H$.
This leads us to another representation
\begin{align}\label{carath}
M(x) = \sup_{\sum_{j=1}^{n+1}c_{j} \bar{\gamma}(t_{j})=x} \left\{ \sum_{j=1}^{n+1} c_{j} \gamma_{n+1}(t_{j})\; :\; \sum_{j=1}^{n+1}c_{j}=1, \; c_{\ell} \geq 0,\; t_{\ell} \in [a,b], \;1\leq \ell \leq n+1\right\}.
\end{align}
A probabilistic way of looking at (\ref{carath}) is that the supremum and infimum in (\ref{sup1}) and (\ref{inf1}) are attained by random variables $Y$ whose distribution is a sum of delta masses at at most $n+1$ points of $[a,b]$, i.e., $\sum_{j=1}^{n+1}c_{j}\delta_{t_{j}}$ with $t_{j} \in [a,b]$ for all $j=1, \ldots, n+1$.
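This observation also suggests a brute-force numerical approach: discretize $[a,b]$ and solve a linear program over the weights $c_{j}$. In the sketch below (ours; the grid size is an arbitrary assumption) a basic optimal solution of the LP indeed puts mass on at most $n+1=3$ grid points, up to numerical tolerance.
\begin{verbatim}
# Crude LP discretization (ours) of the moment problem: maximize E[Y^3]
# over discrete measures on a grid with E[Y] = x1, E[Y^2] = x2 prescribed.
import numpy as np
from scipy.optimize import linprog

grid = np.linspace(0.0, 1.0, 2001)
x1, x2 = 0.5, 1.0 / 3.0                      # moments of the uniform law
A_eq = np.vstack([np.ones_like(grid), grid, grid**2])
b_eq = np.array([1.0, x1, x2])

res = linprog(-grid**3, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
support = grid[res.x > 1e-9]
print("LP optimum:", -res.fun)             # close to 5/18, up to grid error
print("atoms of the maximizer:", support)  # at most 3 points
\end{verbatim}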
A direction of research focuses on understanding for which curves $\gamma$ the number $n+1$ appearing in $\sum_{j=1}^{n+1}c_{j}\delta_{t_{j}}$ can be made smaller. As we just described, this is related to the following question: {\em
given a curve $\gamma :[a,b] \to \mathbb{R}^{n+1}$ and a point $y \in \partial\, \mathrm{conv}(\gamma([a,b]))$, find the smallest number of points $b(y)$ on $\gamma([a,b])$ whose convex combination coincides with $y$.}
The integer $b(y)$ is called the Carath\'eodory number of $y$, and it is defined for all $y \in \mathrm{conv}(\gamma([a,b]))$. The Carath\'eodory number $b(\gamma)$ of the set $\gamma([a,b])$ is defined as
\begin{align}\label{karate1}
b(\gamma) \stackrel{\mathrm{def}}{=}\sup_{x \in \mathrm{conv}(\gamma([a,b]))}b(x).
\end{align}
By Carath\'eodory's theorem, $b(\gamma) \leq n+2$ for curves in $\mathbb{R}^{n+1}$. For certain curves $\gamma$ the number $b(\gamma)$ can be strictly smaller than $n+2$. Fenchel's theorem \cite{Fenchel, Hanner} asserts that if the compact set $\gamma([a,b])$ cannot be separated by a hyperplane into two non-empty disjoint sets, then $b(\gamma)\leq n+1$. In particular, for continuous curves $\gamma$ over closed intervals $[a,b]$ the Carath\'eodory number is at most $n+1$, giving one more justification of (\ref{carath}) for continuous maps $\gamma$. See \cite{Baran}, where the Carath\'eodory number and an extension of Fenchel's theorem are studied for certain types of sets in $\mathbb{R}^{n+1}$.
\subsubsection{A Convex Optimization Approach}
Another direction of research reduces (\ref{dual}) to what is called a {\em positive semidefinite optimization problem} under the assumption
$$
\gamma(t) = (t,t^{2}, \ldots, t^{n}, \mathbbm{1}_{I}(t)),
$$
where $I$ is an interval in $\mathbb{R}$.
Finding upper or lower bounds on $\mathbb{E} \mathbbm{1}_{I}(Y) = \mathbb{P}(Y \in I)$ given the first $n$ moments of $Y$ is of considerable interest, as it would refine the classical Chebyshev and Markov inequalities. To give a feeling for how the corresponding positive semidefinite optimization problem looks, we cite Theorem~11 in \cite{Berts}: the tight upper bound on $\mathbb{P}(Y \geq 1)$ over all nonnegative random variables $Y$ given the first $n$ moments $\mathbb{E}Y^{j}=x_{j}$, $1\leq j \leq n$, coincides with
\begin{align*}
M^{\mathrm{sup}}(x)\, =\, \min_{d_{0}, \ldots, d_{n} \in \mathbb{R}} \quad d_{0}+\sum_{j=1}^{n}d_{j} x_{j}
\end{align*}
subject to
\begin{align*}
\quad &0 = \sum_{i,j\, :\, i+j=2\ell-1} t_{ij}, \qquad \qquad \quad \; \,\ell=1, \ldots, n,\\
&(d_{0}-1)+\sum_{j=\ell}^{n} d_{j} \binom{j}{\ell}=t_{00},\\
&\sum_{j=\ell}^{n} d_{j} \binom{j}{\ell} = \sum_{i,j\, :\, i+j=2\ell}t_{ij}, \quad \quad \; \,\ell=1, \ldots, n,\\
&0 = \sum_{i,j\, :\, i+j=2\ell-1} z_{ij}, \qquad \qquad \quad \; \, \ell = 1, \ldots, n,\\
&\sum_{j=0}^{\ell} d_{j} \binom{n-j}{\ell-j} = \sum_{i,j\, :\, i+j=2\ell} z_{ij} \quad \ell=0, \ldots, n,\\
&T, Z \geq 0,
\end{align*}
where $T, Z \geq 0$ means that the matrices $T=\{t_{ij}\}_{i,j=0}^{n}, Z = \{z_{ij}\}_{i,j=0}^{n}$ are positive semidefinite.
The advantage of having such a semidefinite optimization problem is that it can be solved in {\em polynomial time}. However, it is not clear to us how practical these results are if one wants to verify a bound $M(x) \leq R(x)$ for a given function $R$ and all $x$ in $\mathrm{conv}(\overline{\gamma}([0,1]))$. In \cite{Berts} the authors provide explicit formulas for the tight upper bound on $\mathbb{P}(Y>\lambda)$ for $n=3$ over all nonnegative random variables with given first $3$ moments.
\subsubsection{Tchebysheff systems, convex curves, and Markov moment problem}\label{markovs}
The system of continuous functions $(\gamma_{0}(t), \ldots, \gamma_{n}(t))$ on an interval $[a,b]$ is called a Tchebysheff system (or $T$-system) if any nontrivial linear combination $\sum_{j=0}^{n} a_{j} \gamma_{j}(t)$ has at most $n$ roots on $[a,b]$. As the monographs \cite{krn, Karlin1} deal with the general Markov moment problem for arbitrary Borel measures, while in this paper we consider only probability measures, in what follows we will assume that $\gamma_{0}(t)=1$ to keep the presentation consistent with \cite{krn, Karlin1}. Under this assumption the corresponding curve $t\mapsto (\gamma_{1}(t), \ldots, \gamma_{n}(t))$ is called a {\em convex curve}.
The sequence $(\gamma_{0}(t), \ldots, \gamma_{n}(t))$ is called $T_{+}$-system if
\begin{align}\label{nudel}
\mathrm{det}(\{ \gamma_{i}(t_{j})\}_{i,j=0}^{n})>0
\end{align}
on the simplex $\Sigma = \{ a\leq t_{0}<\ldots<t_{n} \leq b\}$. Notice that any $T$-system can be made into a $T_{+}$-system just by flipping the sign in front of $\gamma_{n}$ if necessary.
If $(\gamma_{0}(t), \ldots, \gamma_{k}(t))$ is a $T_{+}$-system on $[a,b]$ for every $k=0,\ldots, n$, then the sequence $(\gamma_{0}(t), \ldots, \gamma_{n}(t))$ is called an $M_{+}$-system on $[a,b]$. Checking the positivity of the determinant (\ref{nudel}) seems a bit impractical, as one needs to verify the inequality on the simplex
$\Sigma$. The following theorem gives a simple sufficient criterion for a system to be an $M_{+}$-system.
\begin{theorem}[Chapter VIII, \cite{Karlin1}]\label{man1}
Let $\gamma_{0}(t), \ldots, \gamma_{n}(t)$ be in $C([a,b])\cap C^{n}((a,b))$. Then for the sequence $(\gamma_{0}(t), \ldots, \gamma_{n}(t))$ to be $M_{+}$-system on $[a,b]$ it is necessary\footnote{Here $\gamma_{j}^{(0)}(t)=\gamma_{j}(t)$} that $\mathrm{det}(\{ \gamma_{i}^{(j)}(t)\}_{i,j=0}^{k})\geq 0$ on $(a,b)$ for all $k=0,\ldots, n$, and it is sufficient that $\mathrm{det}(\{ \gamma_{i}^{(j)}(t)\}_{i,j=0}^{k})> 0$ on $(a,b)$ for all $k=1,\ldots, n$.
\end{theorem}
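For instance, for the system $\gamma_{j}(t)=t^{j}$, $j=0,\ldots,n$, one has $\gamma_{i}^{(j)}(t)=\frac{i!}{(i-j)!}t^{i-j}$ for $j\leq i$ and $\gamma_{i}^{(j)}(t)=0$ for $j>i$, so the matrix $\{\gamma_{i}^{(j)}(t)\}_{i,j=0}^{k}$ is lower triangular and
\begin{align*}
\mathrm{det}(\{ \gamma_{i}^{(j)}(t)\}_{i,j=0}^{k}) = \prod_{i=0}^{k} i! > 0 \quad \text{for all} \quad k;
\end{align*}
by the sufficiency part of Theorem~\ref{man1}, $(1,t,\ldots,t^{n})$ is an $M_{+}$-system on any $[a,b]$.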
We say that $(\gamma_{1}(t), \ldots, \gamma_{n+1}(t))$ satisfies condition $(A1)$ if $\gamma_{1}(t), \ldots, \gamma_{n+1}(t)$ are in $C([a,b])\cap C^{n+1}((a,b))$ and
\begin{align*}
(1,\gamma_{1}(t), \ldots, \gamma_{n}(t)) \quad \text{and} \quad (1,\gamma_{1}(t), \ldots, \gamma_{n+1}(t)) \quad \text{are} \quad T_{+}-\text{systems on} \quad [a,b] \quad (A1)
\end{align*}
Clearly if $\gamma(t) = (\gamma_{1}(t), \ldots, \gamma_{n+1}(t))$ has totally positive torsion on $(a,b)$ then the condition $(A1)$ holds by Theorem~\ref{man1}. On the other hand, if the sequence $(1, \gamma_{1}(t), \ldots, \gamma_{n+1}(t))$ satisfies only the assumption (A1), then the probability distribution of a random variable $Y$ achieving the supremum or infimum in Question~\ref{que1} is given in terms of {\em upper and lower principal representations}, see Chapters III and IV in \cite{krn}, and also Proposition 2 in the brief survey \cite{pin01}. In particular, the Carath\'eodory number is at most $\lfloor \frac{n+3}{2}\rfloor$ for curves $t\mapsto (\gamma_{1}(t), \ldots, \gamma_{n+1}(t))$ in $\mathbb{R}^{n+1}$ satisfying the assumption (A1).
A typical example of a convex curve is the moment curve
$$
\gamma(t) = (t, \ldots, t^{n+1}) \in \mathbb{R}^{n+1}.
$$
Assume $[a,b]=[0,1]$. In~\cite{Karlin2} the authors show that if $x=(x_{1}, \ldots, x_{n})$ belongs to the interior of $\mathrm{conv}(\bar{\gamma}([0,1]))$ then $M^{\mathrm{sup}}(x)$ and $M^{\mathrm{inf}}(x)$ are the unique solutions $x_{n+1}$ of the linear equations
\begin{align}\label{nobel}
K_{n+1}=0 \quad \text{and} \quad S_{n+1}=0,
\end{align}
correspondingly, where $K_{k}, S_{k}$ are defined as
\begin{align}\label{Sharp1}
S_{2k} = \det
\begin{pmatrix}1 & x_{1} & \ldots & x_{k}\\
\vdots & & & \\
x_{k} & x_{k+1} & \ldots & x_{2k}\end{pmatrix}, \quad S_{2k+1} = \det
\begin{pmatrix}x_{1} & x_{2} & \ldots & x_{k+1}\\
\vdots & & & \\
x_{k+1} & x_{k+2} & \ldots & x_{2k+1}\end{pmatrix},
\end{align}
and
\begin{align}\label{kar1}
K_{2k} = \det
\begin{pmatrix}x_{1}-x_{2} & x_{2}-x_{3} & \ldots & x_{k}-x_{k+1}\\
\vdots & & & \\
x_{k}-x_{k+1} & x_{k+1}-x_{k+2} & \ldots & x_{2k-1}-x_{2k}\end{pmatrix},\\
K_{2k+1} = \det
\begin{pmatrix}1-x_{1} & x_{1}-x_{2} & \ldots & x_{k}-x_{k+1}\\
\vdots & & & \\
x_{k}-x_{k+1} & x_{k+1}-x_{k+2} & \ldots & x_{2k}-x_{2k+1}\end{pmatrix}. \nonumber
\end{align}
An important contribution of \cite{Karlin2} is that the authors give a complete description of $\partial \, \mathrm{conv}(\gamma([0,1]))$, which allowed them to obtain a geometric point of view on the classical orthogonal polynomials. For example, knowing the width of the set $\mathrm{conv}(\gamma([0,1]))$ in the $x_{n+1}$ direction, one can recover the classical fact that among all polynomials of degree $n+1$ on $[0,1]$ with leading coefficient $1$ the Tchebyshev polynomials minimize the maximum of the absolute value on $[0,1]$ (Theorem 25.2 in~\cite{Karlin2}).
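For $n=2$ the equations (\ref{nobel}) are simple enough to solve symbolically; the following \texttt{sympy} sketch (our check, not from \cite{Karlin2}) recovers the closed forms quoted earlier in the summary.
\begin{verbatim}
# Symbolic check (ours) of the n = 2 Karlin--Sharpley equations.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
S3 = sp.Matrix([[x1, x2], [x2, x3]]).det()                    # S_{2k+1}, k = 1
K3 = sp.Matrix([[1 - x1, x1 - x2], [x1 - x2, x2 - x3]]).det()  # K_{2k+1}, k = 1

print(sp.solve(S3, x3))  # equals x2**2/x1                       -> M^inf
print(sp.solve(K3, x3))  # equivalent to x2 - (x1-x2)**2/(1-x1)  -> M^sup

# uniform moments x1 = 1/2, x2 = 1/3: bounds 2/9 <= E Y^3 = 1/4 <= 5/18
vals = {x1: sp.Rational(1, 2), x2: sp.Rational(1, 3)}
print(sp.solve(S3.subs(vals), x3), sp.solve(K3.subs(vals), x3))
\end{verbatim}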
Karlin--Sharpley announced an intention to settle the case when $[a,b]$ is replaced by $[-1,1]$, $\mathbb{R}^{+}$ or $\mathbb{R}$. After looking into the literature, to the best of our knowledge the corresponding results appeared in the monograph of Karlin--Studden~\cite{Karlin1}.
In~\cite{Sch0101} Schoenberg obtained a formula for the volume of the convex hull of a smooth closed\footnote{Here closed curve means $\nu(0)=\nu(2\pi)$.} convex curve $\nu : [0, 2\pi] \to \mathbb{R}^{n}$ in even-dimensional Euclidean space,
\begin{align*}
\mathrm{Vol}(\mathrm{conv}(\nu([0, 2\pi]))) = \pm \frac{1}{n!(n/2)!}\int_{[0,2\pi]^{\frac{n}{2}}}\det (\nu(t_{1}), \ldots, \nu(t_{n/2}), \nu'(t_{1}), \ldots, \nu'(t_{n/2}))dt_{1}\ldots dt_{n/2},
\end{align*}
and as a corollary, using Fourier series, he derived an isoperimetric inequality
$$
(\mathrm{length}(\nu))^{n}\geq (\pi n)^{n/2}(n/2)! n! \mathrm{Vol}(\mathrm{conv}(\nu([0, 2\pi]))),
$$
where $\mathrm{length}(\nu)$ denotes the Euclidean length of $\nu$, and $\mathrm{Vol}(\cdot)$ denotes the Euclidean volume. The volumes of the convex hulls of curves $\gamma([a,b])$ such that $\gamma(0)=0$ and the sequence $(1, \gamma_{1}(t), \ldots, \gamma_{n}(t))$ forms a $T$-system were obtained both in odd and even dimensions in \cite{krn, Karlin1}; see, for example, Theorem 6.1, Ch.~IV in \cite{Karlin1}.
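As a quick sanity check of Schoenberg's formula (ours), take $n=2$ and the unit circle $\nu(t)=(\cos t, \sin t)$: then $\det(\nu(t_{1}), \nu'(t_{1}))=\cos^{2}t_{1}+\sin^{2}t_{1}=1$, so the formula gives $\mathrm{Vol}=\frac{1}{2!\,1!}\int_{0}^{2\pi}dt_{1}=\pi$, the area of the unit disk, and the isoperimetric inequality becomes the equality $(2\pi)^{2}=(2\pi)\cdot 1!\cdot 2!\cdot\pi$, as it should for the circle.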
\subsubsection{Other results for systems different from $T$-systems} In \cite{sed1, sed2} Sedykh describes the possible {\em singularities} of the boundary of convex hulls of a curve in $\mathbb{R}^{3}$. In~\cite{Krist1}, using tools from algebraic geometry, namely {\em De Jonqui\`eres' formula}, the authors compute the number of {\em complex tritangent planes} of the {\em algebraic boundary} of the convex hull of an algebraic space curve in $\mathbb{R}^{3}$ in terms of the genus and the degree of the curve. Moreover, in \cite{Krist1} the authors also find an algebraic elimination method for computing {\em tritangent planes} and {\em edge surfaces} of the boundary of the convex hulls of algebraic space curves in $\mathbb{R}^{3}$. The {\em algebraic boundary} of the convex hull of an algebraic variety was studied in \cite{KRBS1}, where the authors extended several results from \cite{Krist1} to higher dimensions. In \cite{Freed}, using topological results, it is shown that the number of tritangent planes to a smooth {\em generic} curve in $\mathbb{R}^{3}$ with nonvanishing torsion is even.
Convex hulls of space curves have appeared implicitly or explicitly in other works, in relation to problems not directly concerned with them. We do not intend to provide a full list of references; however, let us mention some examples. Finding sharp constants in such classical estimates as the John--Nirenberg inequality is related to finding convex hulls {\em in non-convex domains} of certain space curves. In particular, in \cite{Iv1, Iv2} an algorithm is presented which finds the convex hull of a space curve $\gamma(t) = (t, t^{2}, f(t))$ defined on $\mathbb{R}$, under the assumption that $f'''(t)$ changes sign finitely many times (notice that the sign of $f'''$ coincides with the sign of the torsion of $\gamma(t)$). As the number of sign changes of $f'''$ increases, the ``complexity'' of computing the convex hull of $\gamma(t)$ increases too. The method obtained in \cite{Iv1,Iv2} is illustrated on a particular example in \cite{Vasyunin} for the family of space curves $\gamma_{\alpha}(t)=(t,t^{2}, g_{\alpha}(t))$, where $g_{\alpha}(t)$ is a parametric family of functions defined for all $\alpha>0$ as follows:
\begin{align*}
g_{\alpha}(t) = \begin{cases}
-\cos(t), & |t|\leq \alpha \\
\frac{1}{2}(t^{2}-\alpha^{2})\cos \alpha+(\sin \alpha-\alpha \cos \alpha)(|t|-\alpha)-\cos \alpha, & |t|\geq \alpha.
\end{cases}
\end{align*}
Notice that the quadratic part for $|t|\geq \alpha$ is chosen in such a way that $g_{\alpha} \in C^{2}(\mathbb{R})$. Clearly $g'''_{\alpha}(t)=-\sin(t)$ for $|t|\leq \alpha$, and $g'''_{\alpha}(t)=0$ for $|t|\geq \alpha$. We see that as $\alpha$ increases the number of sign changes of $g'''_{\alpha}(t)$ increases too. In \cite{Vasyunin} the upper boundary of the convex hull of the space curve $\gamma_{\alpha}(t)$, $t \in \mathbb{R}$, is found in the non-convex parametric domain\footnote{By the convex hull of $\gamma_{\alpha}$ in $\Omega_{\varepsilon}$ we mean all possible convex combinations of those points on $\gamma_{\alpha}$ such that the projection of the resulting convex hull of these points onto $\mathbb{R}^{2}$ lies inside $\Omega_{\varepsilon}$.}
$\Omega_{\varepsilon} = \{ (x,y) \in \mathbb{R}^{2}\, :\, x^{2} \leq y \leq x^{2}+\varepsilon^{2}\}$.
In the limiting case $\varepsilon \to \infty$ one recovers the upper boundary of the convex hull of the space curve $\gamma_{\alpha}(t)$.
In sharpening the triangle inequality in $L^{p}$ spaces, for each $p \in \mathbb{R} \setminus\{0\}$, the paper \cite{IM} finds the boundary of the convex hull of the space curve $\gamma(t) = (t, \sqrt{1-t^{2}}, ((1-t)^{1/p}+(1+t)^{1/p})^{p})$, $t \in [-1,1]$. In~\cite{IVZ} the boundary of the convex hull of a closed space curve is described; it is the union of the following three curves:
\begin{align*}
&\left(\frac{1}{t^{p}+(1-t)^{p}+1}, \frac{t^{p}}{t^{p}+(1-t)^{p}+1}, \frac{(1+t)^{p}}{t^{p}+(1-t)^{p}+1}\right), \quad t \in [0,1];\\
&\left(\frac{(1-t)^{p}}{t^{p}+(1-t)^{p}+1}, \frac{1}{t^{p}+(1-t)^{p}+1}, \frac{(2-t)^{p}}{t^{p}+(1-t)^{p}+1}\right), \quad t \in [0,1];\\
&\left(\frac{t^{p}}{t^{p}+(1-t)^{p}+1}, \frac{(1-t)^{p}}{t^{p}+(1-t)^{p}+1}, \frac{|1-2t|^{p}}{t^{p}+(1-t)^{p}+1}\right), \quad t \in [0,1].
\end{align*}
\subsection*{Acknowledgments}
We are grateful to Pavel Zatitskiy for drawing our attention to the reference~\cite{krn}.
The authors would like to thank V.~Sedykh for providing references on topological results on the convex hulls of space curves.
\section{Statements of main results}\label{glavnaya}
\label{sec:Statements}
For any $v=(v_{1}, \ldots, v_{d}) \in \mathbb{R}^{d}$ we set $\overline{v}=(v_{1}, \ldots, v_{d-1})$ to be the projection onto the first $d-1$ coordinates, and we set $v^{z}=v_{d}$ to be the projection onto the last coordinate. For any $a<b$ define the following sets
\begin{align*}
&\Delta^{k}_{c} := \{ (r_{1}, \ldots, r_{k}) \in \mathbb{R}^{k}\, :\, r_{j} \geq 0, j=1, \ldots, k, \, r_{1}+\ldots+r_{k}\leq 1\},\\
&\Delta_{*}^{k} := \{ (y_{1}, \ldots, y_{k}) \in \mathbb{R}^{k}\, :\, a\leq y_{1}\leq y_{2} \leq \ldots\leq y_{k}\leq b\}.
\end{align*}
Let $n\geq 1$. If $n=2\ell$ we define
\begin{align*}
&U_{n} : \Delta_{c}^{\ell} \times \Delta_{*}^{\ell} \ni (\lambda_{1}, \ldots, \lambda_{\ell}, x_{1}, \ldots, x_{\ell}) \mapsto \sum_{j=1}^{\ell} \lambda_{j} \gamma(x_{j}) + (1-\sum_{j=1}^{\ell}\lambda_{j}) \gamma(b);\\
&L_{n} :\Delta_{c}^{\ell} \times \Delta_{*}^{\ell} \ni (\lambda_{1}, \ldots, \lambda_{\ell}, x_{1}, \ldots, x_{\ell}) \mapsto (1-\sum_{j=1}^{\ell}\lambda_{j})\gamma(a)+\sum_{j=1}^{\ell} \lambda_{j} \gamma(x_{j}),
\end{align*}
and if $n=2\ell-1$ we define
\begin{align*}
&U_{n} :\Delta_{c}^{\ell} \times \Delta_{*}^{\ell-1} \ni (\beta_{1}, \ldots, \beta_{\ell}, x_{2},\ldots, x_{\ell}) \mapsto (1-\sum_{j=1}^{\ell}\beta_{j})\gamma(a) +\sum_{j=2}^{\ell} \beta_{j} \gamma(x_{j})+\beta_{1} \gamma(b);\\
&L_{n} :\Delta_{c}^{\ell-1} \times \Delta_{*}^{\ell} \ni (\beta_{2}, \ldots, \beta_{\ell}, x_{1},\ldots, x_{\ell}) \mapsto (1-\sum_{j=2}^{\ell} \beta_{j})\gamma(x_{1})+\sum_{j=2}^{\ell} \beta_{j} \gamma(x_{j}).
\end{align*}
If $n=1$ we set $U_{1} : \Delta_{c}^{1}\times \Delta_{*}^{0}=[0,1] \ni \beta_{1} \mapsto (1-\beta_{1})
\gamma(a)+\beta_{1}\gamma(b)$, and $L_{1} : \Delta_{c}^{0}\times \Delta_{*}^{1}=[a,b] \ni x_{1} \mapsto \gamma(x_{1})$.
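To make the maps concrete, here is a small numerical sketch (ours) for the even case $n=2$ and the moment curve $\gamma(t)=(t,t^{2},t^{3})$ on $[0,1]$: the images of $U_{2}$ and $L_{2}$ lie on the graphs of the functions $B^{\sup}$ and $B^{\inf}$ defined below, whose closed forms for this curve ($x_{2}^{2}/x_{1}$ and $x_{2}-(x_{1}-x_{2})^{2}/(1-x_{1})$) were recalled in Subsection~\ref{markovs}.
\begin{verbatim}
# Numerical sketch (ours) of the maps U_2, L_2 for gamma(t) = (t,t^2,t^3).
import numpy as np

gamma = lambda t: np.array([t, t**2, t**3])
a, b = 0.0, 1.0

def U2(lam, x):   # lam*gamma(x) + (1-lam)*gamma(b)
    return lam * gamma(x) + (1 - lam) * gamma(b)

def L2(lam, x):   # (1-lam)*gamma(a) + lam*gamma(x)
    return (1 - lam) * gamma(a) + lam * gamma(x)

rng = np.random.default_rng(1)
for _ in range(1000):
    lam, x = rng.uniform(0.01, 0.99, size=2)
    u, l = U2(lam, x), L2(lam, x)
    # closed forms for B^sup, B^inf from the Karlin--Sharpley equations:
    assert abs(u[2] - (u[1] - (u[0] - u[1])**2 / (1 - u[0]))) < 1e-9
    assert abs(l[2] - l[1]**2 / l[0]) < 1e-9
print("U_2 and L_2 parametrize the upper and lower hulls for n = 2")
\end{verbatim}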
Together with the maps $U_{n}$ and $L_{n}$ we define functions $B^{\sup}$ (and $B^{\inf}$) on the image of $\overline{U}_{n}$ (respectively $\overline{L}_{n}$) such that
\begin{align}
&B^{\sup}(\overline{U}_{n})=U^{z}_{n}, \label{vog}\\
&B^{\inf}(\overline{L}_{n}) = L^{z}_{n} \label{vyp}.
\end{align}
We remark that at this moment $B^{\sup}$ (and $B^{\inf}$) is not obviously {\em well defined}: it could happen that there are points $s_{1} \neq s_{2}$ such that $\overline{U}_{n}(s_{1})=\overline{U}_{n}(s_{2})$ while $U^{z}_{n}(s_{1})\neq U^{z}_{n}(s_{2})$. However, the next theorem claims, in particular, that both functions $B^{\sup}, B^{\inf}$ are well defined.
\begin{theorem}\label{mth010}
Let $\gamma : [a,b] \to \mathbb{R}^{n+1}$ be in $C([a,b])\cap C^{n+1}((a,b))$ with totally positive torsion.
If $n =2\ell$, $\ell \geq 1$, we have
\begin{align}
& \overline{U}_{2\ell}(\partial\, (\Delta_{c}^{\ell} \times \Delta_{*}^{\ell})) =\overline{L}_{2\ell}(\partial\, (\Delta_{c}^{\ell} \times \Delta_{*}^{\ell}))= \partial\, \mathrm{conv}(\overline{\gamma}([a,b])), \label{b2l}\\
&\overline{U}_{2\ell} : \mathrm{int} (\Delta_{c}^{\ell} \times \Delta_{*}^{\ell}) \mapsto \mathrm{int}(\mathrm{conv}(\overline{\gamma}([a,b]))) \quad \text{is a diffeomorphism}, \label{diff2lu}\\
&\overline{L}_{2\ell} : \mathrm{int} (\Delta_{c}^{\ell} \times \Delta_{*}^{\ell}) \mapsto \mathrm{int}(\mathrm{conv}(\overline{\gamma}([a,b]))) \quad \text{is a diffeomorphism}. \label{diff2ll}
\end{align}
If $n=2\ell-1$ we have
\begin{align}
& \overline{U}_{2\ell-1}(\partial\, (\Delta_{c}^{\ell} \times \Delta_{*}^{\ell-1})) =\overline{L}_{2\ell-1}(\partial\, (\Delta_{c}^{\ell-1} \times \Delta_{*}^{\ell}))= \partial\, \mathrm{conv}(\overline{\gamma}([a,b])), \label{b2l-1}\\
&\overline{U}_{2\ell-1} : \mathrm{int} (\Delta_{c}^{\ell} \times \Delta_{*}^{\ell-1}) \mapsto \mathrm{int}(\mathrm{conv}(\overline{\gamma}([a,b]))) \quad \text{is a diffeomorphism}, \label{diff2l-1u}\\
&\overline{L}_{2\ell-1} : \mathrm{int} (\Delta_{c}^{\ell-1} \times \Delta_{*}^{\ell}) \mapsto \mathrm{int}(\mathrm{conv}(\overline{\gamma}([a,b]))) \quad \text{is a diffeomorphism}. \label{diff2l-1l}
\end{align}
For all $n\geq 1$,
\begin{align}\label{welld}
B^{\sup}, B^{\inf} \quad \text{are well defined}, \quad B^{\sup}, B^{\inf} \in C(\mathrm{conv}(\overline{\gamma}([a,b]))) \cap C^{1}(\mathrm{int}(\mathrm{conv}(\overline{\gamma}([a,b])))).
\end{align}
Next, for all $n\geq 1$ we have\footnote{When $n=1$ the equality $B^{\sup}(\overline{\gamma})=\gamma_{2}$ should be replaced by $B^{\sup}(\overline{\gamma})\geq \gamma_{2}.$}
\begin{align}
&B^{\sup} \quad \text{is minimal concave on} \quad \mathrm{conv}(\overline{\gamma}([a,b])) \quad \text{with} \quad \, B^{\sup}(\overline{\gamma})=\gamma_{n+1}; \label{mincon1}\\
&B^{\inf} \quad \text{is maximal convex on} \quad \mathrm{conv}(\overline{\gamma}([a,b])) \quad \text{with}\quad \, B^{\inf}(\overline{\gamma})=\gamma_{n+1}; \label{maxcon2}
\end{align}
Moreover,
\begin{align}
&B^{\inf}(y)=B^{\sup}(y) \quad \text{if and only if} \quad y \in \partial\, \mathrm{conv}(\overline{\gamma}([a,b])), \label{giff}\\
& \partial\, \mathrm{conv}(\gamma([a,b]))=\{(x,B^{\mathrm{sup}}(x)), x \in \mathrm{conv}(\bar{\gamma}([a,b]))\} \cup \{(x,B^{\mathrm{inf}}(x)), x \in \mathrm{conv}(\bar{\gamma}([a,b]))\}. \label{union}
\end{align}
\end{theorem}
The statement of Theorem~\ref{mth010} may seem a bit technical; however, we think that the intuition behind the construction of the convex hulls is natural. We refer the reader to the schematic pictures in Fig.~\ref{fig:sketches} for a better understanding of the claims made in the theorem. In Fig.~\ref{fig:4d} the domain $\mathrm{conv}(\overline{\gamma}([a,b]))$ of $B^{\sup}$ in $\mathbb{R}^{3}$ is foliated by triangles, and $B^{\sup}$ is linear on each of them.
\newcommand{\showsketch}[1]{%
\begin{minipage}{.33\textwidth}%
\begin{center}
\includegraphics[trim = 1 1 1 1 , clip, width=\textwidth]{d=#1.pdf}
$n+1 = #1$
\end{center}
\end{minipage}%
}
\begin{figure}[t]
\centering
\showsketch{1}\showsketch{3}\showsketch{5}
\showsketch{2}\showsketch{4}\showsketch{6}
\caption{These schematic pictures clarify how the convex hull of the space curve $\gamma$ with totally positive torsion is parametrized. If $n$ is even then the {\em upper hull} is described by convex combinations of $\frac{n}{2}+1$ points of $\gamma$; among these points, $\frac{n}{2}$ are {\em free}, i.e., they are chosen in an arbitrary way on the space curve, and the last point $\gamma(b)$ is always fixed. For the {\em lower hull}, $\gamma(a)$ is fixed instead of $\gamma(b)$. If $n$ is odd the picture is asymmetric. In this case the {\em upper hull} fixes the $2$ endpoints $\gamma(a)$ and $\gamma(b)$ and has $\frac{n-1}{2}$ free points. The lower hull has $\frac{n+1}{2}$ free points and no fixed points. The case $n=0$ (the convex hull of an interval), not mentioned in Theorem~\ref{mth010}, has two fixed points $\gamma(a)$ and $\gamma(b)$; it was helpful for guessing the construction in higher dimensions. Compare with the exact pictures for the cases $n+1= 2,3,4$ shown in Figures \ref{fig:2d}, \ref{fig:3d} and \ref{fig:4d}. }
\label{fig:sketches}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=.49\textwidth]{4d_Top.png}
\caption{For $n+1=3+1$ the set $\mathrm{conv}(\overline{\gamma}([a,b]))$ is foliated by triangles (simplices) with vertices $\overline{\gamma}(a), \overline{\gamma}(b)$ and $\overline{\gamma}(t)$ for each $t \in (a,b)$. The function $B^{\sup}$ is linear on each such triangle and $B^{\sup}(\overline{\gamma})=\gamma_{4}$. Also $B^{\sup}=B^{\inf}$ on edges of each triangle.}
\label{fig:4d}
\end{figure}
\vskip0.3cm
Perhaps it may seem that the total positivity of the torsion, i.e., the fact that the leading principal minors of $(\gamma', \ldots, \gamma^{(n+1)})$ have positive signs on $(a,b)$, is a redundant assumption for Theorem~\ref{mth010} to hold true. However, the next proposition shows that the total positivity is a sensitive assumption.
\begin{proposition}\label{sensitive}
There exists a curve $\gamma : [-1,1] \to \mathbb{R}^{2+1}$ in $C^{\infty}([-1,1])$ such that the leading principal minors of $(\gamma', \gamma'', \gamma''')$ are positive on $[-1,1]$, except that the $2\times2$ and $3\times 3$ principal minors vanish at $t=0$, and such that the map $B^{\sup}$ defined by (\ref{vog}) is not concave on $\mathrm{conv}(\overline{\gamma}([-1,1]))$.
\end{proposition}
The next theorem answers Question~\ref{que1}, and also provides us with optimizers, i.e., the random variables $Y$ which attain supremum (infimum) in Question~\ref{que1}.
\begin{theorem}\label{mth1} Let $\gamma : [a,b] \to \mathbb{R}^{n+1}$, $\gamma \in C([a,b]) \cap C^{n+1}((a,b))$ be such that all the leading principal minors of the $(n+1)\times (n+1)$ matrix $(\gamma'(t), \ldots, \gamma^{(n+1)}(t))$ are positive for all $t \in (a,b)$. Then
\begin{align}
\sup_{a\leq Y\leq b} \{ \mathbb{E}\gamma_{n+1}(Y) \, :\, \mathbb{E}\bar{\gamma}(Y)=x\} &=B^{\mathrm{sup}}(x),\label{extr01}\\
\inf_{a\leq Y\leq b} \{ \mathbb{E}\gamma_{n+1}(Y) \, :\, \mathbb{E}\bar{\gamma}(Y)=x\} &=B^{\mathrm{inf}}(x),\label{extr02}
\end{align}
hold for all $x \in \mathrm{conv}(\overline{\gamma}([a,b]))$, where $B^{\sup}$ and $B^{\inf}$ are given by (\ref{vog}) and (\ref{vyp}). Moreover, given $x \in \mathrm{conv}(\overline{\gamma}([a,b]))$ the supremum in (\ref{extr01}) (infimum in (\ref{extr02})) is attained by the random variable $\zeta(x)$ (the random variable $\xi(x)$) defined as follows:
Case 1: $n=2\ell-1$. Then by (\ref{b2l-1}) and (\ref{diff2l-1u}), $x =(1-\sum_{j=1}^{\ell}\beta_{j})\overline{\gamma}(a)+\sum_{j=2}^{\ell}\beta_{j} \overline{\gamma}(x_{j})+\beta_{1}\overline{\gamma}(b)$ for some $(\beta_{1}, \ldots, \beta_{\ell}, x_{2}, \ldots, x_{\ell})\in \Delta_{c}^{\ell}\times\Delta_{*}^{\ell-1}$. Set $\mathbb{P}(\zeta(x)=a)=1-\sum_{j=1}^{\ell}\beta_{j}$, $\mathbb{P}(\zeta(x)=b)=\beta_{1}$, and $\mathbb{P}(\zeta(x)=x_{j})=\beta_{j}$ for $j=2,\ldots, \ell$. Also, by (\ref{b2l-1}) and (\ref{diff2l-1l}), $x =(1-\sum_{j=2}^{\ell}\lambda_{j})\overline{\gamma}(y_{1})+\sum_{j=2}^{\ell}\lambda_{j} \overline{\gamma}(y_{j})$ for some $(\lambda_{2}, \ldots, \lambda_{\ell}, y_{1}, \ldots, y_{\ell})\in \Delta_{c}^{\ell-1}\times\Delta_{*}^{\ell}$. Set $\mathbb{P}(\xi(x)=y_{1})=1-\sum_{j=2}^{\ell}\lambda_{j}$, and $\mathbb{P}(\xi(x)=y_{j})=\lambda_{j}$ for $j=2,\ldots, \ell$.
Case 2: $n=2\ell$. Then by (\ref{b2l}) and (\ref{diff2lu}), $x =\sum_{j=1}^{\ell}\beta_{j} \overline{\gamma}(x_{j})+(1-\sum_{j=1}^{\ell}\beta_{j})\overline{\gamma}(b)$ for some $(\beta_{1}, \ldots, \beta_{\ell}, x_{1}, \ldots, x_{\ell})\in \Delta_{c}^{\ell}\times\Delta_{*}^{\ell}$. Set $\mathbb{P}(\zeta(x)=b)=1-\sum_{j=1}^{\ell}\beta_{j}$, and $\mathbb{P}(\zeta(x)=x_{j})=\beta_{j}$ for $j=1,\ldots, \ell$. Also, by (\ref{b2l}) and (\ref{diff2ll}), $x =(1-\sum_{j=1}^{\ell}\lambda_{j})\overline{\gamma}(a)+\sum_{j=1}^{\ell}\lambda_{j} \overline{\gamma}(y_{j})$ for some $(\lambda_{1}, \ldots, \lambda_{\ell}, y_{1}, \ldots, y_{\ell})\in \Delta_{c}^{\ell}\times\Delta_{*}^{\ell}$. Set $\mathbb{P}(\xi(x)=a)=1-\sum_{j=1}^{\ell}\lambda_{j}$, and $\mathbb{P}(\xi(x)=y_{j})=\lambda_{j}$ for $j=1,\ldots, \ell$.
\end{theorem}
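For example, for $n=2$ and the moment curve on $[0,1]$, the two-atom law of $\zeta(x)$ from Case 2 can be checked numerically; the sketch below (ours; the particular values of $\beta_{1}, x_{1}$ are arbitrary assumptions) verifies that it matches the prescribed moments and attains the value $B^{\sup}$ given by the closed form recalled in Subsection~\ref{markovs}.
\begin{verbatim}
# Sketch (ours) of the optimizer zeta(x) for n = 2 and gamma(t)=(t,t^2,t^3):
# P(zeta = x1) = beta1, P(zeta = b) = 1 - beta1.
import numpy as np

b = 1.0
beta1, x1 = 0.6, 0.3                        # a point of Delta_c^1 x Delta_*^1
atoms = np.array([x1, b])
probs = np.array([beta1, 1.0 - beta1])

m1, m2, m3 = ((probs * atoms**k).sum() for k in (1, 2, 3))
B_sup = m2 - (m1 - m2)**2 / (1.0 - m1)      # closed form for B^sup, n = 2
print(m3, B_sup)                            # equal up to rounding
\end{verbatim}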
The next corollary recovers the result of Karlin--Sharpley \cite{Karlin2}, i.e., the equations (\ref{nobel}), in the case of the moment curve.
\begin{corollary}\label{nobel2}
Let $\gamma(t) = (t, \ldots, t^{n}, t^{n+1}) : [0,1] \to \mathbb{R}^{n+1}$. If $x=(x_{1}, \ldots, x_{n}) \in \mathrm{int}(\mathrm{conv} (\overline{\gamma}([0,1])))$ then $B^{\sup}(x)$ and $B^{\inf}(x)$ are the unique solutions $x_{n+1}$ of the equations $K_{n+1}=0$ and $S_{n+1}=0$ correspondingly, where $K_{n+1}$ and $S_{n+1}$ are defined by (\ref{kar1}) and (\ref{Sharp1}).
\end{corollary}
In the next corollary we give a local sufficient condition for a curve to be convex. Recall that a curve $\gamma : [a,b] \to \mathbb{R}^{n}$ is called {\em convex} if no $n+1$ of its distinct points lie in a single affine hyperplane.
\begin{corollary}\label{karatecor}
Let $\gamma : [a,b] \to \mathbb{R}^{n}$, $\gamma \in C([a,b]) \cap C^{n}((a,b))$ be such that all the leading principal minors of the $n\times n$ matrix $(\gamma'(t), \ldots, \gamma^{(n)}(t))$ are positive for all $t \in (a,b)$. Then $\gamma$ is convex. In particular, for any integer $k$, $1\leq k \leq n$, the equation $c_{0}+c_{1}\gamma_{1}(t)+\ldots+c_{k}\gamma_{k}(t)=0$ has at most $k$ roots on $[a,b]$ provided that $(c_{0}, \ldots, c_{k})\neq (0, \ldots, 0)$.
\end{corollary}
Recall the definition of Carath\'eodory number $b(\gamma)$ of a curve $\gamma : [a,b] \to \mathbb{R}^{n}$, i.e., the smallest integer $k$ such that any point of $\mathrm{conv}(\gamma([a,b]))$ can be represented as convex combination of at most $k$ points of $\gamma([a,b])$, see (\ref{karate1}). The next corollary directly follows from Theorem~\ref{mth010} (parts (\ref{b2l-1}), (\ref{diff2l-1l}), (\ref{b2l}), and (\ref{diff2ll})).
\begin{corollary}\label{karatekid}
Let $\gamma : [a,b] \to \mathbb{R}^{n}$, $\gamma \in C([a,b]) \cap C^{n}((a,b))$ be a curve with totally positive torsion. Then its Carath\'eodory number equals $\lfloor \frac{n+2}{2}\rfloor$.
\end{corollary}
In the next corollary we obtain formulas for the volumes of the convex hulls of a space curve having totally positive torsion both in even and odd dimensions.
\begin{corollary}\label{provolume}
Let $\gamma : [a,b] \to \mathbb{R}^{n}$, $\gamma \in C([a,b]) \cap C^{n}((a,b))$ be a curve with totally positive torsion. If $n=2 \ell$ then
\begin{align*}
\mathrm{Vol}(\mathrm{conv}(\gamma([a,b])))& \\
=\frac{(-1)^{\frac{\ell(\ell-1)}{2}}}{(2\ell)!}& \int_{a\leq x_{1}\leq \ldots \leq x_{\ell} \leq b} \mathrm{det}(\gamma(x_{1})-\gamma(a), \ldots, \gamma(x_{\ell})-\gamma(a), \gamma'(x_{1}), \ldots, \gamma'(x_{\ell})) \, dx \\
=\frac{(-1)^{\frac{\ell(\ell-1)}{2}}}{(2\ell)!}& \int_{a\leq x_{1}\leq \ldots \leq x_{\ell} \leq b} \mathrm{det}(\gamma(x_{1})-\gamma(b), \ldots, \gamma(x_{\ell})-\gamma(b), \gamma'(x_{1}), \ldots, \gamma'(x_{\ell})) \, dx.
\end{align*}
If $n=2\ell-1$ then
\begin{align*}
&\mathrm{Vol}(\mathrm{conv}(\gamma([a,b]))) \\
&=\frac{(-1)^{\frac{(\ell-1)(\ell-2)}{2}}}{(2\ell-1)!} \int_{a\leq x_{2}\leq \ldots \leq x_{\ell} \leq b} \mathrm{det}(\gamma(b)-\gamma(a), \gamma(x_{2})-\gamma(a), \ldots, \gamma(x_{\ell})-\gamma(a), \gamma'(x_{2}), \ldots, \gamma'(x_{\ell})) \, dx \\
&=\frac{(-1)^{\frac{\ell(\ell-1)}{2}}}{(2\ell-1)!} \int_{a\leq x_{1}\leq \ldots \leq x_{\ell} \leq b} \mathrm{det}(\gamma(x_{2})-\gamma(x_{1}), \ldots, \gamma(x_{\ell})-\gamma(x_{1}), \gamma'(x_{1}), \ldots, \gamma'(x_{\ell}))\, dx.
\end{align*}
\end{corollary}
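The simplest instance is $n=2$, $\ell=1$, and the planar moment curve $\gamma(t)=(t,t^{2})$ on $[0,1]$, where the convex hull is the region between the chord and the parabola, of area $1/2-1/3=1/6$; the following \texttt{sympy} sketch (ours) confirms that both endpoint variants of the formula reproduce this value.
\begin{verbatim}
# Symbolic check (ours) of the volume formula for n = 2, l = 1 and
# the planar moment curve gamma(t) = (t, t^2) on [0,1]; the answer is 1/6.
import sympy as sp

x = sp.symbols('x')
g  = sp.Matrix([x, x**2])
dg = g.diff(x)

for r in (0, 1):  # both endpoint variants of the formula
    integrand = sp.Matrix.hstack(g - g.subs(x, r), dg).det()
    print(sp.integrate(integrand, (x, 0, 1)) / sp.factorial(2))  # 1/6, 1/6
\end{verbatim}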
Let $\mathrm{Area}$ denote the $n$-dimensional Lebesgue measure in $\mathbb{R}^{n+1}$, and let $A^{\mathrm{Tr}}$ denote the transpose of a matrix $A$.
\begin{corollary}\label{area1}
Let $\gamma : [a,b] \to \mathbb{R}^{n+1}$, $\gamma \in C^{1}([a,b]) \cap C^{n+1}((a,b))$ be a curve with totally positive torsion. If $n=2 \ell$ then
\begin{align*}
\mathrm{Area}(\partial \; \mathrm{conv}(\gamma([a,b]))) = \frac{1}{n!} \int_{a\leq x_{1}\leq \ldots \leq x_{\ell}\leq b} \left( \sqrt{\det S_{a}^{\mathrm{Tr}}S_{a}} +\sqrt{\det S_{b}^{\mathrm{Tr}}S_{b}} \right) dx,
\end{align*}
where $S_{r} = (\gamma(x_{1})-\gamma(r), \ldots, \gamma(x_{\ell})-\gamma(r), \gamma'(x_{1}), \ldots, \gamma'(x_{\ell}))$ is a $(2\ell+1)\times 2\ell$ matrix, and $dx$ is the $\ell$-dimensional Lebesgue measure.
If $n=2\ell-1$ then
\begin{align*}
\mathrm{Area}(\partial \; \mathrm{conv}(\gamma([a,b]))) = \frac{1}{n!} \int_{a\leq x_{2}\leq \ldots \leq x_{\ell}\leq b} \sqrt{\det \Psi^{\mathrm{Tr}}\Psi} \, d\tilde{x} +\frac{1}{n!} \int_{a\leq x_{1}\leq \ldots \leq x_{\ell}\leq b} \sqrt{\det \Phi^{\mathrm{Tr}}\Phi} \, dx,
\end{align*}
where $\Psi = (\gamma(b)-\gamma(a), \gamma(x_{2})-\gamma(a), \ldots, \gamma(x_{\ell})-\gamma(a), \gamma'(x_{2}), \ldots, \gamma'(x_{\ell}))$ and $\Phi = (\gamma(x_{2})-\gamma(x_{1}), \ldots, \gamma(x_{\ell})-\gamma(x_{1}), \gamma'(x_{1}), \ldots, \gamma'(x_{\ell}))$ are matrices of size $2\ell \times (2\ell-1)$, and $d\tilde{x}$ denotes the $(\ell-1)$-dimensional Lebesgue measure.
\end{corollary}
\section{The proof of main results}\label{damtkiceba}
Sometimes we will omit the index $n$ and simply write $U, L$ instead of $U_{n}, L_{n}$; it will be clear from the context what the corresponding number $n$ is. Before we start proving Theorem~\ref{mth010}, let us first state several lemmas that will be helpful throughout the rest of the paper. The next lemma illustrates a {\em local to global} principle.
\begin{lemma}\label{klasika}
If the torsion of $\gamma$ is totally positive on $(a,b)$ then
\begin{align}\label{dplane}
\det(\gamma'(x_{1}), \gamma'(x_{2}), \ldots \gamma'(x_{n+1}))>0
\end{align}
for all $a<x_{1}<\ldots<x_{n+1}<b$.
\end{lemma}
\begin{proof}
Without loss of generality assume $[a,b]=[0,1]$. The lemma can be derived from identity (9) obtained in \cite{DW}. As the lemma is an important step in the proofs of the main results stated in this paper, for the reader's convenience we decided to include a proof of the lemma without invoking the identity from \cite{DW}.
We have
\begin{align}
&\det \begin{pmatrix}
\gamma'_{1}(x_{1}) & \gamma'_{1}(x_{2}) & \dots & \gamma'_{1}(x_{n+1})\\
\gamma'_{2}(x_{1}) & \gamma'_{2}(x_{2}) & \ldots & \gamma'_{2}(x_{n+1})\\
\vdots & \vdots &\ddots & \vdots \\
\gamma'_{n+1}(x_{1}) & \gamma'_{n+1}(x_{2}) & \dots & \gamma'_{n+1}(x_{n+1})
\end{pmatrix} = \nonumber\\
&\det \begin{pmatrix}
1 & 1& \dots & 1\\
\frac{\gamma'_{2}(x_{1})}{\gamma'_{1}(x_{1})} & \frac{\gamma'_{2}(x_{2})}{\gamma'_{1}(x_{2})} & \ldots & \frac{\gamma'_{2}(x_{n+1})}{\gamma'_{1}(x_{n+1})}\\
\vdots & \vdots &\ddots & \vdots \\
\frac{\gamma'_{n+1}(x_{1})}{\gamma'_{1}(x_{1})} & \frac{\gamma'_{n+1}(x_{2})}{\gamma'_{1}(x_{2})} & \dots & \frac{\gamma'_{n+1}(x_{n+1})}{\gamma'_{1}(x_{n+1})}
\end{pmatrix}\, \prod_{j=1}^{n+1} \gamma'_{1}(x_{j}) = \nonumber\\
&\det \begin{pmatrix}
1 & 0& \dots & 0\\
\frac{\gamma'_{2}(x_{1})}{\gamma'_{1}(x_{1})} & \frac{\gamma'_{2}(x_{2})}{\gamma'_{1}(x_{2})}- \frac{\gamma'_{2}(x_{1})}{\gamma'_{1}(x_{1})} & \ldots & \frac{\gamma'_{2}(x_{n+1})}{\gamma'_{1}(x_{n+1})}- \frac{\gamma'_{2}(x_{1})}{\gamma'_{1}(x_{1})} \\
\vdots & \vdots &\ddots & \vdots \\
\frac{\gamma'_{n+1}(x_{1})}{\gamma'_{1}(x_{1})} & \frac{\gamma'_{n+1}(x_{2})}{\gamma'_{1}(x_{2})} -\frac{\gamma'_{n+1}(x_{1})}{\gamma'_{1}(x_{1})} & \dots & \frac{\gamma'_{n+1}(x_{n+1})}{\gamma'_{1}(x_{n+1})} -\frac{\gamma'_{n+1}(x_{1})}{\gamma'_{1}(x_{1})}
\end{pmatrix}\, \prod_{j=1}^{n+1} \gamma'_{1}(x_{j}) = \nonumber\\
&\det \begin{pmatrix}
\frac{\gamma'_{2}(x_{2})}{\gamma'_{1}(x_{2})}- \frac{\gamma'_{2}(x_{1})}{\gamma'_{1}(x_{1})} & \ldots & \frac{\gamma'_{2}(x_{n+1})}{\gamma'_{1}(x_{n+1})}- \frac{\gamma'_{2}(x_{1})}{\gamma'_{1}(x_{1})} \\
\vdots &\ddots & \vdots \\
\frac{\gamma'_{n+1}(x_{2})}{\gamma'_{1}(x_{2})} -\frac{\gamma'_{n+1}(x_{1})}{\gamma'_{1}(x_{1})} & \dots & \frac{\gamma'_{n+1}(x_{n+1})}{\gamma'_{1}(x_{n+1})} -\frac{\gamma'_{n+1}(x_{1})}{\gamma'_{1}(x_{1})}
\end{pmatrix}\, \prod_{j=1}^{n+1} \gamma'_{1}(x_{j}) \stackrel{(*)}{=} \nonumber\\
&\det \begin{pmatrix}
\frac{\gamma'_{2}(x_{2})}{\gamma'_{1}(x_{2})}- \frac{\gamma'_{2}(x_{1})}{\gamma'_{1}(x_{1})} & \ldots & \frac{\gamma'_{2}(x_{n+1})}{\gamma'_{1}(x_{n+1})}- \frac{\gamma'_{2}(x_{n})}{\gamma'_{1}(x_{n})} \\
\vdots &\ddots & \vdots \\
\frac{\gamma'_{n+1}(x_{2})}{\gamma'_{1}(x_{2})} -\frac{\gamma'_{n+1}(x_{1})}{\gamma'_{1}(x_{1})} & \dots & \frac{\gamma'_{n+1}(x_{n+1})}{\gamma'_{1}(x_{n+1})} -\frac{\gamma'_{n+1}(x_{n})}{\gamma'_{1}(x_{n})}
\end{pmatrix}\, \prod_{j=1}^{n+1} \gamma'_{1}(x_{j}) = \nonumber\\
&\int_{x_{1}}^{x_{2}} \int_{x_{2}}^{x_{3}} \dots \int_{x_{n}}^{x_{n+1}} \det
\begin{pmatrix}
\left(\frac{\gamma'_{2}(y_{1})}{\gamma'_{1}(y_{1})}\right)' & \ldots & \left(\frac{\gamma'_{2}(y_{n})}{\gamma'_{1}(y_{n})}\right)' \\
\vdots &\ddots & \vdots \\
\left(\frac{\gamma'_{n+1}(y_{1})}{\gamma'_{1}(y_{1})}\right)' & \dots & \left(\frac{\gamma'_{n+1}(y_{n})}{\gamma'_{1}(y_{n})}\right)'
\end{pmatrix} dy_{1} dy_{2}\dots dy_{n}\, \prod_{j=1}^{n+1} \gamma'_{1}(x_{j}), \nonumber
\end{align}
where in the equality $(*)$ we used the property of the determinant that if $v_{1}, \ldots, v_{k}$ are vectors in $\mathbb{R}^{k-1}$ then $\det(v_{2}-v_{1}, v_{3}-v_{1}, \ldots, v_{k}-v_{1}) = \det(v_{2}-v_{1}, v_{3}-v_{2}, \ldots, v_{k}-v_{k-1})$, obtained by subtracting consecutive columns from each other.
The leading principal minors of the matrix $(\gamma', \gamma'', \ldots, \gamma^{(n+1)})$ are positive. In particular $\gamma'_{1}$ is positive on $(0,1)$, and hence the factor $\prod_{j=1}^{n+1}\gamma'_{1}(x_{j})>0$. To verify (\ref{dplane}) it suffices to show
\begin{align}\label{dplane2}
\det
\begin{pmatrix}
\left(\frac{\gamma'_{2}(y_{1})}{\gamma'_{1}(y_{1})}\right)' & \ldots & \left(\frac{\gamma'_{2}(y_{n})}{\gamma'_{1}(y_{n})}\right)' \\
\vdots &\ddots & \vdots \\
\left(\frac{\gamma'_{n+1}(y_{1})}{\gamma'_{1}(y_{1})}\right)' & \dots & \left(\frac{\gamma'_{n+1}(y_{n})}{\gamma'_{1}(y_{n})}\right)'
\end{pmatrix} >0 \quad \text{for all} \quad 0<y_{1}<y_{2}<\ldots<y_{n}<1.
\end{align}
We will repeat the same computation as before, but now for the determinant in (\ref{dplane2}); eventually we will see that the proof of the lemma amounts to $n$ applications of the previous computation together with an identity for determinants described below.
Before we proceed let us make a couple of observations. We started with the determinant of an $(n+1)\times (n+1)$ matrix. Next, we divided the columns by the entries of the first row, which all equal values of $\gamma'_{1}>0$, and after Gaussian elimination and the fundamental theorem of calculus we ended up with the integral of an $n \times n$ determinant, having acquired the factor $\prod_{j=1}^{n+1} \gamma'_{1}(x_{j}) >0$. To repeat the same computation for the determinant in (\ref{dplane2}), and for the ones obtained from it in a similar manner, we should verify that the entries in the first row of all such new matrices (of smaller sizes) are positive. These entries change as follows:
\begin{align}\label{iteracia}
\gamma'_{1} \stackrel{\mathrm{step \,1}}{\mapsto} \left(\frac{\gamma'_{2}}{\gamma'_{1}}\right)' \stackrel{\mathrm{step\, 2}}{\mapsto} \left( \frac{\left(\frac{\gamma'_{3}}{\gamma'_{1}}\right)'}{\left(\frac{\gamma'_{2}}{\gamma'_{1}}\right)'} \right)' \stackrel{\mathrm{step\, 3}}{\mapsto} \left(\frac{\left( \frac{\left(\frac{\gamma'_{4}}{\gamma'_{1}}\right)'}{\left(\frac{\gamma'_{2}}{\gamma'_{1}}\right)'} \right)'}{\left( \frac{\left(\frac{\gamma'_{3}}{\gamma'_{1}}\right)'}{\left(\frac{\gamma'_{2}}{\gamma'_{1}}\right)'} \right)'}\right)' \stackrel{\mathrm{step\, 4}}{\mapsto} \ldots\, .
\end{align}
We claim that after the $k$-th step, $1 \leq k \leq n$, the obtained entry is of the form $\frac{\Delta_{k+1} \Delta_{k-1}}{\Delta^{2}_{k}}$,
where $\Delta_{\ell }$ denotes the leading $\ell \times \ell $ principal minor of the matrix $(\gamma',\gamma'', \ldots, \gamma^{(n+1)})$ (by definition we set $\Delta_{0}:=1$). Assuming the claim, Lemma~\ref{klasika} follows immediately because of the condition $\Delta_{\ell}>0$ on $(0,1)$ for all $0\leq \ell \leq n+1$.
To verify the claim we set $T = (\gamma', \gamma'', \ldots, \gamma^{(n+1)})$. Given subsets $I, J \subset \{1, \ldots, n+1\}$ we define $T_{I\times J}$ to be the determinant of the submatrix of $T$ formed by choosing the rows of the index set $I$ and the columns of index set $J$.
We have
\begin{align}
\left(\frac{\gamma'_{2}}{\gamma'_{1}}\right)' &= \frac{\gamma''_{2}\gamma'_{1}-\gamma''_{1}\gamma'_{2}}{(\gamma'_{1})^{2}}=\frac{T_{\{1,2\}\times\{1,2\}}}{T^{2}_{\{1\}\times \{1\}}}, \nonumber\\
\left(\frac{\gamma'_{\ell}}{\gamma'_{1}}\right)' &= \frac{T_{\{1,\ell\}\times\{1,2\}}}{T^{2}_{\{1\}\times \{1\}}}, \quad \text{for all} \quad \ell \geq 2; \nonumber\\
\left( \frac{\left(\frac{\gamma'_{\ell}}{\gamma'_{1}}\right)'}{\left(\frac{\gamma'_{2}}{\gamma'_{1}}\right)'} \right)' &= \left(\frac{T_{\{1,\ell\}\times\{1,2\}}}{T_{\{1,2\}\times\{1,2\}}}\right)' \stackrel{(*)}{=} \frac{T_{\{1,\ell\}\times\{1,3\}} T_{\{1,2\}\times \{1,2\}} - T_{\{1,\ell\}\times\{1,2\}} T_{\{1,2\}\times \{1,3\}}}{T^{2}_{\{1,2\}\times\{1,2\}}} \nonumber\\
&\stackrel{(**)}{=}\frac{T_{\{1,2,\ell\}\times\{1,2,3\}}\, T_{\{1\}\times\{1\}}}{T^{2}_{\{1,2\}\times\{1,2\}}}, \quad \text{for all} \quad \ell \geq 3, \label{ind2}
\end{align}
where $(*)$ follows from the identity $(T_{I\times \{1,2,\ldots, k-1, k\}})'=T_{I\times \{1,2,\ldots, k-1, k+1\}}$, and $(**)$ follows from the following general identity for determinants:
\begin{align}\label{tozhd1}
T_{\{[k-2], \ell\}\times \{[k-2], k\}} T_{[k-1]\times [k-1]}-T_{\{[k-2], \ell\}\times [k-1]} T_{[k-1]\times\{[k-2], k\}}=T_{\{[k-1],\ell\}\times [k]} T_{[k-2]\times [k-2]}
\end{align}
for all $k, 3 \leq k \leq n+1$, where we set $[d]:=\{1,2, \ldots, d\}$ for a positive integer $d$. Before we verify the identity (\ref{tozhd1}), notice that it also implies
\begin{align}
\left(\frac{T_{\{[k-2], \ell \}\times[k-1]}}{T_{[k-1]\times [k-1]}}\right)'&=\frac{T_{\{[k-2], \ell\}\times \{[k-2], k\}} T_{[k-1]\times [k-1]}-T_{\{[k-2], \ell\}\times [k-1]} T_{[k-1]\times\{[k-2], k\}}}{T^{2}_{[k-1]\times [k-1]}} \nonumber\\
&= \frac{T_{\{[k-1],\ell\}\times [k]} T_{[k-2]\times [k-2]}}{T^{2}_{[k-1]\times [k-1]}}, \label{tozhd2}
\end{align}
for all $k, \ell$ such that $3\leq k \leq n+1$ and $k-1\leq \ell \leq n+1$. Therefore
\begin{align*}
\left(\frac{\left( \frac{\left(\frac{\gamma'_{\ell}}{\gamma'_{1}}\right)'}{\left(\frac{\gamma'_{2}}{\gamma'_{1}}\right)'} \right)'}{\left( \frac{\left(\frac{\gamma'_{3}}{\gamma'_{1}}\right)'}{\left(\frac{\gamma'_{2}}{\gamma'_{1}}\right)'} \right)'}\right)' \stackrel{(\ref{ind2})}{=} \left(\frac{T_{\{1,2,\ell\}\times\{1,2,3\}}}{T_{\{1,2,3\}\times\{1,2,3\}}}\right)' \stackrel{(\ref{tozhd2})}{=} \frac{T_{\{[3],\ell\}\times [4]} T_{[2]\times [2]}}{T^{2}_{[3]\times [3]}}.
\end{align*}
In particular, after step 3, the entry in (\ref{iteracia}) becomes $\frac{T_{[4]\times [4]} T_{[2]\times [2]}}{T^{2}_{[3]\times [3]}}>0$ because $T_{[k]\times[k]}=\Delta_{k}$. It then follows that after step $k$, the entry in (\ref{iteracia}) takes the form
\begin{align*}
\left(\frac{T_{\{[k-1],k+1\}\times[k]}}{T_{[k]\times[k]}}\right)' \stackrel{(\ref{tozhd2})}{=} \frac{T_{[k+1]\times[k+1]} T_{[k-1]\times [k-1]}}{T^{2}_{[k]\times[k]}} = \frac{\Delta_{k+1} \Delta_{k-1}}{\Delta_{k}^{2}} >0,
\end{align*}
for all $1\leq k \leq n$. Thus the proof of Lemma~\ref{klasika} is complete provided that the determinant identity (\ref{tozhd1}) is verified. Let $\Delta$ be an invertible $(k-2)\times (k-2)$ matrix, $p,w,u,q \in \mathbb{R}^{k-2}$, and let $a,b,c,d \in \mathbb{R}$. To verify the identity (\ref{tozhd1}) it suffices to show that
\begin{align}\label{sila}
\det \begin{pmatrix}
\Delta & q^{T} \\
w & a
\end{pmatrix} \det \begin{pmatrix}
\Delta & u^{T} \\
p & b
\end{pmatrix} - \det \begin{pmatrix}
\Delta & u^{T} \\
w & c
\end{pmatrix} \det \begin{pmatrix}
\Delta & q^{T} \\
p & d
\end{pmatrix} = \det \begin{pmatrix}
\Delta & u^{T} & q^{T} \\
p & b & d \\
w & c & a
\end{pmatrix} \det \Delta.
\end{align}
Since $\det \begin{pmatrix}
A & B \\
C & D
\end{pmatrix} = \det A \, \det (D -CA^{-1}B)$ for an invertible $m\times m$ matrix $A$, and arbitrary $n\times n$ matrix $D$, $m\times n$ matrix $B$, and $n\times m$ matrix $C$, we see that (\ref{sila}) simplifies to
\begin{align*}
&(\det \Delta)^{2} \left[ (a-w\Delta^{-1} q^{T}) (b-p\Delta^{-1} u^{T})-(c-w \Delta^{-1} u^{T})(d-p\Delta^{-1} q^{T})\right]=\\
&(\det \Delta)^{2} \det \left( \begin{pmatrix}
b & d \\
c & a
\end{pmatrix}- \begin{pmatrix}
p\\
w
\end{pmatrix} \Delta^{-1} \begin{pmatrix}
u^{T} & q^{T}
\end{pmatrix}\right),
\end{align*}
which holds because
$\begin{pmatrix}
p\\
w
\end{pmatrix} \Delta^{-1} \begin{pmatrix}
u^{T} & q^{T}
\end{pmatrix} = \begin{pmatrix}
p \Delta^{-1}u^{T} & p\Delta^{-1} q^{T}\\
w \Delta^{-1} u^{T} & w \Delta^{-1}q^{T}
\end{pmatrix}$. The lemma is proved.
\end{proof}
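A quick numerical spot check of Lemma~\ref{klasika} (ours, for reassurance only): for the moment curve in $\mathbb{R}^{4}$ the determinant in (\ref{dplane}) reduces to $4!\prod_{i<j}(x_{j}-x_{i})>0$, and random ordered tuples confirm the positivity.
\begin{verbatim}
# Numerical spot check (ours) of the lemma for gamma(t) = (t,t^2,t^3,t^4).
import numpy as np

def dgamma(t):
    return np.array([1.0, 2*t, 3*t**2, 4*t**3])  # gamma'(t)

rng = np.random.default_rng(2)
for _ in range(1000):
    xs = np.sort(rng.uniform(0.0, 1.0, size=4))
    assert np.linalg.det(np.column_stack([dgamma(t) for t in xs])) > 0
print("det(gamma'(x_1), ..., gamma'(x_4)) > 0 for ordered x_1 < ... < x_4")
\end{verbatim}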
\begin{corollary}\label{klasikac}
Let $a<b$, and let $\beta : [a,b] \to \mathbb{R}^{m}$ be a curve, $\beta \in C([a,b])\cap C^{m}((a,b))$, with totally positive torsion. Choose any $a\leq z_{1}<\ldots<z_{m}\leq b$ and $r \in [a,b]\setminus\{z_{1}, \ldots, z_{m}\}$. Then the vectors $\beta(z_{1})-\beta(r), \ldots, \beta(z_{m})-\beta(r)$ are linearly independent in $\mathbb{R}^{m}$.
\end{corollary}
\begin{proof}
Let $\nu$, $0\leq \nu \leq m$, be chosen in such a way that $r \in [z_{\nu}, z_{\nu+1}]$. Here we set $z_{0}:=a$, and $z_{m+1}:=b$. We have
\begin{align*}
&\det(\beta(z_{1})-\beta(r), \ldots, \beta(z_{m})-\beta(r)) =\\
&\pm \det(\beta(z_{2})-\beta(z_{1}),\ldots, \beta(r)-\beta(z_{\nu}), \beta(z_{\nu+1})-\beta(r), \ldots, \beta(z_{m})-\beta(z_{m-1}))=\\
& \pm \int_{z_{m-1}}^{z_{m}}\ldots \int_{r}^{z_{\nu+1}}\int_{z_{\nu}}^{r}\ldots \int_{z_{1}}^{z_{2}}\det(\beta'(s_{1}), \ldots,\beta'(s_{\nu}), \beta'(s_{\nu+1}),\ldots \beta'(s_{m}))ds_{1}\ldots ds_{m}\neq 0
\end{align*}
by Lemma~\ref{klasika}.
\end{proof}
Certain parts of the proof of Theorem~\ref{mth010} will require induction on the dimension $n+1$. In particular, we will need to verify the base cases $n=1$ (the odd case) and $n=2$ (the even case).
In what follows, without loss of generality we assume $[a,b]=[0,1]$ and $\gamma(0)=0$.
\subsection{The proof of Theorem~\ref{mth010} in dimension 1+1}\label{kkl1}
This case is trivial and Theorem~\ref{mth010} essentially follows by looking at Fig.~\ref{fig:2d}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=.6\textwidth]{2d.pdf}
\end{center}
\caption{ Proof of Theorem \ref{mth010} for dimension $n+1 = 1+1$.}
\label{fig:2d}
\end{figure}
If we reparametrize the curve $\gamma$ as $\tilde{\gamma}(t):= \gamma(\gamma_{1}^{-1}(t))$, $t \in (0,\gamma_{1}(1))$, then $\tilde{\gamma}$ has totally positive torsion. So $\tilde{\gamma}(t) = (t, g(t))$, $t \in (0, \gamma_{1}(1))$, where $g(0)=0$ and $\frac{d^{2}}{dt^{2}}g(t)>0$ for all $t\in (0,\gamma_{1}(1))$. The map $U_{1}(\beta_{1})=\beta_{1}\gamma(1)$, $\beta_{1} \in [0,1]$, parametrizes the segment joining the endpoints of $\tilde{\gamma}$, and $L_{1}(x_{1})=\gamma(x_{1})$, $x_{1} \in [0,1]$, parametrizes the curve itself. It is easy to see that in this case Theorem~\ref{mth010} holds true.
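For instance, for $\gamma(t)=(t,t^{2})$ on $[0,1]$ this gives $B^{\sup}(x)=x$ (the chord joining $(0,0)$ and $(1,1)$) and $B^{\inf}(x)=x^{2}$, and indeed $B^{\sup}(\overline{\gamma}(t))=t\geq t^{2}=\gamma_{2}(t)$, in accordance with the footnote preceding (\ref{mincon1}).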
\subsection{The proof of Theorem~\ref{mth010} in dimension 2+1}
\subsubsection{The lower hull}\label{3low}
Recall that
\begin{align*}
\overline{L}_{2} :\Delta_{c}^{1}\times \Delta_{*}^{1} = [0,1]^{2} \ni (\alpha, x) \mapsto \alpha \overline{\gamma}(x).
\end{align*}
We claim
\begin{align}
&\overline{L}_{2}(\partial ([0,1]^{2})) =\partial ( \mathrm{conv}(\overline{\gamma}([0,1]))); \label{3db}\\
&\overline{L}_{2} :\mathrm{int}([0,1]^{2}) \mapsto \mathrm{int}(\mathrm{conv}(\overline{\gamma}([0,1]))) \quad \text{is diffeomorphism.} \label{3ddiff}
\end{align}
To verify (\ref{3db}) it suffices to show that $\overline{\gamma}$ is a convex curve in $\mathbb{R}^{2}$. The convexity of $\overline{\gamma}$ can be verified in a similar way as in Section~\ref{kkl1}. However, here we present one more proof, which later will be adapted to higher dimensions too. Assume the contrary, i.e., that there exist $0\leq a <b<c \leq 1$ such that $\overline{\gamma}(a), \overline{\gamma}(b), \overline{\gamma}(c)$ lie on the same line, i.e.,
\begin{align}\label{ura1}
0=\det(\overline{\gamma}(b)-\overline{\gamma}(a), \overline{\gamma}(c)-\overline{\gamma}(b)) = \int_{a}^{b} \int_{b}^{c} \det(\overline{\gamma}'(y_{1}), \overline{\gamma}'(y_{2}))dy_{1}dy_{2}.
\end{align}
Equation (\ref{ura1}) contradicts Lemma~\ref{klasika} applied to $\overline{\gamma}$.
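In the plane the hypothesis of Lemma~\ref{klasika} is easy to verify by hand. For example (an illustration of ours), for $\overline{\gamma}(t)=(t,t^{2})$ we have
\begin{align*}
\det(\overline{\gamma}'(y_{1}), \overline{\gamma}'(y_{2})) = \det\begin{pmatrix} 1 & 1\\ 2y_{1} & 2y_{2}\end{pmatrix} = 2(y_{2}-y_{1})>0 \quad \text{for} \quad y_{1}<y_{2},
\end{align*}
so the double integral in (\ref{ura1}) is strictly positive and no three points of $\overline{\gamma}$ are collinear.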
To verify (\ref{3ddiff}), by the Hadamard--Caccioppoli theorem it suffices to check that the differential of $\overline{L}:=\overline{L}_{2}$ has full rank in the interior of $[0,1]^{2}$ and that the map $\overline{L}_{2}$ is injective. The injectivity will be verified later in all dimensions simultaneously (see the section on the proofs of (\ref{diff2l-1u}), (\ref{diff2l-1l}), (\ref{diff2lu}), and (\ref{diff2ll})). For the full rank property we have $\det D \overline{L} = \det(\overline{L}_{\alpha}, \overline{L}_{x}) =\alpha \det(\overline{\gamma}(x),\overline{\gamma}'(x))$. On the other hand
\begin{align}\label{3dtozhd1}
\det (\overline{\gamma}(x),\overline{\gamma}'(x)) = \int_{0}^{x} \det(\overline{\gamma}'(y_{1}), \overline{\gamma}'(x))dy_{1} \stackrel{\text{Lemma}~\ref{klasika}}{>} 0.
\end{align}
Thus, see Fig.~\ref{fig:3d},
\begin{figure}[t]
\centering
\includegraphics[width=.49\textwidth]{Bottom_3d.png}
\includegraphics[width=.49\textwidth]{Top_3d.png}
\caption{Two pieces of the boundary of the convex hull of $\gamma$: the lower hull $L_{2}$ (left) and the upper hull $U_{2}$ (right).}
\label{fig:3d}
\end{figure}
$$
L_{2} : \Delta_{c}^{1}\times \Delta_{*}^{1}=[0,1]^{2} \ni (\alpha, x) \mapsto \alpha \gamma(x)
$$
parametrizes a surface in $\mathbb{R}^{3}$ which is the graph of a function $B^{\mathrm{inf}}$ defined on $\mathrm{conv}(\overline{\gamma}([0,1]))$ as follows:
\begin{align*}
B^{\mathrm{inf}}(\alpha \overline{\gamma}(x)) = \alpha \gamma_{3}(x), \quad \text{for all} \quad (\alpha, x) \in [0,1]^{2}.
\end{align*}
Let us check that $B^{\mathrm{inf}}$ is convex. Indeed, at any point $(\alpha_{0}, x_{0}) \in \mathrm{int}([0,1]^{2})$, the set of points $\xi \in \mathbb{R}^{3}$ belonging to the tangent plane at the point $L_{2}(\alpha_{0}, x_{0})$ is the solution set of the equation
\begin{align}\label{trieq}
\det(L_{\alpha}(\alpha_{0}, x_{0}),L_{x}(\alpha_{0}, x_{0}), \xi-L(\alpha_{0}, x_{0})) = \alpha_{0} \det(\gamma(x_{0}), \gamma'(x_{0}), \xi)=0.
\end{align}
For $\xi=e_{3}$, where $e_{3}=(0,0,1)$ we have
\begin{align*}
&\det(\gamma(x_{0}), \gamma'(x_{0}), e_{3}) = \det(\overline{\gamma}(x_{0}), \overline{\gamma}'(x_{0}))\stackrel{(\ref{3dtozhd1})}{>}0.
\end{align*}
Therefore, to verify the convexity of $B^{\mathrm{inf}}$, i.e., that the surface $L([0,1]^{2})$ lies above the tangent plane at the point $L(\alpha_{0}, x_{0})$, it suffices to show that
$$
\det(\gamma(x_{0}), \gamma'(x_{0}), L(\alpha, x)) = \alpha \det(\gamma(x_{0}), \gamma'(x_{0}), \gamma(x)) \geq 0.
$$
If $x=x_{0}$ there is nothing to prove. If $x>x_{0}$ then
\begin{align*}
\det(\gamma(x_{0}), \gamma'(x_{0}), \gamma(x)) = \int_{0}^{x_{0}}\int_{x_{0}}^{x}\det(\gamma'(y_{1}),\gamma'(x_{0}), \gamma'(y_{3}))dy_{1}dy_{3} \stackrel{\text{Lemma}~\ref{klasika}}{>}0.
\end{align*}
Similarly, if $x<x_{0}$, by Lemma~\ref{klasika} we have
\begin{align*}
\det(\gamma(x_{0}), \gamma'(x_{0}), \gamma(x)) = \int_{x}^{x_{0}} \int_{0}^{x} \det(\gamma'(y_{1}),\gamma'(x_{0}), \gamma'(y_{3}))dy_{1} dy_{3} >0.
\end{align*}
To verify that $B^{\mathrm{inf}}$ is the maximal convex function defined on $\mathrm{conv}(\overline{\gamma}([0,1]))$ such that $B(\overline{\gamma}(s)) = \gamma_{3}(s)$, notice that every point $(\xi,B^{\mathrm{inf}}(\xi))$, where $\xi \in \mathrm{conv}(\overline{\gamma}([0,1]))$, is a convex combination of points of the curve $\gamma$; hence, by convexity, any other candidate $\tilde{B}$ satisfies $\tilde{B}\leq B^{\mathrm{inf}}$.
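Spelled out, for any convex candidate $\tilde{B}$ with $\tilde{B}(\overline{\gamma}(s))=\gamma_{3}(s)$ (recall $\gamma(0)=0$, so $\overline{\gamma}(0)=0$ and $\gamma_{3}(0)=0$):
\begin{align*}
\tilde{B}(\alpha\overline{\gamma}(x)) = \tilde{B}\big(\alpha\overline{\gamma}(x)+(1-\alpha)\overline{\gamma}(0)\big) \leq \alpha\tilde{B}(\overline{\gamma}(x))+(1-\alpha)\tilde{B}(\overline{\gamma}(0)) = \alpha\gamma_{3}(x) = B^{\mathrm{inf}}(\alpha\overline{\gamma}(x)).
\end{align*}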
\subsubsection{The upper hull} \label{3up}
Consider the map
\begin{align*}
\overline{U}_{2} : \Delta_{c}^{1}\times \Delta_{*}^{1} = [0,1]^{2} \ni (\alpha, x ) \mapsto \alpha \overline{\gamma}(x)+(1-\alpha)\overline{\gamma}(1).
\end{align*}
Similarly as before, $\overline{U}_{2}$ satisfies the analogues of (\ref{3db}) and (\ref{3ddiff}). The property (\ref{3db}) follows from the convexity of $\overline{\gamma}$. The property (\ref{3ddiff}) follows from
\begin{align*}
\det(\overline{U}_{\alpha}, \overline{U}_{x}) = \alpha \det(\overline{\gamma}(x)-\overline{\gamma}(1), \overline{\gamma}'(x))= \alpha\int_{x}^{1}\det(\overline{\gamma}'(x), \overline{\gamma}'(y_{2}))dy_{2} \neq 0
\end{align*}
for all $(\alpha, x) \in \mathrm{int}([0,1]^{2})$ by Lemma~\ref{klasika} applied to $\overline{\gamma}$.
Next, we show that
\begin{align*}
B^{\mathrm{sup}}(\alpha \overline{\gamma}(x)+(1-\alpha)\overline{\gamma}(1)) = \alpha \gamma_{3}(x)+(1-\alpha)\gamma_{3}(1)
\end{align*}
defines the minimal concave function on $\mathrm{conv}(\overline{\gamma}([0,1]))$ with the property $B^{\mathrm{sup}}(\overline{\gamma})= \gamma_{3}$, see Fig.~\ref{fig:3d} (right). Let $U(\alpha, x) = \alpha \gamma(x)+(1-\alpha)\gamma(1)$. The equation of the tangent plane at the point $U(\alpha_{0}, x_{0})$, where $(\alpha_{0}, x_{0}) \in \mathrm{int}([0,1]^{2})$, is given by
\begin{align*}
&\det(U_{\alpha}(\alpha_{0}, x_{0}), U_{x}(\alpha_{0}, x_{0}), \xi -U(\alpha_{0}, x_{0})) =\alpha_{0}\det(\gamma(x_{0})-\gamma(1),\gamma'(x_{0}), \xi-\alpha_{0}(\gamma(x_{0})-\gamma(1))-\gamma(1))\\
&=\alpha_{0}\det(\gamma(x_{0})-\gamma(1),\gamma'(x_{0}), \xi-\gamma(1))=0.
\end{align*}
For $\xi=\lambda e_{3}$ with $\lambda \to +\infty$ we have
\begin{align*}
&\mathrm{sign}[\det(\gamma(x_{0})-\gamma(1),\gamma'(x_{0}), \lambda e_{3}-\gamma(1))] = \mathrm{sign}[ \det (\overline{\gamma}(x_{0})-\overline{\gamma}(1),\overline{\gamma}'(x_{0}))]\\
&=\mathrm{sign}\left[\int_{x_{0}}^{1}\det (\overline{\gamma}'(x_{0}) ,\overline{\gamma}'(y_{2})) dy_{2}\right] >0
\end{align*}
by Lemma~\ref{klasika} applied to $\overline{\gamma}$. Therefore, the concavity of $B^{\mathrm{sup}}$ would follow from the following inequality
\begin{align*}
\det(\gamma(x_{0})-\gamma(1),\gamma'(x_{0}), U(\alpha, x)-\gamma(1))=\alpha \det(\gamma(x_{0})-\gamma(1),\gamma'(x_{0}), \gamma(x)-\gamma(1)) \leq 0
\end{align*}
for all $x_{0}, \alpha, x \in [0,1]$. If $x=x_{0}$ there is nothing to prove. Consider $x>x_{0}$ (the case $x<x_{0}$ is similar). Then
\begin{align*}
&\det(\gamma(x_{0})-\gamma(1),\gamma'(x_{0}), \gamma(x)-\gamma(1))=\det(\gamma(x_{0})-\gamma(1),\gamma'(x_{0}), \gamma(x)-\gamma(x_{0}))=\\
&-\det(\gamma(x_{0})-\gamma(x), \gamma'(x_{0}), \gamma(1)-\gamma(x_{0})) = -\int_{x}^{x_{0}}\int_{x_{0}}^{1}\det(\gamma'(y_{1}), \gamma'(x_{0}), \gamma'(y_{2}))dy_{2}dy_{1}<0
\end{align*}
by Lemma~\ref{klasika}.
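As a sanity check (this example is ours and is not needed for the proof), all of the above can be made explicit for the moment curve:
\begin{align*}
\gamma(t)=(t,t^{2},t^{3}): \qquad \det(\gamma'(s_{1}),\gamma'(s_{2}),\gamma'(s_{3})) = 6(s_{2}-s_{1})(s_{3}-s_{1})(s_{3}-s_{2})>0 \quad \text{for} \quad s_{1}<s_{2}<s_{3},
\end{align*}
so $\gamma$ has totally positive torsion, and the two hulls read
\begin{align*}
B^{\mathrm{inf}}(\alpha x, \alpha x^{2}) = \alpha x^{3}, \qquad B^{\mathrm{sup}}\big(\alpha x+(1-\alpha),\, \alpha x^{2}+(1-\alpha)\big) = \alpha x^{3}+(1-\alpha), \qquad (\alpha, x) \in [0,1]^{2}.
\end{align*}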
\vskip1cm
The properties (\ref{giff}) and (\ref{union}) will be verified in sections \ref{giffsub} and \ref{unionsub}.
\subsection{The proof of Theorem~\ref{mth010} in an arbitrary dimension $n+1$}
\begin{proof}
Since Theorem~\ref{mth010} contains several statements the whole proof will be split into several parts.
{\em The proof of claims (\ref{b2l-1}) and (\ref{b2l}).}
The proof will be by induction on $n$. We have checked the statement for $n=1,2$. First we consider the case $n=2\ell-1$. We shall verify the claim (\ref{b2l-1}) by showing that $\overline{U}_{2\ell-1}|_{\partial(\Delta_{c}^{\ell} \times \Delta_{*}^{\ell-1})}$, i.e., the restriction of $\overline{U}_{2\ell-1}$ to $\partial(\Delta_{c}^{\ell} \times \Delta_{*}^{\ell-1})$, coincides with the maps $U_{2\ell-2}$ and $L_{2\ell-2}$ (similarly for $\overline{L}_{2\ell-1}|_{\partial(\Delta_{c}^{\ell-1} \times \Delta_{*}^{\ell})}$). Since by induction the union of the images of $U_{2\ell-2}$ and $L_{2\ell-2}$ coincides with the boundary of the convex hull of $\overline{\gamma}([0,1])$, see (\ref{union}), we obtain the claim.
Recall that
\begin{align*}
\overline{U}_{2\ell-1} :\Delta_{c}^{\ell} \times \Delta_{*}^{\ell-1} \ni (\beta_{1}, \ldots, \beta_{\ell}, y_{2},\ldots, y_{\ell}) \mapsto \beta_{1} \overline{\gamma}(1)+\sum_{j=2}^{\ell} \beta_{j} \overline{\gamma}(y_{j}),
\end{align*}
and
\begin{align*}
&U_{2\ell-2} : \Delta_{c}^{\ell-1} \times \Delta_{*}^{\ell-1} \ni (\lambda_{1}, \ldots, \lambda_{\ell-1}, x_{1}, \ldots, x_{\ell-1}) \mapsto \sum_{j=1}^{\ell-1} \lambda_{j} \overline{\gamma}(x_{j}) + (1-\sum_{j=1}^{\ell-1}\lambda_{j}) \overline{\gamma}(1),\\
&L_{2\ell-2} :\Delta_{c}^{\ell-1} \times \Delta_{*}^{\ell-1} \ni (\lambda_{1}, \ldots, \lambda_{\ell-1}, z_{1}, \ldots, z_{\ell-1}) \mapsto \sum_{j=1}^{\ell-1} \lambda_{j} \overline{\gamma}(z_{j}).
\end{align*}
If $\beta_{1}=0$ then $\overline{U}_{2\ell-1}$ coincides with $L_{2\ell-2}$. If $\sum_{j=1}^{\ell}\beta_{j}=1$, i.e., $\beta_{1}=1-\sum_{j=2}^{\ell}\beta_{j}$, then $\overline{U}_{2\ell-1}$ coincides with $U_{2\ell-2}$. Thus, we have
\begin{align*}
\partial\, \mathrm{conv}(\overline{\gamma}([0,1])) \stackrel{\mathrm{induction}}{=} U_{2\ell-2}(\Delta_{c}^{\ell-1} \times \Delta_{*}^{\ell-1}) \cup L_{2\ell-2}(\Delta_{c}^{\ell-1} \times \Delta_{*}^{\ell-1}) \subset \overline{U}_{2\ell-1}(\partial\, (\Delta_{c}^{\ell} \times \Delta_{*}^{\ell-1})).
\end{align*}
On the other hand, if $\beta_{p}=0$ for some $p \in \{2, \ldots, \ell\}$, then $\overline{U}_{2\ell-1}$ coincides with $L_{2\ell-2}$ restricted to $z_{1}=1$. If at least one of the following conditions holds: a) $y_{2}=0$; b) $y_{s}=y_{s+1}$ for some $s \in \{2, \ldots, \ell-1\}$; c) $y_{\ell}=1$, then $\overline{U}_{2\ell-1}$ coincides with $U_{2\ell-2}$ restricted to $x_{1}=0$. Thus we obtain $\partial\, \mathrm{conv}(\overline{\gamma}([0,1])) = \overline{U}_{2\ell-1}(\partial\, (\Delta_{c}^{\ell} \times \Delta_{*}^{\ell-1}))$.
Next, we verify that $\partial\, \mathrm{conv}(\overline{\gamma}([0,1])) = \overline{L}_{2\ell-1}(\partial\, (\Delta_{c}^{\ell-1} \times \Delta_{*}^{\ell}))$. We recall
\begin{align*}
\overline{L}_{2\ell-1} :\Delta_{c}^{\ell-1} \times \Delta_{*}^{\ell} \ni (\beta_{2}, \ldots, \beta_{\ell}, y_{1},\ldots, y_{\ell}) \mapsto \sum_{j=2}^{\ell} \beta_{j} \overline{\gamma}(y_{j})+(1-\sum_{j=2}^{\ell} \beta_{j})\overline{\gamma}(y_{1}).
\end{align*}
If $y_{\ell}=1$ then $\overline{L}_{2\ell-1}$ coincides with $U_{2\ell-2}$. If $y_{1}=0$ then $\overline{L}_{2\ell-1}$ coincides with $L_{2\ell-2}$. Thus, by induction $\partial\, \mathrm{conv}(\overline{\gamma}([0,1])) \subset \overline{L}_{2\ell-1}(\partial\, (\Delta_{c}^{\ell-1} \times \Delta_{*}^{\ell}))$.
Next, if $y_{s}=y_{s+1}$ for some $s \in \{1, \ldots, \ell-1\}$ then $\overline{L}_{2\ell-1}$ coincides with $L_{2\ell-2}$ restricted to $\lambda_{1}=1-\sum_{j=2}^{\ell-1}\lambda_{j}$. Also, if $\sum_{j=2}^{\ell}\beta_{j}=1$ then $\overline{L}_{2\ell-1}$ coincides with $L_{2\ell-2}$. Finally, if $\beta_{s}=0$ for some $s \in \{2, \ldots, \ell\}$ then $\overline{L}_{2\ell-1}$ coincides with $L_{2\ell-2}$ restricted to $\sum_{j=1}^{\ell-1}\lambda_{j}=1$. Thus we obtain
$\partial\, \mathrm{conv}(\overline{\gamma}([0,1])) = \overline{L}_{2\ell-1}(\partial\, (\Delta_{c}^{\ell-1} \times \Delta_{*}^{\ell}))$.
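For instance, when $\ell=2$ (so $n=3$) the boundary identifications read
\begin{align*}
\overline{U}_{3}(\beta_{1},\beta_{2},y_{2}) = \beta_{1}\overline{\gamma}(1)+\beta_{2}\overline{\gamma}(y_{2}), \qquad
\overline{U}_{3}\big|_{\beta_{1}=0} = L_{2}(\beta_{2}, y_{2}), \qquad
\overline{U}_{3}\big|_{\beta_{1}+\beta_{2}=1} = U_{2}(\beta_{2}, y_{2}),
\end{align*}
which is exactly the pattern used above.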
Next, we assume $n=2\ell$. First we verify (\ref{b2l}). As before, we claim that the restriction of $\overline{U}_{2\ell}$ to $\partial(\Delta_{c}^{\ell} \times \Delta_{*}^{\ell})$ coincides with the maps $U_{2\ell-1}$ and $L_{2\ell-1}$ (similarly for $\overline{L}_{2\ell}$). Since by induction the union of the images of $U_{2\ell-1}$ and $L_{2\ell-1}$ coincides with the boundary of the convex hull of $\overline{\gamma}([0,1])$, see (\ref{union}), we obtain the claim.
We recall that
\begin{align*}
\overline{U}_{2\ell} : \Delta_{c}^{\ell} \times \Delta_{*}^{\ell} \ni (\lambda_{1}, \ldots, \lambda_{\ell}, x_{1}, \ldots, x_{\ell}) \mapsto \sum_{j=1}^{\ell} \lambda_{j} \overline{\gamma}(x_{j}) + (1-\sum_{j=1}^{\ell}\lambda_{j}) \overline{\gamma}(1);
\end{align*}
and
\begin{align*}
&U_{2\ell-1} :\Delta_{c}^{\ell} \times \Delta_{*}^{\ell-1} \ni (\beta_{1}, \ldots, \beta_{\ell}, y_{2},\ldots, y_{\ell}) \mapsto \beta_{1} \overline{\gamma}(1)+\sum_{j=2}^{\ell} \beta_{j} \overline{\gamma}(y_{j});\\
&L_{2\ell-1} :\Delta_{c}^{\ell-1} \times \Delta_{*}^{\ell} \ni (\beta_{2}, \ldots, \beta_{\ell}, z_{1},\ldots, z_{\ell}) \mapsto (1-\sum_{j=2}^{\ell} \beta_{j})\overline{\gamma}(z_{1})+\sum_{j=2}^{\ell} \beta_{j} \overline{\gamma}(z_{j}).
\end{align*}
Notice that if $\sum_{j=1}^{\ell} \lambda_{j}=1$ then $\overline{U}_{2\ell}$ coincides with $L_{2\ell-1}$. On the other hand, if $x_{1}=0$ then $\overline{U}_{2\ell}$ coincides with $U_{2\ell-1}$. Thus, by induction we have $\partial\, \mathrm{conv}(\overline{\gamma}([0,1])) \subset \overline{U}_{2\ell}(\partial\, (\Delta_{c}^{\ell} \times \Delta_{*}^{\ell}))$. Also notice that if $\lambda_{p}=0$ for some $p \in \{1, \ldots, \ell\}$ (or $x_{s}=x_{s+1}$ for some $s \in \{1, \ldots, \ell-1\}$, or $x_{\ell}=1$) then $\overline{U}_{2\ell}$ coincides with $U_{2\ell-1}$ restricted to the boundary of $\Delta_{c}^{\ell} \times \Delta_{*}^{\ell-1}$ (in each case one takes $\beta_{1} = 1-\sum_{j=2}^{\ell}\beta_{j}$). Thus we obtain $\partial\, \mathrm{conv}(\overline{\gamma}([0,1])) = \overline{U}_{2\ell}(\partial\, (\Delta_{c}^{\ell} \times \Delta_{*}^{\ell}))$.
Next, we verify the claim $\overline{L}_{2\ell}(\partial\, (\Delta_{c}^{\ell} \times \Delta_{*}^{\ell})) = \partial\, \mathrm{conv}(\overline{\gamma}([0,1]))$.
We recall that
\begin{align*}
\overline{L}_{2\ell} :\Delta_{c}^{\ell} \times \Delta_{*}^{\ell} \ni (\lambda_{1}, \ldots, \lambda_{\ell}, x_{1}, \ldots, x_{\ell}) \mapsto \sum_{j=1}^{\ell} \lambda_{j} \overline{\gamma}(x_{j}).
\end{align*}
If $\sum_{j=1}^{\ell}\lambda_{j}=1$ then $\overline{L}_{2\ell}$ coincides with $L_{2\ell-1}$. If $x_{\ell}=1$ then $\overline{L}_{2\ell}$ coincides with $U_{2\ell-1}$. Thus, by induction we have $\partial\, \mathrm{conv}(\overline{\gamma}([0,1])) \subset \overline{L}_{2\ell}(\partial\, (\Delta_{c}^{\ell} \times \Delta_{*}^{\ell}))$.
If $\lambda_{p}=0$ for some $p \in \{1, \ldots, \ell\}$, or $x_{1}=0$, then $\overline{L}_{2\ell}$ coincides with $U_{2\ell-1}$ if we choose $\beta_{1}=0$. Finally, if $x_{s}=x_{s+1}$ for some $s \in \{1, \ldots, \ell-1\}$, then $\overline{L}_{2\ell}$ coincides with $U_{2\ell-1}$ if we choose $\beta_{1}=0$, and $\beta_{s+1}=\lambda_{s}+\lambda_{s+1}$. Therefore, we have $\overline{L}_{2\ell}(\partial\, (\Delta_{c}^{\ell} \times \Delta_{*}^{\ell})) \subset \partial\, \mathrm{conv}(\overline{\gamma}([0,1]))$, and the claim (\ref{b2l}) is verified.
{\em The proof of claims (\ref{diff2l-1u}), (\ref{diff2l-1l}), (\ref{diff2lu}) and (\ref{diff2ll}).}
We start by showing that the Jacobian of the map $\overline{U}_{n}$ has full rank at the interior points of its domain. Hence the map is a local diffeomorphism by the inverse function theorem. Therefore, the map is surjective: otherwise the image of its domain would have a boundary point in the interior of the codomain (the boundary goes to the boundary by (\ref{b2l}) and (\ref{b2l-1})), contradicting the local diffeomorphism property. Next, we show that the map $\overline{U}_{n}$ is injective, and hence proper. We conclude that $\overline{U}_{n}$ is a diffeomorphism. Similar reasoning applies to $\overline{L}_{n}$.
First we verify that the Jacobian matrices $\nabla \overline{U}_{n}$ and $\nabla \overline{L}_{n}$ have full rank at the interior points of their domains.
Assume $n=2\ell-1$. We have
\begin{align*}
&\det(\nabla \overline{U}_{2\ell-1}) =\det(\overline{\gamma}(1), \overline{\gamma}(x_{2}), \ldots, \overline{\gamma}(x_{\ell}), \beta_{2} \overline{\gamma}'(x_{2}), \ldots, \beta_{\ell}\overline{\gamma}'(x_{\ell}))\\
&=\pm \det(\overline{\gamma}(x_{2}), \overline{\gamma}'(x_{2}), \overline{\gamma}(x_{3}), \overline{\gamma}'(x_{3}), \ldots, \overline{\gamma}(x_{\ell}), \overline{\gamma}'(x_{\ell}), \overline{\gamma}(1)) \prod_{j=2}^{\ell} \beta_{j}\\
& = \pm \det(\overline{\gamma}(x_{2})-\overline{\gamma}(0), \overline{\gamma}'(x_{2}), \overline{\gamma}(x_{3})-\overline{\gamma}(x_{2}), \overline{\gamma}'(x_{3}), \ldots, \overline{\gamma}(x_{\ell})-\overline{\gamma}(x_{\ell-1}), \overline{\gamma}'(x_{\ell}), \overline{\gamma}(1)-\overline{\gamma}(x_{\ell})) \prod_{j=2}^{\ell} \beta_{j}\\
&=\pm \prod_{j=2}^{\ell}\beta_{j}\, \int_{x_{\ell}}^{1}\ldots \int_{x_{2}}^{x_{3}} \int_{0}^{x_{2}} \det(\overline{\gamma}'(s_{1}), \overline{\gamma}'(x_{2}),\overline{\gamma}'(s_{2}), \ldots, \overline{\gamma}'(x_{\ell}), \overline{\gamma}'(s_{\ell}))ds_{1} ds_{2}\ldots ds_{\ell}.
\end{align*}
Thus $\det(\nabla \overline{U}_{2\ell-1})$ is nonzero by Lemma~\ref{klasika}.
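For example, when $\ell=2$ this chain of identities reads
\begin{align*}
\det(\nabla \overline{U}_{3}) = \det(\overline{\gamma}(1), \overline{\gamma}(x_{2}), \beta_{2}\overline{\gamma}'(x_{2})) = \pm\beta_{2}\det(\overline{\gamma}(x_{2})-\overline{\gamma}(0), \overline{\gamma}'(x_{2}), \overline{\gamma}(1)-\overline{\gamma}(x_{2})) = \pm\beta_{2}\int_{x_{2}}^{1}\int_{0}^{x_{2}}\det(\overline{\gamma}'(s_{1}), \overline{\gamma}'(x_{2}), \overline{\gamma}'(s_{2}))\,ds_{1}\,ds_{2} \neq 0.
\end{align*}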
Next, we verify that $\det(\nabla \overline{L}_{2\ell-1})\neq 0$. Indeed,
\begin{align*}
&\det(\nabla \overline{L}_{2\ell-1}) = \\
&\det( \overline{\gamma}(x_{2})-\overline{\gamma}(x_{1}), \overline{\gamma}(x_{3})-\overline{\gamma}(x_{1}), \ldots, \overline{\gamma}(x_{\ell})-\overline{\gamma}(x_{1}), \overline{\gamma}'(x_{1}), \ldots, \overline{\gamma}'(x_{\ell}) ) (1-\sum_{j=2}^{\ell}\beta_{j})\prod_{j=2}^{\ell}\beta_{j}=\\
& \det( \overline{\gamma}(x_{2})-\overline{\gamma}(x_{1}), \overline{\gamma}(x_{3})-\overline{\gamma}(x_{2}), \ldots, \overline{\gamma}(x_{\ell})-\overline{\gamma}(x_{\ell-1}), \overline{\gamma}'(x_{1}), \ldots, \overline{\gamma}'(x_{\ell}) ) (1-\sum_{j=2}^{\ell}\beta_{j})\prod_{j=2}^{\ell}\beta_{j}=\\
&\pm \det(\overline{\gamma}'(x_{1}), \overline{\gamma}(x_{2})-\overline{\gamma}(x_{1}), \overline{\gamma}'(x_{2}), \overline{\gamma}(x_{3})-\overline{\gamma}(x_{2}), \ldots, \overline{\gamma}(x_{\ell})-\overline{\gamma}(x_{\ell-1}), \overline{\gamma}'(x_{\ell}) ) (1-\sum_{j=2}^{\ell}\beta_{j})\prod_{j=2}^{\ell}\beta_{j}=\\
&\pm (1-\sum_{j=2}^{\ell}\beta_{j})\prod_{j=2}^{\ell}\beta_{j} \times \\
&\int_{x_{\ell-1}}^{x_{\ell}}\ldots \int_{x_{2}}^{x_{3}}\int_{x_{1}}^{x_{2}}\det( \overline{\gamma}'(x_{1}), \overline{\gamma}'(s_{1}), \overline{\gamma}'(x_{2}), \overline{\gamma}'(s_{2}), \ldots, \overline{\gamma}'(s_{\ell-1}), \overline{\gamma}'(x_{\ell})) ds_{1} ds_{2}\ldots ds_{\ell-1} \neq 0
\end{align*}
by Lemma~\ref{klasika}.
Assume $n=2\ell$. We have
\begin{align*}
&\det(\nabla \overline{U}_{2\ell}) = \det(\overline{\gamma}(x_{1})-\overline{\gamma}(1), \ldots, \overline{\gamma}(x_{\ell})-\overline{\gamma}(1), \overline{\gamma}'(x_{1}), \ldots, \overline{\gamma}'(x_{\ell})) \prod_{j=1}^{\ell} \lambda_{j}=\\
&\pm \det(\overline{\gamma}'(x_{1}), \overline{\gamma}(x_{1})-\overline{\gamma}(x_{2}), \overline{\gamma}'(x_{2}), \overline{\gamma}(x_{2})-\overline{\gamma}(x_{3}), \ldots, \overline{\gamma}'(x_{\ell}), \overline{\gamma}(x_{\ell})-\overline{\gamma}(1))\prod_{j=1}^{\ell} \lambda_{j} = \\
& \pm \int_{x_{\ell}}^{1}\ldots \int_{x_{2}}^{x_{3}}\int_{x_{1}}^{x_{2}}\det(\overline{\gamma}'(x_{1}), \overline{\gamma}'(s_{1}), \overline{\gamma}'(x_{2}), \overline{\gamma}'(s_{2}), \ldots, \overline{\gamma}'(x_{\ell}), \overline{\gamma}'(s_{\ell}))ds_{1}ds_{2}\ldots ds_{\ell} \prod_{j=1}^{\ell}\lambda_{j}
\end{align*}
which is nonzero by Lemma~\ref{klasika}.
Finally, we verify $\det(\nabla \overline{L}_{2\ell}) \neq 0$. We have
\begin{align*}
&\det(\nabla \overline{L}_{2\ell}) =\det(\overline{\gamma}(x_{1}), \ldots, \overline{\gamma}(x_{\ell}), \overline{\gamma}'(x_{1}), \ldots, \overline{\gamma}'(x_{\ell})) \prod_{j=1}^{\ell}\lambda_{j}=\\
&\pm \det(\overline{\gamma}(x_{1})-\overline{\gamma}(0),\overline{\gamma}'(x_{1}), \overline{\gamma}(x_{2})-\overline{\gamma}(x_{1}), \overline{\gamma}'(x_{2}), \ldots,\overline{\gamma}(x_{\ell})-\overline{\gamma}(x_{\ell-1}),\overline{\gamma}'(x_{\ell})) \prod_{j=1}^{\ell}\lambda_{j} = \\
&\pm \int_{x_{\ell-1}}^{x_{\ell}} \ldots \int_{x_{1}}^{x_{2}}\int_{0}^{x_{1}}\det(\overline{\gamma}'(s_{1}), \overline{\gamma}'(x_{1}), \overline{\gamma}'(s_{2}), \overline{\gamma}'(x_{2}), \ldots, \overline{\gamma}'(s_{\ell}), \overline{\gamma}'(x_{\ell})) ds_{1} ds_{2}\ldots ds_{\ell}\prod_{j=1}^{\ell}\lambda_{j}.
\end{align*}
Thus $\det(\nabla \overline{L}_{2\ell}) \neq 0$ by Lemma~\ref{klasika}.
Next, we show that the map $\overline{U}_{n}$ is injective in the interior of its domain. Assume $n=2\ell$. Let $(\lambda_{1}, \ldots, \lambda_{\ell}, x_{1}, \ldots, x_{\ell})$ and $(\beta_{1}, \ldots, \beta_{\ell}, y_{1}, \ldots, y_{\ell})$ be two different points in $\mathrm{int}(\Delta_{c}^{\ell}\times \Delta_{*}^{\ell})$ such that $\overline{U}_{2\ell}$ takes the same value at these points. Then
\begin{align}\label{linin}
\sum_{j=1}^{\ell}\lambda_{j}(\overline{\gamma}(x_{j})-\overline{\gamma}(1)) - \sum_{k=1}^{\ell} \beta_{k} (\overline{\gamma}(y_{k})-\overline{\gamma}(1))=0.
\end{align}
We claim that (\ref{linin}) holds if and only if $x_{j}=y_{j}$ and $\lambda_{j}=\beta_{j}$ for all $j=1, \ldots, \ell$. Indeed, we need the following
\begin{lemma}\label{ltorsion}
For any numbers $z_{j}$, $1\leq j \leq 2\ell$, such that $0<z_{1}<z_{2}<\ldots<z_{2\ell}\leq 1$, and any $r \in [0,1]\setminus\{z_{1}, \ldots, z_{2\ell}\}$, the vectors $\overline{\gamma}(z_{1})-\overline{\gamma}(r), \ldots, \overline{\gamma}(z_{2\ell})-\overline{\gamma}(r)$ are linearly independent in $\mathbb{R}^{2\ell}$.
\end{lemma}
\begin{proof}
The lemma follows from Corollary~\ref{klasikac} applied to $\beta=\overline{\gamma}$.
\end{proof}
Let $N$ be the cardinality of the set $Q=\{x_{1}, \ldots, x_{\ell}\} \cap \{y_{1}, \ldots, y_{\ell}\}$. If $N=\ell$ then necessarily $x_{j}=y_{j}$ for all $j=1, \ldots, \ell$, and equation (\ref{linin}) combined with Lemma~\ref{ltorsion} implies that $\lambda_{j}=\beta_{j}$ for all $j=1, \ldots, \ell$. Therefore, assume $N<\ell$. Then we can split the sum (\ref{linin}) into three groups of terms: the terms $\lambda_{j} (\overline{\gamma}(x_{j})-\overline{\gamma}(1))$ with $x_{j} \notin Q$; the terms $(\lambda_{j}-\beta_{i_{j}})(\overline{\gamma}(x_{j})-\overline{\gamma}(1))$ with $x_{j} \in Q$; and the terms $\beta_{j} (\overline{\gamma}(y_{j})-\overline{\gamma}(1))$ with $y_{j}\notin Q$. Since the $\beta_{j}$ and $\lambda_{j}$ cannot be zero, applying Lemma~\ref{ltorsion} with $r=1$ we get a contradiction.
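To see the bookkeeping concretely, take $\ell=2$ and the illustrative configuration $x_{1}=y_{1}\in Q$, $x_{2}\neq y_{2}$. Then (\ref{linin}) becomes
\begin{align*}
(\lambda_{1}-\beta_{1})(\overline{\gamma}(x_{1})-\overline{\gamma}(1))+\lambda_{2}(\overline{\gamma}(x_{2})-\overline{\gamma}(1))-\beta_{2}(\overline{\gamma}(y_{2})-\overline{\gamma}(1))=0,
\end{align*}
and since the three vectors here form a subfamily of the linearly independent family of Lemma~\ref{ltorsion} with $r=1$, all the coefficients vanish; in particular $\lambda_{2}=\beta_{2}=0$, contradicting the assumption that both points lie in $\mathrm{int}(\Delta_{c}^{2}\times \Delta_{*}^{2})$.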
Next, we verify the injectivity of $\overline{L}_{2\ell}$ on the interior of its domain. Let $(\lambda_{1}, \ldots, \lambda_{\ell}, x_{1}, \ldots, x_{\ell})$ and $(\beta_{1}, \ldots, \beta_{\ell}, y_{1}, \ldots, y_{\ell})$ belong to $\mathrm{int}(\Delta_{c}^{\ell}\times \Delta_{*}^{\ell})$ and satisfy
\begin{align*}
\sum_{j=1}^{\ell} \lambda_{j} \overline{\gamma}(x_{j}) -\sum_{k=1}^{\ell}\beta_{k} \overline{\gamma}(y_{k})=0.
\end{align*}
By applying Lemma~\ref{ltorsion} with $r=0$ and invoking the set $Q$ as before we obtain $x_{j}=y_{j}$, $\lambda_{j}=\beta_{j}$ for all $j=1, \ldots, \ell$.
Assume $n=2\ell-1$. To verify the injectivity of $\overline{U}_{2\ell-1}$ on the interior of $\Delta_{c}^{\ell}\times \Delta_{*}^{\ell-1}$ we pick points $(\lambda_{1}, \ldots, \lambda_{\ell}, x_{2}, \ldots, x_{\ell})$ and $(\beta_{1}, \ldots, \beta_{\ell}, y_{2}, \ldots, y_{\ell})$ from $\mathrm{int}(\Delta_{c}^{\ell}\times \Delta_{*}^{\ell-1})$, and we assume
\begin{align}\label{kidev1}
(\lambda_{1}-\beta_{1})\overline{\gamma}(1)+\sum_{j=2}^{\ell}\lambda_{j} \overline{\gamma}(x_{j})-\sum_{j=2}^{\ell}\beta_{j} \overline{\gamma}(y_{j})=0.
\end{align}
\begin{lemma}\label{lltorsion}
For any numbers $0<z_{1}<\ldots<z_{2\ell-2}<1$ the vectors $\overline{\gamma}(z_{1}), \ldots, \overline{\gamma}(z_{2\ell-2}), \overline{\gamma}(1)$ are linearly independent in $\mathbb{R}^{2\ell-1}$.
\end{lemma}
\begin{proof}
The lemma follows from Corollary~\ref{klasikac} applied to $\beta=\overline{\gamma}$, $z_{2\ell-1}=1$, and $r=0$.
\end{proof}
Invoking the set $Q$, and repeating the same reasoning as in the case of injectivity of $\overline{U}_{2\ell}$, we see that the equality (\ref{kidev1}) combined with Lemma~\ref{lltorsion} implies $x_{j}=y_{j}$ for all $j=2, \ldots, \ell$, and $\lambda_{j}=\beta_{j}$ for all $j=1, \ldots, \ell$.
To verify the injectivity of $\overline{L}_{2\ell-1}$ on the interior of $\Delta_{c}^{\ell-1}\times \Delta_{*}^{\ell}$ we pick points $(\lambda_{2}, \ldots, \lambda_{\ell}, x_{1}, \ldots, x_{\ell})$ and $(\beta_{2}, \ldots, \beta_{\ell}, y_{1}, \ldots, y_{\ell})$ from $\mathrm{int}(\Delta_{c}^{\ell-1}\times \Delta_{*}^{\ell})$, and we assume
\begin{align}\label{eq0011}
(1-\sum_{j=2}^{\ell}\lambda_{j})\overline{\gamma}(x_{1})+\sum_{j=2}^{\ell}\lambda_{j}\overline{\gamma}(x_{j})=(1-\sum_{j=2}^{\ell}\beta_{j})\overline{\gamma}(y_{1})+\sum_{j=2}^{\ell}\beta_{j}\overline{\gamma}(y_{j}).
\end{align}
Without loss of generality assume $y_{1}\leq x_{1}$. We rewrite (\ref{eq0011}) as follows
\begin{align}\label{sum01}
(1-\sum_{j=2}^{\ell}\lambda_{j})(\overline{\gamma}(x_{1})-\overline{\gamma}(y_{1}))+\sum_{j=2}^{\ell}\lambda_{j}(\overline{\gamma}(x_{j})-\overline{\gamma}(y_{1}))-\sum_{j=2}^{\ell}\beta_{j}(\overline{\gamma}(y_{j})-\overline{\gamma}(y_{1}))=0.
\end{align}
Notice that if the points $x_{1}, \ldots, x_{\ell}, y_{1}, \ldots, y_{\ell}$ are distinct and belong to the interval $(0,1)$, then the vectors $\overline{\gamma}(x_{1})-\overline{\gamma}(y_{1}), \ldots, \overline{\gamma}(x_{\ell})-\overline{\gamma}(y_{1}), \overline{\gamma}(y_{2})-\overline{\gamma}(y_{1}), \ldots, \overline{\gamma}(y_{\ell})-\overline{\gamma}(y_{1})$ are linearly independent. The proof of this linear independence proceeds exactly as the proof of Lemma~\ref{ltorsion}, so we omit it to avoid repetition. Let $Q = \{x_{2}, \ldots, x_{\ell}\}\cap \{y_{2}, \ldots, y_{\ell}\}$, $X=\{x_{2}, \ldots, x_{\ell}\}$ and $Y=\{y_{2}, \ldots, y_{\ell}\}$. Then (\ref{sum01}) takes the form
\begin{align}
&(1-\sum_{j=2}^{\ell}\lambda_{j})(\overline{\gamma}(x_{1})-\overline{\gamma}(y_{1}))+\sum_{j\, :\, x_{j} \in X\setminus Q}\lambda_{j}(\overline{\gamma}(x_{j})-\overline{\gamma}(y_{1}))+\nonumber\\
&\sum_{j\, :\, x_{j} \in Q}(\lambda_{j}-\beta_{k_{j}})(\overline{\gamma}(x_{j})-\overline{\gamma}(y_{1}))
-\sum_{j\, :\, y_{j} \in Y\setminus Q}\beta_{j}(\overline{\gamma}(y_{j})-\overline{\gamma}(y_{1}))=0. \label{pirx}
\end{align}
If $y_{1}=x_{1}$ then from the linear independence we obtain that $x_{j}=y_{j}$ for all $j=1,\ldots, \ell$, and $\lambda_{j}=\beta_{j}$ for all $j=2,\ldots, \ell$. In what follows we assume $y_{1}<x_{1}$.
Notice that if every $y \in Y \setminus Q$ satisfies $y\neq x_{1}$, then (\ref{pirx}) contradicts the linear independence. On the other hand, if for some $y_{j^{*}}\in Y\setminus Q$ we have $y_{j^{*}}=x_{1}$ (we remark that there can be only one such $y_{j^{*}}$ in $Y\setminus Q$; moreover, $y_{j^{*}}\notin Q$), then we can rewrite (\ref{pirx}) as
\begin{align}
&(1-\beta_{j^{*}}-\sum_{j=2}^{\ell}\lambda_{j})(\overline{\gamma}(x_{1})-\overline{\gamma}(y_{1}))+\sum_{j\, :\, x_{j} \in X\setminus Q}\lambda_{j}(\overline{\gamma}(x_{j})-\overline{\gamma}(y_{1}))+\nonumber\\
&\sum_{j\, :\, x_{j} \in Q}(\lambda_{j}-\beta_{k_{j}})(\overline{\gamma}(x_{j})-\overline{\gamma}(y_{1}))
-\sum_{j\, :\, y_{j} \in Y\setminus Q, \, y_{j}\neq y_{j^{*}}}\beta_{j}(\overline{\gamma}(y_{j})-\overline{\gamma}(y_{1}))=0. \label{pirx1}
\end{align}
Invoking the linear independence, we must have $1-\beta_{j^{*}}-\sum_{j=2}^{\ell}\lambda_{j}=0$. Since $\lambda_{j}, \beta_{j} >0$, the sets $X\setminus Q$ and $Y\setminus (Q \cup\{y_{j^{*}}\})$ are empty. Then $Q$ has cardinality $\ell-1$ and $Q$ does not contain $y_{j^{*}}$, which is a contradiction.
\subsubsection{The proof of (\ref{welld})}
Assume $n=2\ell$. Since $\overline{U}_{n}$ and $\overline{L}_{n}$ are diffeomorphisms between $\mathrm{int}(\Delta_{c}^{\ell}\times \Delta_{*}^{\ell})$
and $\mathrm{int}(\mathrm{conv}(\overline{\gamma}([0,1])))$ we see that the equations
\begin{align}
&B^{\sup}(\overline{U}(t))=U^{z}(t), \label{be1}\\
&B^{\inf}(\overline{L}(t))=L^{z}(t) \label{be2}
\end{align}
for all $t \in \mathrm{int}(\Delta_{c}^{\ell}\times \Delta_{*}^{\ell})$ define functions $B^{\sup}$ and $B^{\inf}$ uniquely on $\mathrm{int}(\mathrm{conv}(\overline{\gamma}([0,1])))$. We would like to extend the definitions of $B^{\sup}$ and $B^{\inf}$ to the boundary of $\mathrm{conv}(\overline{\gamma}([0,1]))$ just by taking $t \in \partial (\Delta_{c}^{\ell}\times \Delta_{*}^{\ell})$ in (\ref{be1}) and (\ref{be2}). To make sure that the choice $t \in \partial (\Delta_{c}^{\ell}\times \Delta_{*}^{\ell})$ in (\ref{be1}) defines $B^{\sup}$ (and $B^{\inf}$) uniquely and continuously on $\mathrm{conv}(\overline{\gamma}([0,1]))$ we shall verify the following
\begin{lemma}\label{gran1}
If $\overline{U}(t_{1})=\overline{U}(t_{2})$ for some $t_{1}, t_{2} \in \Delta_{c}^{\ell}\times \Delta_{*}^{\ell}$, then $U^{z}(t_{1})=U^{z}(t_{2})$. Similarly, if $\overline{L}(t_{1})=\overline{L}(t_{2})$ for some $t_{1}, t_{2} \in \Delta_{c}^{\ell}\times \Delta_{*}^{\ell}$, then $L^{z}(t_{1})=L^{z}(t_{2})$.
\end{lemma}
\begin{proof}
Without loss of generality we can assume that $t_{1}, t_{2} \in \partial (\Delta_{c}^{\ell}\times \Delta_{*}^{\ell})$ otherwise the lemma follows from (\ref{b2l}), (\ref{diff2lu}), and (\ref{diff2ll}).
First we show $\overline{L}(t_{1})=\overline{L}(t_{2})$ for some $t_{1}, t_{2} \in \partial (\Delta_{c}^{\ell}\times \Delta_{*}^{\ell})$ implies $L^{z}(t_{1})=L^{z}(t_{2})$. If $t_{1}=t_{2}$ there is nothing to prove, therefore, we assume $t_{1}\neq t_{2}$. For $t_{1} = (\lambda_{1}, \ldots, \lambda_{\ell}, x_{1}, \ldots, x_{\ell}) \in \partial (\Delta_{c}^{\ell}\times \Delta_{*}^{\ell})$ we have
\begin{align*}
\overline{L}_{2\ell}(t_{1}) = \sum_{j=1}^{\ell} \lambda_{j}\overline{\gamma}(x_{j}).
\end{align*}
Among $\lambda_{1}, \ldots, \lambda_{\ell}$, some may be zero, so we reduce the sum to $\sum_{j=1}^{\ell_{1}} \lambda_{q_{j}} \overline{\gamma}(x_{q_{j}})$ where $\lambda_{q_{j}}>0$, $\ell_{1}\leq \ell$, and $0\leq x_{q_{1}}\leq \ldots \leq x_{q_{\ell_{1}}}\leq 1$. Next, among $x_{q_{1}}, \ldots, x_{q_{\ell_{1}}}$, several can be equal to each other. We group together those $x_{q_{j}}$ that are equal to each other, and remove from the sum those that are zero, reducing the sum if necessary. This brings us to the following expression
\begin{align*}
\overline{L}_{2\ell}(t_{1})=\sum_{k=1}^{m}\lambda_{I_{k}} \overline{\gamma}(x_{I_{k}})
\end{align*}
where the sets $I_{k} \subset \{1, \ldots, \ell\}$, $k=1, \ldots, m$, are pairwise disjoint. Here $0<x_{I_{1}}<\ldots<x_{I_{m}}\leq 1$; for each $k$, $1\leq k \leq m$, we have $x_{j}=x_{I_{k}}$ for all $j \in I_{k}$, and we set $\lambda_{I_{k}} := \sum_{j \in I_{k}} \lambda_{j}>0$. We remark that if $I_{k}=\emptyset$ then the term $\lambda_{I_{k}} \overline{\gamma}(x_{I_{k}})$ is zero by definition.
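For example (an illustration), if $\ell=3$ and $t_{1}=(\lambda_{1},\lambda_{2},0,x,x,x_{3})$ with $0<x<x_{3}$, then
\begin{align*}
\overline{L}_{6}(t_{1}) = \lambda_{1}\overline{\gamma}(x)+\lambda_{2}\overline{\gamma}(x)+0\cdot \overline{\gamma}(x_{3}) = \lambda_{I_{1}}\overline{\gamma}(x_{I_{1}}), \qquad I_{1}=\{1,2\}, \quad x_{I_{1}}=x, \quad \lambda_{I_{1}}=\lambda_{1}+\lambda_{2},
\end{align*}
so $m=1$ in the reduced representation.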
Similarly, for $t_{2} = (\beta_{1}, \ldots, \beta_{\ell}, y_{1}, \ldots, y_{\ell}) \in \partial (\Delta_{c}^{\ell}\times \Delta_{*}^{\ell})$ we can write
\begin{align*}
\overline{L}_{2\ell}(t_{2})=\sum_{k=1}^{v}\beta_{J_{k}} \overline{\gamma}(y_{J_{k}})
\end{align*}
with $v \leq \ell$.
As in the previous section, from linear independence of the vectors $\overline{\gamma}(z_{1}), \ldots, \overline{\gamma}(z_{2\ell})$, where $0<z_{1}<\ldots<z_{2\ell}\leq 1$, it follows that $ \overline{L}_{2\ell}(t_{1})= \overline{L}_{2\ell}(t_{2})$ holds if and only if $v=m$, $x_{I_{k}}=y_{J_{k}}$, and $\lambda_{I_{k}}=\beta_{J_{k}}$ for all $k=1, \ldots, m$. Hence $L^{z}_{2\ell}(t_{1})=L^{z}_{2\ell}(t_{2})$.
The proof for the map $\overline{U}_{2\ell}$ proceeds in the same way as for $\overline{L}_{2\ell}$. Indeed, the equality $\overline{U}_{2\ell}(t_{1})=\overline{U}_{2\ell}(t_{2})$ implies
$\sum_{j=1}^{\ell} \lambda_{j}(\overline{\gamma}(x_{j})-\overline{\gamma}(1)) = \sum_{j=1}^{\ell} \beta_{j}(\overline{\gamma}(y_{j})-\overline{\gamma}(1)).$
By removing zero terms, and grouping the similar terms inside the sums as before we obtain the equation
\begin{align*}
\sum_{k=1}^{m} \lambda_{I_{k}}(\overline{\gamma}(x_{I_{k}})-\overline{\gamma}(1))=\sum_{k=1}^{v} \beta_{J_{k}}(\overline{\gamma}(y_{J_{k}})-\overline{\gamma}(1)),
\end{align*}
where we also removed the terms containing those $x_{j}$ and $y_{i}$ which are equal to $1$.
Applying Lemma~\ref{ltorsion} with $r=1$
we obtain that $v=m$, $x_{I_{k}}=y_{J_{k}}$, and $\lambda_{I_{k}}=\beta_{J_{k}}$ for all $k=1,\ldots, m$. Hence $U^{z}_{2\ell}(t_{1})=U^{z}_{2\ell}(t_{2})$.
\end{proof}
Next, we prove the analog of Lemma~\ref{gran1} for $n=2\ell-1$.
\begin{lemma}\label{gran2}
If $\overline{U}(t_{1})=\overline{U}(t_{2})$ for some $t_{1}, t_{2} \in \Delta_{c}^{\ell}\times \Delta_{*}^{\ell-1}$, then $U^{z}(t_{1})=U^{z}(t_{2})$. Similarly, if $\overline{L}(t_{1})=\overline{L}(t_{2})$ for some $t_{1}, t_{2} \in \Delta_{c}^{\ell-1}\times \Delta_{*}^{\ell}$, then $L^{z}(t_{1})=L^{z}(t_{2})$.
\end{lemma}
\begin{proof}
Without loss of generality we can assume that $t_{1}, t_{2} \in \partial (\Delta_{c}^{\ell}\times \Delta_{*}^{\ell-1})$ (similarly, $t_{1}, t_{2} \in \partial (\Delta_{c}^{\ell-1}\times \Delta_{*}^{\ell})$ in the second claim of the lemma) otherwise the lemma follows from (\ref{b2l-1}), (\ref{diff2l-1u}), and (\ref{diff2l-1l}).
We show that the equality $\overline{U}(t_{1})=\overline{U}(t_{2})$ for some
$t_{1}=(\lambda_{1}, \ldots, \lambda_{\ell}, x_{2}, \ldots, x_{\ell})$, and $t_{2}=(\beta_{1}, \ldots, \beta_{\ell}, y_{2}, \ldots, y_{\ell})$ in $\partial (\Delta_{c}^{\ell}\times \Delta_{*}^{\ell-1})$ implies $U^{z}(t_{1})=U^{z}(t_{2})$. We can further assume $t_{1}\neq t_{2}$, otherwise there is nothing to prove. We have
\begin{align}\label{jau1}
\lambda_{1} \overline{\gamma}(1)+\sum_{j=2}^{\ell}\lambda_{j} \overline{\gamma}(x_{j}) = \beta_{1} \overline{\gamma}(1)+\sum_{j=2}^{\ell}\beta_{j}\overline{\gamma}(y_{j}).
\end{align}
As in the previous lemma, on the left-hand side of (\ref{jau1}) we reduce the sum by removing those $\lambda_{j}$'s which are equal to zero, and further by considering only positive $x_{j}$'s. Next, among the numbers $0\leq x_{2}\leq \ldots \leq x_{\ell}\leq 1$, we group together those that are equal to each other, and those $x_{j}$'s which are equal to $1$ we group with $\lambda_{1} \overline{\gamma}(1)$. Eventually, the left-hand side of (\ref{jau1}) takes the form $\lambda_{I_{0}}\overline{\gamma}(1)+\sum_{j=1}^{m}\lambda_{I_{j}}\overline{\gamma}(x_{I_{j}})$, where $m\leq \ell-1$, $0<x_{I_{1}}<\ldots<x_{I_{m}}<1$, and $\lambda_{I_{j}} = \sum_{i \in I_{j}}\lambda_{i}$ with $\lambda_{I_{0}}\geq 0$ and $\lambda_{I_{j}}>0$ for all $j=1, \ldots, m$. Making a similar reduction on the right-hand side of (\ref{jau1}), we see that (\ref{jau1}) takes the form
\begin{align}\label{jau2}
(\lambda_{I_{0}}-\beta_{J_{0}})\overline{\gamma}(1)+\sum_{j=1}^{m}\lambda_{I_{j}}\overline{\gamma}(x_{I_{j}}) - \sum_{j=1}^{v}\beta_{J_{j}}\overline{\gamma}(y_{J_{j}})=0.
\end{align}
Since $1+m+v\leq 2\ell-1$, it follows from Lemma~\ref{lltorsion} that (\ref{jau2}) holds if and only if $m=v$, $\lambda_{I_{0}}=\beta_{J_{0}}$, and $\lambda_{I_{j}}=\beta_{J_{j}}$, $x_{I_{j}}=y_{J_{j}}$ for all $j=1, \ldots, m$. It then follows that $U^{z}(t_{1})=U^{z}(t_{2})$.
Next, we show that the equality $\overline{L}(t_{1})=\overline{L}(t_{2})$ for some
$t_{1}=(\lambda_{2}, \ldots, \lambda_{\ell}, x_{1}, \ldots, x_{\ell})$, and $t_{2}=(\beta_{2}, \ldots, \beta_{\ell}, y_{1}, \ldots, y_{\ell})$ in $\partial (\Delta_{c}^{\ell-1}\times \Delta_{*}^{\ell})$ implies $L^{z}(t_{1})=L^{z}(t_{2})$. Without loss of generality assume $t_{1} \neq t_{2}$ and $y_{1}\leq x_{1}$. The equality $\overline{L}(t_{1})=\overline{L}(t_{2})$ implies
\begin{align*}
(1-\sum_{j=2}^{\ell}\lambda_{j})\overline{\gamma}(x_{1})+\sum_{j=2}^{\ell}\lambda_{j} \overline{\gamma}(x_{j})= (1-\sum_{j=2}^{\ell}\beta_{j})\overline{\gamma}(y_{1})+\sum_{j=2}^{\ell}\beta_{j} \overline{\gamma}(y_{j}),
\end{align*}
which we can rewrite as
\begin{align}\label{sum19}
(1-\sum_{j=2}^{\ell}\lambda_{j})(\overline{\gamma}(x_{1})-\overline{\gamma}(y_{1}))+\sum_{j=2}^{\ell}\lambda_{j}(\overline{\gamma}(x_{j})-\overline{\gamma}(y_{1}))-\sum_{j=2}^{\ell}\beta_{j}(\overline{\gamma}(y_{j})-\overline{\gamma}(y_{1}))=0.
\end{align}
We would like to show $L^{z}(t_{1})-L^{z}(t_{2})=0$. Notice that
\begin{align}
&L^{z}(t_{1})-L^{z}(t_{2}) =(1-\sum_{j=2}^{\ell}\lambda_{j})(\gamma_{n+1}(x_{1})-\gamma_{n+1}(y_{1}))+\sum_{j=2}^{\ell}\lambda_{j}(\gamma_{n+1}(x_{j})-\gamma_{n+1}(y_{1}))-\label{sum13}\\
&\sum_{j=2}^{\ell}\beta_{j}(\gamma_{n+1}(y_{j})-\gamma_{n+1}(y_{1})). \nonumber
\end{align}
Rearranging and grouping equal terms in (\ref{sum19}) as in the previous arguments we can rewrite (\ref{sum19}) as
\begin{align}
&(1-\sum_{j=1}^{m_{1}} \lambda_{I^{1}_{j}} -\beta_{I_{0}})(\overline{\gamma}(x_{1})-\overline{\gamma}(y_{1})) + \sum_{j=1}^{m_{2}}\lambda_{I_{j}^{2}}(\overline{\gamma}(x_{I_{j}^{2}})-\overline{\gamma}(y_{1})) \nonumber\\
&+\sum_{j=1}^{m_{3}}(\lambda_{I_{j}^{3}} - \beta_{J_{j}^{1}})(\overline{\gamma}(x_{I_{j}^{3}})-\overline{\gamma}(y_{1})) - \sum_{j=1}^{m_{4}} \beta_{J_{j}^{2}}(\overline{\gamma}(y_{J_{j}^{2}})-\overline{\gamma}(y_{1}))=0, \label{sum18}
\end{align}
where $m_{1}, m_{2}, m_{3}, m_{4}$ are non-negative integers with $1+m_{2}+m_{3}+m_{4}\leq 2\ell-1$ (if $m_{k}=0$ then the corresponding sum is set to be zero), $I_{j}^{1}, I_{j}^{2}, I_{j}^{3}, J_{j}^{1}, J_{j}^{2}$ are subsets of $\{2, \ldots, \ell\}$, $\beta_{I_{0}}\geq 0$, $\lambda_{I_{j}^{k}}=\sum_{i \in I_{j}^{k}} \lambda_{i}>0$, $\beta_{J_{j}^{k}}=\sum_{i \in J_{j}^{k}} \beta_{i}>0$, $\lambda_{I_{j}^{3}}\neq \beta_{J_{j}^{1}}$, and the points $x_{1}, \{x_{I_{j}^{2}}\}_{j=1}^{m_{2}},\{x_{I_{j}^{3}}\}_{j=1}^{m_{3}},\{y_{J_{j}^{2}}\}_{j=1}^{m_{4}}$ are distinct, none of them (except possibly $x_{1}$) coincides with $y_{1}$, and all of them (except possibly $x_{1}$) belong to $(0,1]$. We remark that $x_{1}$ can be equal to $y_{1}$. In a similar way we can rewrite (\ref{sum13}) in the form of (\ref{sum18}), i.e.,
\begin{align*}
&L^{z}(t_{1})-L^{z}(t_{2})=\\
&(1-\sum_{j=1}^{m_{1}} \lambda_{I^{1}_{j}} -\beta_{I_{0}})(\gamma_{n+1}(x_{1})-\gamma_{n+1}(y_{1})) + \sum_{j=1}^{m_{2}}\lambda_{I_{j}^{2}}(\gamma_{n+1}(x_{I_{j}^{2}})-\gamma_{n+1}(y_{1})) \\
&+\sum_{j=1}^{m_{3}}(\lambda_{I_{j}^{3}} - \beta_{J_{j}^{1}})(\gamma_{n+1}(x_{I_{j}^{3}})-\gamma_{n+1}(y_{1})) - \sum_{j=1}^{m_{4}} \beta_{J_{j}^{2}}(\gamma_{n+1}(y_{J_{j}^{2}})-\gamma_{n+1}(y_{1})).
\end{align*}
The next lemma follows from Corollary~\ref{klasikac}.
\begin{lemma}\label{lgtorsion}
For any numbers $z_{j}$, $1\leq j \leq 2\ell-1$, such that $0<z_{1}<z_{2}<\ldots<z_{2\ell-1}\leq 1$, and any $r \in [0,1]\setminus\{z_{1}, \ldots, z_{2\ell-1}\}$, the vectors $\overline{\gamma}(z_{1})-\overline{\gamma}(r), \ldots, \overline{\gamma}(z_{2\ell-1})-\overline{\gamma}(r)$ are linearly independent in $\mathbb{R}^{2\ell-1}$.
\end{lemma}
If $y_{1}=x_{1}$ then $L^{z}(t_{1})-L^{z}(t_{2})=0$ follows from (\ref{sum18}) and Lemma~\ref{lgtorsion}. If $y_{1}<x_{1}$, then applying Lemma~\ref{lgtorsion} to (\ref{sum18}) we see that $1-\sum_{j=1}^{m_{1}} \lambda_{I^{1}_{j}} -\beta_{I_{0}}=0$ and $m_{2}=m_{3}=m_{4}=0$, which implies that $L^{z}(t_{1})-L^{z}(t_{2})=0$. Lemma~\ref{gran2} is proved.
\end{proof}
\subsubsection{The proof of (\ref{mincon1}) and (\ref{maxcon2})}
We start with (\ref{mincon1}). Assume $n=2\ell-1$. First we show that $B^{\sup}(\overline{\gamma})=\gamma_{n+1}$. We recall that
\begin{align*}
B^{\sup}(\beta_{1} \overline{\gamma}(1)+\sum_{j=2}^{\ell}\beta_{j} \overline{\gamma}(x_{j}))=\beta_{1} \gamma_{n+1}(1)+\sum_{j=2}^{\ell}\beta_{j} \gamma_{n+1}(x_{j}),
\end{align*}
holds for all $(\beta_{1}, \ldots, \beta_{\ell}, x_{2}, \ldots, x_{\ell}) \in \Delta_{c}^{\ell}\times \Delta_{*}^{\ell-1}$. We claim that if $\beta_{1} \overline{\gamma}(1)+\sum_{j=2}^{\ell}\beta_{j} \overline{\gamma}(x_{j}) = \overline{\gamma}(y)$ for some $y \in [0,1]$ then $\beta_{1} \gamma_{n+1}(1)+\sum_{j=2}^{\ell}\beta_{j} \gamma_{n+1}(x_{j})=\gamma_{n+1}(y)$. Indeed, $\overline{\gamma}(y)=\overline{U}(t_{2})$ with $t_{2} = (0,1,0, \ldots, 0, y, \ldots, y) \in \Delta_{c}^{\ell}\times \Delta_{*}^{\ell-1}$, and $\beta_{1} \overline{\gamma}(1)+\sum_{j=2}^{\ell}\beta_{j} \overline{\gamma}(x_{j})=\overline{U}(t_{1})$ with $t_{1}=(\beta_{1}, \ldots, \beta_{\ell}, x_{2}, \ldots, x_{\ell}) \in \Delta_{c}^{\ell}\times \Delta_{*}^{\ell-1}$. Thus the claim follows from Lemma~\ref{gran2}.
Next, we show that $B^{\sup}$ is concave on $\mathrm{conv}(\overline{\gamma}([0,1]))$. As the surface parametrized by $U_{n}(t)$, $t \in \Delta_{c}^{\ell}\times \Delta_{*}^{\ell-1}$, coincides with the graph $\{ (x,B^{\sup}(x)), x \in \mathrm{conv}(\overline{\gamma}([0,1]))\}$, and $B^{\sup} \in C(\mathrm{conv}(\overline{\gamma}([0,1])))$, it suffices to show that the tangent plane $T$ at $U_{n}(s)$, for any $s=(\lambda_{1}, \ldots, \lambda_{\ell}, y_{2}, \ldots, y_{\ell})\in \mathrm{int}(\Delta_{c}^{\ell}\times \Delta_{*}^{\ell-1})$, lies {\em above} the surface $U_{n}$. The equation of the tangent plane $T$ at $U(s):=U_{n}(s)$ is given as
\begin{align*}
T(x):=\det(U_{\beta_{1}}(s), \ldots, U_{\beta_{\ell}}(s), U_{x_{2}}(s), \ldots, U_{x_{\ell}}(s), x-U(s))=0, \quad x \in \mathbb{R}^{n+1}.
\end{align*}
We have
\begin{align*}
T(x) = \lambda_{2} \cdots \lambda_{\ell}\det(\gamma(1), \gamma(y_{2}), \ldots, \gamma(y_{\ell}), \gamma'(y_{2}), \ldots, \gamma'(y_{\ell}), x).
\end{align*}
To show that the tangent plane $T$ lies {\em above} the surface, first we should find the sign of $T(\lambda e_{n+1})$ as $\lambda \to \infty$, where $e_{n+1}=(\underbrace{0, \ldots, 0, 1}_{n+1})$. For sufficiently large positive $\lambda$ we have
\begin{align*}
\mathrm{sign}(T(\lambda e_{n+1})) = \mathrm{sign}( \det(\overline{\gamma}(1), \overline{\gamma}(y_{2}), \ldots, \overline{\gamma}(y_{\ell}), \overline{\gamma}'(y_{2}), \ldots, \overline{\gamma}'(y_{\ell}))).
\end{align*}
On the other hand we have
\begin{align*}
&\det(\overline{\gamma}(1), \overline{\gamma}(y_{2}), \ldots, \overline{\gamma}(y_{\ell}), \overline{\gamma}'(y_{2}), \ldots, \overline{\gamma}'(y_{\ell})) = \\
&(-1)^{\frac{(\ell-1)(\ell-2)}{2}} \det(\overline{\gamma}(y_{2}), \overline{\gamma}'(y_{2}), \ldots, \overline{\gamma}(y_{\ell}), \overline{\gamma}'(y_{\ell}), \overline{\gamma}(1))
=\\
&(-1)^{\frac{(\ell-1)(\ell-2)}{2}}\det(\overline{\gamma}(y_{2})-\overline{\gamma}(0), \overline{\gamma}'(y_{2}), \ldots, \overline{\gamma}(y_{\ell})-\overline{\gamma}(y_{\ell-1}), \overline{\gamma}'(y_{\ell}), \overline{\gamma}(1)-\overline{\gamma}(y_{\ell}))=\\
&(-1)^{\frac{(\ell-1)(\ell-2)}{2}} \int_{y_{\ell}}^{1} \int_{y_{\ell-1}}^{y_{\ell}}\ldots \int_{0}^{y_{2}}\det (\overline{\gamma}'(v_{2}), \overline{\gamma}'(y_{2}), \ldots,\overline{\gamma}'(v_{\ell}), \overline{\gamma}'(y_{\ell}), \overline{\gamma}'(v_{\ell+1}))dv_{2} \ldots dv_{\ell} dv_{\ell+1}.
\end{align*}
Thus, Lemma~\ref{klasika} applied to $\overline{\gamma}$ shows that $\mathrm{sign}(T(\lambda e_{n+1}))$, for sufficiently large $\lambda$, coincides with $(-1)^{\frac{(\ell-1)(\ell-2)}{2}}$. Therefore, the surface $U(t)$ being {\em below} the tangent plane $T$ simply means that $(-1)^{\frac{(\ell-1)(\ell-2)}{2}}T(U(t))\leq 0$ for all $t=(\beta_{1}, \ldots, \beta_{\ell}, x_{2}, \ldots, x_{\ell}) \in \Delta_{c}^{\ell}\times \Delta_{*}^{\ell-1}$. We have
\begin{align*}
T(U(t)) = \sum_{j=2}^{\ell} \beta_{j} \det(\gamma(1), \gamma(y_{2}), \ldots, \gamma(y_{\ell}), \gamma'(y_{2}), \ldots, \gamma'(y_{\ell}), \gamma(x_{j}))\, \prod_{k=2}^{\ell}\lambda_{k}.
\end{align*}
It suffices to verify that
\begin{align}\label{nacili1}
(-1)^{\frac{(\ell-1)(\ell-2)}{2}}\det(\gamma(1), \gamma(y_{2}), \ldots, \gamma(y_{\ell}), \gamma'(y_{2}), \ldots, \gamma'(y_{\ell}), \gamma(u)) \leq 0
\end{align}
for all $u \in [0,1]$. We have
\begin{align}
&(-1)^{\frac{(\ell-1)(\ell-2)}{2}}\det(\gamma(1), \gamma(y_{2}), \ldots, \gamma(y_{\ell}), \gamma'(y_{2}), \ldots, \gamma'(y_{\ell}), \gamma(u)) \nonumber \\
&=\det(\gamma(y_{2}), \gamma'(y_{2}), \ldots, \gamma(y_{\ell}), \gamma'(y_{\ell}), \gamma(1), \gamma(u)). \label{perm1}
\end{align}
If $u \in [y_{\ell},1]$, then
\begin{align*}
&\det(\gamma(y_{2}), \gamma'(y_{2}), \ldots, \gamma(y_{\ell}), \gamma'(y_{\ell}), \gamma(1), \gamma(u))=\\
&-\det(\gamma(y_{2}), \gamma'(y_{2}), \ldots, \gamma(y_{\ell}), \gamma'(y_{\ell}),\gamma(u),\gamma(1))=\\
&-\det(\gamma(y_{2})-\gamma(0), \gamma'(y_{2}), \ldots, \gamma(y_{\ell})-\gamma(y_{\ell-1}), \gamma'(y_{\ell}),\gamma(u)-\gamma(y_{\ell}),\gamma(1)-\gamma(u))=\\
&-\int_{u}^{1}\int_{y_{\ell}}^{u}\int_{y_{\ell-1}}^{y_{\ell}}\ldots \int_{0}^{y_{2}}\det(\gamma'(v_{2}), \gamma'(y_{2}), \ldots, \gamma'(v_{\ell}), \gamma'(y_{\ell}),\gamma'(v_{\ell+1}),\gamma'(v_{\ell+2}))dv_{2}\ldots dv_{\ell}dv_{\ell+1}dv_{\ell+2}
\end{align*}
is non-positive by Lemma~\ref{klasika}.
If $u \in [0,y_{2}]$ we again use (\ref{perm1}). Next, we move the column $\gamma(u)$ to the left of the column $\gamma(y_{2})$. Notice that we acquire a negative sign: passing the couples $\gamma(y_{i}), \gamma'(y_{i})$ does not change the sign of the determinant, and the negative sign arises from passing $\gamma(1)$. Using a similar integral representation as before together with Lemma~\ref{klasika}, we see that the inequality (\ref{nacili1}) holds true in the case $u \in [0, y_{2}]$. The case $u \in [y_{i}, y_{i+1}]$ for some $i \in \{2, \ldots, \ell-1\}$ is similar to the previous one. Indeed, first we apply (\ref{perm1}), then we place the column $\gamma(u)$ between the columns $\gamma'(y_{i})$ and $\gamma(y_{i+1})$ (thus acquiring a negative sign), and we use a similar integral representation together with Lemma~\ref{klasika} to conclude that (\ref{nacili1}) holds in this case too. This finishes the proof of the concavity of $B^{\sup}$ on $\mathrm{conv}(\overline{\gamma}([0,1]))$.
Next, we show that $B^{\sup}$ is the minimal function in the family of concave functions $G$ on $\mathrm{conv}(\overline{\gamma}([0,1]))$ satisfying the obstacle condition $G(\overline{\gamma}(s)) \geq \gamma_{n+1}(s)$ for all $s \in [0,1]$. Indeed, pick an arbitrary point $x \in \mathrm{conv}(\overline{\gamma}([0,1]))$. We would like to show $G(x) \geq B^{\sup}(x)$. There exists $(\lambda_{1}, \ldots, \lambda_{\ell}, y_{2}, \ldots, y_{\ell}) \in \Delta_{c}^{\ell}\times \Delta_{*}^{\ell-1}$ such that $x = \lambda_{1} \overline{\gamma}(1)+\sum_{j=2}^{\ell}\lambda_{j} \overline{\gamma}(y_{j})$.
Therefore
\begin{align*}
B^{\sup}(x) = \lambda_{1} \gamma_{n+1}(1)+\sum_{j=2}^{\ell}\lambda_{j}\gamma_{n+1}(y_{j})\leq \lambda_{1} G(\overline{\gamma}(1))+\sum_{j=2}^{\ell}\lambda_{j}G(\overline{\gamma}(y_{j})) \leq G(x).
\end{align*}
Next we consider $B^{\sup}$ when $n=2\ell$. We only check the concavity of $B^{\sup}$, because the remaining properties (the minimality and the obstacle condition $B^{\sup}(\overline{\gamma})=\gamma_{n+1}$) are verified similarly as in dimension $n=2\ell-1$. The equation of the tangent plane $T$ at the point
\begin{align*}
U(s) :=U_{n}(s) = \sum_{j=1}^{\ell} \beta_{j} \gamma(y_{j})+(1-\sum_{j=1}^{\ell}\beta_{j})\gamma(1),
\end{align*}
where $s=(\beta_{1}, \ldots, \beta_{\ell}, y_{1}, \ldots, y_{\ell}) \in \mathrm{int}(\Delta_{c}^{\ell}\times \Delta_{*}^{\ell})$, is given as
\begin{align*}
T(x) := \det(U_{\beta_{1}}, \ldots, U_{\beta_{\ell}}, U_{y_{1}}, \ldots, U_{y_{\ell}}, x-U(s))=0, \quad x \in \mathbb{R}^{n+1}.
\end{align*}
We have
\begin{align*}
\mathrm{sign}(T(x)) = \mathrm{sign}(\det(\gamma(y_{1})-\gamma(1), \ldots, \gamma(y_{\ell})-\gamma(1),\gamma'(y_{1}), \ldots, \gamma'(y_{\ell}), x-\gamma(1))).
\end{align*}
Next,
\begin{align*}
\mathrm{sign}(T(\lambda e_{n+1})) = \mathrm{sign}(\det(\overline{\gamma}(y_{1})-\overline{\gamma}(1), \ldots, \overline{\gamma}(y_{\ell})-\overline{\gamma}(1),\overline{\gamma}'(y_{1}), \ldots, \overline{\gamma}'(y_{\ell})))
\end{align*}
as $\lambda \to +\infty$. On the other hand we have
\begin{align*}
&\det(\overline{\gamma}(y_{1})-\overline{\gamma}(1), \ldots, \overline{\gamma}(y_{\ell})-\overline{\gamma}(1),\overline{\gamma}'(y_{1}), \ldots, \overline{\gamma}'(y_{\ell})) = \\
&(-1)^{\ell}\det(\overline{\gamma}(y_{2})-\overline{\gamma}(y_{1}), \ldots, \overline{\gamma}(y_{\ell})-\overline{\gamma}(y_{\ell-1}),\overline{\gamma}(1)-\overline{\gamma}(y_{\ell}),\overline{\gamma}'(y_{1}), \ldots, \overline{\gamma}'(y_{\ell}))=\\
&(-1)^{\frac{\ell(\ell-1)}{2}}\det(\overline{\gamma}'(y_{1}),\overline{\gamma}(y_{2})-\overline{\gamma}(y_{1}), \ldots, \overline{\gamma}'(y_{\ell}),\overline{\gamma}(1)-\overline{\gamma}(y_{\ell}))=\\
&(-1)^{\frac{\ell(\ell-1)}{2}} \int_{y_{\ell}}^{1}\ldots \int_{y_{1}}^{y_{2}} \det(\overline{\gamma}'(y_{1}),\overline{\gamma}'(x_{1}), \ldots, \overline{\gamma}'(y_{\ell}),\overline{\gamma}'(x_{\ell}))dx_{1} \ldots dx_{\ell}.
\end{align*}
Thus, it follows from Lemma~\ref{klasika} that $\mathrm{sign}(T(\lambda e_{n+1})) = (-1)^{\frac{\ell(\ell-1)}{2}}$ as $\lambda \to \infty$. Therefore, to verify concavity of $B^{\sup}$ it suffices to show $(-1)^{\frac{\ell(\ell-1)}{2}} T(U(t)) \leq 0$ for all $t = (\lambda_{1}, \ldots, \lambda_{\ell}, x_{1}, \ldots, x_{\ell}) \in \Delta_{c}^{\ell}\times \Delta_{*}^{\ell}$. We have
\begin{align*}
T(U(t))= \sum_{j=1}^{\ell} \lambda_{j} \det(\gamma(y_{1})-\gamma(1), \ldots, \gamma(y_{\ell})-\gamma(1),\gamma'(y_{1}), \ldots, \gamma'(y_{\ell}), \gamma(x_{j})-\gamma(1))\prod_{i=1}^{\ell}\beta_{i}.
\end{align*}
It suffices to show that $(-1)^{\frac{\ell(\ell-1)}{2}} \det(\gamma(y_{1})-\gamma(1), \ldots, \gamma(y_{\ell})-\gamma(1),\gamma'(y_{1}), \ldots, \gamma'(y_{\ell}), \gamma(u)-\gamma(1)) \leq 0$ for all $u \in [0,1]$. Assume $u \in [y_{i}, y_{i+1}]$ for some $i\in \{1,\ldots, \ell-1\}$. We have
\begin{align*}
&(-1)^{\frac{\ell(\ell-1)}{2}} \det(\gamma(y_{1})-\gamma(1), \ldots, \gamma(y_{\ell})-\gamma(1),\gamma'(y_{1}), \ldots, \gamma'(y_{\ell}), \gamma(u)-\gamma(1))=\\
&-\det(\gamma'(y_{1}), \gamma(1)-\gamma(y_{1}), \ldots, \gamma'(y_{\ell}), \gamma(1)-\gamma(y_{\ell}), \gamma(1)-\gamma(u))=-\\
&\det(\gamma'(y_{1}), \gamma(1)-\gamma(y_{1}), \ldots, \gamma'(y_{i}), \gamma(1)-\gamma(y_{i}),\gamma(1)-\gamma(u), \gamma'(y_{i+1}), \gamma(1)-\gamma(y_{i+1}), \ldots, \gamma(1)-\gamma(y_{\ell}))=\\
&-\det(\gamma'(y_{1}), \gamma(y_{2})-\gamma(y_{1}), \ldots, \gamma'(y_{i}), \gamma(u)-\gamma(y_{i}),\gamma(y_{i+1})-\gamma(u), \gamma'(y_{i+1}), \gamma(y_{i+2})-\gamma(y_{i+1}), \ldots, \gamma(1)-\gamma(y_{\ell}))\\
&=- \int_{y_{\ell}}^{1}\ldots \int_{u}^{y_{i+1}} \int_{y_{i}}^{u} \ldots \int_{y_{1}}^{y_{2}}\det(\gamma'(y_{1}), \gamma'(v_{1}), \ldots, \gamma'(y_{i}), \gamma'(w), \gamma'(v_{i}), \gamma'(y_{i+1}), \ldots, \gamma'(v_{\ell})) dv_{1}\ldots dw dv_{i} \ldots dv_{\ell}
\end{align*}
which is nonpositive by Lemma~\ref{klasika} (here $y_{i+2}$ for $i=\ell-1$ is set to be $1$). The cases $u \in [0,y_{1}]$ and $u \in [y_{\ell},1]$ are treated similarly.
Next, we verify (\ref{maxcon2}). The obstacle condition $B^{\inf}(\overline{\gamma})=\gamma_{n+1}$ and the maximality (given the convexity of $B^{\inf}$) are verified similarly as in the case of $B^{\sup}$. So, in what follows we only verify the convexity of $B^{\inf}$.
Assume $n=2\ell-1$. The equation of the tangent plane $T$ at point
\begin{align*}
L(s):=L_{n}(s) = (1-\sum_{j=2}^{\ell}\beta_{j})\gamma(y_{1})+\sum_{j=2}^{\ell}\beta_{j} \gamma(y_{j}),
\end{align*}
where $s=(\beta_{2}, \ldots, \beta_{\ell}, y_{1}, \ldots, y_{\ell}) \in \mathrm{int}(\Delta_{c}^{\ell-1}\times \Delta_{*}^{\ell})$, is given by
\begin{align*}
&T(x):=\det(L_{\beta_{2}}, \ldots, L_{\beta_{\ell}}, L_{y_{1}}, \ldots, L_{y_{\ell}}, x-L(s)) =\\
&\det(\gamma(y_{2})-\gamma(y_{1}), \ldots, \gamma(y_{\ell})-\gamma(y_{1}), \gamma'(y_{1}), \ldots, \gamma'(y_{\ell}), x-\gamma(y_{1}))\, (1-\sum_{j=2}^{\ell}\beta_{j}) \prod_{j=2}^{\ell}\beta_{j}.
\end{align*}
We have
\begin{align*}
\mathrm{sign}(T(\lambda e_{n+1})) = \mathrm{sign}(\det(\overline{\gamma}(y_{2})-\overline{\gamma}(y_{1}), \ldots, \overline{\gamma}(y_{\ell})-\overline{\gamma}(y_{1}), \overline{\gamma}'(y_{1}), \ldots, \overline{\gamma}'(y_{\ell})))
\end{align*}
as $\lambda \to +\infty$. On the other hand
\begin{align*}
&\det(\overline{\gamma}(y_{2})-\overline{\gamma}(y_{1}), \ldots, \overline{\gamma}(y_{\ell})-\overline{\gamma}(y_{1}), \overline{\gamma}'(y_{1}), \ldots, \overline{\gamma}'(y_{\ell})) =\\
&(-1)^{\frac{\ell(\ell-1)}{2}}\det(\overline{\gamma}'(y_{1}),\overline{\gamma}(y_{2})-\overline{\gamma}(y_{1}), \ldots,\overline{\gamma}'(y_{\ell-1}), \overline{\gamma}(y_{\ell})-\overline{\gamma}(y_{\ell-1}),\overline{\gamma}'(y_{\ell}))=\\
&(-1)^{\frac{\ell(\ell-1)}{2}} \int_{y_{\ell-1}}^{y_{\ell}}\ldots \int_{y_{1}}^{y_{2}} \det(\overline{\gamma}'(y_{1}),\overline{\gamma}'(v_{2}), \ldots,\overline{\gamma}'(y_{\ell-1}), \overline{\gamma}'(v_{\ell}),\overline{\gamma}'(y_{\ell})) dv_{2} \ldots dv_{\ell}.
\end{align*}
Thus $\mathrm{sign}(T(\lambda e_{n+1})) = (-1)^{\frac{\ell(\ell-1)}{2}}$ by Lemma~\ref{klasika} as $\lambda \to +\infty$. Therefore, $B^{\inf}$ is convex if $(-1)^{\frac{\ell(\ell-1)}{2}} T(L(t))\geq 0$ for all $t = (\lambda_{2}, \ldots, \lambda_{\ell}, x_{1}, \ldots, x_{\ell}) \in \Delta_{c}^{\ell-1}\times \Delta_{*}^{\ell}$. We have
\begin{align*}
&T(L(t)) = \det(\gamma(y_{2})-\gamma(y_{1}), \ldots, \gamma(y_{\ell})-\gamma(y_{1}), \gamma'(y_{1}), \ldots, \gamma'(y_{\ell}), L(t)-\gamma(y_{1}))\, (1-\sum_{j=2}^{\ell}\beta_{j}) \prod_{j=2}^{\ell}\beta_{j}=\\
&(1-\sum_{k=2}^{\ell}\lambda_{k})\det(\gamma(y_{2})-\gamma(y_{1}), \ldots, \gamma(y_{\ell})-\gamma(y_{1}), \gamma'(y_{1}), \ldots, \gamma'(y_{\ell}), \gamma(x_{1})-\gamma(y_{1}))\, (1-\sum_{j=2}^{\ell}\beta_{j}) \prod_{j=2}^{\ell}\beta_{j}\\
&+\sum_{k=2}^{\ell}\lambda_{k} \det(\gamma(y_{2})-\gamma(y_{1}), \ldots, \gamma(y_{\ell})-\gamma(y_{1}), \gamma'(y_{1}), \ldots, \gamma'(y_{\ell}), \gamma(x_{k})-\gamma(y_{1}))\, (1-\sum_{j=2}^{\ell}\beta_{j}) \prod_{j=2}^{\ell}\beta_{j}.
\end{align*}
Thus, to verify convexity of $B^{\inf}$, it suffices to show
\begin{align*}
(-1)^{\frac{\ell(\ell-1)}{2}}\det(\gamma(y_{2})-\gamma(y_{1}), \ldots, \gamma(y_{\ell})-\gamma(y_{1}), \gamma'(y_{1}), \ldots, \gamma'(y_{\ell}), \gamma(u)-\gamma(y_{1}))\geq 0
\end{align*}
for all $u \in [0,1]$. Notice that
\begin{align*}
&(-1)^{\frac{\ell(\ell-1)}{2}}\det(\gamma(y_{2})-\gamma(y_{1}), \ldots, \gamma(y_{\ell})-\gamma(y_{1}), \gamma'(y_{1}), \ldots, \gamma'(y_{\ell}), \gamma(u)-\gamma(y_{1})) =\\
& \det(\gamma'(y_{1}),\gamma(y_{2})-\gamma(y_{1}), \ldots, \gamma'(y_{\ell-1}),\gamma(y_{\ell})-\gamma(y_{1}), \gamma'(y_{\ell}), \gamma(u)-\gamma(y_{1})).
\end{align*}
Next, assume $u \in [y_{i}, y_{i+1}]$ for some $i \in \{1, \ldots, \ell-1\}$ (the cases $u \in [0, y_{1}]$ and $u\in [y_{\ell},1]$ are considered similarly). We have
\begin{align*}
&\det(\gamma'(y_{1}),\gamma(y_{2})-\gamma(y_{1}), \ldots, \gamma'(y_{\ell-1}),\gamma(y_{\ell})-\gamma(y_{1}), \gamma'(y_{\ell}), \gamma(u)-\gamma(y_{1}))=\\
&\det(\gamma'(y_{1}),\gamma(y_{2})-\gamma(y_{1}), \ldots, \gamma'(y_{i}), \gamma(u)-\gamma(y_{1}),\gamma(y_{i+1})-\gamma(y_{1}), \gamma'(y_{i+1}), \ldots, \gamma(y_{\ell})-\gamma(y_{1}), \gamma'(y_{\ell}))=\\
&\det(\gamma'(y_{1}),\gamma(y_{2})-\gamma(y_{1}), \ldots, \gamma'(y_{i}), \gamma(u)-\gamma(y_{i}),\gamma(y_{i+1})-\gamma(u), \gamma'(y_{i+1}), \ldots, \gamma(y_{\ell})-\gamma(y_{\ell-1}), \gamma'(y_{\ell}))\\
&=\int_{y_{\ell-1}}^{y_{\ell}}\ldots \int_{u}^{y_{i+1}}\int_{y_{i}}^{u}\ldots \int_{y_{1}}^{y_{2}}\\
&\det(\gamma'(y_{1}),\gamma'(v_{1}), \ldots, \gamma'(y_{i}), \gamma'(w),\gamma'(v_{i}), \gamma'(y_{i+1}), \ldots, \gamma'(v_{\ell-1}), \gamma'(y_{\ell})) dv_{1}\ldots dv_{i}dw\ldots dv_{\ell-1}.
\end{align*}
Thus $(-1)^{\frac{\ell(\ell-1)}{2}}\,T(L(t))\geq 0$ by Lemma~\ref{klasika}.
Next, we consider $B^{\inf}$ when $n=2\ell$. As in the previous cases we only verify the convexity of $B^{\inf}$ (the maximality and the obstacle condition $B^{\inf}(\overline{\gamma})=\gamma_{n+1}$ are verified easily).
The equation of the tangent plane $T$ at point
\begin{align*}
L(s):=L_{n}(s) = \sum_{j=1}^{\ell}\beta_{j} \gamma(y_{j}),
\end{align*}
where $s=(\beta_{1}, \ldots, \beta_{\ell}, y_{1}, \ldots, y_{\ell}) \in \mathrm{int}(\Delta_{c}^{\ell}\times \Delta_{*}^{\ell})$, is given by
\begin{align*}
&T(x):=\det(L_{\beta_{1}}, \ldots, L_{\beta_{\ell}}, L_{y_{1}}, \ldots, L_{y_{\ell}}, x-L(s)) =
\det(\gamma(y_{1}), \ldots, \gamma(y_{\ell}), \gamma'(y_{1}), \ldots, \gamma'(y_{\ell}), x)\, \prod_{j=1}^{\ell}\beta_{j}.
\end{align*}
We have
\begin{align*}
\mathrm{sign}(T(\lambda e_{n+1})) = \mathrm{sign}(\det(\overline{\gamma}(y_{1}), \ldots, \overline{\gamma}(y_{\ell}), \overline{\gamma}'(y_{1}), \ldots, \overline{\gamma}'(y_{\ell})))
\end{align*}
as $\lambda \to +\infty$. On the other hand
\begin{align}
&\det(\overline{\gamma}(y_{1}), \ldots, \overline{\gamma}(y_{\ell}), \overline{\gamma}'(y_{1}), \ldots, \overline{\gamma}'(y_{\ell})) =\label{nishani1}\\
&(-1)^{\frac{\ell(\ell-1)}{2}}\det(\overline{\gamma}(y_{1})-\overline{\gamma}(0),\overline{\gamma}'(y_{1}), \ldots, \overline{\gamma}(y_{\ell})-\overline{\gamma}(y_{\ell-1}),\overline{\gamma}'(y_{\ell}))= \nonumber\\
&(-1)^{\frac{\ell(\ell-1)}{2}} \int_{y_{\ell-1}}^{y_{\ell}}\ldots \int_{0}^{y_{1}} \det(\overline{\gamma}'(v_{1}),\overline{\gamma}'(y_{1}), \ldots,\overline{\gamma}'(v_{\ell}), \overline{\gamma}'(y_{\ell})) dv_{1} \ldots dv_{\ell}. \nonumber
\end{align}
Thus $\mathrm{sign}(T(\lambda e_{n+1})) = (-1)^{\frac{\ell(\ell-1)}{2}}$ by Lemma~\ref{klasika} as $\lambda \to +\infty$. Therefore, $B^{\inf}$ is convex if $(-1)^{\frac{\ell(\ell-1)}{2}} T(L(t))\geq 0$ for all $t = (\lambda_{1}, \ldots, \lambda_{\ell}, x_{1}, \ldots, x_{\ell}) \in \Delta_{c}^{\ell}\times \Delta_{*}^{\ell}$. We have
\begin{align*}
&T(L(t)) = \sum_{k=1}^{\ell}\lambda_{k}\det(\gamma(y_{1}), \ldots, \gamma(y_{\ell}), \gamma'(y_{1}), \ldots, \gamma'(y_{\ell}), \gamma(x_{k}))\, \prod_{j=1}^{\ell}\beta_{j}.
\end{align*}
Thus, to verify convexity of $B^{\inf}$, it suffices to show
\begin{align*}
(-1)^{\frac{\ell(\ell-1)}{2}}\det(\gamma(y_{1}), \ldots, \gamma(y_{\ell}), \gamma'(y_{1}), \ldots, \gamma'(y_{\ell}), \gamma(u))\geq 0 \quad \text{for all} \quad u \in [0,1].
\end{align*}
Notice that
\begin{align*}
(-1)^{\frac{\ell(\ell-1)}{2}}\det(\gamma(y_{1}), \ldots, \gamma(y_{\ell}), \gamma'(y_{1}), \ldots, \gamma'(y_{\ell}), \gamma(u)) = \det(\gamma(y_{1}),\gamma'(y_{1}), \ldots, \gamma(y_{\ell}), \gamma'(y_{\ell}), \gamma(u)).
\end{align*}
Next, assume $u\in [y_{i}, y_{i+1}]$ for some $i \in \{1, \ldots, \ell-1\}$ (the cases $u\in [0,y_{1}]$ or $u \in [y_{\ell},1]$ are considered similarly). Set $y_{0}=0$. We have
\begin{align*}
&\det(\gamma(y_{1}),\gamma'(y_{1}), \ldots, \gamma(y_{\ell}), \gamma'(y_{\ell}), \gamma(u)) =\\
&\det(\gamma(y_{1}),\gamma'(y_{1}), \ldots, \gamma(y_{i}), \gamma'(y_{i}), \gamma(u), \gamma(y_{i+1}), \gamma'(y_{i+1}), \ldots)=\\
&\det(\gamma(y_{1})-\gamma(0),\gamma'(y_{1}), \ldots, \gamma(y_{i}) - \gamma(y_{i-1}), \gamma'(y_{i}), \gamma(u)-\gamma(y_{i}), \gamma(y_{i+1})-\gamma(u), \gamma'(y_{i+1}), \ldots)=\\
&\int_{y_{\ell-1}}^{y_{\ell}}\ldots \int_{u}^{y_{i+1}}\int_{y_{i}}^{u}\int_{y_{i-1}}^{y_{i}}\ldots \int_{0}^{y_{1}}\\
&\det(\gamma'(v_{1}),\gamma'(y_{1}), \ldots, \gamma'(v_{i}) , \gamma'(y_{i}), \gamma'(w), \gamma'(v_{i+1}), \gamma'(y_{i+1}), \ldots, \gamma'(v_{\ell}), \gamma'(y_{\ell})) dv_{1} \ldots dv_{i} dw dv_{i+1}\ldots dv_{\ell}.
\end{align*}
Thus $T(L(t))\geq 0$ by Lemma~\ref{klasika}.
\subsubsection{The proof of (\ref{giff})}\label{giffsub}
First we show the implication $B^{\sup}(u)=B^{\inf}(u) \Rightarrow u \in \partial\, \mathrm{conv}(\overline{\gamma}([0,1]))$. Consider the case $n=2\ell$. Assume the contrary, i.e., $u \in \mathrm{int}(\mathrm{conv}(\overline{\gamma}([0,1])))$. Then using (\ref{diff2lu}), (\ref{diff2ll}) we can find $t = (\lambda_{1}, \ldots, \lambda_{\ell}, x_{1}, \ldots, x_{\ell})$ and $s=(\beta_{1}, \ldots, \beta_{\ell}, y_{1}, \ldots, y_{\ell})$, both in $\mathrm{int}(\Delta_{c}^{\ell}\times \Delta_{*}^{\ell})$, such that
\begin{align*}
u=\sum_{j=1}^{\ell}\lambda_{j} \overline{\gamma}(x_{j})+(1-\sum_{j=1}^{\ell}\lambda_{j})\overline{\gamma}(1)=\sum_{j=1}^{\ell}\beta_{j}\overline{\gamma}(y_{j}).
\end{align*}
The equality $B^{\sup}(u)=B^{\inf}(u)$ implies (see (\ref{be1}), (\ref{be2}))
\begin{align*}
\sum_{j=1}^{\ell}\lambda_{j} \gamma(x_{j})+(1-\sum_{j=1}^{\ell}\lambda_{j})\gamma(1)=\sum_{j=1}^{\ell}\beta_{j}\gamma(y_{j}).
\end{align*}
We see that $\gamma(1)$ is a linear combination of $2\ell$ vectors $\gamma(x_{j}), \gamma(y_{j}), j=1,\ldots, \ell$ which leads us to a contradiction with Corollary~\ref{klasikac}. Thus $u \in \partial\, \mathrm{conv}(\overline{\gamma}([0,1]))$.
Next, consider the case $n=2\ell-1$ and assume the contrary, i.e., $u \in \mathrm{int}(\mathrm{conv}(\overline{\gamma}([0,1])))$. Similarly as before we have
\begin{align}\label{ukanaskneli}
\lambda_{1} \gamma(1)+\sum_{j=2}^{\ell}\lambda_{j} \gamma(x_{j})=(1-\sum_{j=2}^{\ell}\beta_{j})\gamma(y_{1})+\sum_{j=2}^{\ell}\beta_{j} \gamma(y_{j})
\end{align}
for some $t=(\lambda_{1}, \ldots, \lambda_{\ell}, x_{2}, \ldots, x_{\ell}) \in \mathrm{int}(\Delta_{c}^{\ell}\times \Delta_{*}^{\ell-1})$ and $s = (\beta_{2}, \ldots, \beta_{\ell}, y_{1}, \ldots, y_{\ell}) \in \mathrm{int}(\Delta_{c}^{\ell-1}\times \Delta_{*}^{\ell})$. The equality (\ref{ukanaskneli}) shows that $\gamma(1)$ is a linear combination of $2\ell-1$ vectors $\{\gamma(x_{j})\}_{j=2}^{\ell}$, $\{\gamma(y_{j})\}_{j=1}^{\ell}$, which contradicts Corollary~\ref{klasikac}.
Next we show the implication $u \in \partial\, \mathrm{conv}(\overline{\gamma}([0,1])) \Rightarrow B^{\sup}(u)=B^{\inf}(u)$. Consider $n=2\ell$. Suppose
\begin{align*}
\overline{U}(t) \stackrel{\mathrm{def}}{=}\sum_{j=1}^{\ell}\lambda_{j} \overline{\gamma}(x_{j})+(1-\sum_{j=1}^{\ell}\lambda_{j})\overline{\gamma}(1)=\sum_{j=1}^{\ell}\beta_{j}\overline{\gamma}(y_{j}) \stackrel{\mathrm{def}}{=} \overline{L}(s)
\end{align*}
for some $t = (\lambda_{1}, \ldots, \lambda_{\ell}, x_{1}, \ldots, x_{\ell})$ and $s=(\beta_{1}, \ldots, \beta_{\ell}, y_{1}, \ldots, y_{\ell})$, both in $\partial (\Delta_{c}^{\ell}\times \Delta_{*}^{\ell})$. The goal is to show that
\begin{align}\label{zgvari1}
U^{z}(t) \stackrel{\mathrm{def}}{=} \sum_{j=1}^{\ell}\lambda_{j} \gamma_{n+1}(x_{j})+(1-\sum_{j=1}^{\ell}\lambda_{j})\gamma_{n+1}(1)=\sum_{j=1}^{\ell}\beta_{j}\gamma_{n+1}(y_{j})\stackrel{\mathrm{def}}{=} L^{z}(s).
\end{align}
We claim that (\ref{zgvari1}) follows from the second part of Lemma~\ref{gran1}. For this it suffices to show that any point $\overline{U}(t)$, $t \in \partial (\Delta_{c}^{\ell}\times \Delta_{*}^{\ell})$, can be written as $\overline{L}(s_{1})$ for some $s_{1}=(\beta'_{1}, \ldots, \beta'_{\ell}, y'_{1}, \ldots, y'_{\ell}) \in \partial (\Delta_{c}^{\ell}\times \Delta_{*}^{\ell})$. Indeed, as $t \in \partial (\Delta_{c}^{\ell}\times \Delta_{*}^{\ell})$, several cases can occur. 1) If $\sum_{j=1}^{\ell}\lambda_{j}=1$, then choose $\beta'_{j}=\lambda_{j}$, $j=1, \ldots, \ell-1$, $\beta'_{\ell}=1-\sum_{j=1}^{\ell-1}\lambda_{j}$, and $y'_{j}=x_{j}$, $j=1, \ldots, \ell$. Then
\begin{align}\label{shemtxveva1}
L^{z}(s) = B^{\inf}(\overline{L}(s)) \stackrel{\mathrm{Lemma}~\ref{gran1}}{=} B^{\inf}(\overline{L}(s_{1})) =\sum_{j=1}^{\ell}\beta'_{j}\gamma_{n+1}(y'_{j}) = U^{z}(t).
\end{align}
Next, 2) if at least one $\lambda_{j}=0$, say $\lambda_{p}=0$ for some $p \in \{1, \ldots, \ell\}$, then take $\beta'_{1}=\lambda_{1}, \ldots, \beta'_{p-1}=\lambda_{p-1}, \beta'_{p}=\lambda_{p+1}, \ldots, \beta'_{\ell-1}=\lambda_{\ell}, \beta'_{\ell}=1-\sum_{j=1}^{\ell} \lambda_{j}$, and $y'_{1}=x_{1}, \ldots, y'_{p-1}=x_{p-1}, y'_{p}=x_{p+1}, \ldots, y'_{\ell-1}=x_{\ell}, y'_{\ell}=1$ and repeat (\ref{shemtxveva1}). Next, 3) if $x_{\ell}=1$, choose $(\beta'_{j}, y'_{j})=(\lambda_{j}, x_{j})$ for $j=1, \ldots, \ell-1$, and $(\beta'_{\ell}, y'_{\ell})=(1-\sum_{j=1}^{\ell-1}\lambda_{j},1)$ and repeat (\ref{shemtxveva1}). 4) If $x_{p}=x_{p+1}$ for some $p\in \{1, \ldots, \ell-1\}$ then take $y'_{j}=x_{j}$ for $j=1, \ldots, p$; $y'_{j}=x_{j+1}$ for $j=p+1, \ldots, \ell-1$; $y'_{\ell}=1$; $\beta'_{1}=\lambda_{1}$, \ldots, $\beta'_{p}=\lambda_{p}+\lambda_{p+1}$, $\beta'_{p+1}=\lambda_{p+2}, \ldots, \beta'_{\ell-1}=\lambda_{\ell}$, $\beta'_{\ell}=1-\sum_{j=1}^{\ell}\lambda_{j}$ and repeat (\ref{shemtxveva1}). Finally, 5) if $x_{1}=0$ choose $\beta'_{j}=\lambda_{j+1}$, $j=1,\ldots, \ell-1$; $\beta'_{\ell}=1-\sum_{j=1}^{\ell}\lambda_{j}$; $y'_{j}=x_{j+1}$, $j=1, \ldots, \ell-1$; $y'_{\ell}=1$, and apply (\ref{shemtxveva1}).
Next, consider $n=2\ell-1$. Suppose
\begin{align*}
\overline{U}(t) \stackrel{\mathrm{def}}{=} \sum_{j=2}^{\ell}\beta_{j} \overline{\gamma}(x_{j}) + \beta_{1} \overline{\gamma}(1)=(1-\sum_{j=2}^{\ell}\lambda_{j})\overline{\gamma}(y_{1})+\sum_{j=2}^{\ell}\lambda_{j} \overline{\gamma}(y_{j}) \stackrel{\mathrm{def}}{=} \overline{L}(s)
\end{align*}
for some $t=(\beta_{1}, \ldots, \beta_{\ell},x_{2}, \ldots, x_{\ell}) \in \partial (\Delta_{c}^{\ell}\times \Delta_{*}^{\ell-1})$ and $s=(\lambda_{2}, \ldots, \lambda _{\ell}, y_{1}, \ldots, y_{\ell}) \in \partial (\Delta_{c}^{\ell-1}\times \Delta_{*}^{\ell})$. We would like to show
\begin{align}\label{shemxtveva2}
U^{z}(t) \stackrel{\mathrm{def}}{=} \sum_{j=2}^{\ell}\beta_{j} \gamma_{n+1}(x_{j})+ \beta_{1} \gamma_{n+1}(1)=(1-\sum_{j=2}^{\ell}\lambda_{j})\gamma_{n+1}(y_{1})+\sum_{j=2}^{\ell}\lambda_{j} \gamma_{n+1}(y_{j}) \stackrel{\mathrm{def}}{=} L^{z}(s).
\end{align}
As in the case $n=2\ell$ we claim that (\ref{shemxtveva2}) follows from Lemma~\ref{gran2}. It suffices to show that for any point $\overline{U}(t)$, $t \in \partial (\Delta_{c}^{\ell}\times \Delta_{*}^{\ell-1})$, there exists a point $s_{1} =(\lambda'_{2}, \ldots, \lambda'_{\ell}, y'_{1}, \ldots, y'_{\ell}) \in \partial(\Delta_{c}^{\ell-1}\times \Delta_{*}^{\ell})$ such that $\overline{U}(t)=\overline{L}(s_{1})$. Several cases may occur. 1) If $\sum_{j=1}^{\ell}\beta_{j}=1$, let
\begin{align*}
(\lambda'_{2}, \ldots, \lambda'_{\ell-1}, \lambda'_{\ell}, y'_{1}, \ldots, y'_{\ell-1}, y'_{\ell})=(\beta_{3}, \ldots, \beta_{\ell}, \beta_{1}, x_{2}, \ldots, x_{\ell}, 1).
\end{align*}
Notice that $1-\sum_{j=2}^{\ell}\lambda'_{j}=\beta_{2}$. 2) If $\beta_{p}=0$ for some $p \in \{2,\ldots, \ell\}$, then let
\begin{align*}
&(\lambda'_{2}, \ldots, \lambda'_{p-1}, \lambda'_{p}, \ldots, \lambda'_{\ell-1}, \lambda'_{\ell}, y'_{1}, y'_{2}, \ldots, y'_{p-1}, y'_{p}, \ldots, y'_{\ell-1}, y'_{\ell}) =\\
&(\beta_{2}, \ldots, \beta_{p-1}, \beta_{p+1}, \ldots, \beta_{\ell}, \beta_{1}, 0, x_{2}, \ldots, x_{p-1}, x_{p+1}, \ldots, x_{\ell}, 1).
\end{align*}
3) if $\beta_{1}=0$ then we choose $y'_{1}=0$ and
\begin{align*}
(\lambda'_{2}, \ldots, \lambda'_{\ell}, y'_{2}, \ldots, y'_{\ell})=(\beta_{2}, \ldots, \beta_{\ell}, x_{2}, \ldots, x_{\ell}).
\end{align*}
4) If $x_{2}=0$, then we choose $y'_{1}=0$ and
\begin{align*}
(\lambda'_{2}, \ldots, \lambda'_{\ell-1}, \lambda'_{\ell}, y'_{2}, \ldots, y'_{\ell-1}, y'_{\ell})=(\beta_{3}, \ldots, \beta_{\ell}, \beta_{1}, x_{3}, \ldots, x_{\ell}, 1).
\end{align*}
5) If $x_{\ell}=1$, then let $y'_{1}=0$ and
\begin{align*}
(\lambda'_{2}, \ldots, \lambda'_{\ell-1}, \lambda'_{\ell}, y'_{2}, \ldots, y'_{\ell-1}, y'_{\ell})=(\beta_{2}, \ldots, \beta_{\ell-1}, \beta_{\ell}+\beta_{1}, x_{2}, \ldots, x_{\ell-1}, 1).
\end{align*}
Finally, 6) if $x_{p}=x_{p+1}$ for some $p \in \{2, \ldots, \ell-1\}$, take $y'_{1}=0$ and
\begin{align*}
&(\lambda'_{2}, \ldots, \lambda'_{p-1},\lambda'_{p}, \lambda'_{p+1}, \ldots, \lambda'_{\ell-1}, \lambda'_{\ell}, y'_{2}, \ldots, y'_{p}, y'_{p+1}, \ldots, y'_{\ell-1}, y'_{\ell})=\\
&(\beta_{2}, \ldots, \beta_{p-1},\beta_{p}+\beta_{p+1}, \beta_{p+2},\ldots, \beta_{\ell}, \beta_{1}, x_{2}, \ldots, x_{p}, x_{p+2}, \ldots x_{\ell}, 1).
\end{align*}
Under such choices we have
\begin{align*}
L^{z}(s) = B^{\inf}(\overline{L}(s)) \stackrel{\mathrm{Lemma}~\ref{gran2}}{=} B^{\inf}(\overline{L}(s_{1})) =(1-\sum_{j=2}^{\ell}\lambda'_{j})\gamma_{n+1}(y'_{1})+\sum_{j=2}^{\ell}\lambda'_{j} \gamma_{n+1}(y'_{j}) = U^{z}(t).
\end{align*}
This finishes the proof of (\ref{giff}).
\subsubsection{The proof of (\ref{union})} \label{unionsub}
The inclusion
\begin{align*}
\{(x,B^{\mathrm{sup}}(x)), x \in \mathrm{conv}(\bar{\gamma}([0,1]))\} \cup \{(x,B^{\mathrm{inf}}(x)), x \in \mathrm{conv}(\bar{\gamma}([0,1]))\} \subset \partial\, \mathrm{conv}(\gamma([0,1]))
\end{align*}
is trivial. Indeed, it follows from (\ref{be1}) that the point $(x,B^{\sup}(x))$ is a convex combination of some points of $\gamma([0,1])$; therefore, $(x,B^{\sup}(x)) \in \mathrm{conv}(\gamma([0,1]))$. On the other hand, no point of the form $(x,s)$, where $s>B^{\sup}(x)$, belongs to $\mathrm{conv}(\gamma([0,1]))$. Indeed, otherwise $(x,s) = \sum_{j=1}^{m}\lambda_{j} \gamma(t_{j})$ for some $t_{j} \in [0,1]$ and nonnegative $\lambda_{j}$ such that $\sum_{j=1}^{m} \lambda_{j}=1$. Then
\begin{align*}
B^{\sup}(x)=B^{\sup}\left( \sum \lambda_{j} \overline{\gamma}(t_{j})\right)\stackrel{(\ref{mincon1})}{\geq} \sum \lambda_{j} B^{\sup}(\overline{\gamma}(t_{j})) \stackrel{(\ref{mincon1})}{=} \sum \lambda_{j} \gamma_{n+1}(t_{j})=s
\end{align*}
gives a contradiction. Thus $(x,B^{\sup}(x)) \in \partial\, \mathrm{conv}(\gamma([0,1]))$. In a similar way we have $(x,B^{\inf}(x)) \in \partial\, \mathrm{conv}(\gamma([0,1]))$ for $x \in \mathrm{conv}(\bar{\gamma}([0,1]))$.
To verify the inclusion
\begin{align*}
\partial\, \mathrm{conv}(\gamma([0,1])) \subset \{(x,B^{\mathrm{sup}}(x)), x \in \mathrm{conv}(\bar{\gamma}([0,1]))\} \cup \{(x,B^{\mathrm{inf}}(x)), x \in \mathrm{conv}(\bar{\gamma}([0,1]))\}
\end{align*}
we pick a point $(x,t) \in \partial\, \mathrm{conv}(\gamma([0,1]))$ where $x \in \mathbb{R}^{n}$, i.e., $x \in \mathrm{conv}(\bar{\gamma}([0,1]))$. Clearly $B^{\inf}(x) \leq t \leq B^{\sup}(x)$. Assume, on the contrary, that $B^{\inf}(x) < t < B^{\sup}(x)$. If $x \in \partial\, \mathrm{conv}(\bar{\gamma}([0,1]))$ then by (\ref{giff}) we have $B^{\inf}(x) = B^{\sup}(x)$; therefore, we get a contradiction. If $x \in \mathrm{int}(\mathrm{conv}(\bar{\gamma}([0,1])))$ then (\ref{giff}) and continuity of $B^{\sup}$ and $B^{\inf}$ imply that there exists a ball $U_{\varepsilon}(x)$ of radius $\varepsilon>0$ centered at the point $x$ such that $U_{\varepsilon}(x) \subset \mathrm{int}(\mathrm{conv}(\bar{\gamma}([0,1])))$ and $B^{\inf}(s)<t-\delta < t+\delta<B^{\sup}(s)$ for all $s \in U_{\varepsilon}(x)$ and some $\delta>0$. Then
\begin{align*}
&(x,t) \in U_{\min\{\varepsilon, \delta\}}((x,t))\subset \{(s,y)\, :\, B^{\inf}(s)\leq y\leq B^{\sup}(s), s \in U_{\min\{\varepsilon, \delta\}}(x)\}=\\
&\mathrm{conv}(\{(s,B^{\inf}(s)), \, s \in U_{\min\{\varepsilon, \delta\}}(x)\} \cup \{(s,B^{\sup}(s)), \, s \in U_{\min\{\varepsilon, \delta\}}(x)\}) \subset \mathrm{conv} (\gamma([0,1])),
\end{align*}
where $U_{\min\{\varepsilon, \delta\}}((x,t))$ is the ball in $\mathbb{R}^{n+1}$ centered at $(x,t)$ with radius $\min\{\varepsilon, \delta\}$. We obtain a contradiction with the assumption that $(x,t)$ belongs to the boundary of $\mathrm{conv} (\gamma([0,1]))$.
The proof of Theorem~\ref{mth010} is complete.
\end{proof}
\subsection{The proof of Proposition~\ref{sensitive}}
Take $\gamma(t) = (t,t^{4}, -t^{3})$ on $[-1,1]$. We have
\begin{align*}
(\gamma', \gamma'', \gamma''') = \begin{pmatrix}
1 & 0 & 0 \\
4t^{3} & 12t^{2} & 24 t\\
-3t^{2} & -6t & -6
\end{pmatrix}.
\end{align*}
All the leading principal minors of the matrix $(\gamma', \gamma'', \gamma''')$ are positive on $[-1,1]\setminus \{0\}$, and we notice that the $2\times 2$ and $3\times 3$ leading principal minors vanish at $t=0$. Assume, contrary to Proposition~\ref{sensitive}, that the map $B^{\sup}(x,y)$ defined on $\mathrm{conv}(\overline{\gamma}([-1,1]))$ by (\ref{vog}) is concave. We have
\begin{align}
B^{\sup}(\lambda (a,a^{4})+(1-\lambda)(1,1)) = -\lambda a^{3} -(1-\lambda), \quad \lambda \in [0,1], \ a \in (-1,1).
\end{align}
In particular, $g(y):=B^{\sup}(0,y)$, $y \in [0,1]$, must be concave. The restriction $\lambda a + (1-\lambda)=0$ implies $\lambda = \frac{1}{1-a}$. Therefore
\begin{align*}
\lambda a^{4} + (1-\lambda) = -a^{3}-a^{2}-a \quad \text{and} \quad -\lambda a^{3} -(1-\lambda) = a^{2}+a.
\end{align*}
Since $-a^{3}-a^{2}-a = y \in [0,1]$, we must have $a \in [-1,0]$. Thus $g(-a^{3}-a^{2}-a) = a^{2}+a$ for $a \in [-1,0]$. Differentiating both sides in $a$ twice, we obtain
\begin{align*}
&g'(-a^{3}-a^{2}-a) = -\frac{2a+1}{3a^{2}+2a+1},\\
&g''(-a^{3}-a^{2}-a) = \frac{-6a(a+1)}{(3a^{2}+2a+1)^{3}} > 0 \quad \text{for} \quad a \in (-1,0).
\end{align*}
Thus $g''>0$ on $(-1,0)$, which contradicts the concavity of $g$.
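For the skeptical reader, the last two displayed formulas can be checked symbolically. The following is a minimal sketch of ours (not part of the proof) using Python's SymPy; it recovers $g'$ and $g''$ from $g(-a^{3}-a^{2}-a)=a^{2}+a$ by the chain rule.
\begin{verbatim}
# Sketch (ours): verify g' and g'' for g(h(a)) = a^2 + a, h(a) = -a^3 - a^2 - a.
from sympy import symbols, diff, simplify, factor

a = symbols('a')
h = -a**3 - a**2 - a
gp = diff(a**2 + a, a) / diff(h, a)     # g'(h(a)) by the chain rule
gpp = diff(gp, a) / diff(h, a)          # g''(h(a)) by the chain rule again
print(simplify(gp + (2*a + 1)/(3*a**2 + 2*a + 1)))   # 0
print(factor(simplify(gpp)))   # -6*a*(a + 1)/(3*a**2 + 2*a + 1)**3
\end{verbatim}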
\subsection{The proof of Theorem~\ref{mth1}}
We verify (\ref{extr01}). The verification of (\ref{extr02}) is similar. Denote
\begin{align*}
M^{\sup}(x) :=\sup_{a\leq Y\leq b} \{ \mathbb{E} \gamma_{n+1}(Y)\, :\, \mathbb{E} \overline{\gamma}(Y)=x\}, \quad x \in \mathrm{conv}(\overline{\gamma}([a,b])).
\end{align*}
First we show the inequality $M^{\sup} \leq B^{\sup}$ on $\mathrm{conv}(\overline{\gamma}([a,b]))$. Indeed, let $x \in \mathrm{conv}(\overline{\gamma}([a,b]))$. Pick an arbitrary random variable $Y$ with values in $[a,b]$, such that $\mathbb{E} \overline{\gamma}(Y)=x$. Then
\begin{align*}
\mathbb{E} \gamma_{n+1}(Y) \stackrel{(\ref{mincon1})}{=} \mathbb{E}B^{\sup}(\overline{\gamma}(Y)) \stackrel{(\ref{mincon1})+\mathrm{Jensen}}{\leq} B^{\sup}(\mathbb{E}\overline{\gamma}(Y))=B^{\sup}(x).
\end{align*}
Taking the supremum over all $Y$, $a\leq Y\leq b$, such that $\mathbb{E} \overline{\gamma}(Y)=x$, gives the inequality $M^{\sup}(x) \leq B^{\sup}(x)$.
To verify the reverse inequality $M^{\sup}(x) \geq B^{\sup}(x)$ it suffices to construct at least one random variable $Y=Y(x)$, $a\leq Y\leq b$, such that $\mathbb{E} \overline{\gamma}(Y)=x$ and $\mathbb{E} \gamma_{n+1}(Y)=B^{\sup}(x)$. Notice that $Y=\zeta(x)$, where $\zeta(x)$ is defined in Theorem~\ref{mth1}, satisfies $a\leq \zeta(x) \leq b$, $\mathbb{E} \overline{\gamma}(\zeta(x))=x$. It also follows from (\ref{vog}) that $\mathbb{E} \gamma_{n+1}(\zeta(x))=B^{\sup}(x)$.
\subsection{The proof of Corollary~\ref{nobel2}}
The moment curve $\gamma$ has totally positive torsion on $[0,1]$; hence Theorem~\ref{mth010} applies.
First we work with $B^{\sup}(x)=x_{n+1}$. Consider the case $n=2\ell$. By Theorem~\ref{mth010} there exists a unique point $(\lambda_{1}, \ldots, \lambda_{\ell}, y_{1}, \ldots, y_{\ell}) \in \mathrm{int}(\Delta_{c}^{\ell}\times \Delta_{*}^{\ell})$ such that $\sum_{j=1}^{\ell}\lambda_{j} \overline{\gamma}(y_{j})+(1-\sum_{j=1}^{\ell}\lambda_{j})\overline{\gamma}(1)=x$. Then the value $x_{n+1}:=B^{\sup}(x)$ equals $\sum_{j=1}^{\ell}\lambda_{j} y_{j}^{2\ell+1}+(1-\sum_{j=1}^{\ell}\lambda_{j})$. We would like to show that the linear equation
\begin{align}\label{gant01}
\det
\begin{pmatrix}a_{0} & a_{1} & \ldots & a_{\ell}\\
\vdots & & & \\
a_{\ell} & a_{\ell+1} & \ldots & a_{2\ell}\end{pmatrix}=0,
\end{align}
where $a_{k}:=x_{k}-x_{k+1}$, $k=0, \ldots, 2\ell$, $x_{0}:=1$, has a unique solution in $x_{n+1}$ which equals $\sum_{j=1}^{\ell}\lambda_{j} y_{j}^{2\ell+1}+(1-\sum_{j=1}^{\ell}\lambda_{j})$. First we check that $x_{n+1}=\sum_{j=1}^{\ell}\lambda_{j} y_{j}^{2\ell+1}+(1-\sum_{j=1}^{\ell}\lambda_{j})$ solves (\ref{gant01}). Notice that $a_{k} = \langle y^{k}, \beta \rangle$, where $y^{k} := (y_{1}^{k}, \ldots, y_{\ell}^{k})$, and $\beta := (\lambda_{1}(1-y_{1}), \ldots, \lambda_{\ell}(1-y_{\ell}))$. The $j$'th column of the matrix in (\ref{gant01}), call it $w_{j}$, $j=0, \ldots, \ell$, can be written as $w_{j} = AD^{j}\beta^{T}$, where $A$ is the $(\ell+1)\times \ell$ matrix with $m$'th column $(1, y_{m}, \ldots, y_{m}^{\ell})^{T}$, and $D$ is the $\ell\times\ell$ diagonal matrix with diagonal entries $y_{1}, \ldots, y_{\ell}$. Since there exists a nonzero vector $(z_{0}, \ldots, z_{\ell})\in \mathbb{R}^{\ell+1}$ such that $z_{0}D^{0}+\ldots+z_{\ell}D^{\ell}=0$ (the number of variables $z_{j}$ is greater than the number of equations, i.e., $\ell$), it follows that the vectors $\{w_{0}, \ldots, w_{\ell}\}$ are linearly dependent, so (\ref{gant01}) holds true.
To show the uniqueness of the solution $x_{n+1}$ it suffices to show that the leading $\ell\times \ell$ principal minor $R$ of the matrix in $(\ref{gant01})$ has nonzero determinant. Notice that $R=\det(\tilde{w}_{0}, \ldots, \tilde{w}_{\ell-1})$, where $\tilde{w}_{j}=\tilde{A}D^{j}\beta^{T}$ and $\tilde{A}$ is obtained from $A$ by removing the last row. Assume, on the contrary, that $R=0$. Then there exists a nonzero vector $(z_{0}, \ldots, z_{\ell-1})\in \mathbb{R}^{\ell}$ such that $\tilde{A}(z_{0}D^{0}+\ldots+z_{\ell-1}D^{\ell-1})\beta^{T}=0$. As $\det(\tilde{A})\neq 0$ (Vandermonde matrix) we have $(z_{0}D^{0}+\ldots+z_{\ell-1}D^{\ell-1})\beta^{T}=0$. Since the entries of $\beta^{T}$ are nonzero and the matrix $(z_{0}D^{0}+\ldots+z_{\ell-1}D^{\ell-1})$ is diagonal, we must have $z_{0}D^{0}+\ldots+z_{\ell-1}D^{\ell-1}=0$. The last equation can be rewritten as $\tilde{A}^{T}z^{T}=0$, where $z=(z_{0}, \ldots, z_{\ell-1})\neq 0$, which is a contradiction.
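To make (\ref{gant01}) concrete, here is a small numerical sketch of ours (not part of the proof) for the moment curve with $\ell=1$, i.e., $n=2$: it solves the $2\times 2$ Hankel equation for sample parameters and compares the solution with $\lambda y^{3}+(1-\lambda)$.
\begin{verbatim}
# Sketch (ours): check equation (gant01) for l = 1, n = 2.
from sympy import symbols, Matrix, solve, Rational, simplify

lam, y = Rational(1, 3), Rational(1, 4)     # sample interior parameters
xm1 = lam*y + (1 - lam)                     # the moment x_1
xm2 = lam*y**2 + (1 - lam)                  # the moment x_2
x3 = symbols('x3')                          # the unknown x_{n+1}
a0, a1, a2 = 1 - xm1, xm1 - xm2, xm2 - x3   # a_k = x_k - x_{k+1}, x_0 = 1
sol = solve(Matrix([[a0, a1], [a1, a2]]).det(), x3)
print(sol[0], simplify(sol[0] - (lam*y**3 + 1 - lam)))   # 43/64 0
\end{verbatim}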
Next, consider $n=2\ell-1$. In this case $x = (1-\sum_{j=1}^{\ell}\lambda_{j})\overline{\gamma}(0)+\sum_{j=2}^{\ell}\lambda_{j}\overline{\gamma}(y_{j})+\lambda_{1}\overline{\gamma}(1)$ for a unique $(\lambda_{1}, \ldots, \lambda_{\ell}, y_{2}, \ldots, y_{\ell}) \in \mathrm{int}(\Delta_{c}^{\ell}\times \Delta_{*}^{\ell-1})$, and the value $x_{n+1}:=B^{\sup}(x)$ is $(1-\sum_{j=1}^{\ell}\lambda_{j})\gamma_{n+1}(0)+\sum_{j=2}^{\ell}\lambda_{j}\gamma_{n+1}(y_{j})+\lambda_{1}\gamma_{n+1}(1)$. Set $b_{k}:=x_{k}-x_{k+1}$, $k=1, \ldots, 2\ell-1$. As before we would like to show that the linear equation
\begin{align}\label{gant02}
\det
\begin{pmatrix}b_{1} & b_{2} & \ldots & b_{\ell}\\
\vdots & & & \\
b_{\ell} & b_{\ell+1} & \ldots & b_{2\ell-1}\end{pmatrix}=0,
\end{align}
has a unique solution in $x_{n+1}$ which equals $(1-\sum_{j=1}^{\ell}\lambda_{j})\gamma_{n+1}(0)+\sum_{j=2}^{\ell}\lambda_{j}\gamma_{n+1}(y_{j})+\lambda_{1}\gamma_{n+1}(1)$. To check that such a choice for $x_{n+1}$ solves (\ref{gant02}), notice that $b_{k} =\langle y^{k}, \beta \rangle$, where $y^{k} = (y_{2}^{k}, \ldots, y_{\ell}^{k})$ and $\beta = (\lambda_{2}(1-y_{2}), \ldots, \lambda_{\ell}(1-y_{\ell}))$.
The $j$'th column of the matrix in (\ref{gant02}), call it $w_{j}$, $j=1, \ldots, \ell$, can be written as $w_{j} = AD^{j}\beta^{T}$, where $A$ is the $\ell\times(\ell-1)$ matrix with $m$'th column $(1, y_{m}, \ldots, y_{m}^{\ell-1})^{T}$, $m=2, \ldots, \ell$, and $D$ is the $(\ell-1)\times(\ell-1)$ diagonal matrix with diagonal entries $y_{2}, \ldots, y_{\ell}$. Since there exists a nonzero vector $(z_{1}, \ldots, z_{\ell})\in \mathbb{R}^{\ell}$ such that $z_{1}D+\ldots+z_{\ell}D^{\ell}=0$ (the number of variables $z_{j}$ is greater than the number of equations, i.e., $\ell-1$), it follows that the vectors $\{w_{1}, \ldots, w_{\ell}\}$ are linearly dependent, so (\ref{gant02}) holds true.
To show the uniqueness of the solution $x_{n+1}$ it suffices to show that the leading $(\ell-1)\times (\ell-1)$ principal minor $R$ of the matrix in $(\ref{gant02})$ has nonzero determinant. Notice that $R=\det(\tilde{w}_{1}, \ldots, \tilde{w}_{\ell-1})$, where $\tilde{w}_{j}=\tilde{A}D^{j}\beta^{T}$, and $\tilde{A}$ is obtained from $A$ by removing the last row. Assume, on the contrary, that $R=0$. Then there exists a nonzero vector $(z_{1}, \ldots, z_{\ell-1})\in \mathbb{R}^{\ell-1}$ such that $\tilde{A}(z_{1}D+\ldots+z_{\ell-1}D^{\ell-1})\beta^{T}=0$. As $\det(\tilde{A})\neq 0$ (Vandermonde matrix) we have $(z_{1}D+\ldots+z_{\ell-1}D^{\ell-1})\beta^{T}=0$. Since the entries of $\beta^{T}$ are nonzero and the matrix $z_{1}D+\ldots+z_{\ell-1}D^{\ell-1}$ is diagonal, we must have $z_{1}D+\ldots+z_{\ell-1}D^{\ell-1}=0$. Since each $y_{m}\neq 0$, the last equation can be rewritten as $\tilde{A}^{T}z^{T}=0$, where $z=(z_{1}, \ldots, z_{\ell-1})\neq 0$, which is a contradiction.
Next we work with $B^{\inf}(x)$. Consider $n=2\ell$. There is a unique point $(\lambda_{1}, \ldots, \lambda_{\ell}, y_{1}, \ldots, y_{\ell}) \in \mathrm{int}(\Delta_{c}^{\ell}\times\Delta_{*}^{\ell})$ such that $\sum_{j=1}^{\ell}\lambda_{j}\overline{\gamma}(y_{j})=x$. It suffices to show that the linear equation
\begin{align}\label{gant03}
\det
\begin{pmatrix}x_{1} & x_{2} & \ldots & x_{\ell+1}\\
\vdots & & & \\
x_{\ell+1} & x_{\ell+2} & \ldots & x_{2\ell+1}\end{pmatrix}=0,
\end{align}
has a unique solution $x_{2\ell+1}=\sum_{j=1}^{\ell}\lambda_{j}\gamma_{n+1}(y_{j})$. The $j$'th column of the matrix in (\ref{gant03}), call it $w_{j}$, $j=1, \ldots, \ell+1$, can be written as $w_{j} = AD^{j}\lambda^{T}$, where $A$ is the $(\ell+1)\times\ell$ matrix with $m$'th column $(1, y_{m}, \ldots, y_{m}^{\ell})^{T}$, $m=1, \ldots, \ell$, $D$ is the $\ell\times\ell$ diagonal matrix with diagonal entries $y_{1}, \ldots, y_{\ell}$, and $\lambda=(\lambda_{1}, \ldots, \lambda_{\ell})$. The rest of the reasoning (including the uniqueness of the solution $x_{n+1}$) is similar to the one we just discussed for $B^{\sup}$ and $n=2\ell$.
Finally, consider $n=2\ell-1$. There exists a unique point $(\beta_{2}, \ldots, \beta_{\ell}, y_{1}, \ldots, y_{\ell})\in \mathrm{int}(\Delta_{c}^{\ell-1}\times \Delta_{*}^{\ell})$ such that $\sum_{j=1}^{\ell}\beta_{j} \overline{\gamma}(y_{j})=x$, where $\beta_{1}:=1-\sum_{j=2}^{\ell}\beta_{j}$. It suffices to show that the linear equation
\begin{align}\label{gant04}
\det
\begin{pmatrix}1 & x_{1} & \ldots & x_{\ell}\\
\vdots & & & \\
x_{\ell} & x_{\ell+1} & \ldots & x_{2\ell}\end{pmatrix}=0,
\end{align}
has a unique solution $x_{2\ell}=\sum_{j=1}^{\ell}\beta_{j}\gamma_{n+1}(y_{j})$. The $j$'th column of the matrix in (\ref{gant04}), call it $w_{j}$, $j=1, \ldots, \ell+1$, can be written as $w_{j} = AD^{j-1}\beta^{T}$, where $A$ is the $(\ell+1)\times\ell$ matrix with $m$'th column $(1, y_{m}, \ldots, y_{m}^{\ell})^{T}$, $m=1, \ldots, \ell$, $D$ is the $\ell\times\ell$ diagonal matrix with diagonal entries $y_{1}, \ldots, y_{\ell}$, and $\beta=(\beta_{1}, \ldots, \beta_{\ell})$. The rest of the reasoning (including the uniqueness of the solution $x_{n+1}$) is similar to the one we just discussed for $B^{\sup}$ and $n=2\ell$.
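As a quick illustration of ours (not part of the statement), take $\ell=1$, so $n=1$ and $\gamma(t)=(t,t^{2})$. Then (\ref{gant04}) reads
\begin{align*}
\det\begin{pmatrix}1 & x_{1}\\ x_{1} & x_{2}\end{pmatrix}=x_{2}-x_{1}^{2}=0,
\end{align*}
whose unique solution $x_{2}=x_{1}^{2}$ is indeed $B^{\inf}(x_{1})$, since the lower boundary of $\mathrm{conv}(\gamma([0,1]))$ is the parabola itself.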
\subsection{The proof of Corollary~\ref{karatecor}}
Assume, on the contrary, that there exist $n+1$ points, $\gamma(t_{1}), \ldots, \gamma(t_{n+1})$, where $a\leq t_{1}<\ldots <t_{n+1}\leq b$, which lie in a single affine hyperplane. In particular, we have
\begin{align}\label{lies}
\det(\gamma(t_{2})-\gamma(t_{1}), \gamma(t_{3})-\gamma(t_{1}), \ldots, \gamma(t_{n+1})-\gamma(t_{1}))=0.
\end{align}
On the other hand, we have
\begin{align*}
&\det(\gamma(t_{2})-\gamma(t_{1}), \gamma(t_{3})-\gamma(t_{1}), \ldots, \gamma(t_{n+1})-\gamma(t_{1})) = \\
&\det(\gamma(t_{2})-\gamma(t_{1}), \gamma(t_{3})-\gamma(t_{2}), \ldots, \gamma(t_{n+1})-\gamma(t_{n})) = \\
&\int_{t_{n}}^{t_{n+1}}\ldots\int_{t_{2}}^{t_{3}}\int_{t_{1}}^{t_{2}} \det(\gamma'(s_{1}),\gamma'(s_{2}) \ldots, \gamma'(s_{n})) ds_{1}ds_{2}\ldots ds_{n} >0
\end{align*}
by Lemma~\ref{klasika}. Thus we have a contradiction with (\ref{lies}).
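A small numerical illustration of ours (the proof does not depend on it): for the moment curve in $\mathbb{R}^{3}$ the determinant above equals the Vandermonde product $\prod_{i<j}(t_{j}-t_{i})$, and is therefore positive for any $t_{1}<t_{2}<t_{3}<t_{4}$.
\begin{verbatim}
# Sketch (ours): the determinant is positive for ordered points on the
# moment curve g(t) = (t, t^2, t^3) in R^3.
import numpy as np

def g(t):
    return np.array([t, t**2, t**3])

rng = np.random.default_rng(0)
for _ in range(5):
    t1, t2, t3, t4 = np.sort(rng.uniform(0, 1, 4))
    M = np.column_stack([g(t2) - g(t1), g(t3) - g(t1), g(t4) - g(t1)])
    print(np.linalg.det(M) > 0)   # True in every trial
\end{verbatim}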
\subsection{The proof of Corollary~\ref{provolume}}
To prove the formulas for the volume we apply Theorem~\ref{mth010}, with $\gamma$ from Corollary~\ref{provolume} playing the role of $\overline{\gamma}$ in Theorem~\ref{mth010}. Let $n=2\ell$. To verify
\begin{align}
\mathrm{Vol}(\mathrm{conv}(\gamma([a,b])))& \label{moculoba}\\
=\frac{(-1)^{\frac{\ell(\ell-1)}{2}}}{(2\ell)!}& \int_{a\leq x_{1}\leq \ldots \leq x_{\ell} \leq b} \mathrm{det}(\gamma(x_{1})-\gamma(a), \ldots, \gamma(x_{\ell})-\gamma(a), \gamma'(x_{1}), \ldots, \gamma'(x_{\ell})) dx, \nonumber
\end{align}
notice that according to Theorem~\ref{mth010} the map $U:=U_{2\ell}$, where
\begin{align*}
U_{2\ell} : \Delta_{c}^{\ell}\times \Delta_{*}^{\ell} \ni (\lambda_{1}, \ldots, \lambda_{\ell}, x_{1}, \ldots, x_{\ell}) \mapsto (1-\sum_{j=1}^{\ell}\lambda_{j})\gamma(a)+\sum_{j=1}^{\ell}\lambda_{j} \gamma(x_{j}),
\end{align*}
is a diffeomorphism between $\mathrm{int}(\Delta_{c}^{\ell}\times \Delta_{*}^{\ell})$ and $\mathrm{int}(\mathrm{conv}(\gamma([a,b])))$. In particular, by the change of variables formula, we have
\begin{align*}
&\mathrm{Vol}(\mathrm{conv}(\gamma([a,b]))) = \int_{\Delta_{c}^{\ell}} \int_{\Delta_{*}^{\ell}} |\det(U_{\lambda_{1}}, \ldots, U_{\lambda_{\ell}}, U_{x_{1}}, \ldots, U_{x_{\ell}})| d\lambda\, dx=\\
&\int_{\Delta_{c}^{\ell}}\lambda_{1}\ldots \lambda_{\ell} d\lambda \, \int_{\Delta_{*}^{\ell}} |\mathrm{det}(\gamma(x_{1})-\gamma(a), \ldots, \gamma(x_{\ell})-\gamma(a), \gamma'(x_{1}), \ldots, \gamma'(x_{\ell}))|dx.
\end{align*}
Next, using the identity
\begin{align}\label{distr}
\int_{\Delta_{c}^{\ell}}\lambda_{1}^{p_{1}-1}\ldots \lambda_{\ell}^{p_{\ell}-1}(1-\sum_{j=1}^{\ell}\lambda_{j})^{p_{0}-1} d\lambda = \frac{\prod_{j=0}^{\ell} \Gamma(p_{j})}{\Gamma(\sum_{j=0}^{\ell} p_{j})}
\end{align}
valid for all $p_{0}, \ldots, p_{\ell}>0$ (see Dirichlet distribution in \cite{book1}), and the property
\begin{align*}
&|\mathrm{det}(\gamma(x_{1})-\gamma(a), \ldots, \gamma(x_{\ell})-\gamma(a), \gamma'(x_{1}), \ldots, \gamma'(x_{\ell}))| \\ &=(-1)^{\frac{\ell(\ell-1)}{2}}\mathrm{det}(\gamma(x_{1})-\gamma(a), \ldots, \gamma(x_{\ell})-\gamma(a), \gamma'(x_{1}), \ldots, \gamma'(x_{\ell}))
\end{align*}
whenever $a<x_{1}<\ldots <x_{\ell}<b$, see (\ref{nishani1}), we recover (\ref{moculoba}). The other three identities in Corollary~\ref{provolume} are obtained in the same way by repeating the computations with $L_{2\ell}$, and in the case of odd dimensions with $U_{2\ell-1}$ and $L_{2\ell-1}$.
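For instance (a sanity check of ours), let $\ell=1$ and $\gamma(t)=(t,t^{2})$ on $[0,1]$. Then (\ref{moculoba}) gives
\begin{align*}
\mathrm{Vol}(\mathrm{conv}(\gamma([0,1])))=\frac{1}{2!}\int_{0}^{1}\det\begin{pmatrix}x_{1} & 1\\ x_{1}^{2} & 2x_{1}\end{pmatrix} dx_{1}=\frac{1}{2}\int_{0}^{1}x_{1}^{2}\, dx_{1}=\frac{1}{6},
\end{align*}
which agrees with the area $\int_{0}^{1}(t-t^{2})\,dt=\frac{1}{6}$ of the region between the parabola and the chord joining $(0,0)$ and $(1,1)$.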
\subsection{The proof of Corollary~\ref{area1}}
Let $n=2\ell$ (the case $n=2\ell-1$ is similar and will be omitted), and let us verify the identity
\begin{align*}
\mathrm{Area}(\partial \; \mathrm{conv}(\gamma([a,b]))) = \frac{1}{n!} \int_{a\leq x_{1}\leq \ldots \leq x_{\ell}\leq b} \left( \sqrt{\det S_{a}^{\mathrm{Tr}}S_{a}} +\sqrt{\det S_{b}^{\mathrm{Tr}}S_{b}} \right) dx,
\end{align*}
where $S_{r} = (\gamma(x_{1})-\gamma(r), \ldots, \gamma(x_{\ell})-\gamma(r), \gamma'(x_{1}), \ldots, \gamma'(x_{\ell}))$. By (\ref{union}) we have
\begin{align*}
\partial\, \mathrm{conv}(\gamma([a,b]))=\{(x,B^{\mathrm{sup}}(x)), x \in \mathrm{conv}(\bar{\gamma}([a,b]))\} \cup \{(x,B^{\mathrm{inf}}(x)), x \in \mathrm{conv}(\bar{\gamma}([a,b]))\}.
\end{align*}
On the other hand, by (\ref{giff}) and (\ref{b2l}) the set $\{(x,B^{\mathrm{sup}}(x)), x \in \mathrm{conv}(\bar{\gamma}([a,b]))\} \cap \{(x,B^{\mathrm{inf}}(x)), x \in \mathrm{conv}(\bar{\gamma}([a,b]))\}$ is contained in the image of the set $\partial (\Delta^{\ell}_{c}\times \Delta^{\ell}_{*})$ under a $C^{1}$ map, and hence has zero $n$-dimensional Lebesgue measure. Therefore, it follows from (\ref{diff2lu}) and (\ref{diff2ll}) that
\begin{align*}
&\mathrm{Area}(\partial \; \mathrm{conv}(\gamma([a,b]))) = \\
&\mathrm{Area}(\{(x,B^{\mathrm{sup}}(x)), x \in \mathrm{conv}(\bar{\gamma}([a,b]))\}) + \mathrm{Area}(\{(x,B^{\mathrm{inf}}(x)), x \in \mathrm{conv}(\bar{\gamma}([a,b]))\}) = \\
&\int_{\Delta_{c}^{\ell}\times \Delta_{*}^{\ell}}\sqrt{ \det{ A^{\mathrm{Tr}} A}} \, dx d\lambda + \int_{\Delta_{c}^{\ell}\times \Delta_{*}^{\ell}}\sqrt{ \det{ C^{\mathrm{Tr}} C}}\, dx d\lambda,
\end{align*}
where $A = (U_{\lambda_{1}}, \ldots, U_{\lambda_{\ell}}, U_{x_{1}}, \ldots, U_{x_{\ell}})$ with $U:=U_{n}$, and $C=(L_{\lambda_{1}}, \ldots, L_{\lambda_{\ell}}, L_{x_{1}}, \ldots, L_{x_{\ell}})$ with $L:=L_{n}$. Notice that $A^{\mathrm{Tr}}A =RS^{\mathrm{Tr}}_{b}S_{b}R$, where $R$ is the $2\ell \times 2\ell$ diagonal matrix with diagonal entries $r_{1}=\ldots=r_{\ell}=1$ and $r_{\ell+1}=\lambda_{1}, \ldots, r_{\ell+\ell}=\lambda_{\ell}$. Similarly, $C^{\mathrm{Tr}}C = RS_{a}^{\mathrm{Tr}}S_{a}R$. Therefore,
\begin{align*}
&\int_{\Delta_{c}^{\ell}\times \Delta_{*}^{\ell}}\sqrt{ \det{ A^{\mathrm{Tr}} A}} \, dx d\lambda + \int_{\Delta_{c}^{\ell}\times \Delta_{*}^{\ell}}\sqrt{ \det{ C^{\mathrm{Tr}} C}}\, dx d\lambda =\\
&\int_{\Delta_{c}^{\ell}} \lambda_{1}\cdots \lambda_{\ell} d\lambda \int_{\Delta_{*}^{\ell}}\sqrt{\det S^{\mathrm{Tr}}_{b}S_{b}}dx + \int_{\Delta_{c}^{\ell}} \lambda_{1}\cdots \lambda_{\ell} d\lambda \int_{\Delta_{*}^{\ell}}\sqrt{\det S^{\mathrm{Tr}}_{a}S_{a}}dx \stackrel{(\ref{distr})}{=}\\
&\frac{1}{(2\ell)!}\int_{\Delta_{*}^{\ell}} \left(\sqrt{\det S^{\mathrm{Tr}}_{b}S_{b}} +\sqrt{\det S^{\mathrm{Tr}}_{a}S_{a}} \right) \, dx.
\end{align*}
This finishes the proof of Corollary~\ref{area1}.
| {
"timestamp": "2023-01-05T02:12:15",
"yymm": "2201",
"arxiv_id": "2201.12932",
"language": "en",
"url": "https://arxiv.org/abs/2201.12932",
"abstract": "We give new proofs of the description convex hulls of space curves $\\gamma : [a,b] \\mapsto \\mathbb{R}^{d}$ having totally positive torsion. These are curves such that all the leading principal minors of $d\\times d$ matrix $(\\gamma', \\gamma'', \\ldots, \\gamma^{(d)})$ are positive. In particular, we recover parametric representation of the boundary of the convex hull, different formulas for its surface area and the volume of the convex hull, and the solution to a general moment problem corresponding to $\\gamma$.",
"subjects": "Probability (math.PR); Classical Analysis and ODEs (math.CA); Differential Geometry (math.DG); Optimization and Control (math.OC)",
"title": "A new proof of the description of the convex hull of space curves with totally positive torsion",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9875683484150417,
"lm_q2_score": 0.800691997339971,
"lm_q1q2_score": 0.7907380734021762
} |
https://arxiv.org/abs/2003.01812 | The case for algebraic biology: from research to education | Though it goes without saying that linear algebra is fundamental to mathematical biology, polynomial algebra is less visible. In this article, we will give a brief tour of four diverse biological problems where multivariate polynomials play a central role -- a subfield that is sometimes called "algebraic biology." Namely, these topics include biochemical reaction networks, Boolean models of gene regulatory networks, algebraic statistics and genomics, and place fields in neuroscience. After that, we will summarize the history of discrete and algebraic structures in mathematical biology, from their early appearances in the late 1960s to the current day. Finally, we will discuss the role of algebraic biology in the modern classroom and curriculum, including resources in the literature and relevant software. Our goal is to make this article widely accessible, reaching the mathematical biologist who knows no algebra, the algebraist who knows no biology, and especially the interested student who is curious about the synergy between these two seemingly unrelated fields. | \section{Introduction}
Nobody would dispute the fundamental role that linear algebra plays in applied fields such as mathematical biology. Systems of linear equations arise both as models of natural phenomena and as approximations of nonlinear models. As such, it is not hard to surmise that systems of nonlinear polynomials can also arise from biological problems. Despite this, it still may come as a surprise to mathematicians and biologists alike when they first hear the term ``\emph{Algebraic Biology}.'' The reaction may be that of inquisitive curiosity, skepticism, or cynicism, as mathematicians have earned a reputation for occasionally making questionable abstractions and constructing frameworks that are perhaps too detached from reality to be more than just amusement. Some people who work in this field avoid the term ``algebraic biology'' for precisely this reason. Ironically, it actually might seem more reasonable to biologists, since most people without a degree in mathematics consider ``algebra'' to be a topic learned in middle or high school, and not a 400-level course involving abstract structures such as groups, rings, and fields. With this viewpoint, it would be only natural to ask ``\emph{If calculus can be an effective tool for tackling biological problems, why not algebra?}'' Regardless of a mathematician's reaction to the concept of algebraic biology, the aforementioned element of surprise can be attributed to the simple fact that abstract algebra is almost never taught in biology or biomathematics courses, nor are applications to biology usually presented in algebra courses. This is partially a statement about aspects of contemporary mathematical biology in the undergraduate and graduate curricula, and partially due to the actual nature of this multifaceted field.
Though curricular reforms and improvements involving mathematical biology are certainly possible, we are not going to make the case in this article that algebra necessarily belongs as a fundamental pillar in a mathematical biology course. More realistically, problems from algebraic biology can make for fun, enriching, and interesting examples that can supplement such a course, and excite students who might like the topic but want to pursue something with direct applications. A good analogy for this is the RSA cryptosystem \cite{rivest1978method}. An entire number theory course can be taught without giving any applications. However, RSA is a fantastic self-contained topic that is not only captivating on its own, but gives a glimpse into how a traditional pure mathematical field such as number theory can be applied to real world problems. Being exposed to such an application will only broaden the appeal of the mathematics behind the scenes, and it can motivate students who might otherwise not give it a second thought to study it.
This is the role that we think algebraic biology should play in an undergraduate curriculum, though many may disagree. There are fundamental differences between the scope of algebraic techniques in biology and the ubiquity of those involving e.g., statistics or differential equations, which are more widespread, and for good reason. However, just as RSA can add spice to an ordinary number theory class, algebraic techniques for analyzing problems in the biological sciences can enhance both a mathematical biology class and an algebra class. In the process, it can give students a wider view of topics that they may want to pursue in their graduate studies.
In this article, we will explore this further and dispel some myths. The first myth, which we have already alluded to, is that algebraic techniques are too detached from applications to be of any use. The second myth is that one needs a strong algebra background to study or teach these topics. While this may be true at the research level, it is absolutely false for general undergraduate math majors and non-algebraically minded faculty alike.
Our first task is to give a short summary of \emph{how} algebra actually arises in mathematical biology. Some readers may already know at least one of these topics somewhat well, but we will not assume this. It is also a hard truth in the biological world that ideas move quickly and the most modern methods all have a shelf life. Textbooks and papers quickly become antiquated as new technologies, theories, and computational power become available. Not all, but certainly many decade-old books are as out-of-date to a biologist as a book on Visual BASIC or Pentium processors is to a computer engineer. One of the daunting goals in this paper is to give a snapshot of algebraic biology today in 2020 that will stand the test of time and remain relevant in 2030 and beyond. Of course, it is impossible to guarantee that this will actually happen, but it is nevertheless our objective. In the next section, we will present a few examples from algebraic biology at a high level -- only enough to convey the main ideas and framework to the reader, while making it clear how and why algebra is involved, with an assumption of only a minimal knowledge of algebra.
\section{Four algebraic problems arising from biological models}
The term ``algebra'' is quite broad and involves a plethora of structures, including groups, rings, fields, modules, vector spaces, and many more. Most of these have little to nothing to do with biology. The common algebraic theme in the problems that we will introduce are nonlinear \emph{multivariate polynomials}, which live in commutative rings. The branch of algebra that involves solving systems of such polynomials is called \emph{algebraic geometry}. In the problems we will encounter, computational techniques are particularly relevant. Polynomials arise in models of biological systems across a variety of frameworks, from classical differential equations, to Boolean networks, to statistical models in phylogenetics and genomics, to topics in neuroscience. The goal of this section is \emph{not} to provide a survey of these topics, or anything remotely close. However, we will briefly introduce them to give the reader the flavor of the biological questions and how (nonlinear) algebra arises. Our choice of these topics is motivated by several factors, including both the diversity of biological application, and their prominence in the mathematical biological community as of the writing of this article. Each of these topics should be thought of as a teaser. Like RSA over the past several decades, all of these topics have the potential to be the theme of a lecture or series of lectures on a creative application, aimed at undergraduates. This can be done in a course on algebra, mathematical biology, or even in a general-audience math club talk. We will provide and discuss introductory references for each topic for the reader who wants to learn more.
\subsection{Biochemical reaction networks}
Consider the following simple biochemical reaction, where $A$, $B$, and $C$ are molecular species:
\[
\ce{$A+B$ <=>[$k_1$][$k_2$] $C$},\qquad \ce{$A$ ->[$k_3$] $2B$}.
\]
The constants $k_1$, $k_2$, and $k_3$ represent reaction rates. Let $x_1(t)$, $x_2(t)$, and $x_3(t)$ denote the concentrations of $A$, $B$, and $C$, respectively, where $t\in\mathbb{R}$. Without going into the details, the assumption of the fundamental laws of mass-action kinetics leads to the following system of ordinary differential equations (ODEs):
\begin{align*}x_1'&=-k_1x_1x_2-k_3x_1+k_2x_3 \\
x_2'&=-k_1x_1x_2+k_2x_3+2k_3x_1 \\
x_3'&=k_1x_1x_2-k_2x_3.
\end{align*}
One of the most basic questions to ask when given a system of ODEs is: \emph{what are the steady-states?} Naturally, these can be found by setting each $x_i'=0$ and solving the resulting system of polynomial equations. From a biological perspective, we are really only interested in solutions that lie in the non-negative orthant of $\mathbb{R}^3$. However, wearing our mathematical hats, polynomials are usually easier to study over the complex numbers. In the language of algebraic geometry, for each fixed choice of parameters, the solutions to the system above form an \emph{algebraic variety} in $\mathbb{C}^3$. This can be found by defining the ideal
\[
I=\big\langle -k_1x_1x_2-k_3x_1+k_2x_3,\;-k_1x_1x_2+k_2x_3+2k_3x_1,\;k_1x_1x_2-k_2x_3\big\rangle
\]
and using a computer algebra package such as Macaulay2 \cite{M2} or Singular \cite{singular} to compute a Gr\"obner basis. One does not need to know what an ideal, Gr\"obner basis, or algebraic variety is to be able to carry out these steps. Though this may seem unsettling at first, scientists routinely treat solvers as black boxes when they solve an ODE numerically, use a linear model to analyze data, or solve a multiobjective optimization problem. Though the mathematics behind the scenes for most of these is beyond the undergraduate level, the utility and accessibility of the methods is usually not. That said, it is still nice to have some high-level idea of what the black box is doing, even if purely for consolation. A reader who is unfamiliar with the algebra can think of our particular black box as doing what Gaussian elimination does to solve systems of linear equations, but instead for systems of polynomials. The terms ideal, Gr\"obner basis, and algebraic variety are loosely analogous to the concepts of a vector space, a special vector space basis, and the solution space.
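For the curious reader, here is a minimal sketch of ours (the workflow in Macaulay2 or Singular is analogous) that carries out this computation in Python's SymPy for the sample rate constants $k_1=k_2=k_3=1$:
\begin{verbatim}
# Sketch (ours): lex Groebner basis of the steady-state ideal, with the
# rate constants fixed to the sample values k1 = k2 = k3 = 1.
from sympy import symbols, groebner

x1, x2, x3 = symbols('x1 x2 x3')
k1 = k2 = k3 = 1   # sample values; other positive rates work the same way

f1 = -k1*x1*x2 - k3*x1 + k2*x3
f2 = -k1*x1*x2 + k2*x3 + 2*k3*x1
f3 = k1*x1*x2 - k2*x3

G = groebner([f1, f2, f3], x1, x2, x3, order='lex')
print(G)   # GroebnerBasis([x1, x3], ...): steady states have x1 = x3 = 0
\end{verbatim}
Back-substitution then reads off the steady-state variety: here $x_1=x_3=0$ with $x_2$ free, so for these sample rates the species $A$ and $C$ are depleted at steady state.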
Of course, this is just the tip of the mathematical iceberg. There are plenty of research projects available for people more interested in theoretical aspects than the actual application to specific biological systems. The 2019 book \emph{Foundations of Chemical Reaction Network Theory} by Martin Feinberg, one of the pioneers of this field, is a great starting point for readers interested in learning more \cite{feinberg2019foundations}.
\subsection{Boolean models of molecular networks}
Our second example of algebra arising in a biological problem comes from modeling molecular networks with Boolean variables. As an alternative to quantifying the concentrations of the relevant gene products and modeling them with a system of nonlinear differential equations, it has become popular to qualitatively describe the variables as high vs. low (or present vs. absent), and describe their interactions with Boolean logic. Recall that $\wedge$, $\vee$, and $\neg$ represent logical AND, OR, and NOT, respectively. For example, suppose that for gene $A$ to be transcribed, enzyme $B$ must be present, and repressor protein $C$ must be absent. This can be expressed as $A(t+1)=B(t)\wedge\neg C(t)$. The following Boolean model of the lactose (\emph{lac}) operon in \emph{E. coli} is from \cite[Chapter 1]{robeva2013mathematical}:
\begin{align*}
x_1(t+1)&=\neg{G_e}\wedge(x_3(t)\vee L_e) \\
x_2(t+1)&=x_1(t) \\
x_3(t+1)&=\neg{G_e}\wedge[(L_e\wedge x_2(t))\vee(x_3(t)\wedge\neg{x_2(t)})].
\end{align*}
The \emph{lac} operon was the first, and remains one of the most well-studied gene networks in molecular biology, and the scientists who discovered it won a Nobel Prize for their work \cite{jacob1961genetic}. Here, time is assumed to be discretized, and $x_1(t)$, $x_2(t)$, and $x_3(t)$ are Boolean functions that represent the presence or absence of intracellular mRNA, translated proteins, and lactose, respectively. There are also two parameters, $L_e$ and $G_e$, representing extracellular lactose and glucose, respectively. These can be thought of as constants, because they change on a much larger time-scale than the three variables do. Like we did with our system of ODEs from a biochemical reaction network, we can ask about the steady states of this model. These can be found by setting $x_i(t+1)=x_i(t)$, and solving the resulting system. This is easiest by first converting the Boolean expressions into polynomials: $x\wedge y$ is $xy$, $x\vee y$ is $x+y+xy$, and $\neg{x}$ is $1+x$. This gives the following system of polynomials over $\mathbb{F}_2=\{0,1\}$:
\begin{align*}
(1+G_e)(x_3L_e+x_3+L_e)+x_1&=0 \\
x_1+x_2&=0 \\
(1+G_e)(x_2L_e+x_3(1+x_2))+x_3&=0.
\end{align*}
For each choice of parameters, a computer algebra package can find the solution by computing a Gr\"obner basis of the ideal
\[
I=\big\langle(1+G_e)(x_3L_e+x_3+L_e)+x_1,\;
x_1+x_2,\;(1+G_e)(x_2L_e+x_3(1+x_2))+x_3\big\rangle.
\]
Naturally, this time everything needs to be done over the finite field $\mathbb{F}_2$. For more details on this at an undergraduate level, see the first several chapters of the 2013 book \emph{Mathematical Concepts and Methods in Modern Biology} by Robeva and Hodge \cite{robeva2013mathematical}. It should be noted that in some cases, the Boolean assumption is too restrictive, and modelers use more than just two states; sometimes these are called \emph{logical models} \cite{thomas2006biological}, \emph{local models} \cite[Chapter 4]{robeva2018algebraic}, or \emph{algebraic models} \cite{macauley2020algebraic}.
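As a concrete illustration (our own sketch, not code from \cite{robeva2013mathematical}), the following SymPy snippet computes a Gr\"obner basis of $I$ over $\mathbb{F}_2$ for the parameter choice $G_e=0$, $L_e=1$ (external lactose present, no external glucose):
\begin{verbatim}
# Sketch (ours): steady states of the Boolean lac operon model over GF(2)
# for the fixed parameters G_e = 0, L_e = 1.
from sympy import symbols, groebner

x1, x2, x3 = symbols('x1 x2 x3')
Ge, Le = 0, 1

f1 = (1 + Ge)*(x3*Le + x3 + Le) + x1
f2 = x1 + x2
f3 = (1 + Ge)*(x2*Le + x3*(1 + x2)) + x3

G = groebner([f1, f2, f3], x1, x2, x3, modulus=2, order='lex')
print(G)   # basis {x1 + 1, x2 + 1, x3 + 1}: the unique steady state is
           # x1 = x2 = x3 = 1, i.e., the operon is ON
\end{verbatim}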
We chose the aforementioned examples -- one from an ODE model of a biochemical reaction network, and another from a toy model of the \emph{lac} operon -- because they are quite simple but still illustrate the main concepts. Published models from the literature are typically much larger and more complex. A Boolean model of the \emph{lac} operon in \cite{stigler2012regulatory} has 10 variables and 3 parameters, but the fundamental mathematical ideas are still the same. Most biochemical reaction networks in the literature also contain more than three species. For example, the authors in \cite{gross2016algebraic} study a model of the Wnt signaling pathway, which leads them to a system of 19 ODEs with 31 rate constants and 5 additional parameters from conservation laws. For a fixed choice of these parameters, the steady-state variety lives in $\mathbb{C}^{19}$. In the generic case, this variety lives in the algebraic closure of a rational function field, i.e., $K=\overline{\mathbb{Q}(k_1,\dots,k_{31},c_1,\dots,c_5)}$. They show that it has nine zeros (fixed points). In addition to shedding light on the biological questions, these problems generate many interesting mathematical problems. As Bernd Sturmfels has prominently said, \emph{biology can lead to new theorems} \cite{sturmfels2005can}.
\subsection{Algebraic statistics and genomics}
The field of algebraic statistics emerged around the turn of the 21st century \cite{diaconis1998algebraic}, and the term itself was coined in the year 2000 with the publication of a book bearing its name \cite{pistone2000algebraic}. Since then, it has blossomed into an active field of research. The basic idea is to use tools from algebraic geometry to study certain problems in statistics. Just as most biological problems are not well-suited for algebraic tools, most statistical problems are not either. Those that are typically involve discrete random variables, and the algebra comes into play when these depend on parameters in a polynomial fashion. The most common examples of this involve genomics and phylogenetics, where the nucleic acid bases (A, C, G, T) can be treated as discrete random variables. Biological questions that one might ask are how to identify regions in a genome with a higher concentration of CpG dinucleotides (often an indicator for coding regions), what is the most likely sequence of mutations between two species, or what phylogenetic tree best represents the evolution of a collection of species.
For a toy example that illustrates the connection to algebra, consider a simple evolutionary relationship of two species and their most recent common ancestor, and fix a particular base in the genome at a site that all three species share in a mutual alignment. The Jukes-Cantor model of evolution \cite{jukes1969evolution} assumes that each of the three possible substitutions at that site occurs with probability $\alpha$, and therefore the probability of the base not changing is $1-3\alpha$. We can express this with a tree, where the ancestor is the root, and the descendants are at the leaves. The Jukes-Cantor constants might be different for each species, so we will denote them by $\alpha$ and $\beta$, respectively. If we assume that A, C, G, and T are equally likely at this site in the ancestor genome, then we can compute the probability of all 16 cases for the leaves. The following example illustrates this.
\begin{minipage}{0.25\linewidth}
\begin{center}
\begin{tikzpicture}[level distance=1.5cm,
level 1/.style={sibling distance=2.3cm},
level 2/.style={sibling distance=1.5cm}]
\node {ancestor}
child {node {human} edge from parent node[left,draw=none] {$\alpha$\,}
}
child {node {chimp} edge from parent node[right,draw=none] {\;$\beta$}
};
\end{tikzpicture}
\end{center}
\end{minipage}%
\hfill%
\begin{minipage}{0.7\linewidth}
\begin{align*}
P(AC)&=P\bigg(\hspace{-1mm}\begin{tikzpicture}[baseline={([yshift=-20pt]current bounding box.north)},shorten >= -2pt, shorten <= -2pt]
\node (l) at (0,0) {\footnotesize $A$};
\node (r) at (.9,0) {\footnotesize $C$};
\node (rt) at (.45,.75) {\footnotesize $A$};
\draw (rt) to (l); \draw (rt) to (r);
\end{tikzpicture}\hspace{-1mm}\bigg)
+P\bigg(\hspace{-1mm}\begin{tikzpicture}[baseline={([yshift=-20pt]current bounding box.north)},shorten >= -2pt, shorten <= -2pt]
\node (l) at (0,0) {\footnotesize $A$};
\node (r) at (.9,0) {\footnotesize $C$};
\node (rt) at (.45,.75) {\footnotesize $G$};
\draw (rt) to (l); \draw (rt) to (r);
\end{tikzpicture}\hspace{-1mm}\bigg)
+P\bigg(\hspace{-1mm}\begin{tikzpicture}[baseline={([yshift=-20pt]current bounding box.north)},shorten >= -2pt, shorten <= -2pt]
\node (l) at (0,0) {\footnotesize $A$};
\node (r) at (.9,0) {\footnotesize $C$};
\node (rt) at (.45,.75) {\footnotesize $C$};
\draw (rt) to (l); \draw (rt) to (r);
\end{tikzpicture}\hspace{-1mm}\bigg)
+P\bigg(\hspace{-1mm}\begin{tikzpicture}[baseline={([yshift=-20pt]current bounding box.north)},shorten >= -2pt, shorten <= -2pt]
\node (l) at (0,0) {\footnotesize $A$};
\node (r) at (.9,0) {\footnotesize $C$};
\node (rt) at (.45,.75) {\footnotesize $T$};
\draw (rt) to (l); \draw (rt) to (r);
\end{tikzpicture}\hspace{-1mm}\bigg) \\
&=\frac{1}{4}(1-3\alpha)\beta+\frac{1}{4}\alpha\beta+\frac{1}{4}\alpha(1-3\beta)+\frac{1}{4}\alpha\beta
=\frac{1}{4}(\alpha+\beta-4\alpha\beta).
\end{align*}
\end{minipage}
\vspace{5mm}
It is easy to verify that similarly, $P(AA)=\frac{1}{4}(1-3\alpha)(1-3\beta)+\frac{3}{4}\alpha\beta=3\alpha\beta+\frac{1}{4}(1-3\alpha-3\beta)$. The space of possible probabilities can be described by a mapping
\[
\varphi\colon\mathbb{R}^2\longrightarrow\mathbb{R}^{16},\qquad\varphi\colon(\alpha,\beta)\longmapsto\big(P(AA),P(AC),\dots,P(TT)
\big).
\]
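This bookkeeping is easy to check by brute force; the following short sketch of ours (not from the references) sums over the four possible ancestral bases in SymPy:
\begin{verbatim}
# Sketch (ours): brute-force the Jukes-Cantor probabilities on the
# two-leaf tree and compare with the closed forms derived above.
from sympy import symbols, Rational, expand, simplify

a, b = symbols('alpha beta')
bases = 'ACGT'

def p_edge(x, y, r):   # substitution probability along one edge
    return 1 - 3*r if x == y else r

def p_leaves(u, v):    # P(leaf1 = u, leaf2 = v), uniform root
    return sum(Rational(1, 4)*p_edge(r, u, a)*p_edge(r, v, b)
               for r in bases)

print(expand(p_leaves('A', 'C')))   # alpha/4 + beta/4 - alpha*beta
print(simplify(p_leaves('A', 'A') - (3*a*b + (1 - 3*a - 3*b)/4)))   # 0
\end{verbatim}
Note that $\alpha/4+\beta/4-\alpha\beta=\frac{1}{4}(\alpha+\beta-4\alpha\beta)$, in agreement with the computation above.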
If we have an $n$-leaf tree, then things get complicated quickly. Besides the numbers rapidly increasing, we also have to consider the different tree topologies. If we fix an $n$-leaf binary tree $T$, which has $m=2n-2$ edges and hence $m$ probability parameters, we get a mapping $\varphi\colon\mathbb{R}^m\to\mathbb{R}^{4^n}$. The image of this map, intersected with the $d=4^n-1$ dimensional simplex $\Delta_d$ (because we require the coordinates to represent probabilities) is called the \emph{phylogenetic model}, $\mathcal{M}_T\subseteq\mathbb{R}^{4^n}$.
The skeptical reader may wonder about the feasibility of this approach, given how quickly the size of $4^n$ grows. Fortunately, work has been done on analyzing the case of trees with $n=4$ leaves, and then patching them together to learn about the general case. Given the set of points $\mathcal{M}_T\subseteq\mathbb{R}^{4^n}$,
a typical object for an algebraic geometer to look at is the set (ideal) of polynomials $f$ in $\mathbb{R}[x_1,\dots,x_{4^n}]$ such that $f(p)=0$ for all $p\in\mathcal{M}_T$. This is called the \emph{ideal of phylogenetic invariants}, $I_T$. Given any ideal, it is natural to look at the corresponding variety, which is the set of points $p\in\mathbb{R}^{4^n}$ such that $f(p)=0$, for all polynomials $f\in I_T$. This is called the \emph{phylogenetic variety} of $T$, denoted $V_T$.
Hopefully the reader can now see how polynomials arise in this type of work. In fact, many of the advanced statistical models used by computational biologists, such as Markov models and graphical models, can be viewed as algebraic varieties \cite{pachter2007mathematics}. To learn more about these topics the interested reader can consult books such as Allman/Rhodes (basic undergraduate introduction to phylogenetics) \cite{allman2004mathematical}, Pachter/Sturmfels \cite{pachter2005algebraic}, or Sullivant \cite{sullivant2018algebraic} (both graduate-level algebraic statistics).
\subsection{Place fields in neuroscience}
Our last topic comes from a problem in the vast field of neuroscience. Individuals are able to perceive sensory input such as light, color, motion, and location via neurons in the brain that respond to specific stimuli. Experiments have shown that certain neurons called \emph{place cells} fire based on the location of an animal in its environment \cite{okeefe1971hippocampus,yartsev2013representation}. As an animal moves around to different locations, different subsets of neurons fire. The region that causes a specific neuron to fire is called its \emph{place field}, and a single environment will generally exhibit many overlapping place fields. A cartoon of such place fields in two dimensions is shown in Figure~\ref{fig:place-fields}.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=.35\textwidth]{FiveNeurons}
\end{center}
\caption{An example of five place fields. The shaded region is described by the codeword $\mathbf {c}=10100$.}\label{fig:place-fields}
\end{figure}
Notice that each region in the ``Venn diagram'' of place fields determines a binary string indicating which neurons should fire therein, called its associated \emph{codeword}. The collection of all strings possible within the arrangement is the \emph{code} of the set of place fields. It is a simple matter to construct the code from the place fields. However, the \emph{inverse problem} of deducing an appropriate set of regions corresponding to the firing of neurons given experimental data is not only much more difficult, but also more scientifically relevant, because the structure of the place fields is not known \emph{a priori}. Not all codes correspond to a natural collection of place fields. For example, after a few minutes of trying, it is not hard to convince oneself that the code $\mathcal{C}=\{000,100,010,101,110\}$ \emph{cannot} be realized by a collection of convex open place fields. This fails even in higher dimensions, as the place field for Neuron 3 would need to be split between that for Neuron 1 and Neuron 2 \cite{curto2013neural}.
A natural approach for this problem involves algebraic geometry. To motivate this, consider a codeword $\mathbf{c}=010$, which can be encoded by its characteristic polynomial $p_{\mathbf{c}}=(x_1-1)x_2(x_3-1)$; note that $p_{\mathbf{c}}(\mathbf{c})=1$, and $p_{\mathbf{c}}(\mathbf{c}')=0$ for all other length-3 codewords $\mathbf{c}'$. This polynomial is called a \emph{pseudomonomial} because by viewing $x_1-1=x_1+1$ over $\mathbb{F}_2$ as ``NOT $x_1$'' in the Boolean sense, it can be written as a ``monomial with bars allowed'', e.g., $p_{\mathbf{c}}=\overline{x_1}x_2\overline{x_3}$. One can define the so-called \emph{pseudomonomial ideal}
\[
I_{\mathcal{C}}:=\big\{f\in\mathbb{F}_2[x_1,\dots,x_n]\mid f(\mathbf{c})=0\;\text{for all }\mathbf{c}\in\mathcal{C}\big\},
\]
and the \emph{neural ring} is simply the quotient of $\mathbb{F}_2[x_1,\dots,x_n]$ by this ideal. Another object of interest is the \emph{neural ideal} $J_\mathcal{C}$, which is generated by the characteristic functions of the non-code words, i.e., words \emph{not} in $\mathcal{C}$. These ideals describe the neural code completely, and understanding the generators of such an ideal can give insight into both the possibility of building an arrangement of place fields which exhibits the code, and what is the proper dimension in which to do so \cite{curto2013neural, curto2019algebraic,garcia2018grobner}. Once again, this is an example of new mathematical problems and theorems inspired by biology. However, as this work originated from a biological problem, there are also many questions about how to deal with noisy data, and robustness of these techniques -- questions that may not have arisen if these problems had originated organically within the algebra community. Of the four topics summarized in this paper, this is the newest, and it was only first published in 2013 \cite{curto2013neural}. However, it has exploded in popularity since, especially due to all of the unexplored mathematical problems that have arisen. Readers wishing to learn more about this should consult the survey article published in \cite[Chapter 7]{robeva2018algebraic}, and the references therein. Also, in a forthcoming paper, the first author and Raina Robeva will write a more thorough survey about pseudomonomials in algebraic biology, and how they appear in various different topics -- not only in the analysis of combinatorial neural codes, but also involving one of our other four topics, algebraic models of gene networks.
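To make this concrete, here is a small sketch of ours (not code from \cite{curto2013neural}) that lists the generators of the neural ideal $J_{\mathcal{C}}$ for the code $\mathcal{C}=\{000,100,010,101,110\}$ above, one characteristic pseudomonomial per non-codeword; we expand over $\mathbb{Z}$ for display, and over $\mathbb{F}_2$ one identifies $-1$ with $1$:
\begin{verbatim}
# Sketch (ours): generators of the neural ideal J_C, one pseudomonomial
# rho_v per non-codeword v; rho_v equals 1 at v and 0 elsewhere.
from sympy import symbols, Mul, expand

n = 3
x = symbols('x1:4')   # (x1, x2, x3)
C = {'000', '100', '010', '101', '110'}
noncode = {format(i, '03b') for i in range(2**n)} - C   # 001, 011, 111

def char_poly(word):
    return Mul(*[x[i] if word[i] == '1' else 1 - x[i] for i in range(n)])

print([expand(char_poly(w)) for w in sorted(noncode)])
\end{verbatim}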
\section{A brief history of algebraic biology}
Methods and models in mathematical biology that involve networks, discrete mathematics, and algebraic techniques often fall under different overlapping umbrellas -- systems biology, bioinformatics, discrete mathematical biology, algebraic biology, algebraic systems biology, and so on. We tend to be rather loose with the term ``algebraic biology'' and what it encompasses because there is a large gray area. For example, one can build a Boolean model of a gene regulatory network, and analyze it without bringing algebra into the picture. In fact, the vast majority of Boolean network research does not involve algebra at all -- much of it comes from disciplines such as computer science \cite{crama2010boolean}, engineering \cite{cui2010complex}, or physics \cite{drossel2009random,wang2012boolean}. However, algebra still lies behind the scenes, whether or not it is utilized. We tend to err on the inclusive side of what fits under the big tent of algebraic biology.
To understand where algebraic biology may be going and in what context it belongs in the classroom, it is necessary to look back and see how we got to where we are today. In the late 1960s and early 1970s, modeling biological systems with Boolean functions was pioneered independently by two theoretical biologists, Stuart Kauffman in North America, and Ren\'e Thomas in Europe. These ideas percolated through countless collaborators, especially their students and postdocs, through multiple academic generations, and they remain a popular research topic globally today. However, it was not until the turn of the century that these ideas took a more algebraic fork, much of which was inspired by Reinhard Laubenbacher and his large academic tree. An anecdote of this shift is described in the 2009 American Mathematical Monthly article \emph{Computer algebra and systems biology} by Laubenbacher and Sturmfels \cite{laubenbacher2009computer}.
The theory of chemical reaction networks emerged around the same time as Boolean networks, as it was pioneered by Martin Feinberg, Friedrich Horn, and Roy Jackson in the early 1970s. The use of computational algebra was instigated by Karin Gatermann's 2001 book titled ``Computer algebra methods for equivariant dynamical systems'' \cite{gatermann2007computer}, and it remains an active area of research today, especially among the academic family trees and collaborators of Feinberg, Sturmfels, and Alicia Dickenstein, among others. Feinberg published an introductory book \emph{Foundations of chemical reaction network theory} in 2019 \cite{feinberg2019foundations}.
Algebraic techniques have sporadically been used in statistical problems since at least the 1940s \cite{wilks1946sample,votaw1948testing}, but the term ``\emph{Algebraic Statistics}'' was not coined until 2000, when it appeared in the title of the book \emph{Algebraic Statistics: Computational Commutative Algebra in Statistics} by Giovanni Pistone, Eva Riccomagno, and Henry Wynn \cite{pistone2000algebraic}. It is up for debate how much a catchy name catalyzed interest in this budding field, but it has nevertheless taken off since then. A graduate topics class at UC Berkeley led to the popular 2005 ``purple book'' \emph{Algebraic Statistics for Computational Biology} by Sturmfels and Lior Pachter, with a number of chapters contributed by graduate students and postdocs. New journals sprang up, such as the \emph{Journal of Algebraic Statistics} (2010), and the \emph{SIAM Journal on Applied Algebra and Geometry} (2017). In 2018, Seth Sullivant published the first broad introductory (graduate-level) book on algebraic statistics \cite{sullivant2018algebraic}.
Finally, neural rings first appeared in the literature in 2013, in a series of papers by Carina Curto, Vladimir Itskov, and their students, postdocs, and collaborators \cite{curto2013combinatorial,curto2013neural,youngs2014neural}. It has since taken off and spurred an active area of research, because aside from the original motivation of neuroscience, the neural rings and pseudomonomial ideals were new mathematical objects with a theory waiting to be developed, involving questions from combinatorial algebra, geometry, and topology.
Though several of these four areas date back to the late 1960s or early 1970s, the strong connection to algebraic geometry did not really take off until the turn of the century. As such, some of the foundational books on these topics are only now starting to appear. As with many new fields, compilations of loosely related book chapters precede broad introductory texts such as \cite{feinberg2019foundations, sullivant2018algebraic}. The purple book of Pachter and Sturmfels is one such example in algebraic statistics. Raina Robeva has edited a series of three books on discrete and algebraic methods in mathematical biology, published in 2013, 2015, and 2018, respectively \cite{robeva2013mathematical,robeva2015algebraic,robeva2018algebraic}. They include 36 chapters that vary in difficulty from the introductory undergraduate level to more advanced graduate levels. Since the chapters are written by many different authors, there are at times overlaps and varying notations. However, all of these are good starting points for teaching some of these topics at the undergraduate or graduate levels, and a number of faculty members have used parts of these books for classes, especially the first one. Other good introductory chapters can be found in edited volumes on related topics, both in mathematical biology \cite{jonoska2013discrete} and in applications of subjects such as discrete mathematics \cite{harrington2017algebraic} and polynomial systems \cite{cox2020applications}.
As previously stated, our choice of four topics to highlight in this article was driven by the fact that they have remained active areas of research. This is by no means a comprehensive list of past or current topics in algebraic biology. A series of conferences titled \emph{Algebraic Biology} were held in 2005, 2007, 2008, and 2010, with the last one called \emph{Algebraic and Numeric Biology}. In the early 2000s, theoretical chemist and professor emeritus Michael Barnett published several articles on computer algebra in the life sciences, foreseeing the relevance of computational algebra and symbolic computation, which were only starting to blossom. The large field of phylogenetics employs some algebraic tools and methods, though much of this can be considered part of algebraic statistics. Phylogenetics was one of the catalysts of the now thriving 21st century field of tropical geometry, a ``piecewise linear'' variant of algebraic geometry, with a number of diverse applications \cite{maclagan2015introduction}. The relatively new area of \emph{topological data analysis} has emerged over the past decade, and this involves using algebraic topology -- homology groups, in particular -- to analyze the shape of clouds of data \cite{wasserman2018topological}. A number of these applications are problems in the biological sciences, and algebra is fundamental to the topological methods \cite{rabadan2019topological}. As such, this field certainly belongs under the broad umbrella of algebraic biology. Heather Harrington at Oxford leads a large research group branded as \emph{algebraic systems biology}, and much of their work involves algebraic statistics, chemical reaction networks, and topological data analysis.
Though the footprint of algebraic biology is large and established, not everyone is in agreement on what to call it, or whether the broad topics should even be given a collective name. The authors of this article like the term \emph{Algebraic Biology} not only because it is catchy, but because it is informative to mathematicians about the scope while not being off-putting to non-mathematicians, due to the word ``algebra'' being fairly innocuous. Sometimes, depending on the scope, it may be desirable to use additional terms, and get names such as \emph{Computational Algebraic Biology}, or \emph{Algebraic Systems Biology}. Having a memorable title can help unify and strengthen an emerging field, like what was done with algebraic statistics and topological data analysis around the turn of the century, energizing researchers and prospective students alike.
\section{Algebraic biology in education}
In contrast to the many people who go to graduate school with plans to study mathematical biology, most people who work in algebraic biology seem to end up there by accident. Some came from pure math and stumbled upon applications; others ended up there because they found an advisor they liked who worked in that area. As these areas grow, this is perhaps gradually changing, but slowly.
There are many reasons why most of the mathematical biology content that students see involves classical techniques such as ordinary differential equations (ODEs). For one, the field is older and more developed. It is also broader, as there are many natural examples to put in an ODE course, such as logistic growth, the SIR model, and nonlinear models such as competing species or predator-prey. Certain fundamentals are ubiquitous throughout the biological sciences -- the need to analyze data, relationships between rates of change, and the need to approximate relationships with linear functions. Thus, it should come as no surprise to see statistics, calculus, differential equations, and linear algebra dominate the mathematical biology community. Algebraic techniques, models, and methods comprise a niche in biology, much like they do in statistics. This makes it less likely to see algebraic biology taught as a full semester class, though it certainly can be and has been done. Entire topics courses have been taught on the subjects we have discussed -- Boolean models, biochemical reaction networks, algebraic statistics, and phylogenetics. Often, such classes are at the graduate level, but many are still very accessible to undergraduates. Additionally, as previously stated, select topics on their own can make a great supplement to a traditional mathematical biology or modeling course, or even to an abstract algebra class. It is also a good source of open-ended problems that are accessible for undergraduate research.
One challenge to teaching these materials in the classroom is the lack of standard textbooks, though this is changing with the recent publication of introductory texts on fields such as biochemical reaction networks \cite{feinberg2019foundations} and algebraic statistics \cite{sullivant2018algebraic}. The three aforementioned books on various discrete and algebraic methods in mathematical biology \cite{robeva2013mathematical,robeva2015algebraic,robeva2018algebraic} edited or co-edited by Robeva are all available online through a ScienceDirect subscription, which many institutions have. Three out of the four topics covered in this article are contained in at least one chapter, with the exception being algebraic statistics. This is arguably the most advanced topic of the four, because algebraic geometry plays a central role. The first author of this article regularly teaches a course on mathematical modeling and divides it roughly into thirds: continuous, discrete, and stochastic methods, with algebra arising in these last two sections. He and Robeva have co-organized three workshops about teaching discrete and algebraic methods in mathematical biology to undergraduates, and the last two of Robeva's books partially came out of these workshops. A number of participants in these workshops have experimented with teaching topics in their own classes. The second author of this paper has used the most recent of these texts \cite{robeva2018algebraic} to generate a self-contained unit of biological applications in a first abstract algebra course.
Another challenge to introducing algebraic methods into the classroom is the lack of established software tools available for analysis. Commercial packages such as Matlab, Maple, and Mathematica do not have the need or incentive to implement relevant algorithms. The result is that most software tools arise out of research groups at academic institutions, which are fluid and sometimes dependent on grant funding. One of the most successful and well-known such software packages for Boolean models is the freely available Gene Interaction Network simulation (GINsim) software \cite{ginsim}. This package debuted in the mid-2000s from academic descendants of Ren\'e Thomas, and it still remains popular today. However, if a faculty member in charge of a less established software package moves, funding dries up, or group members get interested in another research topic, these non-commercial software tools can, at worst, disappear; remain immortalized online in an old GitHub repository; or, in the best case, get added as libraries to open-source platforms such as Sage (general), R (statistical), and Macaulay2 or Singular (computational algebraic). Algorithms for neural ideals have recently been written for Sage \cite{petersen2018neural} and Matlab \cite{youngs2015neural}. We will refrain from providing a list of citations to software packages on broad topics such as Boolean networks or phylogenetics, and instead encourage the interested reader to do a simple Google search, as the results are numerous.
In addition to the question about textbooks, another challenge with teaching certain topics in algebraic biology, especially those involving Boolean models, is the abundance of frameworks and the lack of a standard notation. For example, a research group in an electrical engineering department might view a gene regulatory network as a series of logical gates defined by truth tables, and write their code accordingly, whereas others might represent their functions as polynomials over $\mathbb{F}_2$. Some modelers might update their functions synchronously, whereas others use some sort of asynchronous update. Many others use a hybrid scheme, block-sequential update, or introduce some amount of stochasticity or varying time delays. The other side of the coin is that these methods have grown in popularity and are being used by researchers from a diverse range of fields, such as math, computer science, engineering, biology, and physics.
\section{Concluding remarks}
In order to highlight, promote, and legitimize the relatively unknown field of algebraic biology, we have given a peek at four biological problems whose analysis is amenable to algebraic methods. Most of the thrust of this field has come since the turn of the century, and interest shows no signs of dying down. One of the main questions we addressed in this article is what role it can and should have in the classroom. It is unrealistic to expect that many institutions will offer an entire class on this topic, and one can make a strong case that there are much more appropriate topics for a mathematical biology or modeling class. However, topics from algebraic biology wrapped up in self-contained modules can inspire students not only in traditional modeling or mathematical biology courses, but also in modern algebra courses. Just as questions from physics and biology play a motivating role in calculus courses, biological questions can join the ranks of applications shown to students in their first abstract algebra course. It is not uncommon for mathematicians to motivate a discussion of group theory using questions of symmetry, or to showcase cryptography or coding theory as an application of finite fields. One could also introduce Boolean models or place fields in neuroscience as motivators for understanding polynomial ideals. Computational algebraic techniques such as Gr\"obner bases can be motivated by solving systems of equations from a biological problem -- perhaps a Boolean model of a molecular network or a nonlinear system of differential equations from a biochemical reaction network.
We will conclude by emphasizing once again that this article is certainly an incomplete and biased glimpse into the world of algebraic biology. There are many topics, individuals, and papers that were not included. Some of this was simply due to space limitations -- the number of citations easily could have been an order of magnitude greater, but at some point we had to draw a sensible line. Beyond that, we are just two individuals who have our own natural implicit biases and blind spots, so there are certainly unintentional omissions as well. We encourage readers to reach out and inform us of any such oversights, tangentially related work, or new developments going forward. If enough feedback is given, we may update this article on the arXiv as we see fit. We would love to hear anecdotes from readers about opportunities, experiences, and lessons learned from introducing algebraic biology into their classrooms.
\section*{Acknowledgements}
The authors would like to thank Elena Dimitrova, Heather Harrington, Reinhard Laubenbacher, and Raina Robeva for their feedback on an earlier draft of this article.
\bibliographystyle{abbrv}
| {
"timestamp": "2020-03-05T02:03:58",
"yymm": "2003",
"arxiv_id": "2003.01812",
"language": "en",
"url": "https://arxiv.org/abs/2003.01812",
"abstract": "Though it goes without saying that linear algebra is fundamental to mathematical biology, polynomial algebra is less visible. In this article, we will give a brief tour of four diverse biological problems where multivariate polynomials play a central role -- a subfield that is sometimes called \"algebraic biology.\" Namely, these topics include biochemical reaction networks, Boolean models of gene regulatory networks, algebraic statistics and genomics, and place fields in neuroscience. After that, we will summarize the history of discrete and algebraic structures in mathematical biology, from their early appearances in the late 1960s to the current day. Finally, we will discuss the role of algebraic biology in the modern classroom and curriculum, including resources in the literature and relevant software. Our goal is to make this article widely accessible, reaching the mathematical biologist who knows no algebra, the algebraist who knows no biology, and especially the interested student who is curious about the synergy between these two seemingly unrelated fields.",
"subjects": "Neurons and Cognition (q-bio.NC); Other Quantitative Biology (q-bio.OT)",
"title": "The case for algebraic biology: from research to education",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9814534333179648,
"lm_q2_score": 0.8056321936479701,
"lm_q1q2_score": 0.7906904824472837
} |
https://arxiv.org/abs/1210.5329 | The optimal division between sample and background measurement time for photon counting experiments | Usually, equal time is given to measuring the background and the sample, or even a longer background measurement is taken as it has so few counts. While this seems the right thing to do, the relative error after background subtraction improves when more time is spent counting the measurement with the highest amount of scattering. As the available measurement time is always limited, a good division must be found between measuring the background and sample, so that the uncertainty of the background-subtracted intensity is as low as possible.Herein outlined is the method to determine how best to divide measurement time between a sample and the background, in order to minimize the relative uncertainty. Also given is the relative reduction in uncertainty to be gained from the considered division. It is particularly useful in the case of scanning diffractometers, including the likes of Bonse-Hart cameras, where the measurement time division for each point can be optimized depending on the signal-to-noise ratio. | \section{Outline}
\emph{Note by the author: It was found after this derivation, that the work presented here had been derived differently but with similar conclusions by Steinhart and Plestil \cite{Steinhart-1993}. We nevertheless believe that this simplified derivation may be of use to some readers.}
Usually, equal time is given to measuring the background and the sample, or even a longer background measurement is taken as it has so few counts. While this seems the right thing to do, the relative error after background subtraction improves when more time is spent counting the measurement with the highest amount of scattering. As the available measurement time is always limited, a good division must be found between measuring the background and sample, so that the uncertainty of the background-subtracted intensity is as low as possible.
Herein outlined is the method to determine how best to divide measurement time between a sample and the background, in order to minimize the relative uncertainty. Also given is the relative reduction in uncertainty to be gained from the considered division. It is particularly useful in the case of scanning diffractometers, including the likes of Bonse-Hart cameras, where the measurement time division for each point can be optimized depending on the signal-to-noise ratio.
The optimum division for machines with photon-counting two-dimensional detectors remains to be evaluated; the intention is to include it in a future version of this note.
\section{The calculation}
We assume that the number of background photons $I_b$, measured for a time $t_{b}$, is subtracted from the sample measurement photon count $I_s$, measured for a time $t_s$, to give the background-subtracted count rate $C_{bs}$:
\begin{equation}\label{eq:sIbgs}
C_{bs}=\frac{I_s}{t_s}-\frac{I_b}{t_b}
\end{equation}
Defining the sample uncertainty as $\Delta I_s=\sqrt{I_s}$ and the background uncertainty similarly as $\Delta I_b=\sqrt{I_b}$, the uncertainty $\Delta C_{bs}$ would then be:
\begin{equation}\label{eq:sIerr}
\Delta C_{bs}=\sqrt{\left(\frac{\Delta I_s}{t_s}\right)^2+\left(\frac{\Delta I_b}{t_b}\right)^2 }=\sqrt{\frac{I_s}{t_s^2}+\frac{I_b}{t_b^2}}
\end{equation}
Defining the number of counted photons $I$ to be the product of the count rate $C$ and the measurement time, we get $I_b=C_b t_b$ and $I_s=C_s t_s$. Further defining the signal-to-noise ratio $g=\frac{C_s}{C_b}$, the total time $t_t=t_b+t_s$ and the fraction of time spent measuring the sample $f=\frac{t_s}{t_t}$, we can express our relative uncertainty in terms of the signal-to-noise ratio and time fraction:
\begin{equation}\label{eq:sErel}
\frac{\Delta C_{bs}}{C_{bs}}=\sqrt{\frac{\frac{C_s}{t_s}+\frac{C_b}{t_b}}{(C_s-C_b)^2}}=\sqrt{\frac{1}{C_b t_t}}\sqrt{\frac{\frac{g}{f}+\frac{1}{(1-f)}}{(g-1)^2}}
\end{equation}
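The algebra leading to \eqref{eq:sErel} is easily verified with a computer algebra system. The following short SymPy check (our addition, not part of the original note) substitutes $I_s=C_s t_s$, $I_b=C_b t_b$, $C_s=gC_b$, $t_s=ft_t$, and $t_b=(1-f)t_t$, and confirms that the two forms of the squared relative uncertainty agree:
\begin{verbatim}
import sympy as sp

C_b, g, f, t_t = sp.symbols('C_b g f t_t', positive=True)
C_s, t_s, t_b = g * C_b, f * t_t, (1 - f) * t_t

lhs_sq = (C_s / t_s + C_b / t_b) / (C_s - C_b) ** 2   # middle form, squared
rhs_sq = (1 / (C_b * t_t)) * (g / f + 1 / (1 - f)) / (g - 1) ** 2
assert sp.simplify(lhs_sq - rhs_sq) == 0              # the two forms agree
\end{verbatim}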
We can then find the optimum by locating the value of $f$ where the derivative of equation \ref{eq:sErel} is zero:
\begin{equation}\label{eq:sd1}
\frac{\partial \frac{\Delta C_{bs}}{C_{bs}}}{\partial f}=\frac{\partial}{\partial f}\sqrt{\frac{1}{C_b t_t}}\sqrt{\frac{\frac{g}{f}+\frac{1}{(1-f)}}{(g-1)^2}}=0
\end{equation}
\begin{equation}\label{eq:sd2}
0=\frac{\partial}{\partial f}\sqrt{\frac{\frac{g}{f}+\frac{1}{(1-f)}}{(g-1)^2}}
\end{equation}
which, given $0<f<1$, is true for
\begin{equation}\label{eq:finalf}
f=\frac{g-\sqrt{g}}{g-1}
\end{equation}
We can calculate the relative reduction in uncertainty compared to the 50/50 case (i.e. equal time spent on background and sample measurements) as:
\begin{equation}\label{eq:finalopt}
\frac{\frac{\Delta C_{bs}}{C_{bs}}\mid_\mathrm{50/50}-\frac{\Delta C_{bs}}{C_{bs}}\mid_\mathrm{optimal}}{\frac{\Delta C_{bs}}{C_{bs}}\mid_\mathrm{50/50}}=\frac{\sqrt{\frac{2g+2}{(g-1)^2}} - \sqrt{ \frac{ \frac{g^2-g}{g-\sqrt{g}}+\frac{g-1}{\sqrt{g}-1}}{(g-1)^2}} }{\sqrt{\frac{2g+2}{(g-1)^2}}}
\end{equation}
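As a worked example (our addition, not part of the original derivation), the following Python snippet evaluates \eqref{eq:finalf} and \eqref{eq:finalopt} for a given signal-to-noise ratio and cross-checks the closed-form optimum by direct numerical minimization of the $f$-dependent factor in \eqref{eq:sErel}:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def rel_uncertainty_factor(f, g):
    """The (g, f)-dependent factor of the relative uncertainty:
    sqrt((g/f + 1/(1-f)) / (g-1)**2)."""
    return np.sqrt((g / f + 1.0 / (1.0 - f)) / (g - 1.0) ** 2)

g = 10.0                                 # example signal-to-noise ratio C_s/C_b
f_opt = (g - np.sqrt(g)) / (g - 1.0)     # closed-form optimal sample fraction

# Cross-check the closed form by direct numerical minimization, 0 < f < 1.
res = minimize_scalar(rel_uncertainty_factor, bounds=(1e-6, 1 - 1e-6),
                      args=(g,), method="bounded")
assert abs(res.x - f_opt) < 1e-4

# Relative reduction in uncertainty compared to the 50/50 division.
reduction = 1.0 - (rel_uncertainty_factor(f_opt, g)
                   / rel_uncertainty_factor(0.5, g))
print(f"g = {g}: f_opt = {f_opt:.4f}, "
      f"reduction vs 50/50 = {100 * reduction:.1f}%")
\end{verbatim}
For $g=10$, for instance, the optimal fraction is $f\approx0.76$ and the relative uncertainty is reduced by roughly $11\%$ compared to the 50/50 division.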
\begin{figure}
\centering
\includegraphics[angle=0, width=0.75\textwidth]{optimal_ratio.eps}
\caption{The optimal fraction of time $f$ spent measuring the sample as opposed to measuring the background, as a function of the signal-to-noise ratio $g$.}
\label{fg:finalf}
\end{figure}
\begin{figure}
\centering
\includegraphics[angle=0, width=0.75\textwidth]{reduction_in_error.eps}
\caption{The reduction in error that can be obtained by dividing the time optimally between sample and background measurement, as a function of the signal-to-noise ratio $g$.}
\label{fg:finalopt}
\end{figure}
\section{Conclusions}
Figures \ref{fg:finalf} and \ref{fg:finalopt} show the optimal division of time between sample and background, and the reduction in uncertainty obtained through this optimization, respectively. They make clear that the reduction in uncertainty may be worth the trouble of a quick determination of the signal-to-noise ratio, especially in regions where this ratio deviates strongly from unity.
A quick scan of sample and background may be used to automatically select the optimal division of measurement time, in particular for scanning (small- and wide-angle) diffractometers (including oddities such as Bonse-Hart cameras) where the measurement time \emph{per point} can be freely tuned.
| {
"timestamp": "2012-10-22T02:01:08",
"yymm": "1210",
"arxiv_id": "1210.5329",
"language": "en",
"url": "https://arxiv.org/abs/1210.5329",
"abstract": "Usually, equal time is given to measuring the background and the sample, or even a longer background measurement is taken as it has so few counts. While this seems the right thing to do, the relative error after background subtraction improves when more time is spent counting the measurement with the highest amount of scattering. As the available measurement time is always limited, a good division must be found between measuring the background and sample, so that the uncertainty of the background-subtracted intensity is as low as possible.Herein outlined is the method to determine how best to divide measurement time between a sample and the background, in order to minimize the relative uncertainty. Also given is the relative reduction in uncertainty to be gained from the considered division. It is particularly useful in the case of scanning diffractometers, including the likes of Bonse-Hart cameras, where the measurement time division for each point can be optimized depending on the signal-to-noise ratio.",
"subjects": "Data Analysis, Statistics and Probability (physics.data-an); Instrumentation and Methods for Astrophysics (astro-ph.IM)",
"title": "The optimal division between sample and background measurement time for photon counting experiments",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.975576912786245,
"lm_q2_score": 0.810478913248044,
"lm_q1q2_score": 0.7906845160648777
} |
https://arxiv.org/abs/2302.12152 | Elastic snap-through instabilities are governed by geometric symmetries | Many elastic structures exhibit rapid shape transitions between two possible equilibrium states: umbrellas become inverted in strong wind and hopper popper toys jump when turned inside-out. This snap-through is a general motif for the storage and rapid release of elastic energy, and it is exploited by many biological and engineered systems from the Venus flytrap to mechanical metamaterials. Shape transitions are known to be related to the type of bifurcation the system undergoes, however, to date, there is no general understanding of the mechanisms that select these bifurcations. Here we analyze numerically and analytically two systems proposed in recent literature in which an elastic strip, initially in a buckled state, is driven through shape transitions by either rotating or translating its boundaries. We show that the two systems are mathematically equivalent, and identify three cases that illustrate the entire range of transitions described by previous authors. Importantly, using reduction order methods, we establish the nature of the underlying bifurcations and explain how these bifurcations can be predicted from geometric symmetries and symmetry-breaking mechanisms, thus providing universal design rules for elastic shape transitions. |
\section{Symmetry-breaking of a pitchfork bifurcation: a canonical example}\label{sec:symmetryPitchfork}
We review the canonical example of symmetry-breaking in a system exhibiting a pitchfork bifurcation. It is well known that, starting from a system with a pitchfork bifurcation, the introduction of an additional parameter that breaks the left-right symmetry turns the pitchfork into a saddle-node-like bifurcation.
\bigskip
\par\noindent
\textbf{Canonical saddle node bifurcation.} The canonical form of a saddle-node bifurcation for a one-degree-of-freedom system (say $x_s$) is given by
\begin{equation}
\dot{x}_s=\Delta\mu_s +x_s^2.
\label{eq:saddleNode1stOrder}
\end{equation}
Here, the dot denotes the derivative with respect to time and $\Delta \mu_s$ is the bifurcation parameter. This system admits a potential function $V(x_s)$ such that $\dot{x}_s=-dV(x_s)/dx_s$,
\begin{equation}
V(x_s)=-\Delta\mu_s x_s-\frac{1}{3}x_s^3.
\label{eq:saddleNode1stOrderPotential}
\end{equation}
For $\Delta \mu_s < 0$, \eqref{eq:saddleNode1stOrderPotential} admits one potential well and one energy barrier (Fig. \ref{fig:symmetryPitchfork}C), reflecting one stable equilibrium and one unstable equilibrium of~\eqref{eq:saddleNode1stOrder}. The two equilibria merge and disappear at $\Delta \mu_s = 0$, as pictured in the bifurcation diagram (Fig. \ref{fig:symmetryPitchfork}F).
\bigskip
\par\noindent
\textbf{Canonical pitchfork bifurcation.} The canonical form of a supercritical pitchfork bifurcation is given by
\begin{equation}
\dot{x}=-\Delta\mu x-x^3.
\label{eq:pitchfork1stOrder}
\end{equation}
This system admits a potential function,
\begin{equation}
V(x)=\frac{1}{2}\Delta\mu x^2+\frac{1}{4}x^4.
\label{eq:pitchfork1stOrderPotential}
\end{equation}
Pitchfork bifurcations possess a left-right symmetry. Here, this is reflected by the invariance of \eqref{eq:pitchfork1stOrder} and \eqref{eq:pitchfork1stOrderPotential} through a transformation $x\rightarrow-x$.
For $\Delta\mu<0$, the potential $V(x)$ exhibits two potential wells that are symmetrically distributed around the energy barrier at the origin $x=0$ (Fig. \ref{fig:symmetryPitchfork}A). As $\Delta\mu$ increases, the potential landscape gets deformed symmetrically until, at $\Delta\mu=0$, the two potential wells merge with the energy barrier at the origin. For $\Delta\mu>0$, only a single potential well exists at the origin.
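As a small numerical illustration (our addition), the equilibria of \eqref{eq:pitchfork1stOrder} and their stability can be tabulated directly, classifying each equilibrium by the sign of $V''(x)=\Delta\mu+3x^2$:
\begin{verbatim}
import numpy as np

# Equilibria of the canonical supercritical pitchfork xdot = -dmu*x - x**3:
# x = 0 always, plus the symmetric pair x = +-sqrt(-dmu) when dmu < 0.
# Stability follows from the sign of V''(x) = dmu + 3*x**2.
for dmu in (-1.0, -0.25, 0.5):
    eqs = [0.0]
    if dmu < 0:
        eqs += [np.sqrt(-dmu), -np.sqrt(-dmu)]  # symmetric pair of wells
    for x in eqs:
        stable = dmu + 3 * x**2 > 0             # V'' > 0 <=> potential well
        print(f"dmu={dmu:+.2f}  x*={x:+.3f}  "
              f"{'stable' if stable else 'unstable'}")
\end{verbatim}
The output reflects the picture above: for $\Delta\mu<0$ the origin is unstable and the symmetric pair is stable, while for $\Delta\mu>0$ only the stable equilibrium at the origin remains.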
\bigskip
\par\noindent
\textbf{Canonical symmetry-breaking of a pitchfork bifurcation.}
When an asymmetry is introduced in~\eqref{eq:pitchfork1stOrder}, it becomes an imperfect pitchfork,
\begin{equation}
\dot{x}=-\Delta\mu x-x^3+h,
\label{eq:imperfectPitchfork}
\end{equation}
for which the potential function is given by
\begin{equation}
V(x)=\frac{1}{2}\Delta\mu x^2+\frac{1}{4}x^4-hx.
\end{equation}
Here, the parameter $h>0$ represents the asymmetry. This changes the nature of the bifurcation dramatically. Even for $|h|\ll 1$, the loss of symmetry makes one of the potential wells reach the energy barrier before the other, leading to a sudden disappearance of these two equilibria -- the stable equilibrium at the potential well and the unstable equilibrium at the energy barrier (Fig. \ref{fig:symmetryPitchfork}B). Locally, at the bifurcation point where these two equilibria merge
before disappearing, the bifurcation has the form of a saddle-node \cite{strogatz1994} (see Fig. \ref{fig:symmetryPitchfork}C and \ref{fig:symmetryPitchfork}F).
\begin{figure}[t]
\centering
\includegraphics[width =\textwidth]{figsSupplemental/Fig1.pdf}
\caption{{\textbf{Pitchfork bifurcation and symmetries} Pitchfork bifurcations are known to arise in systems with binary symmetries (i.e., left-right, top-bottom, etc.). Breaking this symmetry alters the nature of the transition into a saddle-node bifurcation. (First row) Potential landscape (green lines) associated with a \textbf{A.} supercritical pitchfork, \textbf{B.} imperfect supercritical pitchfork, \textbf{C.} saddle-node bifurcation for different values of the bifurcation parameter $\Delta\mu$. Equilibria (dot symbols) are highlighted for each system. (Second row) Bifurcation diagrams representing the evolution of these equilibria as a function of the bifurcation parameter (full lines for stable equilibria and dashed lines for unstable ones) for \textbf{D.} the system with supercritical pitchfork, \textbf{E.} the system with broken symmetry exhibiting an imperfect pitchfork bifurcation, and \textbf{F.} the system with saddle-node bifurcation. Note the local equivalence (see grey boxes in panels B and E) between the imperfect pitchfork and saddle-node bifurcations.}}
\label{fig:symmetryPitchfork}
\end{figure}
This is formally demonstrated as follows. We remark that this bifurcation occurs when the local minimum of the right-hand side $f(x, \Delta\mu)=-\Delta\mu x-x^3+h$ of \eqref{eq:imperfectPitchfork} touches the zero axis. We solve for $x^*$ such that $f'(x^*, \Delta\mu)=0$ and $\Delta\mu^*$ such that $f(x^*, \Delta\mu^*)=0$ and find the coordinates of the bifurcation point (see Fig. \ref{fig:symmetryPitchfork}E):
\begin{equation}
\Delta\mu^{*}=-3\left(\frac{h}{2}\right)^{2/3},\qquad x^*=-\sqrt{-\frac{\Delta\mu^{*}}{3}}.
\end{equation}
We then seek an asymptotic approximation of \eqref{eq:imperfectPitchfork} in the vicinity of this bifurcation point. We set $x_s=x-x^*$ and $\Delta\mu_s=\Delta\mu-\Delta\mu^*$ so that the bifurcation occurs at $(\Delta\mu_s=0, x_s=0)$. We introduce these new variables in \eqref{eq:imperfectPitchfork} and write an asymptotic expansion in the limit $x_s \ll 1$, $\Delta\mu_s \ll 1$. We get:
\begin{equation}
\dot{x}_s=|x^*|(\Delta\mu_s+3x_s^2)+O(\Delta\mu^{3/2}),
\label{eq:saddleImperfect}
\end{equation}
which is precisely (up to numerical constants) the normal form of a saddle-node bifurcation (see \eqref{eq:saddleNode1stOrder}). The bifurcation diagram associated with this asymptotic approximation (black lines) is compared to that of the imperfect pitchfork in Fig. \ref{fig:symmetryPitchfork}E, and the two agree well in the immediate vicinity of the bifurcation point.
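The following short numerical sketch (our addition) reproduces this computation for a representative asymmetry $h$: it counts the real equilibria of \eqref{eq:imperfectPitchfork} on either side of the fold and locates the fold by imposing $f=f'=0$, recovering $\Delta\mu^{*}=-3(h/2)^{2/3}$:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

h = 0.05                                    # small symmetry-breaking parameter
dmu_star = -3.0 * (h / 2.0) ** (2.0 / 3.0)  # predicted fold location

def equilibria(dmu):
    """Real roots of f(x) = -dmu*x - x**3 + h (the equilibria)."""
    r = np.roots([-1.0, 0.0, -dmu, h])
    return np.sort(r[np.abs(r.imag) < 1e-9].real)

# Just below dmu*, three equilibria; just above, the fold has removed two.
assert len(equilibria(dmu_star - 1e-3)) == 3
assert len(equilibria(dmu_star + 1e-3)) == 1

# Locate the fold as the point where the local minimum of f touches zero.
def fold_condition(dmu):
    x = -np.sqrt(-dmu / 3.0)                # local minimum of f for dmu < 0
    return -dmu * x - x**3 + h

dmu_num = brentq(fold_condition, -2.0, -1e-9)
assert np.isclose(dmu_num, dmu_star)        # matches the closed form
\end{verbatim}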
We note that the symmetry-breaking mechanism reviewed here, which turns a pitchfork into a saddle-node bifurcation, also holds for a subcritical pitchfork bifurcation, whose normal form is simply obtained by applying the transformation $(x\rightarrow-x,\ t\rightarrow-t)$ to \eqref{eq:pitchfork1stOrder}.
\begin{table}[!b]
\caption{Strip's geometric and material parameters}
\begin{equation*}
\begin{array}{|c | c | c|}
\hline
\multicolumn{3}{|c|}{\textbf{Numerics}} \\
\hline
\textbf{Parameter} & \textbf{Value} & \textbf{units}\\
\hline
L & 0.156 & \text{meters (m)}\\
\hline
b & 2\times 10^{-2} & \text{meters (m)}\\
\hline
h & 1\times 10^{-3} & \text{meters (m)}\\
\hline
\rho & 1.35\times 10^{3} & \text{kilograms per cubic meter (kg$\cdot$m$^{-3}$)}\\
\hline
E & 3.7\times 10^{9} & \text{Pascals (Pa)}\\
\hline
G & 3.7\times 10^{9} & \text{Pascals (Pa) }\\
\hline
S_1=S_2=S_3 & 1\times 10^{5} & \text{Newtons (N)}\\
\hline
\end{array} \qquad
\begin{array}{|l|}
\hline
\multicolumn{1}{|c|}{\textbf{Experiments of Gomez et al.~\cite{gomez2017}}} \\
\hline
\multicolumn{1}{|c|}{\text{Polyethylene terephthalate (PET)}} \\
\hline
\rho = 1.337\times 10^{3} \quad \text{kg$\cdot$m$^{-3}$}\\
E = 5.707\times 10^{9} \quad \text{Pa}\\
L \in \{0.240, 0.290, 0.430\} \ \text{m} \\
\hline
\multicolumn{1}{|c|}{\text{Stainless steel}} \\
\hline
\rho = 7.881\times 10^{3}\quad \text{kg$\cdot$m$^{-3}$}\\
E = 203.8\times 10^{9} \quad \text{Pa}\\
L \in \{0.140, 0.280\} \ \text{m} \\
\hline
\end{array}
\label{eq:cosseratRodParameters}
\end{equation*}
\label{tab:param}
\end{table}
\section{Mathematical methods}
\label{sec:math}
\par\noindent
\textbf{A. Numerical simulations based on the Cosserat rod theory.} We consider an elastic strip of length $L$ and rectangular cross-section of width $b$ and thickness $h$; see, e.g., Fig.~\ref{fig:towardInitialCondition}. We numerically integrate the discrete Cosserat rod equations governing the dynamics of the strip's centerline $\mathbf{r}(s,t)$ and orientation tensor $\mathbf{Q}(s,t)$ using our own implementation of the method described in \cite{gazzola2018}.
Here, $s$ is arclength and $t$ is time. We integrate the discrete equations forward in time to obtain the equilibrium configuration of the Euler-buckled strip, as well as the equilibrium configuration of the buckled elastic strip under further translational and rotational boundary actuation.
In all numerical simulations, we use the set of dimensional parameters listed in Table~\ref{tab:param}. This set of parameters corresponds physically to strips made from plastic sheets as in the experiments of \cite{sano2018}. Throughout the study,
we set $\Delta L=L/100$, except for the snapshots showing the 3D rendering where we used $\Delta L=L/20$ for illustration purposes.
\bigskip
\par\noindent
\textbf{B. Euler beam model.}
We also analyze the equilibria of the elastic strip and their stability using the Euler beam model, which affords semi-analytical results. In the small-deflection limit, the transverse displacement $W$ of the elastic strip in the $y$-direction is described by the linear Euler beam equation \cite{timoshenko2009}, which in non-dimensional form reads
\begin{equation}
\frac{\partial^2 W}{\partial T^2}+\frac{\partial^4W}{\partial X ^4}+\Lambda^2 \frac{\partial^2 W}{\partial X^2}=0.
\label{eq:beam_equation_nodim}
\end{equation}
Here, $\Lambda^2$ is the non-dimensional longitudinal compression applied to the elastic strip, $X\in[-1/2,1/2]$ is the non-dimensional longitudinal coordinate, and $T$ is the non-dimensional time.
This partial differential equation is complemented by the non-linear geometric (inextensibility) constraint \cite{pandey2014, gomez2017},
\begin{equation}
\int_{-1/2}^{1/2}\left(\frac{\partial W}{\partial X}\right)^2dX=2 .
\label{eq:geometrical_constraint_nodim}
\end{equation}
The system formed by \eqref{eq:beam_equation_nodim} and \eqref{eq:geometrical_constraint_nodim} must be complemented by a set of four appropriate boundary conditions. These boundary conditions depend on the type of boundary actuation, as summarized in Table \ref{tab:dimensional_boundary_conditions}.
\bigskip
\par\noindent
\textbf{C. Static equilibria.}
The static equilibria $W_\textrm{eq}(X)$ of the elastic strip are solutions of the steady counterpart of~\eqref{eq:beam_equation_nodim},
\begin{equation}
\frac{d^4 W_\textrm{eq}}{d X ^4}+\Lambda_\textrm{eq}^2 \frac{d^2 W_\textrm{eq}}{d X^2}=0,
\label{eq:beam_equation_nodim_static}
\end{equation}
whose general solution is of the form
\begin{equation}
W_\textrm{eq}(X)=A\sin(\Lambda_\textrm{eq} X)+B\cos(\Lambda_\textrm{eq} X)+ CX+D.
\label{eq:sol_general_static}
\end{equation}
Here, $A$, $B$, $C$, $D$ are four unknown constants that must be chosen so that \eqref{eq:sol_general_static} satisfies the appropriate boundary conditions given in Table \ref{tab:dimensional_boundary_conditions}.
Writing the boundary conditions of the elastic strip yields a system of equations of the form,
\begin{equation}
\mathbf{M}\mathbf{v}=\mathbf{b},
\label{eq:linearSystem}
\end{equation}
where $\mathbf{v}=(A,B,C,D)$. The geometric constraint
\eqref{eq:geometrical_constraint_nodim} implies that the equilibrium configurations must also satisfy
\begin{equation}
\int_{-1/2}^{1/2} \left(\dfrac{\partial W_\textrm{eq}}{\partial X}\right)^2 dX = 2.
\label{eq:geometrical_constraint_equilibrium}
\end{equation}
The system of equations in \eqref{eq:linearSystem} and \eqref{eq:geometrical_constraint_equilibrium} determines the eigenvalue $\Lambda_{\textrm{eq}}$ and eigenfunction $W_\textrm{eq}(X)$ by providing conditions to solve for $\Lambda_\textrm{eq}$ and $(A,B,C,D)$, as discussed in detail in sections~\ref{sec:buckling}, \ref{sec:translation}, and~\ref{sec:rotation}.
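For the homogeneous (Euler buckling) case discussed next, the procedure reduces to finding the roots of $\det(\mathbf{M})=0$; the constraint \eqref{eq:geometrical_constraint_equilibrium} then fixes the amplitude (for the fundamental clamped-clamped mode, for instance, one finds $W_\textrm{eq}=(1+\cos 2\pi X)/\pi$). The following Python sketch (our addition, written out for the clamped-clamped conditions $W(\pm1/2)=W'(\pm1/2)=0$) illustrates the root finding:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def M_cc(lam):
    """Boundary matrix for W = A sin(lam X) + B cos(lam X) + C X + D
    with clamped-clamped conditions W(+-1/2) = 0, W'(+-1/2) = 0."""
    s, c = np.sin(lam / 2), np.cos(lam / 2)
    return np.array([
        [-s,       c,        -0.5, 1.0],   # W(-1/2) = 0
        [ s,       c,         0.5, 1.0],   # W(+1/2) = 0
        [lam * c,  lam * s,   1.0, 0.0],   # W'(-1/2) = 0
        [lam * c, -lam * s,   1.0, 0.0],   # W'(+1/2) = 0
    ])

def det_M(lam):
    return np.linalg.det(M_cc(lam))

# Scan for sign changes of det(M) and polish each root with brentq.
grid = np.linspace(0.1, 16.0, 3000)
vals = [det_M(x) for x in grid]
roots = [brentq(det_M, a, b) for a, b, fa, fb in
         zip(grid[:-1], grid[1:], vals[:-1], vals[1:]) if fa * fb < 0]
print(np.round(roots, 4))
# -> 2*pi ~ 6.2832 (U pair), ~8.9868 (S pair, tan(x/2) = x/2),
#    4*pi ~ 12.5664 (W pair), ~15.4505, ...
\end{verbatim}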
\section{Euler Buckling}
\label{sec:buckling}
An initially straight elastic strip subject to a compression force undergoes a buckling instability through a supercritical pitchfork bifurcation beyond a given threshold (see, e.g.,~\cite{nayfeh2008, howell2009}). The threshold at which this instability occurs depends on the boundary conditions. Four types of boundary conditions are typically discussed in classic texts (see, e.g.,~\cite{timoshenko2009}): clamped-clamped (CC), hinged-hinged (HH), clamped-hinged (CH), and clamped-free. The clamped-free case has the lowest buckling threshold and the clamped-clamped case the highest. Here, we are interested in the first three boundary conditions: CC, HH, and CH.
\bigskip
\par\noindent
\textbf{A. Equilibria.}
For each set of boundary conditions, the buckled strip admits an infinite hierarchy of static equilibria that come in pairs of increasing bending energy $\mathcal{E}_b$. The pair of static equilibria that share the smallest energy level have a U-like shape, and thus we label them U\textsubscript{A} and U\textsubscript{B}. The pair of equilibria at the next energy level have an S-like shape and are labeled S\textsubscript{A} and S\textsubscript{B}, and the pair of equilibria at the third energy level have a W-like shape and are labeled W\textsubscript{A} and W\textsubscript{B}. These static equilibria are obtained by two methods: numerically using the Cosserat rod theory (Fig. \ref{fig:towardInitialCondition}) and semi-analytically using the Euler beam model. The semi-analytic solutions are listed in Table \ref{tab:dimensional_boundary_conditions}. In contrast to the frame of reference used in the main text, in Table \ref{tab:dimensional_boundary_conditions} $x$ is measured from the left end of the strip.
\bigskip
\par\noindent
\textbf{B. Stability.}
Linear stability analysis of the static equilibria of the Euler-buckled strip in the CC, HH, and CH cases shows that only the fundamental harmonic pair U\textsubscript{A,B} is stable; all higher harmonic pairs, both even and odd, are unstable, including S\textsubscript{A,B} and W\textsubscript{A,B}.
\bigskip
\par\noindent
\textbf{C. Invariance and symmetries in Euler buckling.}
The equations of motion governing the elastic strip are invariant under the following three transformations: the top-bottom transformation $w\rightarrow-w$, the left-right transformation $x\rightarrow-x$, and the $\pi$-rotational transformation $w\rightarrow-w$ and $x\rightarrow-x$. This invariance is not unique to the Euler beam model; it is also a characteristic of the 3D Cosserat rod theory.
The CC and HH boundary conditions respect the invariance of the governing equations under all three transformations, while the CH boundary conditions respect only the invariance under the top-bottom transformation; see Fig.~\ref{fig:towardInitialCondition} and Table~\ref{tab:dimensional_boundary_conditions}.
\begin{figure}[!t]
\centering
\includegraphics[width =\textwidth]{figsSupplemental/Fig2.pdf}
\caption{{\textbf{Euler buckling and symmetries} When the two ends of a strip are pushed towards each other, the strip passes from a straight configuration (left column of \textbf{A,C,E}) to a buckled configuration following a standard Euler-buckling instability. This is true for clamped (\textbf{A,B}), hinged (\textbf{C,D}), or mixed boundary conditions (\textbf{E,F}). In each case, the system is bistable, with two stable buckled states U\textsubscript{A} and U\textsubscript{B}. The original straight beam admits different symmetries depending on the type of boundary conditions (shown in black in the top panels in the left column of \textbf{A,C,E}). When buckling occurs, the buckled configuration conserves some of these symmetries (shown in green in the middle left column of \textbf{A,C,E}) and breaks others (shown in red in the middle right column of \textbf{A,C,E}). Conserved symmetries map a solution to itself and broken symmetries map a solution to its twin solution (see Fig.~\ref{fig:eulerBucklingSymmetries}).
(\textbf{B, D, F}) Numerical solutions (green lines) are compared to analytical solutions based on the Euler beam model (black lines).
Higher-order buckling modes S\textsubscript{A} and W\textsubscript{A} from the Euler beam model (black dashed lines) are superimposed.}}
\label{fig:towardInitialCondition}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width =\textwidth]{figsSupplemental/Fig3.pdf}
\caption{{\textbf{Twin symmetries of the Euler-buckled strip.} The transformation corresponding to a conserved symmetry (green) maps a buckled configuration to itself. The transformation corresponding to a broken symmetry (red) maps a buckled configuration to its twin. We call the $\pi$-rotational symmetry that maps the U-shapes to one another the \textit{U-twin symmetry} and the left-right symmetry that maps the S-shapes to one another the \textit{S-twin symmetry}.}}
\label{fig:eulerBucklingSymmetries}
\end{figure}
In Euler buckling, the transition from the straight strip equilibrium to any of the buckling harmonics is associated with a spontaneous loss of geometric symmetry of the solution, but not with a loss of invariance of the governing equations of motion.
The invariance of \eqref{eq:beam_equation_nodim}, \eqref{eq:geometrical_constraint_nodim}, and the CC or HH boundary conditions (Table \ref{tab:dimensional_boundary_conditions}) under the three transformations discussed here guarantees that the image of a solution under any of these transformations is also a solution.
In Fig. \ref{fig:eulerBucklingSymmetries}, we consider the equilibrium U\textsubscript{A} (black line), to which we apply the three transformations. The image of U\textsubscript{A} under the \textit{top-bottom} reflection or the \textit{$\pi$-rotation} transformation is the twin solution U\textsubscript{B} and vice-versa, while the \textit{left-right} reflection maps U\textsubscript{A} to itself and U\textsubscript{B} to itself.
Similarly, we show in Fig. \ref{fig:eulerBucklingSymmetries} that the image of S\textsubscript{A} under the \textit{top-bottom} reflection or \textit{left-right} reflection is the twin solution S\textsubscript{B} and vice-versa, while the \textit{$\pi$-rotation} transformation maps S\textsubscript{A} to itself and S\textsubscript{B} to itself.
More generally, the stable U-shapes, as well as the twin shapes of all even harmonics, are related by a $\pi$-rotation about the midpoint of the line connecting the endpoints of the strip. The unstable S-shapes, as well as the twin-shapes of all odd harmonics, are related by a left-right reflection about the line orthogonal to the line connecting the endpoints at its midpoint. We refer to the $\pi$-rotation transformation as the U-twin symmetry and left-right reflection as the S-twin symmetry. The U-twin and S-twin symmetries play an important role in studying shape transitions of the buckled strip under boundary actuation.
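These twin relations are straightforward to check numerically. In the sketch below (our addition), we use the clamped-clamped fundamental and first odd modes in the compact forms $W_U\propto 1+\cos(2\pi X)$ and $W_S\propto \sin(\Lambda X)-2X\sin(\Lambda/2)$ with $\tan(\Lambda/2)=\Lambda/2$; the amplitudes are arbitrary here, as only the symmetry relations are tested:
\begin{verbatim}
import numpy as np

X = np.linspace(-0.5, 0.5, 2001)           # symmetric grid on [-1/2, 1/2]

U_A = 1.0 + np.cos(2 * np.pi * X)          # even (U-shaped) CC mode
lam = 8.986818916                          # root of tan(lam/2) = lam/2
S_A = np.sin(lam * X) - 2 * X * np.sin(lam / 2)  # odd (S-shaped) CC mode

def pi_rotation(W):   # w -> -w, x -> -x
    return -W[::-1]

def left_right(W):    # x -> -x
    return W[::-1]

# U-twin symmetry: pi-rotation maps U_A to its twin -U_A, and S_A to itself.
assert np.allclose(pi_rotation(U_A), -U_A)
assert np.allclose(pi_rotation(S_A), S_A)
# S-twin symmetry: left-right reflection maps S_A to its twin, U_A to itself.
assert np.allclose(left_right(S_A), -S_A)
assert np.allclose(left_right(U_A), U_A)
print("twin-symmetry relations verified")
\end{verbatim}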
For the CH boundary condition, the \textit{top-bottom symmetry} is the only symmetry at play, and it is lost for any harmonic of buckling. The two other symmetries in the CC and HH cases do not exist here, as the boundary conditions break invariance of the governing system under the left-right reflection (see Table \ref{tab:dimensional_boundary_conditions}). There is no \textit{twin transformation} with this set of boundary conditions.
\section{Translational boundary actuation of the buckled strip}
\label{sec:translation}
The buckled strip is now driven through a shape transition by the transverse translation of one boundary, as realized experimentally by Sano \& Wada \cite{sano2018}.
Here, we reproduce and expand the results of~\cite{sano2018} numerically using the Cosserat rod theory and semi-analytically using the Euler beam model.
\bigskip
\par\noindent
\textbf{A. Numerical simulations based on the Cosserat rod theory.}
For each set of boundary conditions, CC, HH, and CH, starting from the stable configuration U\textsubscript{A} of the Euler-buckled strip, we translate the left boundary of the elastic strip in the transverse direction $\mathbf{e}_y$ by an amount $d$ until a maximum value $d_{\textrm{max}}$, allowing the elastic strip to reach mechanical equilibrium at each value $d \in [0,d_{\textrm{max}}]$; see Fig. \ref{fig:forwardBackwardSW} (first row).
We repeat the same process starting from the other buckled configuration U\textsubscript{B}; see Fig. \ref{fig:forwardBackwardSW} (second row). In all three cases (CC, HH, and CH), bistability is lost beyond a critical value $d^\ast$. The values at which we observe the transition are given in Table \ref{tab:muStarNumRot} in terms of the dimensionless bifurcation parameter $\mu_d = d/\sqrt{L\Delta L}$ discussed below.
\bigskip
\par\noindent
\textbf{B. Equilibria based on Euler beam model.} The boundary conditions due to the transverse misalignment $d$ between the two boundaries
can be written in the context of the non-dimensional Euler beam model as (see Table~\ref{tab:dimensional_boundary_conditions})
\begin{equation}
\left. W\right|_{X=-1/2}=\frac{d}{\sqrt{L\Delta L}}\equiv \mu_d, \qquad \left. W\right|_{X=1/2}=0.
\label{eq:adim_misalignment}
\end{equation}
Here, we introduced the non-dimensional parameter $\mu_d = {d}/{\sqrt{L\Delta L}}$, which balances the vertical position $d$ imposed at the boundary with the natural vertical position $\sqrt{L\Delta L}$ induced by the end-to-end shortening.
\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth]{figsSupplemental/Fig4.pdf}
\caption{{\textbf{Equilibrium configurations of the Euler-buckled strip under translational actuation.} Starting from the equilibrium shapes U\textsubscript{A,B} of the Euler-buckled strip, these equilibria morph into different shapes as a misalignment $d$ is introduced between the two boundaries. \textbf{A-C.} Evolution of the U\textsubscript{A} configuration for different $d$ values for the CC, HH, and CH boundary conditions.
\textbf{D-F.} Evolution of the U\textsubscript{B} configuration for different $d$.
In the insets in \textbf{A-F}, we compare the equilibrium shapes obtained numerically (green lines show the centerline of the Cosserat rod) to the equilibrium shapes obtained analytically (black lines). \textbf{G-I.} Evolution of the non-dimensional midpoint deflection of the strip as a function of the non-dimensional misalignment parameter $\mu_d$. The green symbols represent data from numerical Cosserat simulations and the lines data from the Euler beam analysis (full lines for stable equilibrium and dashed lines for unstable ones).}}
\label{fig:forwardBackwardSW}
\end{figure}
The static equilibria corresponding to the buckled strip driven into translational boundary actuation are obtained from the eigenvalue problem (\ref{eq:linearSystem},\ref{eq:geometrical_constraint_equilibrium}) associated with the non-homogeneous version of~\eqref{eq:linearSystem} accounting for the translational boundary actuation. The three sets of CC, HH, and CH boundary conditions are listed in Table~\ref{tab:dimensional_boundary_conditions}. The CC, HH, and CH boundary conditions give rise to distinct forms of the matrix $\mathbf{M}$, but the translational actuation $\mu_d$ gives rise to the same vector $\mathbf{b}$ on the right-hand side of~\eqref{eq:linearSystem}. The eigensolutions of the resulting system for each set of boundary conditions are listed in Table~\ref{tab:eigenTranslational}. We note that the solutions in Table~\ref{tab:eigenTranslational} are given for $X\in [0,1]$, as they take a more compact form there than on the interval $X\in [-1/2,1/2]$ adopted in the main paper.
These equilibrium solutions are compared to the equilibria obtained numerically in Fig. \ref{fig:forwardBackwardSW}. In each case, this analysis corroborates our numerical results: bistability is lost above a certain threshold $\mu_d=\mu_d^*$. The values of $\mu_d^*$ obtained from the Euler beam analysis are summarized in Table \ref{tab:muStarNumRot} along with those obtained from the numerical simulations.
\begin{table}
\caption{\textbf{Bifurcation values} of the control parameters $\mu_d$ and $\mu$ obtained numerically using discrete Cosserat simulations for $\Delta L/L=10^{-2}$ and (semi)-analytically using the Euler beam model. Values without decimal are analytically exact while values with decimals are approximate.}
\begin{tabular}{l|c|c}
\toprule
\multicolumn{3}{c}{$\textbf{TRANSLATIONAL ACTUATION}$}\\
\toprule
& Numerical &
Analytical \\
& $\mu_d^\ast$ & $\mu_d^\ast$ \\\toprule
$\textbf{Clamped-Hinged:}$ & 1.15 & 1.17\\[1.5ex]
$\textbf{Hinged-Hinged:}$ & 1.40 & $\sqrt{2}$\\[1.5ex]
$\textbf{Clamped-Clamped:}$ & 1.14 & $\sqrt{4/3}$\\[1.5ex]
\hline
\end{tabular}
\hspace{0.5in}
\begin{tabular}{l|c|c}
\toprule
\multicolumn{3}{c}{$\textbf{ROTATIONAL ACTUATION}$}
\\
\toprule
& Numerical &
Analytical \\
& $\mu^\ast$ & $\mu^\ast$ \\\toprule
$\textbf{Asymmetric:}$ & 1.763 & 1.782 \\[1.5ex]
$\textbf{Symmetric:}$ & 1.973 & 2 \\[1.5ex]
$\textbf{Antisymmetric:}$ & 1.967 & 2 \\[1.5ex]
\hline
\end{tabular}
\label{tab:muStarNumRot}
\end{table}
\section{Rotational boundary actuation of the buckled elastic strip}
\label{sec:rotation}
We numerically and analytically reproduce and expand the results of~\cite{gomez2017} for a clamped-clamped buckled strip driven into rotational boundary actuation. Starting from the clamped-clamped buckled strip, we actuate the strip by rotating one or both ends.
Specifically, we consider three types of rotational boundary actuation: asymmetric where only one end is rotated by an angle $\alpha$, symmetric where both ends are rotated by the same angle $\alpha$ in two opposite directions, and antisymmetric where the two ends are rotated by the same angle $\alpha$ in the same direction (see Fig.~\ref{fig:forwardBackwardGMV}).
\bigskip
\par\noindent
\textbf{A. Numerical simulations based on the Cosserat rod theory.}
For each type of boundary actuation, we start from the initial configuration U\textsubscript{A} and we increase $\alpha$ by small increments $\Delta\alpha$ until a maximum $\alpha_{\textrm{max}}$ is reached, allowing the elastic strip to reach an equilibrium configuration at each value $\alpha \in [0,\alpha_{\textrm{max}}]$. We repeat the same process starting from the initial configuration U\textsubscript{B}. Representative equilibrium configurations obtained for select $\alpha$ values are shown in Fig.~\ref{fig:forwardBackwardGMV}D,~\ref{fig:forwardBackwardGMV}E,~\ref{fig:forwardBackwardGMV}F.
In all three types of boundary actuation, bistability is lost at a critical value $\alpha^\ast$. The values at which we observe the transition are given in Table \ref{tab:muStarNumRot} in terms of the dimensionless bifurcation parameter $\mu = \alpha \sqrt{L/\Delta L}$ discussed below.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figsSupplemental/Fig5.pdf}
\caption{\textbf{Equilibrium configurations of the Euler-buckled strip under rotational actuation.} One or both ends of the CC strip are (quasi-statically) rotated by an angle $\alpha$, leading to loss of bistability as $\alpha$ increases. \textbf{A,B.} Asymmetric actuation (one end is rotated) and symmetric actuation (both ends are rotated by the same amount in opposite directions) lead to violent snap-through. \textbf{C.} Antisymmetric actuation (both ends are rotated by the same amount in the same direction) leads to a smooth transition. \textbf{D-F.} Evolution of the shapes U\textsubscript{A}, U\textsubscript{B} and S\textsubscript{B} for different values of $\mu$. The green lines represent the centerline of the Cosserat rod. The black lines represent the equilibrium shapes obtained from the static analysis of the Euler beam model. \textbf{G-I.} Midpoint deflection of the strip as a function of the bifurcation parameter $\mu = \alpha \sqrt{L/\Delta L}$. Green squares represent data obtained from numerical simulations based on the Cosserat rod theory. Solid and dashed lines represent, respectively, stable and unstable branches obtained from the static analysis of the Euler beam model.}
\label{fig:forwardBackwardGMV}
\end{figure}
\bigskip
\par\noindent
\textbf{B. Equilibria based on Euler beam model.}
The boundary conditions for asymmetric, symmetric, and antisymmetric actuation are given in Table~\ref{tab:dimensional_boundary_conditions} in the limit of small angle $\alpha \ll 1$. Specifically, in terms of the dimensionless parameter $\mu = \alpha\sqrt{L/\Delta L}$, the boundary conditions at the rotated end(s) are
\begin{equation}
\textrm{asymmetric:} \left.\frac{\partial W}{\partial X}\right|_{X=-1/2}=
\mu,
\qquad
\textrm{symmetric:} \left.\frac{\partial W}{\partial X}\right|_{X=-1/2,1/2}= \pm\mu,
\qquad
\textrm{antisymmetric:} \left.\frac{\partial W}{\partial X}\right|_{X=-1/2,1/2}= \mu.
\label{eq:boundary_smallslope_adim}
\end{equation}
The non-dimensional bifurcation parameter $\mu = \alpha \sqrt{{L}/{\Delta L}}$, first introduced by Gomez et al. \cite{gomez2017,gomez2018},
balances the slope $\alpha$ imposed at the boundary with the natural slope $\sqrt{\Delta L/L}$ adopted by the buckled strip.
If $\alpha$ remains small compared to $\sqrt{\Delta L/L}$ ($\mu\ll1$), the angle imposed at the boundary has a small influence on the overall shape of the strip compared to the influence of the longitudinal geometrical constraint. If on the other hand $\mu\sim O(1)$, the shape of the strip will be influenced both by the end-to-end shortening and by the angle imposed at the boundaries.
The static equilibria corresponding to the buckled strip driven into rotational boundary actuation are obtained from the eigenvalue problem associated with the non-homogeneous version of~\eqref{eq:linearSystem} accounting for the boundary conditions in~\eqref{eq:boundary_smallslope_adim}.
Each set of boundary conditions (asymmetric, symmetric, and antisymmetric) in Table~\ref{tab:dimensional_boundary_conditions} gives rise to a vector $\mathbf{b}$ on the right-hand side of~\eqref{eq:linearSystem}. (Semi)-analytic solutions to this eigenvalue problem are summarized in Table~\ref{tab:eigenRotational}.
These equilibrium solutions are compared to the equilibria obtained numerically in Fig. \ref{fig:forwardBackwardGMV}. In each case, this analysis corroborates our numerical results: there is loss of bistability above a certain threshold $\mu=\mu^*$. The values of $\mu^*$ obtained from the Euler beam analysis are summarized in Table~\ref{tab:muStarNumRot} along with those obtained from the numerical simulations.
\section{Geometric symmetry breaking and its effect on the eigenvalue problem}
\label{sec:eigenstructure}
The families of equilibrium solutions obtained under translational and rotational actuation of the strip's boundaries can be compared to the families of equilibria of the Euler-buckled strip. Of particular interest is the role of the translational and rotational boundary actuation in breaking the twin symmetries identified in the Euler-buckled strip (see \S\ref{sec:buckling}). Four categories emerge from this comparison.
Consider first the translational actuation and antisymmetric rotational actuation of the clamped-clamped strip. In these cases, the non-homogeneous system admits two families of solutions: one is inherited from the even harmonics found in the homogeneous system, the other is obtained by inverting \eqref{eq:linearSystem}. This means that, for these two sets of boundary conditions, the right-hand side $\mathbf{b}$ lies in the subspace described by the family of even harmonics of the homogeneous problem. This is a mathematical illustration of the fact that the $\pi$\textit{-rotational twin symmetry} associated with the even buckling harmonics is satisfied by this actuation. There is no family of solutions inherited from the odd harmonics of the homogeneous system because the right-hand side $\mathbf{b}$ in \eqref{eq:linearSystem} breaks the \textit{left-right twin symmetry} associated with this family of solutions.
Consider now the translational actuation of the hinged-hinged strip. The non-homogeneous system admits solutions for all values of $\Lambda$ for which $\det(\mathbf{M})=0$. This means that $\mathbf{b}$ lies in the subspace described by the even and odd harmonics of buckling of the homogeneous system. This is a mathematical illustration of the fact that this actuation satisfies the twin symmetry of all buckled equilibria associated with the homogeneous problem. Indeed, as the strip is hinged at both ends, the ends always remain aligned with the line joining them, and the equilibria therefore satisfy the reflection symmetry about the line connecting the strip's endpoints.
Next consider the case of symmetric rotational actuation, in which the non-homogeneous system admits two families of solutions: one is obtained by inverting \eqref{eq:linearSystem} and the other is inherited from the odd harmonics of buckling of the homogeneous system. This means that the right-hand side $\mathbf{b}$ lies in the subspace described by the odd harmonics of the homogeneous system. This is a mathematical illustration of the fact that this set of boundary conditions does not break the \textit{left-right twin symmetry} associated with the odd harmonics of buckling. The family corresponding to the even harmonics of the homogeneous system is not inherited because the right-hand side $\mathbf{b}$ breaks the $\pi$\textit{-rotational twin symmetry} associated with this family of solutions.
Lastly, consider the case of translational actuation of the clamped-hinged strip and the asymmetric rotational actuation of the clamped-clamped strip. Both these configurations admit only one family of solutions, obtained by inverting \eqref{eq:linearSystem}. There is no family of solutions inherited from the solutions of the homogeneous system. This means that the solutions corresponding to values of $\Lambda$ for which $\det(\mathbf{M})$ vanishes are no longer solutions to the non-homogeneous system in \eqref{eq:linearSystem}. That is, the right-hand side $\mathbf{b}$ does not lie in the subspace described by the solutions of the homogeneous system. This is a mathematical illustration of the fact that this set of boundary conditions breaks all the twin symmetries of the Euler buckling problem.
\section{Force measurements in the case of translational boundary actuation}
In this section, we discuss the force measured by Sano \& Wada~\cite{sano2018} for the translationally actuated strip, and we reinterpret their observations based on the symmetry breaking mechanism introduced in the main text. Sano \& Wada~\cite{sano2018} measured the transverse force (in the $y$ direction) applied by the beam on the left clamped boundary (at $X=0$). They observed that in the clamped-hinged case, the force plotted as a function of the bifurcation parameter $\mu_d$ exhibits a hysteresis depending on whether the beam is in the U\textsubscript{A} or U\textsubscript{B} configuration. Surprisingly, this hysteresis is not observed in the clamped-clamped and hinged-hinged cases. This hysteresis is actually a consequence of the symmetry breaking mechanism unravelled in the main paper.
\begin{figure}[!t]
\centering
\includegraphics[width =\textwidth]{figsSupplemental/Fig8.pdf}
\caption{{\textbf{Transverse force applied by the strip on the left boundary.} Evolution of the non-dimensional transverse force applied on the left clamped boundary of the strip under translational actuation (see Fig.~\ref{fig:forwardBackwardSW}) in terms of the non-dimensional bifurcation parameter $\mu_d$. The force is obtained analytically from the Euler beam model (solid lines) and numerically from the discrete Cosserat equations (symbols).}}
\label{fig:internalForce}
\end{figure}
Here, we exploit both our numerical simulations and the Euler beam model to compute this force.
In our simulations, the force is obtained by ``measuring'' the force applied by the beam on the boundary, as done experimentally in \cite{sano2018}. The obtained values are reported as dots in Fig.~\ref{fig:internalForce}.
In the Euler-beam model, the non-dimensional transverse force $F_{\textrm{int}}(X)$ applied at abscissa $X$ along the strip is given by
\begin{equation}
F_{\textrm{int}}(X)=-\frac{\partial^3 W}{\partial X^3}-\Lambda^2\frac{\partial W}{\partial X},
\label{eq:internal_force}
\end{equation}
where the first term on the right-hand side corresponds to bending forces and the second term to compression forces. We substitute the analytical shape of the strip at equilibrium (see Table~\ref{tab:eigenTranslational}) into~\eqref{eq:internal_force} to calculate the transverse force applied at the left end of the strip for each of the three cases: CH, HH, and CC.
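As a concrete illustration, the evaluation of \eqref{eq:internal_force} can be automated symbolically. In the SymPy sketch below, the trial shape $W(X)$ is a placeholder assumption rather than an entry of Table~\ref{tab:eigenTranslational}; for this particular sinusoidal shape the bending and compression contributions cancel exactly, echoing the force balance discussed for the hinged-hinged case below.
\begin{verbatim}
import sympy as sp

# Transverse force F_int(X) = -W'''(X) - Lambda^2 * W'(X) evaluated
# symbolically; the shape below is a placeholder, to be replaced by
# the equilibrium expressions of the tables in practice.
X, Lam = sp.symbols("X Lambda", real=True)
W = sp.sin(Lam * (X + sp.Rational(1, 2)))   # placeholder trial shape

F_int = -sp.diff(W, X, 3) - Lam**2 * sp.diff(W, X)
print(sp.simplify(F_int))                   # prints 0: the terms cancel
\end{verbatim}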
\bigskip
\par\noindent
\textbf{A. Clamped-Hinged case.}
We compute the transverse force for the case $d=0$ and $d \neq 0$ separately.
When the misalignment is zero ($d=0$), the transverse force \eqref{eq:internal_force} is computed using the expression of $W(X)$ in Table~\ref{tab:eigenEulerBuckling} for U\textsubscript{A} and U\textsubscript{B}
\begin{equation}
\begin{split}
F_{\textrm{int}}^{\textrm{U}_\textrm{A}}=-F_{\textrm{int}}^{\textrm{U}_\textrm{B}} = \displaystyle{\frac{2 \sqrt{2} \Lambda^3}{\sqrt{\Lambda \left(2 \Lambda^3-\Lambda^2 \sin (2 \Lambda)-8 \sin (\Lambda)+\sin (2 \Lambda)+8 \Lambda \cos (\Lambda)-2 \Lambda \cos (2 \Lambda)\right)}}}.
\end{split}
\end{equation}
Here, $F_{\textrm{int}}^{\textrm{U}_\textrm{A}}$ is the transverse force applied by the strip on the left boundary when the strip is in the U\textsubscript{A} configuration. Similarly, $F_{\textrm{int}}^{\textrm{U}_\textrm{B}}$ is the transverse force when the strip is in the U\textsubscript{B} configuration.
The force is non-zero for $d=0$ and depends on which side the strip buckles, U\textsubscript{A} or U\textsubscript{B}. This is a consequence of the asymmetry between the clamped and hinged boundary conditions. The moment applied on the strip by the left clamped boundary induces a bending moment in the strip that can only be balanced by a transverse force at the right hinged end (because no moment can be applied at a hinged end); this force, in turn, must be balanced by a transverse force at the left end to guarantee mechanical equilibrium of the system.
For $d\neq 0$, the transverse force \eqref{eq:internal_force} is computed from the equilibrium expressions for U\textsubscript{A} and U\textsubscript{B} given in Table~\ref{tab:eigenTranslational}. We obtain the same expression for both U\textsubscript{A} and U\textsubscript{B}:
\begin{equation}
F_{\textrm{int}}^{\textrm{U}_A}=F_{\textrm{int}}^{\textrm{U}_B}=\frac{\Lambda^3 \cos (\Lambda)}{\Lambda \cos (\Lambda)-\sin (\Lambda)}\frac{d}{\sqrt{L\Delta L}}.
\label{eq:internalForceCH}
\end{equation}
This means that the transverse forces $F_{\textrm{int}}^{\textrm{U}_A}$ and $F_{\textrm{int}}^{\textrm{U}_B}$ would be the same only if the eigenvalue $\Lambda$ associated with U\textsubscript{A} were the same as the one associated with U\textsubscript{B}. This is not the case because of the symmetry breaking mechanism identified in the main paper. As soon as $d\neq0$, U\textsubscript{A} and U\textsubscript{B} are no longer energetically equivalent (see main text) and their eigenvalues $\Lambda$ are no longer equal. This explains the strong hysteresis observed in \cite{sano2018}.
In Fig.~\ref{fig:internalForce}A, we plot the analytically obtained transverse force for the U\textsubscript{A}, U\textsubscript{B} and S\textsubscript{B} configurations (solid lines) on top of the numerical values obtained from the Cosserat model (green symbols). The analytical branches (solid lines) compare well with the numerical values (green symbols). Our data confirm the strong hysteresis observed experimentally by Sano \& Wada~\cite{sano2018}.
\bigskip
\par\noindent
\textbf{B. Hinged-Hinged Case.}
We calculate the transverse force from \eqref{eq:internal_force} using the equilibrium shapes U\textsubscript{A} and U\textsubscript{B} for the hinged-hinged setup (see Table~\ref{tab:eigenTranslational}),
\begin{equation}
F_{\textrm{int}}^{\textrm{U}_\textrm{A}}=F_{\textrm{int}}^{\textrm{U}_\textrm{B}}=\Lambda^2\frac{d}{\sqrt{L\Delta L}}.
\label{eq:internal_force_HH1}
\end{equation}
In contrast to the clamped-hinged case, the expression for the transverse force is independent of the shape. When $d=0$, the transverse force associated with bending (first term on the right-hand side of \eqref{eq:internal_force}) is balanced by the transverse force due to tension in the beam (second term on the right-hand side of \eqref{eq:internal_force}) such that the resultant is zero. When $d \neq 0$, the part of the transverse force that is not balanced by the bending force is due to the compression force $\Lambda^2$ acting in the direction of the line connecting the two hinged boundaries. The projection of this force along $\mathbf{e}_y$ yields \eqref{eq:internal_force_HH1}.
Thus, there is no hysteresis: the transverse force measured along $\mathbf{e}_y$ at $X=0$ is not shape dependent but is due only to the compression force $\Lambda^2$, which remains aligned with the line connecting the two boundaries and is thus no longer perpendicular to $\mathbf{e}_y$ when $d \neq 0$.
In Fig. \ref{fig:internalForce}B, we plot the force in \eqref{eq:internal_force_HH1} as a function of the bifurcation parameter $\mu_d$ (solid lines) and we compare it to the value of the force at the left boundary obtained numerically (green symbols) for both U\textsubscript{A} and U\textsubscript{B}. The transverse force obtained numerically compares well with the analytical branches, and both are exactly the same for U\textsubscript{A} and U\textsubscript{B}, confirming the results obtained by Sano \& Wada~\cite{sano2018}.
\bigskip
\par\noindent
\textbf{C. Clamped-Clamped Case.}
As $d$ increases, the strip transitions from the U\textsubscript{A, B} configuration to the S\textsubscript{B} configuration. We compute the transverse force associated with U\textsubscript{A, B} and S\textsubscript{B} separately.
Consider the expression $W(X)$ of the U\textsubscript{A} and U\textsubscript{B} configurations given in Table~\ref{tab:eigenTranslational}, and substitute this expression in \eqref{eq:internal_force}. We obtain the expression for the transverse force,
\begin{equation}
F_{\textrm{int}}^{\textrm{U}_\textrm{A}}=F_{\textrm{int}}^{\textrm{U}_\textrm{B}}=\Lambda^2\frac{d}{\sqrt{L\Delta L}}.
\label{eq:internal_force_CC1}
\end{equation}
Here, we have analytical evidence of the symmetry property of the transverse force, observed in \cite{sano2018} in a numerical experiment. As in the Hinged-Hinged case, it is surprising that, although the shape is reversed, the force applied by the beam on the boundary is the same in the U\textsubscript{A} and U\textsubscript{B} configurations. When the two clamped boundaries are aligned ($d=0$), the transverse force is zero: the forces associated with the bending moments in the beam (first term in \eqref{eq:internal_force}) exactly balance the forces induced by the compression force (second term in \eqref{eq:internal_force}). When a misalignment ($d\neq0$) is applied between the two boundaries, the compression force $\Lambda^2$ acting in the direction of the line connecting the two clamped boundaries is no longer perpendicular to $\mathbf{e}_y$, and its projection on $\mathbf{e}_y$ yields \eqref{eq:internal_force_CC1}. Therefore, the transverse force in the beam is independent of the shape of the beam and depends only on the misalignment between the two boundaries. This explains why, for a given $\mu_d$ value, the forces are exactly the same in the U\textsubscript{A} and U\textsubscript{B} configurations.
For the S\textsubscript{B} configuration, the transverse force takes the form
\begin{equation}
F_{\textrm{int}}=\frac{\Lambda^3\cos(\Lambda/2)}{\Lambda\cos(\Lambda/2)-2\sin(\Lambda/2)}\frac{d}{\sqrt{L\Delta L}}.
\label{eq:internal_force_CC2}
\end{equation}
In Fig. \ref{fig:internalForce}C, we show the two branches of solution for the U\textsubscript{A, B} and S\textsubscript{B} configurations and we compare these results to values of the force applied by the beam on the left boundary obtained numerically. The analytical branches compare well with the numerical values. These forces are exactly the same for U\textsubscript{A} and U\textsubscript{B}, confirming the result obtained by Sano \& Wada~\cite{sano2018}.
\section{Equivalence between translational and rotational boundary actuation}
\label{sec:equiv}
We show that translational actuation of the Euler-buckled strip, implemented experimentally in~\cite{sano2018} and analyzed in detail in section~\ref{sec:translation}, is equivalent to the rotational actuation of the Euler-buckled strip, studied experimentally and analytically in~\cite{gomez2017} and analyzed in detail in section~\ref{sec:rotation}.
\begin{figure}[!t]
\centering
\includegraphics[width =\textwidth]{figsSupplemental/Fig6.pdf}
\caption{{\textbf{Lagrangian frame of reference.} If we observe the system from a frame attached to the line that connects the two ends of the strip, the translational actuation becomes a rotational actuation. \textbf{A.} 3D rendering of the two equilibria U\textsubscript{A} and U\textsubscript{B} obtained numerically with the t-CC actuation and observed from the Eulerian frame $(\mathbf{e}_x, \mathbf{e}_y, \mathbf{e}_z)$. \textbf{B.} Analytical equilibria obtained for the t-CC actuation, depicted in the $(\mathbf{e}_x, \mathbf{e}_y, \mathbf{e}_z)$ Eulerian frame. \textbf{C.} Analytical equilibria obtained for the t-CC actuation, depicted in the $(\mathbf{b}_x, \mathbf{b}_y, \mathbf{b}_z)$ Lagrangian frame. \textbf{D.} Bifurcation diagram obtained analytically for the t-CC case. \textbf{E.} Comparison of the bifurcation diagram obtained for the t-CC case (green lines) and expressed in the Lagrangian frame $(\mathbf{b}_x, \mathbf{b}_y, \mathbf{b}_z)$ with the bifurcation diagram obtained for the antisymmetric case (black lines).}}
\label{fig:referentialOfTheBeam}
\end{figure}
We introduce a Lagrangian frame of reference $(\mathbf{b}_x, \mathbf{b}_y, \mathbf{b}_z)$ whose $x$ axis remains aligned with the line joining the two ends of the strip (Fig. \ref{fig:referentialOfTheBeam}B). For the translational actuation, a mapping from the Eulerian frame $(\mathbf{e}_x, \mathbf{e}_y, \mathbf{e}_z)$ to this new frame of reference corresponds to a rigid body rotation by an angle $\beta=-\arctan(\Delta y / (L-\Delta L))$ about the $\mathbf{e}_z$ axis. The dimensional coordinates of any point $\mathbf{r}$ in space can thus be transformed from the Eulerian frame to the Lagrangian frame using the rigid-body rotation defined by
\begin{equation}
\mathbf{R}=\begin{pmatrix}
\cos\beta & \sin\beta & 0\\
-\sin\beta & \cos\beta & 0\\
0&0&1
\end{pmatrix}.
\label{eq:matrixChangeOfFrame}
\end{equation}
Consider the case of translational actuation of the clamped-clamped strip. In the Eulerian frame $(\mathbf{e}_x, \mathbf{e}_y, \mathbf{e}_z)$, the boundary conditions in dimensional form are given by (see Fig. \ref{fig:referentialOfTheBeam}B)
\begin{equation}
\left.w\right|_{x=-\frac{1}{2}(L-\Delta L)}=\frac{\Delta y}{2},
\qquad
\left.\frac{\partial w}{\partial x}\right|_{x=-\frac{1}{2}(L-\Delta L)}=0,
\qquad
\left.w\right|_{x=\frac{1}{2}(L-\Delta L)}=-\frac{\Delta y}{2},
\qquad
\left.\frac{\partial w}{\partial x}\right|_{x=\frac{1}{2}(L-\Delta L)}=0.
\label{eq:bcsEulerian}
\end{equation}
The boundary conditions in the Lagrangian frame $(\mathbf{b}_x, \mathbf{b}_y, \mathbf{b}_z)$, see Fig. \ref{fig:referentialOfTheBeam}C, are obtained by applying the rigid-body rotation in~\eqref{eq:matrixChangeOfFrame} to~\eqref{eq:bcsEulerian},
\begin{equation}
\left.w\right|_{x=-H/2}=0,
\qquad
\left.\frac{\partial w}{\partial x}\right|_{x=-H/2}=\tan\beta,
\qquad
\left.w\right|_{x=H/2}=0,
\qquad
\left.\frac{\partial w}{\partial x}\right|_{x=H/2}=\tan\beta.
\label{eq:bcsLagrangian}
\end{equation}
Here, we introduced $H=\sqrt{\Delta y^2+(L-\Delta L)^2}$. The new set of boundary conditions \eqref{eq:bcsLagrangian} shows that, once observed in the proper frame of reference, the translational actuation is equivalent to a rotation of the boundaries by an angle $\beta$ while the boundaries are pulled away from each other.
To find the corresponding non-dimensional angle applied at the boundaries, we need to identify the natural horizontal and vertical length scales in the new frame of reference. According to \eqref{eq:bcsLagrangian}, we take $H$ to be the horizontal length scale. Following \cite{gomez2017}, the natural vertical length scale is obtained by expressing the horizontal confinement,
\begin{equation}
\int_{-L/2}^{L/2}\cos\left(\theta(s)\right)ds=H
\label{eq:horizontalConfinement}
\end{equation}
where $\theta(s)$ is the local angle made by the beam with $\mathbf{b}_x$ at the curvilinear coordinate $s$ along the beam. Writing \eqref{eq:horizontalConfinement} in the limit $\theta\ll 1$, we find the natural vertical length scale $w\sim\sqrt{H(L-H)}$.
Thus, once in the Lagrangian frame of reference, we non-dimensionalize the quantities $w$ and $x$ using
\begin{equation}
X=\frac{x}{H}\qquad W=\frac{w}{\sqrt{H(L-H)}},
\label{eq:adimQuantitiesNewRef}
\end{equation}
and obtain the corresponding non-dimensional angle imposed at the boundaries
\begin{equation}
\mu=\frac{\Delta y}{L-\Delta L}\sqrt{\frac{H}{L-H}}.
\label{eq:adimAngleNewRef}
\end{equation}
The transformation \eqref{eq:matrixChangeOfFrame} and the non-dimensionalization \eqref{eq:adimQuantitiesNewRef} allow us to map the bifurcation diagram associated with the translational actuation (Fig. \ref{fig:referentialOfTheBeam}D) into a new bifurcation diagram corresponding to a rotational actuation in the new frame of reference (Fig. \ref{fig:referentialOfTheBeam}E, green lines). The comparison of this new bifurcation diagram with the bifurcation diagram obtained in the case of antisymmetric rotational actuation (Fig. \ref{fig:referentialOfTheBeam}E, black lines) shows that translational actuation of the clamped-clamped strip is equivalent to antisymmetric rotational actuation of the strip. The only difference is that, in the case of antisymmetric rotational actuation, $\mu$ is increased by varying the angle imposed at the boundaries whereas, according to \eqref{eq:adimAngleNewRef}, in the translational actuation, $\mu$ is increased by increasing the angle $\beta$ imposed at the boundaries while decreasing the end-to-end confinement.
We now apply the change of frame of reference in~\eqref{eq:matrixChangeOfFrame} and the non-dimensionalization in~\eqref{eq:adimQuantitiesNewRef} to the case of translational actuation of the hinged-hinged strip. With hinged boundaries, there is no torque applied by the boundaries at the endpoints and the boundary conditions in the Lagrangian frame become
\begin{equation}
\left.w\right|_{x=-H/2}=0,
\qquad
\left.\frac{\partial^2 w}{\partial x^2}\right|_{x=-H/2}=0,
\qquad
\left.w\right|_{x=H/2}=0,
\qquad
\left.\frac{\partial^2 w}{\partial x^2}\right|_{x=H/2}=0.
\label{eq:bcsLagrangiantHH}
\end{equation}
Thus, the problem is homogeneous in this reference frame. This corresponds to the Euler buckling problem studied in \S\ref{sec:buckling}, except that here the two boundaries are being pulled apart (instead of being pushed towards each other) until the strip returns to its straight configuration. This actuation therefore satisfies all the twin symmetries of the Euler buckling problem. This explains why the non-homogeneous system solved in \S\ref{sec:buckling} for this case admits solutions for all the eigenvalues obtained in the homogeneous case, as discussed in Section~\ref{sec:eigenstructure}.
Lastly, we apply the change of frame of reference in~\eqref{eq:matrixChangeOfFrame} and the non-dimensionalization in~\eqref{eq:adimQuantitiesNewRef} to the case of translational actuation of the clamped-hinged strip. We obtain the following set of boundary conditions
\begin{equation}
\left.w\right|_{x=-H/2}=0,
\qquad
\left.\frac{\partial w}{\partial x}\right|_{x=-H/2}=\tan\beta,
\qquad
\left.w\right|_{x=H/2}=0,
\qquad\left.\frac{\partial^2 w}{\partial x^2}\right|_{x=H/2}=0.
\label{eq:bcsLagrangiantCH}
\end{equation}
This case corresponds to a clamped-hinged strip actuated by rotating the left boundary by an angle $\beta$ while pulling the two boundaries away from each other. This actuation therefore breaks the twin symmetry of both the U and S shapes and exhibits a saddle-node bifurcation, as in the asymmetric case studied in \cite{gomez2017} and in the main paper.
\section{Methods for plotting bifurcation diagrams}\label{sec:bifurcationDiagrams}
In Figs.~1, 2 and 4 of the main text, as well as in Figs.~\ref{fig:forwardBackwardSW}, \ref{fig:forwardBackwardGMV}, and~\ref{fig:referentialOfTheBeam} of this document, we plot diagrams that illustrate the evolution of the static equilibria of the strip as a function of boundary actuation. We quantify changes in the strip's equilibrium states by tracking the transverse deflection of a single vertex of the infinite-dimensional strip: to highlight the evolution of the initially stable pair of equilibria U\textsubscript{A,B} and the initially unstable pair S\textsubscript{A,B}, we plot the midpoint deflection as a function of the bifurcation parameter $\mu_d$ for translational actuation and as a function of $\mu$ for rotational actuation.
On each diagram, we show two sets of results: results obtained based on numerical simulations of the nonlinear Cosserat rod theory and results based on the quasilinear Euler beam model.
To identify unambiguously the nature of the bifurcation in each system, we conduct a rigorous asymptotic analysis (see companion paper~\cite{radisson2022PRE}) that leads to reduced normal forms (Eqns. (4) and (5) in the main text) describing the nature of the underlying bifurcation. These equations reduce the strip dynamics to a one-degree-of-freedom system in the vicinity of the bifurcation. In Fig.~2 of the main text, the bifurcation diagrams associated with the reduced forms Eqns. (4) and (5) of the main text are compared to: (i) data obtained from the analysis of the Euler beam model, (ii) numerical data obtained from solving the discrete Cosserat equations, and (iii) experimental data obtained in \cite{gomez2017}. In the following, we describe the method we employed to plot these diagrams.
\bigskip
\par\noindent
\textbf{A. Reduced forms.}
The equilibrium points $A_\textrm{eq}$ are obtained by solving the stationary version of Eqns. (4) and (5) of the main text. Results are plotted as a function of $\Delta \mu$ in Fig. 2G-I of the main text. Their stability is assessed through a standard stability analysis of the equilibria $A_\textrm{eq}$ according to the dynamics described by the reduced equations. Stable branches are depicted in black solid lines and unstable branches in black dashed lines. See \cite{radisson2022PRE} for details.
\bigskip
\par\noindent
\textbf{B. Euler beam model.} These bifurcation diagrams are compared to the equilibrium solutions obtained from the static analysis of the geometrically constrained Euler beam model. In the case of asymmetric boundary actuation, the equilibrium amplitudes $A_{\textrm{eq}_1}$ and $A_{\textrm{eq}_2}$ of (4) are to be compared to
amplitudes associated with the S\textsubscript{B} and U\textsubscript{A} shapes of the elastic strip.
In the case of symmetric boundary actuation, the equilibrium amplitudes $A_{\textrm{eq}_1}$, $A_{\textrm{eq}_2}$, and $A_{\textrm{eq}_3}$ of (5) are to be compared to amplitudes associated with the U\textsubscript{A}, S\textsubscript{B} and S\textsubscript{A} shapes of the elastic strip. Lastly, for the antisymmetric boundary actuation, the amplitudes $A_{\textrm{eq}_1}$, $A_{\textrm{eq}_2}$, and $A_{\textrm{eq}_3}$ of (5) are to be compared to the
amplitudes associated with S\textsubscript{B}, U\textsubscript{B} and U\textsubscript{A} shapes
of the elastic strip. To calculate the amplitude associated with the equilibria of the elastic strip, we use the approximation
\begin{equation}
A_{\textrm{eq}}\approx\frac{W_{\textrm{eq}}(X)-W_{\textrm{eq}}^*(X)}{\Phi_0(X)}
\label{eq:A_approx}
\end{equation}
For each equilibrium (except U\textsubscript{A} in the symmetric case and S\textsubscript{B} in the antisymmetric case; see below), this amplitude is plotted at the mid-point of the strip ($X=0$) in the asymmetric and antisymmetric cases and at the quarter-point ($X=1/4$) in the symmetric case. They converge in each case to the equilibria of (4) and (5) (black lines) in the limit $\Delta\mu\ll 1$. We note that, as $\Phi_0$ is the mode along which the strip departs from its bifurcation shape at leading order, this amplitude can be plotted for any value of $X$ and will always converge to the equilibria of the amplitude equations in the immediate vicinity of the bifurcation.
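A minimal numerical version of this extraction is sketched below; all three shape functions are hypothetical placeholders standing in for $W_{\textrm{eq}}$, $W_{\textrm{eq}}^*$, and $\Phi_0$.
\begin{verbatim}
import numpy as np

# Amplitude A_eq ~ (W_eq(X) - W_eq*(X)) / Phi_0(X) at a chosen abscissa.
# All three shapes below are placeholders, not the actual equilibria.
def W_eq(X):   return 0.10 * (1 + np.cos(2 * np.pi * X))
def W_star(X): return 0.08 * (1 + np.cos(2 * np.pi * X))
def Phi0(X):   return 1 + np.cos(2 * np.pi * X)

X = 0.0    # midpoint (use X = 0.25 in the symmetric case)
print((W_eq(X) - W_star(X)) / Phi0(X))   # 0.02 with these placeholders
\end{verbatim}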
For the U\textsubscript{A} shape in the symmetric case and the S\textsubscript{B} shape in the antisymmetric case, a scaling analysis (see \cite{radisson2022PRE}) shows that the largest component of the departure of these modes from the shape at bifurcation is of order $O(\Delta\mu)$. Therefore, for $\Delta\mu\ll1$, these data contain no contribution at the leading order $O(\Delta\mu^{1/2})$, and $A$ is simply set to zero.
All the obtained branches are plotted in Fig. 2 in green for U\textsubscript{A} and U\textsubscript{B} and brown for S\textsubscript{A} and S\textsubscript{B} with dashed (respectively full) lines for unstable (respectively stable) equilibria.
\bigskip
\par\noindent
\textbf{C. Numerical simulations based on the 3D Cosserat rod theory.} The reduced amplitude $A_\textrm{eq}$ of the equilibrium shapes computed numerically is obtained following the same procedure as described in the previous section, although only the stable solutions are accessible in these numerical simulations. The obtained branches of static solutions are plotted in Fig. 2 using green square symbols.
\bigskip
\par\noindent
\textbf{D. Experimental data of Gomez et al.}
We compare the bifurcation diagram in the case of asymmetric boundary actuation in Fig. 2G to experimental data obtained by Gomez et al. \cite{gomez2017}. The amplitude $A_{\textrm{eq}}$ from their data is obtained from the truncated expansion of the equilibrium shapes $W_\textrm{eq}(X)$ as done for the data based on the Euler beam model and Cosserat rod theory.
\bigskip
\par\noindent
\textbf{E. Comment on the location of the bifurcation point.}
In plotting data from the Euler beam model, from numerical simulations of the 3D Cosserat rod theory, and from the experiments of Gomez et al. \cite{gomez2017}, we shifted the data in each case to place the corresponding bifurcation point at $\Delta\mu=0$. Indeed, each set of data predicts a slightly different value of the bifurcation threshold $\mu^\ast$. In~\cite{gomez2017}, the actual bifurcation point associated with each set of measurements was taken from a parabolic fit of their measurement in the vicinity of the transition (see \cite{gomez2017} Supplemental Material).
Shifting the data in Fig. 2 of the main text so that the corresponding bifurcation point is at $\Delta\mu=0$ allows us to focus on the way the strip departs from its configuration at the bifurcation point, rather than on the quality of the prediction of the bifurcation point itself. The Euler beam framework is valid only in the small deflection limit, so the prediction of the bifurcation point will not be valid for large $\Delta L$ values. Our numerical simulations show that, although the actual bifurcation point moves away from that predicted by the Euler beam model when $\Delta L$ is increased, the behavior predicted in the vicinity of the bifurcation is robust as long as the distance $\Delta\mu$ is measured from the corresponding bifurcation point. This is also true for the experimental data of \cite{gomez2017}: although they observe variations in the position of the bifurcation point (see Fig. S2 in their supplemental document), the way the strip departs from its bifurcation shape seems to be in good agreement with the predictions based on the Euler beam model and the Cosserat rod theory. Indeed, although the prediction of the exact value of the bifurcation point may not be reliable as $\Delta L$ is increased, we expect the nature of the bifurcation predicted by our analysis, and therefore the temporal and spatial scalings around the bifurcation, to be robust, as they are dictated by rules of symmetry only (see main text). For a reliable prediction of the bifurcation point, a fully non-linear analysis such as the one carried out by Sano \& Wada \cite{sano2018} should be used instead (Fig. 3B in their paper shows how the prediction obtained in the small deflection limit departs from the actual value when the longitudinal strain is increased).
\section{Methods for plotting energy landscapes}
\label{sec:energyplot}
The energy landscape plotted in Fig. 4 of the main text is a simplified version of the actual energy landscape of the system at $\mu=0$. It exhibits two potential wells corresponding to the two fundamental Euler buckling modes U\textsubscript{A} and U\textsubscript{B} and two lowest energy barriers corresponding to the unstable equilibria S\textsubscript{A} and S\textsubscript{B} that prevent the system from switching freely from U\textsubscript{A} to U\textsubscript{B} and vice versa.
To obtain this simplified energy landscape, the bending energy $\mathcal{E}_b=EI/2\int_{-1/2}^{1/2}w_{xx}^2dx$ of each static equilibrium obtained from the static analysis of the beam equation is plotted in a 2D space spanned by $W_S(0)$, the midpoint deflection of the symmetric modes (U, W), and $W_A(-1/4)$, the deflection of the antisymmetric modes of buckling (S) at $X=-1/4$. By convention, the antisymmetric component of the symmetric modes is set to zero and vice versa. The two fundamental modes U\textsubscript{A} and U\textsubscript{B} have opposite values of $W(0)$ because they are images of each other under the transformation $X\rightarrow -X$, $W\rightarrow-W$. On the energy landscape they are therefore symmetrically distributed around the direction $W_A(1/4)$ and have the same bending energy. In the same way, S\textsubscript{A} and S\textsubscript{B} have opposite values of $W(1/4)$ because they are images of each other under the transformation $X\rightarrow -X$. They are also energetically equivalent, but their bending energy is higher than that of U\textsubscript{A} and U\textsubscript{B}. Finally, W\textsubscript{A} and W\textsubscript{B} are symmetric and therefore have a zero antisymmetric component ($W_A(1/4)=0$). In addition, their midpoint deflection is zero, so their component $W_S(0)$ is also zero and they both lie at the origin of this 2D space. Their bending energy is even higher than that of the two S modes. Even in the simple space adopted here, the energy landscape is not quite this simple, because the higher harmonics of buckling constitute additional energy bumps on both axes, depending on whether they are symmetric or antisymmetric. Here we have represented only the first three harmonics. However, all the harmonics that are not represented correspond to higher bending energy states, and the two S shapes therefore constitute the two lowest energy barriers preventing the system from transitioning from U\textsubscript{A} to U\textsubscript{B} or vice versa.
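The bending energy that places each equilibrium on this landscape can be evaluated from any sampled shape; in the sketch below, the sinusoidal test shape and $EI=1$ are assumptions standing in for the equilibria obtained from the beam analysis.
\begin{verbatim}
import numpy as np

# Bending energy E_b = (EI/2) * integral of w_xx^2 dx for a sampled
# shape, using finite differences and the trapezoidal rule.
EI = 1.0
x = np.linspace(-0.5, 0.5, 2001)
w = 0.05 * (1.0 + np.cos(2.0 * np.pi * x))   # placeholder U-like shape

w_xx = np.gradient(np.gradient(w, x), x)     # second derivative
f = w_xx**2
E_b = 0.5 * EI * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))
print(E_b)
\end{verbatim}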
\begin{figure}[!t]
\centering
\includegraphics[width =\textwidth]{figsSupplemental/Fig7.pdf}
\caption{{\textbf{Energy landscapes.} Plot of the potential landscape $V(A)$ associated with \eqref{eq:saddleNodeFinalForm}, \eqref{eq:subcriticalPitchforkFinalForm} and \eqref{eq:supercriticalPitchforkFinalForm} for each actuation. In each case, the potential landscape is represented for three values of $\Delta\mu$ with one right before the bifurcation ($\Delta\mu<0$), one at the bifurcation ($\Delta\mu=0$), and one after the bifurcation ($\Delta\mu>0$)}. Equilibria are indicated. The evolution of the potential landscape corresponds in each case to the one plotted schematically on Fig.~4 of the main paper.}
\label{fig:potentialLandscapesAsymptotic}
\end{figure}
When the boundaries of the strip are rotated, this standard energy landscape is reshaped until one (or both) of the two lowest energy barriers ``breaks'', thus allowing the system to transition from one state to another. The deformed energy landscapes are plotted in Fig. 4 of the main paper for the three types of rotational actuation: asymmetric, symmetric, and antisymmetric. The representation of these energy landscapes in Fig. 4 is semi-schematic: we compute the energy values at the equilibria in a rigorous manner, but the shape of the energy landscape between two consecutive equilibria is schematic. This semi-schematic representation is not completely ad hoc. We know from our asymptotic analysis and the reduced normal forms that the energy landscape in the vicinity of the bifurcation has the actual shape of the potential landscape associated with a saddle-node, subcritical pitchfork, and supercritical pitchfork for the asymmetric, symmetric, and antisymmetric actuation, respectively. The potential landscapes associated with the reduced forms are plotted in Fig. \ref{fig:potentialLandscapesAsymptotic} for three distinct values of the bifurcation parameter $\Delta \mu = \mu - \mu^\ast$. They form the backbone on which we build the complete, although simplified, energy landscapes in Fig. 4 of the main text.
\section{Tapered strip}
Our work provides tools to predict the type of shape transition an elastic structure is likely to undergo by examining the geometric symmetries of the system. This allows us to design systems that achieve a desired kind of shape transition. The goal of this section is to provide the reader with an example of how these findings can be applied.
Suppose we want to design a system in which a buckled elastic strip clamped at both ends undergoes a non-linear snap-through, via a saddle-node bifurcation, while being actuated by antisymmetric rotation of its boundaries. To obtain a saddle-node bifurcation, the actuation needs to break the twin symmetries of both the fundamental U-modes and the S-modes, which constitute the lowest energy barriers.
Our study showed that, for a geometrically homogeneous strip, antisymmetric actuation results in a supercritical pitchfork bifurcation because, although it breaks the twin symmetry of the S-modes, it satisfies the twin symmetry of the U-modes: antisymmetric boundary actuation breaks the S-twin symmetry (left-right transformation $x\rightarrow-x$) and maintains the U-twin symmetry ($\pi$-rotation $x\rightarrow-x$ and $w\rightarrow-w$). To achieve a saddle-node bifurcation with this actuation, we must break the U-twin symmetry. This can be achieved by simply using a tapered strip instead of a homogeneous one. By tapering the strip's cross-sectional area, the $x \rightarrow -x$ symmetry is suppressed, and with it both the U-twin and S-twin symmetries.
We assess these predictions using numerical simulations based on the 3D Cosserat rod theory, and analysis based on the geometrically-constrained Euler beam model, for an antisymmetrically actuated clamped-clamped strip whose thickness $h$ decreases linearly along its length. All remaining parameters of the strip are the same as those adopted throughout this study.
In Fig. 4 of the main text, we show the effect of the tapering on the two equilibrium shapes U\textsubscript{A} and U\textsubscript{B} when a zero slope ($\mu=0$) is imposed at the boundaries. Specifically, we decreased the thickness $h$ of the strip linearly from $h=2$ mm at the left end of the strip to $h=1$ mm at the right end of the strip. This strong tapering is chosen to make the left-right symmetry breaking on the equilibrium shapes visually obvious.
We now analyze the evolution of the equilibria U\textsubscript{A} and U\textsubscript{B} when the boundaries are rotated in an antisymmetric fashion. In the following, we study a strip with a tapering going from $h=2$ mm (left) to $h=1.6$ mm (right). This weaker tapering is chosen in order to see how a slight symmetry breaking affects the transition observed for the homogeneous strip. We apply the same methodology employed throughout this study: we quasi-statically increase the angle $\alpha$ applied at the boundaries until the strip starts to stretch along its length. Contrary to the homogeneous strip, the transition from a bistable system to a system with only one equilibrium is no longer smooth: when a certain value $\mu^\ast \approx 1.6098316$ of the non-dimensional angle applied at the boundaries is reached, the U\textsubscript{A} equilibrium suddenly snaps through to the U\textsubscript{B} equilibrium. This is shown in the bifurcation diagram in Fig. \ref{fig:taperedBeam}B, where the data obtained numerically for the tapered strip (green symbols) are compared to the bifurcation diagram obtained analytically for the homogeneous strip (black lines).
In order to confirm the nature of this shape transition, we analyze how the dynamics slow down as the strip approaches the transition. We measure the eigenfrequency of the fundamental mode of vibration around the equilibrium U\textsubscript{A} at different distances $\Delta\mu=\mu-\mu^\ast$ from the bifurcation point $\mu^\ast$. For each $\Delta \mu$ value, we start the simulation with the corresponding equilibrium configuration. At $t=0$, we apply a sudden `kick' to the strip by applying a small point force at one of the vertices of the Cosserat rod. The resulting dynamics is then analyzed by performing a Fourier transform of the dynamic evolution of the midpoint deflection of the strip, from which we extract the fundamental frequency (see \cite{radisson2022PRE} for methods). The obtained values are then plotted against $|\Delta\mu|$ in Fig. 4 of the main text (green dots). The analytical solution for the homogeneous strip, obtained from the asymptotic analysis in the vicinity of the bifurcation, is plotted on the same figure (black line) for comparison. The observed slowing down follows the scaling $\sqrt{|\sigma^2|}\sim|\Delta\mu|^{1/4}$, which is to be compared with the scaling obtained for the homogeneous strip, $\sqrt{|\sigma^2|}\sim|\Delta\mu|^{1/2}$ (Fig. 4 of the main text, black line). These two scalings are known to be the signatures of second-order-in-time saddle-node and pitchfork bifurcations, respectively (see main text). Their presence proves that the symmetry breaking due to the tapering of the strip has turned the supercritical pitchfork observed for this actuation in the case of a homogeneous strip into a saddle-node bifurcation.
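The frequency-extraction step can be illustrated with a synthetic ring-down signal; the damped cosine below is an assumption used only to demonstrate the procedure applied to the midpoint-deflection time series.
\begin{verbatim}
import numpy as np

# Extract the fundamental ring-down frequency from a time series by FFT.
dt, n = 1e-3, 16384
t = dt * np.arange(n)
f0 = 7.3                                  # assumed fundamental frequency
signal = np.exp(-0.5 * t) * np.cos(2 * np.pi * f0 * t)

spec = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(n, dt)
print(freqs[np.argmax(spec)])             # ~7.3, within FFT resolution
\end{verbatim}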
| {
"timestamp": "2023-03-01T02:17:53",
"yymm": "2302",
"arxiv_id": "2302.12152",
"language": "en",
"url": "https://arxiv.org/abs/2302.12152",
"abstract": "Many elastic structures exhibit rapid shape transitions between two possible equilibrium states: umbrellas become inverted in strong wind and hopper popper toys jump when turned inside-out. This snap-through is a general motif for the storage and rapid release of elastic energy, and it is exploited by many biological and engineered systems from the Venus flytrap to mechanical metamaterials. Shape transitions are known to be related to the type of bifurcation the system undergoes, however, to date, there is no general understanding of the mechanisms that select these bifurcations. Here we analyze numerically and analytically two systems proposed in recent literature in which an elastic strip, initially in a buckled state, is driven through shape transitions by either rotating or translating its boundaries. We show that the two systems are mathematically equivalent, and identify three cases that illustrate the entire range of transitions described by previous authors. Importantly, using reduction order methods, we establish the nature of the underlying bifurcations and explain how these bifurcations can be predicted from geometric symmetries and symmetry-breaking mechanisms, thus providing universal design rules for elastic shape transitions.",
"subjects": "Soft Condensed Matter (cond-mat.soft)",
"title": "Elastic snap-through instabilities are governed by geometric symmetries",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9755769113660688,
"lm_q2_score": 0.8104789132480439,
"lm_q1q2_score": 0.7906845149138547
} |
https://arxiv.org/abs/2008.01669 | Eigenvalues of graph Laplacians via rank-one perturbations | We show how the spectrum of a graph Laplacian changes with respect to a certain type of rank-one perturbation. We apply our finding to give new short proofs of the spectral version of Kirchhoff's Matrix Tree Theorem and known derivations for the characteristic polynomials of the Laplacians for several well known families of graphs, including complete, complete multipartite, and threshold graphs. | \section{Introduction}
In this paper, we study finite simple graphs, i.e., graphs with finite vertex sets that do not contain loops or multiple edges. We use $V(G)$ and $E(G)$ to denote the vertex set and edge set of a graph $G$, respectively, and we take $V(G) = [n] := \{1,\ldots,n\}$ unless stated otherwise. Recall that a \textit{spanning tree} in a graph $G$ is a subgraph $T \subseteq G$ such that
\begin{enumerate}
\item $V(T) = V(G)$,
\item $T$ is connected, and
\item $T$ does not contain any cycles.
\end{enumerate}
We are interested in the number $\tau(G)$ of spanning trees in $G$.
Computing $\tau(G)$ for an arbitrary graph $G$ on $n$ vertices can be done reasonably efficiently\footnote{We will see that the number of spanning trees can be computed as the determinant of an $(n-1) \times (n-1)$ matrix, which can be done by na\"ive row reduction in $O(n^3)$ time.} with the help of its \textit{Laplacian matrix}, $L(G)$, which is the $n \times n$ matrix with entries
$$
L(G)(i,j) =
\begin{cases}
\deg(i) & \text{ if } i = j, \\
-1 & \text{ if } i \neq j \text{ and } \{i,j\} \in E(G), \\
0 & \text{otherwise.}
\end{cases}
$$
Note that the rows (and columns) of $L(G)$ sum to zero, meaning the all ones vector in $\mathbb{R}^n$, which we denote by $\mathbf{1}_n$, always lies in the nullspace of $L(G)$. Thus, we must account for the nullity of $L(G)$ when attempting to make any connection between $G$ and its Laplacian. For any $1 \leq i,j \leq n$, not necessarily distinct, let $L(G)_{i,j}$ be the matrix obtained by eliminating the $i$th row and $j$th column from $L(G)$. A celebrated result of Kirchhoff \cite{Kirchhoff} reveals a fundamental connection between the Laplacian matrix and the number of spanning trees in a graph.
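For concreteness, the Laplacian of a small graph is easy to assemble directly from an edge list; the following minimal NumPy sketch uses the $4$-cycle as an arbitrary example.
\begin{verbatim}
import numpy as np

# Build the Laplacian L(G) of a simple graph from its edge list.
n = 4
edges = [(1, 2), (2, 3), (3, 4), (4, 1)]   # the 4-cycle, vertices 1..n

L = np.zeros((n, n))
for i, j in edges:
    L[i-1, i-1] += 1                       # degrees on the diagonal
    L[j-1, j-1] += 1
    L[i-1, j-1] -= 1                       # -1 for each edge
    L[j-1, i-1] -= 1
print(L)
\end{verbatim}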
\begin{mattreethm}[\cite{Kirchhoff}]
Let $G$ be a graph on $n$ vertices.
\begin{enumerate}[(i)]
\item For any vertices $i$ and $j$, not necessarily distinct,
$$
\tau(G) = (-1)^{i+j} \det(L(G)_{i,j}).
$$
\item If $\lambda_1,\ldots,\lambda_n$ are the eigenvalues of $L(G)$ with $\lambda_n = 0$, then
$$
\tau(G) = \cfrac{\lambda_1 \cdot \ldots \cdot \lambda_{n-1}}{n}.
$$
\end{enumerate}
\end{mattreethm}
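Both parts of the theorem are easy to check numerically; the sketch below verifies them for the $4$-cycle, which has exactly four spanning trees.
\begin{verbatim}
import numpy as np

# Check both parts of the Matrix Tree Theorem on the 4-cycle.
L = np.array([[ 2, -1,  0, -1],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [-1,  0, -1,  2]], dtype=float)

tau_i = np.linalg.det(L[1:, 1:])            # part (i): delete row/col 1
eig = np.sort(np.linalg.eigvalsh(L))        # eigenvalues 0, 2, 2, 4
tau_ii = np.prod(eig[1:]) / 4               # part (ii)
print(round(tau_i), round(tau_ii))          # 4 4
\end{verbatim}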
The Matrix Tree Theorem is beautiful in its simplicity, but the requirement that one must choose a row and column to eliminate in part (i) is somewhat unsatisfying since some choices may be more convenient than others. We demonstrated a technique for easily counting spanning trees in many well-studied families of graphs by adding rank-one matrices to their Laplacian matrices in \cite{Klee-Stamps-unweighted}.
\begin{lemma}[\cite{Klee-Stamps-unweighted}]
\label{KS-rank-one-result}
Let $G$ be a graph on $n$ vertices with Laplacian matrix $L$, and let $\mathbf{u} = (u_i)_{i \in [n]}$ and $\mathbf{v} = (v_i)_{i \in [n]}$ be column vectors in $\mathbb{R}^n$. Then
\begin{equation}\label{eqn:KS-unweighted}
\det(L + \mathbf{u}\mathbf{v}^T) = \left( \sum_{i = 1}^n u_i \right) \left( \sum_{i = 1}^n v_i\right) \tau(G).
\end{equation}
\end{lemma}
This approach not only allows one to work with the full Laplacian matrix, but it transfers the choice of which row and column to eliminate when computing the determinant to a choice of which rank-one matrix to add. In many cases, this leads to simpler computations --- for example, if $K_n$ is the complete graph on $n$ vertices and $\mathbf{u} = \mathbf{v} = \mathbf{1}_n$, then $L(K_n) + \mathbf{u}\mathbf{v}^T = nI_n$, whose determinant is clearly $n^n$. Cayley's formula \cite{Cayley}, $\tau(K_n) = n^{n-2}$, follows immediately from Equation~\eqref{eqn:KS-unweighted}.
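This is simple to verify numerically; the sketch below confirms $\det(L(K_n) + \mathbf{1}_n\mathbf{1}_n^T) = n^n$ and recovers Cayley's formula for $n = 5$.
\begin{verbatim}
import numpy as np

# Verify det(L(K_n) + 11^T) = n^n and tau(K_n) = n^(n-2) for n = 5.
n = 5
L = n * np.eye(n) - np.ones((n, n))        # Laplacian of K_n
det = np.linalg.det(L + np.ones((n, n)))   # = det(n I_n) = n^n
tau = det / (n * n)                        # Equation (KS-unweighted)
print(round(det), n**n, round(tau), n**(n - 2))   # 3125 3125 125 125
\end{verbatim}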
A limitation of Lemma \ref{KS-rank-one-result}, however, is that it only gives spanning tree counts; it does not, for instance, allow one to glean stronger information about the Laplacian eigenvalues. The purpose of this paper is to present an analogous result to Lemma~\ref{KS-rank-one-result} (see Theorem~\ref{thm:main}) for Laplacian eigenvalues and to demonstrate its applicability to several well known families of graphs.
Typically, the proof of part (ii) of the Matrix Tree Theorem requires some analysis relating the characteristic polynomial of $L(G)_{i,j}$ to that of $L(G)$. Instead, we present a direct proof as a consequence of our main theorem, which shows how the characteristic polynomial of a graph Laplacian matrix changes when we add a certain type of rank-one matrix.
\begin{theorem} \label{thm:main}
Let $G$ be a graph on $n$ vertices with Laplacian matrix $L = L(G)$, and let $\lambda_1,\ldots,\lambda_n$ be the eigenvalues of $L$ with $\lambda_n = 0$.
\begin{enumerate}[(i)]
\item There exist orthogonal eigenvectors $\mathbf{v}_1,\ldots,\mathbf{v}_n$ with $L\mathbf{v}_i = \lambda_i \mathbf{v}_i$ for all $i \in [n]$ and $\mathbf{v}_n = \mathbf{1}_n$.
\item Let $\mathbf{u} = (u_i)_{i \in [n]}$ be an arbitrary column vector in $\mathbb{R}^n$. The characteristic polynomial of the matrix $\overline{L} := L + \mathbf{u}\mathbf{1}_n^T$ is
\begin{align}
\label{eqn:u-update}
\det(\overline{L} - \lambda I_n ) &= (\lambda_1 - \lambda)\cdots(\lambda_{n-1}-\lambda)\left(\sum_{i=1}^n u_i - \lambda\right).
\end{align}
\end{enumerate}
\end{theorem}
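Equation~\eqref{eqn:u-update} is easy to check numerically on any graph; the sketch below does so for the path on four vertices with a randomly chosen $\mathbf{u}$.
\begin{verbatim}
import numpy as np

# Eigenvalues of L + u 1^T should be the nonzero Laplacian eigenvalues
# together with sum(u). Example: the path graph on 4 vertices.
L = np.array([[ 1, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], dtype=float)
u = np.random.default_rng(0).standard_normal(4)

lam = np.sort(np.linalg.eigvalsh(L))       # lam[0] is the zero eigenvalue
expected = np.sort(np.append(lam[1:], u.sum()))
observed = np.sort(np.linalg.eigvals(L + np.outer(u, np.ones(4))).real)
print(np.allclose(expected, observed))     # True
\end{verbatim}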
We prove Theorem \ref{thm:main} in Section~\ref{section:mainproof}. As an immediate consequence, however, we obtain a short proof of part (ii) of the Matrix Tree Theorem.
\begin{corollary}
Let $G$ be a graph on $n$ vertices. If $\lambda_1,\ldots,\lambda_n$ are the eigenvalues of $L(G)$ with $\lambda_n = 0$, then
$$
\tau(G) = \cfrac{\lambda_1 \cdot \ldots \cdot \lambda_{n-1}}{n}.
$$
\end{corollary}
\begin{proof}
By taking $\mathbf{u} = \mathbf{1}_n$ in Theorem \ref{thm:main}, it follows that the eigenvalues of $L + \mathbf{1}_n\mathbf{1}_n^T$ are $\lambda_1,\ldots,\lambda_{n-1}, n$. Thus, by Equation~\eqref{eqn:KS-unweighted},
$$
n^2 \tau(G) = \det(L + \mathbf{1}_n\mathbf{1}_n^T) = \lambda_1 \cdot \ldots \cdot \lambda_{n-1} \cdot n.
$$
\end{proof}
The rest of the paper is structured as follows: In Section~\ref{section:mainproof}, we prove our main result, Theorem~\ref{thm:main}. In Section~\ref{section:applications}, we demonstrate how Theorem~\ref{thm:main} can be applied to derive the characteristic polynomials for Laplacians of several well known families of graphs, including complete graphs, complete multipartite graphs, complete bipartite graphs with perfect matchings removed, and threshold graphs.
\section{Proof of the Main Result} \label{section:mainproof}
\begin{proof}[Proof of Theorem \ref{thm:main}.]
For part (i), we note that since $L$ is a real symmetric matrix, the Spectral Theorem \cite{Halmos} implies $\mathbb{R}^n$ has an orthogonal basis of eigenvectors of $L$. Moreover, because this orthogonal basis of eigenvectors can be constructed inductively beginning with an arbitrary basis of eigenvectors, we may assume $\mathbf{v}_n = \mathbf{1}_n$ with corresponding eigenvalue $\lambda_n = 0$.
For part (ii), we begin with the observation that, for all $i<n$,
$$
\left( L + \mathbf{u}\mathbf{1}_n^T\right) \mathbf{v}_i = L \mathbf{v}_i + \mathbf{u} \left( \mathbf{1}_n^T \mathbf{v}_i\right) = \lambda_i \mathbf{v}_i,
$$
since $\mathbf{v}_i$ is orthogonal to $\mathbf{v}_n = \mathbf{1}_n$. This means $\mathbf{v}_1,\ldots,\mathbf{v}_{n-1}$ are also eigenvectors for $\overline{L}$ with corresponding eigenvalues $\lambda_1,\ldots,\lambda_{n-1}$.
Next, we use the fact that $\overline{L}$ and its transpose have the same eigenvalues to see that
\begin{align*}
\left(L + \mathbf{u}\mathbf{1}_n^T\right)^T\mathbf{1}_n &= \left(L^T + \mathbf{1}_n\mathbf{u}^T\right) \mathbf{1}_n \\
&\stackrel{(*)}{=} L\mathbf{1}_n + \mathbf{1}_n \left( \mathbf{u}^T \mathbf{1}_n \right) \\
&= \left(\sum_{i=1}^n u_i\right) \mathbf{1}_n.
\end{align*}
Note that $(*)$ follows from the fact that $L$ is symmetric. Thus, $\sum u_i$ is an eigenvalue for $\overline{L}$ with corresponding (left) eigenvector $\mathbf{1}_n$. It remains to ensure that, with multiplicity, $\sum u_i$ is not one of the eigenvalues counted among $\lambda_1,\ldots,\lambda_{n-1}$.
To see why this is the case, we consider the entries of $\mathbf{u}$ as indeterminates so that $\det(\lambda I_n - \overline{L})$ can be viewed as a polynomial in $\mathbb{R}[\lambda,u_1,\ldots,u_n]$. For a generic choice of $u_1,\ldots,u_n$, it must be the case that $\lambda_j \neq \sum u_i$ for any $1 \leq j \leq n-1$. Thus, $\sum u_i$ is an eigenvalue of $\overline{L}$ that is different from any of its other eigenvalues for generic $u_1,\ldots,u_n$. This means Equation~\eqref{eqn:u-update} holds when $\mathbf{u}$ is generic. Since both sides of Equation~\eqref{eqn:u-update} are polynomials in $\mathbb{R}[\lambda,u_1,\ldots,u_n]$ that agree almost everywhere, they must be equal for all $\lambda, u_1,\ldots,u_n$.
\end{proof}
\begin{remark}
We proved a generalization of Lemma~\ref{KS-rank-one-result} for weighted graphs in \cite{Klee-Stamps-weighted}, which raises the question about a weighted version of Theorem~\ref{thm:main}. Since its proof only relies on graph Laplacians being real symmetric matrices with rows (and columns) summing to zero, one can state Theorem~\ref{thm:main} in terms of weighted Laplacians without any additional work. We omit the weighted version here, however, since the requirement of $\mathbf{v} = \mathbf{1}_n$ in the rank-one matrix $\mathbf{u} \mathbf{v}^T$ in Theorem~\ref{thm:main} limits our ability to extend the applications in Section~\ref{section:applications} to their weighted analogs in a straightforward manner.
\end{remark}
\section{Applications}\label{section:applications}
In this section, we demonstrate several ways Theorem~\ref{thm:main} can be applied to derive the characteristic polynomials of the Laplacian matrices of certain graphs. The results we present here are not new, but the technique given by Theorem~\ref{thm:main} affords us direct and more straightforward proofs.
\subsection{Complete Graphs}
Let $K_n$ denote the \emph{complete graph} on $n$ vertices.
\begin{proposition}
The characteristic polynomial of $L(K_n)$ is $$\det(L(K_n) - \lambda I_n) = -\lambda (n-\lambda)^{n-1}.$$
\end{proposition}
\begin{proof}
Since $L(K_n) = nI_n - \mathbf{1}_n\mathbf{1}_n^T$, $$\det(L(K_n) + \mathbf{1}_n\mathbf{1}_n^T - \lambda I_n) = \det( nI_n - \lambda I_n) = (n-\lambda)^n.$$ The result follows from Theorem \ref{thm:main} by taking $\mathbf{u} = \mathbf{1}_n$.
\end{proof}
\subsection{Complete Multipartite Graphs}
For $n_1,\ldots,n_p \in \mathbb{N}$, the \textit{complete multipartite graph} $K_{n_1,\ldots,n_p}$ is the graph whose vertex set is partitioned as $V_1 \cup \cdots \cup V_p$ with $|V_i| = n_i$ for all $i \in [p]$, no edges between any vertices in the same set $V_i$, and all possible edges between vertices in different sets $V_i$ and $V_j$.
\begin{proposition}\label{prop:multipartite}
Let $G$ be the complete multipartite graph $K_{n_1,\ldots,n_p}$ and let $n = n_1 + \cdots + n_p$. The characteristic polynomial of $L(G)$ is
$$
\det(L(G) - \lambda I_n) = -\lambda (n - \lambda)^{p-1} \prod_{i=1}^p (n-n_i - \lambda)^{n_i-1}.
$$
\end{proposition}
Before we prove Proposition~\ref{prop:multipartite}, let us first establish the following lemma.
\begin{lemma}\label{rank-one-update-identity}
For any $a,b \in \mathbb{R}$ and $n \in \mathbb{N}$,
\begin{equation}
\label{eigenvalues-identity-update}
\det\left(aI_n + b\mathbf{1}_n\mathbf{1}_n^T - \lambda I_n\right) = (a-\lambda)^{n-1}\left(a+bn - \lambda\right).
\end{equation}
\end{lemma}
\begin{proof}
First note that $(aI_n + b\mathbf{1}_n\mathbf{1}_n^T)\mathbf{1}_n = (a+bn)\mathbf{1}_n$, so $\mathbf{1}_n$ is an eigenvector of $aI_n + b\mathbf{1}_n\mathbf{1}_n^T$ with corresponding eigenvalue $a+bn$. If $\mathbf{v}$ is any nonzero vector orthogonal to $\mathbf{1}_n$, then $(aI_n + b\mathbf{1}_n\mathbf{1}_n^T)\mathbf{v} = a\mathbf{v}$, so $\mathbf{v}$ is an eigenvector of $aI_n + b\mathbf{1}_n\mathbf{1}_n^T$ with corresponding eigenvalue $a$.
Therefore, the orthogonal complement to $\mathbf{1}_n$ is an $(n-1)$-dimensional eigenspace of $aI_n + b\mathbf{1}_n\mathbf{1}_n^T$ corresponding to the eigenvalue $\lambda = a$. So $\lambda = a$ has geometric (and hence algebraic) multiplicity $n-1$, while $\lambda = a+bn$ has multiplicity $1$.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:multipartite}.]
Order the vertices of $G$ so that the first $n_1$ vertices are those in $V_1$, the next $n_2$ vertices are those in $V_2$, and so on. Then $L(G) + \mathbf{1}_n\mathbf{1}_n^T$ is a block diagonal matrix whose diagonal blocks have the form $(n-n_i)I_{n_i} + \mathbf{1}_{n_i}\mathbf{1}_{n_i}^T$ for each $i \in [p]$. Therefore, by Lemma \ref{rank-one-update-identity},
\begin{align*}
\det\left((n-n_i)I_{n_i} + \mathbf{1}_{n_i}\mathbf{1}_{n_i}^T - \lambda I_{n_i} \right) &=
\det\left((n-n_i - \lambda)I_{n_i} + \mathbf{1}_{n_i}\mathbf{1}_{n_i}^T\right) \\
&= (n-n_i - \lambda)^{n_i-1}(n - \lambda),
\end{align*} for each $i \in [p]$. It follows that $$\det(L + \mathbf{1}_n\mathbf{1}_n^T - \lambda I_n) = (n-\lambda)^p \prod_{i=1}^p (n-n_i - \lambda)^{n_i-1},$$
and hence, by Theorem \ref{thm:main},
$$
\det(L-\lambda I_n) = -\lambda (n-\lambda)^{p-1} \prod_{i=1}^p (n-n_i - \lambda)^{n_i-1}.
$$
\end{proof}
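As a quick sanity check on Proposition~\ref{prop:multipartite}, one can compare the numerically computed spectrum of $L(K_{n_1,\dots,n_p})$ with the multiset of roots the formula predicts: $0$ once, $n$ with multiplicity $p-1$, and each $n-n_i$ with multiplicity $n_i-1$. A minimal sketch (ours, assuming \texttt{numpy}):
\begin{verbatim}
import numpy as np

def laplacian_multipartite(parts):
    # Adjacency: all edges across parts, none inside; Laplacian = D - A.
    n = sum(parts)
    A = np.ones((n, n)) - np.eye(n)
    start = 0
    for size in parts:
        A[start:start + size, start:start + size] = 0.0
        start += size
    return np.diag(A.sum(axis=1)) - A

parts = [2, 3, 4]
n = sum(parts)
predicted = [0.0] + [float(n)] * (len(parts) - 1)
for size in parts:
    predicted += [float(n - size)] * (size - 1)
eigs = np.linalg.eigvalsh(laplacian_multipartite(parts))
assert np.allclose(np.sort(eigs), np.sort(predicted))
print("multipartite check passed")
\end{verbatim}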
\subsection{Complete Bipartite Graphs with a Perfect Matching Removed}
We turn our attention to graphs of the form $K_{n,n} \setminus M$ where $M$ is a perfect matching on $K_{n,n}$. Figure~\ref{fig:bipartite-delete-matching} illustrates such a graph for $n=5$.
\begin{figure}[H]
\centering
\begin{tikzpicture}
\foreach \x in {1,...,5}{
\foreach \y in {1,...,5}{
\ifthenelse{\x = \y}{}{\draw (\x,0) -- (\y,2);};
}
}
\foreach \x in {1,...,5}{
\draw[fill=black] (\x,2) circle (.1) node[above]{\x};
}
\foreach \x in {6,...,10}{
\draw[fill=black] (\x-5,0) circle (.1) node[below]{\x};
}
\end{tikzpicture}
\caption{The graph $K_{5,5} \setminus M$.}
\label{fig:bipartite-delete-matching}
\end{figure}
\begin{proposition}
Let $G = K_{n,n} \setminus M$ where $M$ is a perfect matching on $K_{n,n}$. The characteristic polynomial of $L(G)$ is
$$
\det(L(G) - \lambda I_{2n}) = -\lambda \cdot (n-2 - \lambda)^{n-1} \cdot (n-\lambda)^{n-1} \cdot (2n-2 - \lambda).
$$
\end{proposition}
\begin{proof}
Order the vertices of $G = K_{n,n} \setminus M$ so that $\{i, n+i\}$ is an edge of the removed matching $M$, i.e., vertex $i$ is \emph{not} adjacent to vertex $n+i$, for $1 \leq i \leq n$. Consider $\overline{L} = L(G) + \mathbf{1}_{2n}\mathbf{1}_{2n}^T$, which has a block form
$$
\overline{L} =
\begin{bmatrix}
A & I_n \\
I_n & A \\
\end{bmatrix},
$$
where $A = (n-1)I_n + \mathbf{1}_n \mathbf{1}_n^T$. Then,
\begin{eqnarray*}
\det(\overline{L} - \lambda I_{2n} ) &=&
\det\left(
\begin{bmatrix}
A-\lambda I_n & I_n \\
I_n & A - \lambda I_n \\
\end{bmatrix}
\right) \\
&\stackrel{(*)}{=}& \det((A - \lambda I_n)^2 - I_n ) \\
&=& \det(A - \lambda I_n - I_n)\det(A - \lambda I_n + I_n) \\
&=& \det((n-2 - \lambda)I_n + \mathbf{1}_n \mathbf{1}_n^T )\det((n-\lambda)I_n + \mathbf{1}_n \mathbf{1}_n^T ) \\
&\stackrel{(**)}{=}& (n-2 - \lambda)^{n-1}( 2n-2 - \lambda)(n - \lambda)^{n-1}(2n - \lambda).
\end{eqnarray*}
Note that (*) follows from the fact that $I_n$ commutes with $A - \lambda I_n$, and (**) follows from Lemma~\ref{rank-one-update-identity}. The desired result follows from Theorem~\ref{thm:main}.
\end{proof}
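The predicted spectrum is again easy to confirm numerically. The sketch below (ours, assuming \texttt{numpy}) builds the graph via a Kronecker product, so that vertex $i$ is adjacent to every vertex of the other part except $n+i$, matching Figure~\ref{fig:bipartite-delete-matching} when $n=5$.
\begin{verbatim}
import numpy as np

n = 5
# Off-diagonal blocks J - I: all cross edges except the matching {i, n+i}.
A = np.kron(np.array([[0.0, 1.0], [1.0, 0.0]]),
            np.ones((n, n)) - np.eye(n))
L = np.diag(A.sum(axis=1)) - A
predicted = [0.0, 2.0 * n - 2] + [n - 2.0] * (n - 1) + [float(n)] * (n - 1)
assert np.allclose(np.sort(np.linalg.eigvalsh(L)), np.sort(predicted))
print("matching-removed check passed")
\end{verbatim}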
\subsection{Threshold graphs} \label{section:threshold}
A nonempty graph is called \textit{threshold} if its vertices can be ordered such that each vertex is adjacent to none or all of the vertices that come before it. The former are called \textit{isolated} vertices and the latter are called \textit{dominating} vertices. The first, or initial, vertex is neither isolated nor dominating. If $G$ is a threshold graph, we use $I(G)$ and $D(G)$ respectively to denote the sets of isolated and dominating vertices in $G$.
Merris \cite{Merris} showed that the eigenvalues of a threshold graph $G$ on $n$ vertices are given by $$\lambda_i = \#\{v : \deg(v) \geq i\},$$ for $1 \leq i \leq n$. Later, Hammer and Kelmans \cite{Hammer-Kelmans} showed that the multiset of eigenvalues of $G$ can be equivalently expressed as $$\{0\} \cup \{\deg(v) : v \in I(G)\} \cup \{\deg(v) + 1 : v \in D(G)\}.$$ From this perspective, we see that the initial vertex is the only vertex whose degree does not contribute to the spectrum of $L(G)$. We conclude this paper with an elementary proof for the description of the spectrum of the Laplacian matrix of a threshold graph in terms of its vertex degrees. Specifically, we show that with a convenient choice of rank-one perturbation, there is a matrix $\overline{L}(G)$ for which every vertex degree contributes to the spectrum; and moreover that the degree of the initial vertex can be replaced with the eigenvalue $\lambda = 0$ to obtain the spectrum of $L(G)$.
\begin{proposition}
Let $G$ be a threshold graph on $n$ vertices. The characteristic polynomial of $L(G)$ is
$$\det(L(G) - \lambda I_n) = -\lambda \prod_{j \in I(G)} (\deg(j) - \lambda) \prod_{j \in D(G)} (\deg(j)+1 - \lambda).
$$
\end{proposition}
\begin{proof}
Order the vertices of $G$ according to their positions in the isolated-dominating construction sequence and let $\mathbf{u} \in \mathbb{R}^n$ be the indicator vector of $D(G)$. Go et al. \cite{GKLS} note that for $i<j$ in this prescribed ordering, $\{i,j\}$ is an edge of $G$ if vertex $j$ is dominating and it is not an edge if vertex $j$ is isolated. Thus, $\overline{L}(G) = L(G) + \mathbf{u} \mathbf{1}_n^T$ is upper triangular with diagonal entries $\deg(j) + 1$ for each $j \in D(G)$ and $\deg(j)$ for each $j \in I(G) \cup \{1\}$, which means the characteristic polynomial of $\overline{L}(G)$ is $$\det( \overline{L}(G) - \lambda I_n) = (\deg(1) - \lambda) \prod_{j \in I(G)} (\deg(j) - \lambda) \prod_{j \in D(G)} (\deg(j)+1 - \lambda).$$ The desired result follows from Theorem~\ref{thm:main} by observing that the neighborhood of the initial vertex is precisely the set of dominating vertices in $G$, which implies $\sum_{i=1}^n u_i = \deg(1)$.
\end{proof}
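The proof above suggests a direct experiment: build a threshold graph from its construction sequence, form $\overline{L}(G) = L(G) + \mathbf{u}\mathbf{1}_n^T$, and confirm both the triangular structure and the degree-based spectrum. A minimal sketch (ours, assuming \texttt{numpy}; the 0/1 creation-sequence encoding is our convention, with vertex $0$ playing the role of the initial vertex):
\begin{verbatim}
import numpy as np

def threshold_graph(creation):
    # Adjacency matrix: creation[v-1] == 1 makes vertex v dominating
    # (joined to all earlier vertices), 0 makes it isolated.
    n = len(creation) + 1
    A = np.zeros((n, n))
    for v, dominating in enumerate(creation, start=1):
        if dominating:
            A[v, :v] = A[:v, v] = 1.0
    return A

creation = [0, 1, 0, 1, 1]                 # an arbitrary example
A = threshold_graph(creation)
n = A.shape[0]
deg = A.sum(axis=1)
L = np.diag(deg) - A
u = np.concatenate(([0.0], np.array(creation, dtype=float)))  # indicator of D(G)
Lbar = L + np.outer(u, np.ones(n))
assert np.allclose(Lbar, np.triu(Lbar))    # upper triangular, as in the proof
spectrum = [0.0] + [deg[v] for v in range(1, n) if creation[v - 1] == 0] \
                 + [deg[v] + 1 for v in range(1, n) if creation[v - 1] == 1]
assert np.allclose(np.sort(np.linalg.eigvalsh(L)), np.sort(spectrum))
print("threshold check passed")
\end{verbatim}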
\section*{Acknowledgments}
We are grateful to Mohamed Omar for helpful and encouraging conversations during the early stages of this project.
\bibliographystyle{plain}
| {
"timestamp": "2020-08-05T02:21:05",
"yymm": "2008",
"arxiv_id": "2008.01669",
"language": "en",
"url": "https://arxiv.org/abs/2008.01669",
"abstract": "We show how the spectrum of a graph Laplacian changes with respect to a certain type of rank-one perturbation. We apply our finding to give new short proofs of the spectral version of Kirchhoff's Matrix Tree Theorem and known derivations for the characteristic polynomials of the Laplacians for several well known families of graphs, including complete, complete multipartite, and threshold graphs.",
"subjects": "Combinatorics (math.CO)",
"title": "Eigenvalues of graph Laplacians via rank-one perturbations",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9905874093008835,
"lm_q2_score": 0.7981867729389246,
"lm_q1q2_score": 0.7906737675438019
} |
https://arxiv.org/abs/1309.5600 | A Generalization of Fibonacci Far-Difference Representations and Gaussian Behavior | A natural generalization of base B expansions is Zeckendorf's Theorem: every integer can be uniquely written as a sum of non-consecutive Fibonacci numbers $\{F_n\}$, with $F_{n+1} = F_n + F_{n-1}$ and $F_1=1, F_2=2$. If instead we allow the coefficients of the Fibonacci numbers in the decomposition to be zero or $\pm 1$, the resulting expression is known as the far-difference representation. Alpert proved that a far-difference representation exists and is unique under certain constraints that generalize non-consecutiveness, specifically that two adjacent summands of the same sign must be at least 4 indices apart and those of opposite signs must be at least 3 indices apart. We prove that a far-difference representation can be created using sets of Skipponacci numbers, which are generated by recurrence relations of the form $S^{(k)}_{n+1} = S^{(k)}_{n} + S^{(k)}_{n-k}$ for $k \ge 0$. Every integer can be written uniquely as a sum of the $\pm S^{(k)}_n $'s such that every two terms of the same sign differ in index by at least 2k+2, and every two terms of opposite signs differ in index by at least k+2. Additionally, we prove that the number of positive and negative terms in given Skipponacci decompositions converges to a Gaussian, with a computable correlation coefficient that is a rational function of the smallest root of the characteristic polynomial of the recurrence. The proof uses recursion to obtain the generating function for having a fixed number of summands, which we prove converges to the generating function of a Gaussian. We next explore the distribution of gaps between summands, and show that for any k the probability of finding a gap of length $j \ge 2k+2$ decays geometrically, with decay ratio equal to the largest root of the given k-Skipponacci recurrence. We conclude by finding sequences that have an (s,d) far-difference representation for any positive integers s,d. | \section{Introduction}
In this paper we explore signed decompositions of integers by various sequences. After briefly reviewing the literature, we state our results about uniqueness of decomposition, number of summands, and gaps between summands. In the course of our analysis we find a new way to interpret an earlier result about far-difference representations, which leads to a new characterization of the Fibonacci numbers.
\subsection{Background}
Zeckendorf \cite{Ze} discovered an interesting property of the Fibonacci numbers $\{F_n\}$; he proved that every positive integer can be written uniquely as a sum of non-consecutive Fibonacci numbers\footnote{If we were to use the standard definition of $F_0 = 0$, $F_1 = 1$ then we would lose uniqueness.}, where $F_{n+2} = F_{n+1} + F_n$ and $F_1 = 1, F_2 = 2$. It turns out this is an alternative characterization of the Fibonacci numbers; they are the unique increasing sequence of positive integers such that any positive number can be written uniquely as a sum of non-consecutive terms.
Zeckendorf's theorem inspired many questions about the number of summands in these and other decompositions. Lekkerkerker \cite{Lek} proved that the average number of summands in the decomposition of an integer in $[F_n, F_{n+1})$ is $\frac{n}{\varphi^2+1} + O(1)$, where $\varphi = \frac{1+\sqrt{5}}{2}$ is the golden mean (which is the largest root of the characteristic polynomial associated with the Fibonacci recurrence). More is true; as $n\to\infty$, the distribution of the number of summands of $m \in [F_n, F_{n+1})$ converges to a Gaussian. This means that as $n\to\infty$ the fraction of $m \in [F_n, F_{n+1})$ such that the number of summands in $m$'s Zeckendorf decomposition is in $[\mu_n - a\sigma_n, \mu_n + b\sigma_n]$ converges to $\frac1{\sqrt{2\pi}} \int_{-a}^b e^{-t^2/2}dt$, where $\mu_n = \frac{n}{\varphi^2+1} + O(1)$ is the mean number of summands for $m \in [F_n, F_{n+1})$ and $\sigma_n^2 = \frac{\varphi}{5(\varphi+2)}n-\frac{2}{25}$ is the variance (see \cite{KKMW} for the calculation of the variance). \emph{Henceforth in this paper whenever we say the distribution of the number of summands converges to a Gaussian, we mean in the above sense.} There are many proofs of this result; we follow the combinatorial approach used in \cite{KKMW}, which proved these results by converting the question of how many numbers have exactly $k$ summands to a combinatorial one.
These results hold for other recurrences as well. Most of the work in the field has focused on Positive Linear Recurrence Relations (PLRS), which are recurrence relations of the form $G_{n+1} = c_1G_n + \cdots + c_L G_{n+1-L}$ for non-negative integers $L,c_1,c_2,\dots,c_L$ with $L$, $c_1$, and $c_L$ positive (these are called $G$-ary digital expansions in \cite{St}). There is an extensive literature for this subject; see \cite{Al,BCCSW,Day,GT,Ha,Ho,Ke,Len,MW1,MW2} for results on uniqueness of decomposition and \cite{DG,FGNPT,GTNP,KKMW,Lek,LT,MW1,St} for Gaussian behavior.
Much less is known about signed decompositions, where we allow negative summands in our decompositions. This opens up a number of possibilities, as in this case we can overshoot the value we are trying to reach in a given decomposition, and then subtract terms to reach the desired positive integer. We formally define this idea below.
\begin{defn}[Far-difference representation]
A \emph{far-difference representation} of a positive integer $x$ by a sequence $\{a_n\}$ is a signed sum of terms from the sequence which equals $x$.
\end{defn}
The Fibonacci case was first considered by Alpert \cite{Al}, who proved the following analogue of Zeckendorf's theorem. Note that the restrictions on the gaps between adjacent indices in the decomposition are a generalization of the non-adjacency condition in the Zeckendorf decomposition.
\begin{thm} \label{thm:alpert}
Every $x \in \mathbb{Z}$ has a unique Fibonacci far-difference representation such that every two terms of the same sign differ in index by at least 4 and every two terms of opposite sign differ in index by at least 3.
\end{thm}
For example, 2014 can be decomposed as follows:
\be
2014 \ = \ 2584 - 610 + 55 - 13 - 2 \ = \ F_{17} - F_{14} + F_9 - F_6 - F_2.
\ee
Alpert's proof uses induction on a partition of the integers, and the method generalizes easily to other recurrences which we consider in this paper.
Given that there is a unique decomposition, it is natural to inquire if generalizations of Lekkerkerker's Theorem and Gaussian behavior hold as well. Miller and Wang \cite{MW1} proved that they do. We first set some notation, and then describe their results (our choice of notation is motivated by our generalizations in the next subsection).
First, let $R_4(n)$ denote the following summation
\begin{equation} \label{R4(n)}
R_4(n) \ := \
\begin{cases}
\sum_{0 < n-4i \le n} F_{n-4i} \ = \ F_n + F_{n-4} + F_{n-8} + \cdots & \text{ if } n > 0 \\
0 & \text{ otherwise.}
\end{cases}
\end{equation}
Using this notation, we state the motivating theorem from Miller-Wang.
\begin{thm}[Miller-Wang] \label{thm:MW1 result}
Let $\mathcal{K}_n$ and $\mathcal{L}_n$ be the corresponding random variables denoting the number of positive summands and the number of negative summands in the far-difference representation (using the signed Fibonacci numbers) for integers in $(R_4(n-1), R_4(n)]$. As $n$ tends to infinity, $\mathbb{E}[\mathcal{K}_n] = \frac{1}{10}n + \frac{371-113\sqrt{5}}{40} + o(1)$, and is $\frac{1+\sqrt{5}}{4} = \frac{\varphi}{2}$ greater than $\mathbb{E}[\mathcal{L}_n]$. The variance of both is $\frac{15 + 21\sqrt{5}}{1000}n + O(1)$. The standardized joint density of $\mathcal{K}_n$ and $\mathcal{L}_n$ converges to a bivariate Gaussian with negative correlation $\frac{10\sqrt{5}-121}{179} = -\frac{21-2\varphi}{29+2\varphi} \approx -0.551$, and $\mathcal{K}_n + \mathcal{L}_n$ and $\mathcal{K}_n - \mathcal{L}_n$ converge to independent random variables.
\end{thm}
Their proof used generating functions to show that the moments of the distribution of summands converge to those of a Gaussian. The main idea is to show that the conditions which imply Gaussianity for positive-term decompositions also hold for the Fibonacci far-difference representation. One of our main goals in this paper is to extend these arguments further to the more general signed decompositions. In the course of doing so, we find a simpler way to handle the resulting algebra.
We then consider an interesting question about the summands in a decomposition, namely \emph{how are the lengths of index gaps between adjacent summands distributed in a given integer decomposition?} Equivalently, how long must we wait after choosing a term from a sequence before the next term is chosen in a particular decomposition? In \cite{BBGILMT}, the authors solve this question for the Fibonacci far-difference representation, as well as other PLRS, provided that all the coefficients are positive. Note this restriction therefore excludes the $k$-Skipponaccis for $k \ge 2$.
\begin{thm}[\cite{BBGILMT}]\label{thm:skipgaps}
As $n \to \infty$, the probability $P(j)$ of a gap of length $j$ in a far-difference decomposition of integers in $(R_4(n-1), R_4(n)]$ converges to geometric decay for $j \ge 4$, with decay constant equal to the golden mean $\varphi$. Specifically, if $a_1 = \varphi / \sqrt{5}$ (which is the coefficient of the largest root of the recurrence polynomial in Binet's Formula\footnote{As our Fibonacci sequence is shifted by one index from the standard representation, for us Binet's Formula reads $F_n = \frac{\varphi}{\sqrt{5}} \varphi^n - \frac{1-\varphi}{\sqrt{5}} (1-\varphi)^n$. For any linear recurrence whose characteristic polynomial is of degree $d$ with $d$ distinct roots, the $n$\textsuperscript{{\rm th}} term is a linear combination of the $n$\textsuperscript{{\rm th}} powers of the $d$ roots; we always let $a_1$ denote the coefficient of the largest root.} expansion for $F_n$), then $P(j) = 0$ if $j \le 2$ and
\begin{equation} \label{thm:FibonacciGaps}
P(j) \ = \
\begin{cases}
\frac{10a_1\varphi}{\varphi^4-1}\varphi^{-j} & \text{ if } j \ge 4 \\
\frac{5a_1}{\varphi^2(\varphi^4-1)} & \text{ if } j = 3.
\end{cases}
\end{equation}
\end{thm}
\subsection{New Results}
In this paper, we study far-difference relations related to certain generalizations of the Fibonacci numbers, called the $k$-Skipponacci numbers.
\begin{defi}[$k$-Skipponacci Numbers] For any non-negative integer $k$, the $k$-Skipponaccis are the sequence of integers defined by the recurrence $S^{(k)}_{n+1} = S^{(k)}_n + S^{(k)}_{n-k}$. We index the $k$-Skipponaccis such that the first few terms are $S^{(k)}_1 = 1$, $S^{(k)}_2 = 2$, ..., $S^{(k)}_{k+1} = k+1$, and $S^{(k)}_n = 0$ for all $n \le 0$. \end{defi}
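For concreteness, here is a short sketch (ours, assuming Python) that generates the $k$-Skipponaccis under this indexing convention.
\begin{verbatim}
def skipponacci(k, N):
    # Return [S_1, ..., S_N] for S_{n+1} = S_n + S_{n-k},
    # seeded with S_i = i for 1 <= i <= k+1 (and S_0 = 0).
    S = [0] + list(range(1, k + 2))     # S[i] holds S^{(k)}_i
    while len(S) <= N:
        S.append(S[-1] + S[-1 - k])
    return S[1:N + 1]

print(skipponacci(0, 8))   # [1, 2, 4, 8, 16, 32, 64, 128]
print(skipponacci(1, 8))   # [1, 2, 3, 5, 8, 13, 21, 34]
print(skipponacci(2, 8))   # [1, 2, 3, 4, 6, 9, 13, 19]
\end{verbatim}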
Some common $k$-Skipponacci sequences are the 0-Skipponaccis (which are powers of 2, and lead to binary decompositions) and the 1-Skipponaccis (the Fibonaccis). Our first result is that a generalized Zeckendorf theorem holds for far-difference representations arising from the $k$-Skipponaccis.
\begin{thm}\label{Thm:Far-Diff}
Every $x \in \mathbb{Z}$ has a unique far-difference representation for the $k$-Skipponaccis such that every two terms of the same sign are at least $2k+2$ apart in index and every two terms of opposite sign are at least $k+2$ apart in index.
\end{thm}
Before stating our results on Gaussianity, we first need to set some new notation, which generalizes the summation in \eqref{R4(n)}. \begin{equation} \label{Rn}
R_k(n) \ := \
\begin{cases}
\sum_{0 < n-b(2k+2) \le n} S^{(k)}_{n-b(2k+2)} \ = \ S^{(k)}_n + S^{(k)}_{n-2k-2} + S^{(k)}_{n-4k-4} + \cdots & \text{ if } n > 0
\\
0 & \text{ otherwise.}
\end{cases}
\end{equation}
\begin{thm} \label{thm:Gaussianity[MW]} Fix a positive integer $k$. Let $\mathcal{K}_n$ and $\mathcal{L}_n$ be the corresponding random variables denoting the number of positive and the number of negative summands in the far-difference representation for integers in $(R_k(n-1),R_k(n)]$ from the $k$-Skipponaccis. As $n\to\infty$, expected values of $\mathcal{K}_n$ and $\mathcal{L}_n$ both grow linearly with $n$ and differ by a constant, as do their variances. The standardized joint density of $\mathcal{K}_n$ and $\mathcal{L}_n$ converges to a bivariate Gaussian with a computable correlation. More generally, for any non-negative numbers $a, b$ not both equal to $0$, the random variable $X_n=a\mathcal{K}_n+b\mathcal{L}_n$ converges to a normal distribution as $n\to\infty$.
\end{thm}
\noindent This theorem is an analogue to Theorem \ref{thm:MW1 result} of \cite{MW1} for the case of Fibonacci numbers. Their proof, which is stated in Section 6 of \cite{MW1}, relies heavily on Section 5 of the same paper where the authors proved Gaussianity for a large subset of sequences whose generating function satisfies some specific constraints. In this paper we state a sufficient condition for Gaussianity in the following theorem, which we prove in \S\ref{sec:gaussianity}. We show that it applies in our case, yielding a significantly simpler proof of Gaussianity than the one in \cite{MW1}.
\begin{thm}\label{thm_generalGaussian}
Let $\kappa$ be a fixed positive integer. For each $n$, let a discrete random variable $X_n$ taking values in $I_n=\{1,\dots,n\}$ have
\be {\rm Prob}(X_n=j)\ = \
\begin{cases}
\rho_{j;n}/\sum_{i=1}^n \rho_{i;n} & \text{{\rm if} } j\in I_n \\
0 &\text{{\rm otherwise}}
\end{cases}
\ee
for some positive real numbers $ \rho_{1;n},\dots, \rho_{n;n}$. Let $g_n(x) := \sum_j \rho_{j;n}x^j$ be the generating function of $X_n$. If $g_n$ has the form $g_n(x)\ = \ \sum_{i=1}^\kappa q_i(x)\alpha_i^n(x)$ where
\begin{itemize}
\item[(i)] for each $i\in\{1,\dots,\kappa\}$, $q_i,\alpha_i:\mathbb{R}\to\mathbb{R}$ are three times differentiable functions which do not depend on $n$;
\item[(ii)] there exists some small positive $\epsilon$ and some positive constant $\lambda<1$ such that for all $x\in I_\epsilon=[1-\epsilon,1+\epsilon]$, $|\alpha_1(x)|>1$ and $\frac{|\alpha_i(x)|}{|\alpha_1(x)|}<\lambda<1$ for all $i=2,\dots,\kappa$;
\item[(iii)] $\alpha_1'(1)\neq 0$ and $\frac{d}{dx}\left[\frac{x\alpha_1'(x)}{\alpha_1(x)}\right]\Big|_{x=1}\neq 0$;
\end{itemize}
then
\begin{itemize}
\item[(a)] The mean $\mu_n$ and variance $\sigma_n^2$ of $X_n$ both grow linearly with $n$. Specifically,
\begin{equation}
\mu_n\ = \ A n+B+o(1)
\end{equation}
\begin{equation}
\sigma_n^2\ = \ C \cdot n+ D+o(1)
\end{equation}
where \begin{equation}A\ = \ \frac{\alpha_1'(1)}{\alpha_1(1)}, \ \ \ \ B\ = \ \frac{q_1'(1)}{q_1(1)}
\end{equation}
\begin{equation}
C\ = \ \left(\frac{x\alpha_1'(x)}{\alpha_1(x)}\right)'\Bigg|_{x=1}\ = \ \frac{\alpha_1(1)[\alpha_1'(1)+\alpha_1''(1)]-\alpha_1'(1)^2}{\alpha_1(1)^2}
\end{equation}
\begin{equation}
D\ = \ \left(\frac{xq_1'(x)}{q_1(x)}\right)'\Bigg|_{x=1} \ = \ \frac{q_1(1)[q_1'(1)+q_1''(1)]-q_1'(1)^2}{q_1(1)^2}.
\end{equation}
\item[(b)] As $n\to\infty$, $X_n$ converges in distribution to a normal distribution.
\end{itemize}
\end{thm}
Next we generalize previous work on gaps between summands. We make use of the Generalized Binet's Formula, a standard result; see \cite{BBGILMT} for a proof for a large family of recurrence relations which includes the $k$-Skipponaccis. We restate it here for the specific case of the $k$-Skipponaccis.
\begin{lem} \label{Binet-Skipponacci}
Let $\lambda_1,\dots,\lambda_{k+1}$ be the roots of the characteristic polynomial $x^{k+1} - x^k - 1$ for the $k$-Skipponaccis. Then $\lambda_1 > |\lambda_2| \ge \cdots \ge |\lambda_{k+1}|$, $\lambda_1 > 1$, and there exists a constant $a_1$ such that
\begin{equation}
S^{(k)}_n \ = \ a_1\lambda_1^n + O\left(n^{\max(0,k-2)}|\lambda_2|^n\right).
\end{equation}
\end{lem}
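The constants in Lemma~\ref{Binet-Skipponacci} are easy to estimate numerically: compute the roots of the characteristic polynomial $x^{k+1}-x^k-1$ and watch $S^{(k)}_n/\lambda_1^n$ stabilize at $a_1$. A minimal sketch (ours, assuming \texttt{numpy}):
\begin{verbatim}
import numpy as np

k = 2
# Coefficients of x^{k+1} - x^k - 1, highest degree first.
roots = np.roots([1, -1] + [0] * (k - 1) + [-1])
lam1 = max(roots, key=abs).real            # the dominant (real) root

S = [0] + list(range(1, k + 2))            # S[i] = S^{(k)}_i
while len(S) <= 45:
    S.append(S[-1] + S[-1 - k])
for n in (10, 20, 40):
    print(n, S[n] / lam1 ** n)             # converges to the coefficient a_1
\end{verbatim}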
\begin{thm} \label{thm:gapresult} Consider the $k$-Skipponacci numbers $\{S^{(k)}_n\}$. For each $n$, let $P_n(j)$ be the probability that the size of a gap between adjacent terms in the far-difference decomposition of a number $m \in (R_k(n-1),R_k(n)]$ is $j$. Let $\lambda_1$ denote the largest root of the recurrence relation for the $k$-Skipponacci numbers, and let $a_1$ be the coefficient of $\lambda_1$ in the Generalized Binet's formula expansion for $S^{(k)}_n$. As $n\to\infty$, $P_n(j)$ converges to geometric decay for $j \ge 2k+2$, with computable limiting values for other $j$. Specifically, we have $ \lim_{n\to\infty}P_n(j) = P(j) = 0$ for $j \le k+1$, and
\begin{equation}
P(j) \ = \ \begin{cases}
\frac{a_1\lambda_1^{-3k-2}}{A_{1,1} \left(1-\lambda_1^{-2k-2}\right)^2 (\lambda_1-1)}\lambda_1^{-j} & \text{if }\; k+2 \le j < 2k+2 \\
\frac{a_1\lambda_1^{-2k-2}}{A_{1,1} \left(1-\lambda_1^{-2k-2}\right)^2 (\lambda_1-1)}\lambda_1^{-j} & \text{if }\; j \ge 2k+2.
\end{cases}
\end{equation}
where $A_{1,1}$ is a constant defined in \eqref{E[K+L]}.
\end{thm}
Our final results explore a complete characterization of sequences that exhibit far-difference representations. That is, we study integer decompositions over a sequence in which summands of the same sign are at least $s$ apart in index and summands of opposite sign are at least $d$ apart in index. We call such representations \emph{$(s,d)$ far-difference representations}, which we formally define below.
\begin{defn}[$(s,d)$ far-difference representation]\label{def:sdfardiffrep} A sequence $\{a_n\}$ has an \emph{$(s,d)$ far-difference representation} if every integer can be written uniquely as sum of terms $\pm a_n$ in which every two terms of the same sign are at least $s$ apart in index and every two terms of opposite sign are at least $d$ apart in index.
\end{defn}
Thus the Fibonaccis lead to a $(4,3)$ far-difference representation. More generally, the $k$-Skipponaccis lead to a $(2k+2,k+2)$ one. We can consider the reverse problem; if we are given a pair of positive integers $(s,d)$, is there a sequence such that each number has a unique $(s,d)$ far-difference representation? The following theorem shows that the answer is yes, and gives a construction for the sequence.
\begin{thm}\label{farDiffRec} Fix positive integers $s$ and $d$, and define a sequence $\{a_n\}_{n=1}^{\infty}$ by
\begin{itemize}
\item[i.] For $n=1,2,\dots,\min(s,d)$, let $a_n=n$.
\item[ii.] For $\min(s,d)< n\leq \max(s,d)$, let
\be a_n \ = \ \left\{
\begin{array}{l l}
a_{n-1}+a_{n-s} & \quad \text{{\rm if}\ $s<d$}\\
a_{n-1}+a_{n-d}+1 & \quad \text{{\rm if}\ $d\leq s$.}
\end{array} \right.\ee
\item[iii.] For $n> \max(s,d)$, let $a_n=a_{n-1}+a_{n-s}+a_{n-d}$.
\end{itemize}
Then the sequence $\{a_n\}$ has an $(s,d)$ far-difference representation.
\end{thm}
In particular, as the Fibonaccis give rise to a $(4,3)$ far-difference representation, we should have $F_n = F_{n-1} + F_{n-4} + F_{n-3}$. We see this is true by repeatedly applying the standard Fibonacci recurrence: \bea F_n \ = \ F_{n-1} + F_{n-2} \ = \ F_{n-1} + \left(F_{n-3} + F_{n-4}\right) \ = \ F_{n-1} + F_{n-4} + F_{n-3}. \eea
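Theorem~\ref{farDiffRec} is constructive, so the sequence for any pair $(s,d)$ can be generated directly. A minimal sketch (ours, assuming Python); as a check, $(s,d)=(4,3)$ reproduces the Fibonacci numbers, in agreement with the identity just verified.
\begin{verbatim}
def sd_sequence(s, d, N):
    # First N terms of the sequence in Theorem farDiffRec, with a_n = 0
    # understood for n <= 0.
    a = {n: 0 for n in range(-max(s, d), 1)}
    for n in range(1, N + 1):
        if n <= min(s, d):
            a[n] = n
        elif n <= max(s, d):
            a[n] = a[n-1] + a[n-s] if s < d else a[n-1] + a[n-d] + 1
        else:
            a[n] = a[n-1] + a[n-s] + a[n-d]
    return [a[n] for n in range(1, N + 1)]

print(sd_sequence(4, 3, 10))   # [1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
\end{verbatim}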
To prove our results we generalize the techniques from \cite{Al, BBGILMT, MW1} to our families. In \S\ref{sec:fardiffrepskip} we prove that for any $k$-Skipponacci recurrence relation, a unique far-difference representation exists for all positive integers. In \S\ref{sec:gaussianity} we prove that the number of summands in any far-difference representation approaches a Gaussian, and then we study the distribution of gaps between summands in \S\ref{sec:distrgaps}. We end in \S\ref{sec:genfardiffseq} by exploring generalized $(s,d)$ far-difference representations.
\section{Far-difference representation of $k$-Skipponaccis}\label{sec:fardiffrepskip}
Recall the $k$-Skipponaccis satisfy the recurrence $S^{(k)}_{n+1} = S^{(k)}_n + S^{(k)}_{n-k}$ with $S^{(k)}_i = i$ for $1 \le i \le k+1$. Some common $k$-Skipponacci sequences are the 0-Skipponaccis (the binary sequence) and the 1-Skipponaccis (the Fibonaccis). We prove that every integer has a unique far-difference representation arising from the $k$-Skipponaccis. The proof is similar to Alpert's proof for the Fibonacci numbers.
We break the analysis into integers in intervals $(R_k(n-1), R_k(n)]$, with $R_k(n)$ as in \eqref{Rn}. We need the following fact.
\begin{lem} \label{Lem:R+R=S-1} Let $\{S^{(k)}_n\}$ be the $k$-Skipponacci sequence. Then
\begin{equation} \label{lemma1}
S^{(k)}_{n} - R_k(n-k-2) - R_k(n-1)=1.
\end{equation}
\end{lem}
The proof follows by a simple induction argument, which for completeness we give in Appendix \ref{sec:proofsfromsecfardiffreplemmas}.
\begin{proof}[Proof of Theorem \ref{Thm:Far-Diff}] It suffices to consider the decomposition of positive integers, as negative integers follow similarly. Note the number 0 is represented by the decomposition with no summands.
We claim that the positive integers are the disjoint union over all closed intervals of the form $[S^{(k)}_n - R_k(n-k-2), R_k(n)]$. To prove this, it suffices to show that $S^{(k)}_{n} - R_k(n-k-2) = R_k(n-1) + 1$ which follows immediately from Lemma \ref{Lem:R+R=S-1}.
Assume a positive integer $x$ has a $k$-Skipponacci far-difference representation in which $S^{(k)}_n$ is the leading term (i.e., the term of largest index). Because of the gap conditions, the largest number that can be decomposed with leading term $S^{(k)}_n$ is $S^{(k)}_n+S^{(k)}_{n-2k-2}+S^{(k)}_{n-4k-4}+\cdots=R_k(n)$ and the smallest is $S^{(k)}_n-S^{(k)}_{n-k-2}-S^{(k)}_{n-3k-4}-\cdots=S^{(k)}_n-R_k(n-k-2)$; hence $S^{(k)}_n-R_k(n-k-2)\leq x\leq R_k(n)$. Since we proved that $\{[S^{(k)}_n - R_k(n-k-2), R_k(n)]\}_{n=1}^\infty$ is a disjoint cover of the positive integers, for any $x\in \mathbb{Z}^+$ there is a unique $n$ such that $S^{(k)}_n - R_k(n-k-2) \le x \le R_k(n)$. Further, if $x$ has a $k$-Skipponacci far-difference representation, then $S^{(k)}_n$ must be its leading term.
Therefore if a decomposition of such an $x$ exists it must begin with $S^{(k)}_n$. We are left with proving a decomposition exists and that it is unique. We proceed by induction.
For the base case, let $n=0$. Notice that the only value for $x$ on the interval $0 \le x \le R_k(0)$ is $x=0$, and the $k$-Skipponacci far-difference representation of $x$ is empty for any $k$. Assume that every integer $x$ satisfying $0 \le x \le R_k(n-1)$ has a unique far-difference representation. We now consider $x$ such that $R_k(n-1) < x \le R_k(n)$. From our partition of the integers, $x$ satisfies $S^{(k)}_n - R_k(n-k-2) \le x \le R_k(n)$. There are two cases.
\begin{itemize}
\item[(1)] $S^{(k)}_n - R_k(n-k-2) \le x \le S^{(k)}_n$. \\
Note that for this case, it is equivalent to say $0 \le S^{(k)}_n - x \le R_k(n-k-2)$. It then follows from the inductive step that $S^{(k)}_n - x$ has a unique $k$-Skipponacci far-difference representation with $S^{(k)}_{n-k-2}$ as the upper bound for the main term.
\item[(2)] $S^{(k)}_n \le x \le R_k(n)$. \\
For this case, we can once again subtract $S^{(k)}_n$ from both sides of the inequality to get $0 \le x-S^{(k)}_n \le R_k(n-2k-2)$. It then follows from the inductive step that $x-S^{(k)}_n$ has a unique far-difference representation with main term at most $S^{(k)}_{n-2k-2}$.
\end{itemize}
In either case, we can generate a unique $k$-Skipponacci far-difference representation for $x$ by adding $S^{(k)}_n$ to the representation for $x - S^{(k)}_n$ (which, from the definition of $R_k(m)$, in both cases has the index of its largest summand sufficiently far away from $n$ to qualify as a far-difference representation). \end{proof}
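The induction above is effectively a greedy algorithm: locate the unique $n$ with $R_k(n-1) < x \le R_k(n)$, record $+S^{(k)}_n$, and recurse on $x - S^{(k)}_n$ (negating everything for negative inputs). A minimal sketch of this procedure (ours, assuming Python), which reproduces the decomposition of $2014$ from the introduction:
\begin{verbatim}
def far_difference(x, k, N=60):
    # Signed k-Skipponacci decomposition of x as (index, sign) pairs,
    # following the inductive proof; N bounds the largest index used.
    S = [0] + list(range(1, k + 2))          # S[i] = S^{(k)}_i
    while len(S) <= N:
        S.append(S[-1] + S[-1 - k])

    def R(n):                                # R_k(n) = S_n + S_{n-2k-2} + ...
        return sum(S[i] for i in range(n, 0, -(2 * k + 2))) if n > 0 else 0

    def decompose(y):
        if y == 0:
            return []
        if y < 0:
            return [(i, -s) for (i, s) in decompose(-y)]
        n = next(m for m in range(1, N + 1) if R(m - 1) < y <= R(m))
        return [(n, +1)] + decompose(y - S[n])

    return decompose(x)

# 2014 = F_17 - F_14 + F_9 - F_6 - F_2 is the k = 1 case:
print(far_difference(2014, 1))   # [(17, 1), (14, -1), (9, 1), (6, -1), (2, -1)]
\end{verbatim}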
\section{Gaussian Behavior}\label{sec:gaussianity}
In this section we follow the method of Section 6 of \cite{MW1} to prove Gaussianity for the number of summands. We first find the generating function for the problem, and then analyze that function to complete the proof.
\subsection{Derivation of the Generating Function}\label{sec:derivgenfns}
Let $p_{n,m,\ell}$ be the number of integers in $(R_k(n-1)$, $R_k(n)]$ with exactly $m$ positive summands and exactly $\ell$ negative summands in their far-difference decomposition via the $k$-Skipponaccis (as $k$ is fixed, for notational convenience we suppress $k$ in the definition of $p_{n,m,\ell}$). When $n \le 0$ we let $p_{n,m,\ell}$ be 0. We first derive a recurrence relation for $p_{n,m,\ell}$ by a combinatorial approach, from which the generating function immediately follows.
\begin{lem} Notation as above, for $n > 1$ we have
\begin{equation} \label{prec1}
p_{n,m,\ell}\ = \ p_{n-1,m,\ell}+ p_{n-(2k+2),m-1,\ell} + p_{n-(k+2),\ell,m-1}.
\end{equation}
\end{lem}
\begin{proof} First note that $p_{n,m,\ell} = 0$ if $m \le 0$ or $\ell < 0 $. In \S\ref{sec:fardiffrepskip} we partitioned the integers into the intervals $[R_k(n-1)+1,R_k(n)]$, and noted that if an integer $x$ in this interval has a far-difference representation, then it must have leading term $S^{(k)}_n$, and thus $x - S^{(k)}_n \in [R_k(n-1)+1-S^{(k)}_n,R_k(n)-S^{(k)}_n]$. From Lemma \ref{Lem:R+R=S-1} we have
\bea\label{S-R_n-1-R_n-k-2=1}
S^{(k)}_n - R_k(n-1) - R_k(n-k-2)
\ = \ 1,
\eea which implies $R_k(n-1) + 1 - S^{(k)}_n = -R_k(n-k-2)$. Thus $p_{n,m,\ell}$ is the number of far-difference representations for integers in $[-R_k(n-k-2), R_k(n-2k-2)]$ with $m-1$ positive summands and $\ell$ negative summands (as we subtracted away the main term $S^{(k)}_n$).
Let $n > 2k+2$. There are two possibilities.\\
\noindent \texttt{Case 1: $(m-1,\ell) = (0,0)$.}
\noindent Since $S^{(k)}_n - R_k(n-1) - R_k(n-k-2) = 1$ by \eqref{S-R_n-1-R_n-k-2=1}, we know that $S^{(k)}_{n-1} \le R_k(n-1) < S^{(k)}_n$ for all $n > 1$. This means there must be exactly one $k$-Skipponacci number on the interval $[R_k(n-1)+1,R_k(n)]$ for all $n > 1$. It follows that $p_{n,1,0} = p_{n-1,1,0} = 1$, and the recurrence in \eqref{prec1} follows since $p_{n-k-2,0,0}$ and $p_{n-2k-2,0,0}$ are both 0 for all $n > 2k+2$. \\
\noindent \texttt{Case 2: $(m-1,\ell) \neq (0,0)$.}
\noindent Let $N(I,m,\ell)$ be the number of far-difference representations of integers in the interval $I$ with $m$ positive summands and $\ell$ negative summands. Thus
\begin{align} \label{pnml_sum1}
p_{n,m,\ell}
\;\ = \ &\; N\left[ (0,R_k(n-2k-2)],m-1,\ell \right] + N\left[ (-R_k(n-k-2),0],m-1,\ell \right] \nonumber \\
\;\ = \ &\; N\left[ (0,R_k(n-2k-2)],m-1,\ell \right] + N\left[ (0,R_k(n-k-2)],\ell,m-1 \right] \nonumber \\
\;\ = \ &\; \sum_{i=1}^{n-2k-2} p_{i,m-1,\ell} + \sum_{i=1}^{n-k-2} p_{i,\ell,m-1}.
\end{align}
Since $n > 1$, we can replace $n$ with $n-1$ in \eqref{pnml_sum1} to get
\begin{equation} \label{pnml_sum2}
p_{n-1,m,\ell}
\;\ = \ \; \sum_{i=1}^{n-2k-3} p_{i,m-1,\ell} + \sum_{i=1}^{n-k-3} p_{i,\ell,m-1}.
\end{equation}
Subtracting \eqref{pnml_sum2} from \eqref{pnml_sum1} gives us the desired expression for $p_{n,m,\ell}$. \end{proof}
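The recurrence \eqref{prec1} can also be verified by brute force: since representations are unique, counting integers in $(R_k(n-1), R_k(n)]$ is the same as enumerating legal sign patterns with leading term $+S^{(k)}_n$. A minimal sketch of such a check (ours, assuming Python):
\begin{verbatim}
from functools import lru_cache

def p_table(k, N):
    # p[(n, m, l)]: number of legal patterns with leading term +S_n,
    # m positive and l negative summands.
    @lru_cache(maxsize=None)
    def tails(i, sign):
        # Patterns whose largest index is i with the given sign; the gap
        # to the next index is >= 2k+2 (same sign) or >= k+2 (opposite).
        out = {(1, 0) if sign > 0 else (0, 1): 1}   # stop at index i
        for j in range(1, i):
            for s in (+1, -1):
                if i - j >= (2 * k + 2 if s == sign else k + 2):
                    for (m, l), c in tails(j, s).items():
                        key = (m + (sign > 0), l + (sign < 0))
                        out[key] = out.get(key, 0) + c
        return out
    return {(n, m, l): c for n in range(1, N + 1)
            for (m, l), c in tails(n, +1).items()}

k, N = 2, 14
p = p_table(k, N)
get = lambda n, m, l: p.get((n, m, l), 0)
for n in range(2, N + 1):
    for m in range(1, 6):
        for l in range(6):
            assert get(n, m, l) == (get(n - 1, m, l)
                                    + get(n - 2 * k - 2, m - 1, l)
                                    + get(n - k - 2, l, m - 1)), (n, m, l)
print("recurrence verified for k = 2, n <= 14")
\end{verbatim}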
The generating function $G_k(x,y,z)$ for the far-difference representations by $k$-Skipponacci numbers is defined by \be G_k(x,y,z)\ =\ \sum p_{n,m,\ell}x^my^{\ell}z^n. \ee
\begin{thm} \label{Thm:G_k(x,y,z)} Notation as above, we have
\begin{equation} \label{genfn}
G_k(x,y,z)
\;\ = \ \; \frac{xz-xz^2+xyz^{k+3}-xyz^{2k+3}}{1-2z+z^2-(x+y)z^{2k+2}+(x+y)z^{2k+3}-xyz^{2k+4}+xyz^{4k+4}}.
\end{equation}
\end{thm}
\begin{proof} Note that the equality in \eqref{prec1} holds for all triples $(n,m,\ell)$ except for the case where $n=1$, $m=1$, and $\ell=0$ under the assumption that $p_{n,m,\ell}=0$ whenever $n\leq 0$. To prove the claimed formula for the generating function in \eqref{genfn}, however, we require a recurrence relation in which each term is of the form $p_{n-n_0,m-m_0,\ell-\ell_0}$. This can be achieved with some simple substitutions. Replacing $(n,m,\ell)$ in \eqref{prec1} with $(n-k-2,\ell,m-1)$ gives
\begin{equation} \label{prec2}
p_{n-k-2,\ell,m-1}\ = \ p_{n-(k+3),\ell,m-1}+ p_{n-(3k+4),\ell-1,m-1} + p_{n-(2k+4),m-1,\ell-1},
\end{equation} which holds for all triples except $(k+3,1,1)$. Rearranging the terms of \eqref{prec1}, we get
\begin{equation} \label{prec3}
p_{n-(k+2),\ell,m-1} \ = \ p_{n,m,\ell} - p_{n-1,m,\ell} - p_{n-(2k+2),m-1,\ell}.
\end{equation}
We replace $(n,m,\ell)$ in \eqref{prec3} with $(n-1,m,\ell)$ and $(n-2k-2,m,\ell-1)$, which yields
\begin{equation} \label{prec4}
p_{n-(k+3),\ell,m-1} \ = \ p_{n-1,m,\ell} - p_{n-2,m,\ell} - p_{n-(2k+3),m-1,\ell},
\end{equation} which only fails for the triple $(2,1,0)$, and
\begin{equation} \label{prec5}
p_{n-(3k+4),\ell-1,m-1} \ = \ p_{n-(2k+2),m,\ell-1} - p_{n-(2k+3),m,\ell-1} - p_{n-(4k+4),m-1,\ell-1},
\end{equation} which only fails for the triple $(2k+3,1,1)$. We substitute equations \eqref{prec2}, \eqref{prec4} and \eqref{prec5} into \eqref{prec1} and obtain the following expression for $p_{n,m,\ell}$:
\begin{align} \label{pnmlrec}
p_{n,m,\ell}
\;\ = \ &\; 2p_{n-1,m,\ell} - p_{n-2,m,\ell} + p_{n-(2k+2),m-1,\ell} + p_{n-(2k+2),m,\ell-1} \nonumber \\
\;&\; - p_{n-(2k+3),m-1,\ell} - p_{n-(2k+3),m,\ell-1} + p_{n-(2k+4),m-1,\ell-1} - p_{n-(4k+4),m-1,\ell-1}.
\end{align}
Using this recurrence relation, we prove that the generating function in \eqref{genfn} is correct. Consider the following characteristic polynomial for the recurrence in \eqref{pnmlrec}:
\begin{equation} \label{Pxyz}
P(x,y,z)
\ = \ 1 - 2z + z^2 -(x+y)z^{2k+2} + (x+y)z^{2k+3} - xyz^{2k+4} + xyz^{4k+4}.
\end{equation}
We take the product of this polynomial with the generating function to get
\begin{align} \label{GenRec}
P(x,y,z)G_k(x,y,z)
\;\ = \ &\; \left( 1 - 2z + z^2 -(x+y)z^{2k+2} + (x+y)z^{2k+3} - xyz^{2k+4}\right. \nonumber \\
\;&\; \left. + xyz^{4k+4}\right) \cdot \sum_{n \ge 1,\, m,\, \ell} p_{n,m,\ell}x^my^{\ell}z^n \nonumber \\
\;\ = \ &\; \sum_{n \ge 1,\, m,\, \ell} \Big[ p_{n,m,\ell} - 2p_{n-1,m,\ell} + p_{n-2,m,\ell} - p_{n-(2k+2),m-1,\ell} \nonumber \\
\;&\; - p_{n-(2k+2),m,\ell-1} + p_{n-(2k+3),m-1,\ell} + p_{n-(2k+3),m,\ell-1} \nonumber \\
\;&\; - p_{n-(2k+4),m-1,\ell-1} + p_{n-(4k+4),m-1,\ell-1} \Big]\, x^my^{\ell}z^n.
\end{align}
Notice that the bracketed combination is precisely the difference of the two sides of \eqref{pnmlrec}, and hence vanishes whenever that equality holds. We have shown that the only triples $(n,m,\ell)$ that fail to satisfy the relevant equalities are $(1,1,0)$, $(2,1,0)$, $(k+3,1,1)$ and $(2k+3,1,1)$, coming from \eqref{prec1}, \eqref{prec4}, \eqref{prec2} and \eqref{prec5} respectively; since \eqref{pnmlrec} is a combination of these equations, only these four triples leave a non-zero coefficient of $x^my^{\ell}z^n$ in the summation in \eqref{GenRec}. We collect these terms and are left with the following:
\begin{equation}
P(x,y,z)G_k(x,y,z) \ = \ xz - xz^2 + xyz^{k+3} - xyz^{2k+3}.
\end{equation}
Rearranging these terms and substituting in our value for $P(x,y,z)$ gives us the desired equation for the generating function.
\end{proof}
Going forward, we often need the modified version of our generating function in which we factor out the term $(1-z)$ from both the numerator and the denominator:
\begin{align} \label{Genfn2}
G_k(x,y,z)
\;\ = \ &\; \frac{ xz + \frac{1-z^k}{1-z}xyz^{k+3} }{1-z-(x+y)z^{2k+2} + \frac{1-z^{2k}}{1-z}\left(-xyz^{2k+4}\right) } \nonumber \\
\;\ = \ &\; \frac{xz + xy\sum_{j=k+3}^{2k+2}z^j}{1-z-(x + y)z^{2k+2}-xy\sum_{j=2k+4}^{4k+3}z^j}.
\end{align}
For some calculations, it is more convenient to use this form of the generating function because the terms of the denominator are of the same sign (excluding the constant term).
\subsection{Proof of Theorem \ref{thm:Gaussianity[MW]}}\label{sec:subsecgaussianity}
Now that we have the generating function, we turn to proving Gaussianity. As the calculation is long and technical, we quickly summarize the main idea. We find, for $\kappa = 4k+3$, that we can write the relevant generating function as a sum of $\kappa$ terms. Each term is a product, and there is no $n$-dependence in the product (the $n$ dependence surfaces by taking one of the terms in the product to the $n$\textsuperscript{th} power). We then mimic the proof of the Central Limit Theorem. Specifically, we show only the first of the $\kappa$ terms contributes in the limit. We then Taylor expand and use logarithms to understand its behavior. The reason everything works so smoothly is that we almost have a fixed term raised to the $n$\textsuperscript{th} power; if we had that, the Central Limit Theorem would follow immediately. All that remains is to do some book-keeping to see that the mean is of size $n$ and the standard deviation of size $\sqrt{n}$.\\
To prove Theorem \ref{thm:Gaussianity[MW]}, we first prove that for each non-negative $(a,b)\neq (0,0)$, $X_n=a\mathcal{K}_n+b\mathcal{L}_n$ converges to a normal distribution as $n$ approaches infinity.
Let $x=w^a$ and $y=w^b$, then the coefficient of $z^n$ in \eqref{genfn} is given by $\sum_{m,\ell} p_{n,m,\ell}x^my^{\ell}=\sum_{m,\ell} p_{n,m,\ell} w^{am+b\ell}$. Define
\begin{equation}
g_n(w) \ := \ \sum_{m>0,\ell\ge 0} p_{n,m,\ell}w^{am + b\ell}.
\end{equation}
Then $g_n(w)$ is the generating function of $X_n$ because for each $i\in\{1,\dots,n\}$,
\begin{equation}
P(X_n=i)\ = \ \frac{\sum_{am+b\ell =i}p_{n,m,\ell}}{\sum_{m,\ell} p_{n,m,\ell}}.
\end{equation}
We want to prove $g_n(w)$ satisfies all the conditions stated in Theorem \ref{thm_generalGaussian}. The following proposition, which is proved in Appendix \ref{sec:propmainres}, is useful for that purpose.
\begin{prop} There exists $\epsilon \in (0,1)$ such that for any $w \in I_{\epsilon} = (1-\epsilon,1+\epsilon)$:\label{prop:mainres}
\begin{itemize}
\item[(a)] $A_w(z)$ has no multiple roots, where $A_w(z)$ is the denominator of \eqref{genfn}.
\item[(b)] There exists a single positive real root $e_1(w)$ such that $e_1(w) < 1$ and there exists some positive $\lambda<1$ such that $|e_1(w)|/|e_i(w)|<\lambda$ for all $i \ge 2$.
\item[(c)] Each root $e_i(w)$ is continuous, infinitely differentiable, and
\begin{equation} \label{eprime}
e_1'(w)\ = \ -\frac{(aw^{a-1}+bw^{b-1})e_1(w)^{2k+2}+(a+b)w^{a+b-1}\sum_{j=2k+4}^{4k+3}e_1(w)^j}{1+(w^a+w^b)(2k+2)e_1(w)^{2k+1}+w^{a+b}
\sum_{j=2k+4}^{4k+3}je_1(w)^{j-1}}.
\end{equation}
\end{itemize}
\end{prop}
In the next step, we use the partial fraction decomposition of $G_k(x,y,z)$ (from Theorem \ref{Thm:G_k(x,y,z)}) to find a formula for $g_n(w)$. Let $A_w(z)$ be the denominator of $G_k$, and let $e_1(w),\dots,e_{4k+3}(w)$ denote its roots other than $z=1$ (the numerator of \eqref{genfn} also vanishes at $z=1$, so that root contributes nothing to $g_n(w)$). Making the substitution $(x,y) = (w^a,w^b)$, we have
\begin{align} \label{pfA_w(z)}
\frac{1}{A_w(z)}
\;\ = \ &\; \frac{1}{w^{a+b}} \sum_{i=1}^{4k+3} \frac{1}{(z-e_i(w))\prod_{j \neq i}(e_j(w) - e_i(w))} \nonumber \\
\;\ = \ &\; \frac{1}{w^{a+b}}\sum_{i=1}^{4k+3} \frac{1}{(1-\frac{z}{e_i(w)})} \cdot \frac{1}{e_i(w)\prod_{j \neq i}(e_j(w) - e_i(w))}.
\end{align}
Using the fact that $\frac{1}{1-\frac{z}{e_i(w)}}$ represents a geometric series, we combine the numerator of our generating function with our expression for the denominator in \eqref{pfA_w(z)} to get
\begin{align}
g_n(w)
\;\ = \ &\; \sum_{i=1}^{4k+3} \frac{1}{w^b e_i^n(w)\prod_{j \neq i}(e_j(w) - e_i(w))} -\sum_{i=1}^{4k+3} \frac{1}{w^b e_i^{n-1}(w)\prod_{j \neq i}(e_j(w) - e_i(w))} \nonumber \\
\;&\; + \sum_{i=1}^{4k+3} \frac{1}{e_i^{n-k-2}(w)\prod_{j \neq i}(e_j(w) - e_i(w))} - \sum_{i=1}^{4k+3} \frac{1}{e_i^{n-2k-2}(w)\prod_{j \neq i}(e_j(w) - e_i(w))} \nonumber \\
\;\ = \ &\; \sum_{i=1}^{4k+3} \frac{w^{-b}(1 - e_i(w)) + e_i^{k+2}(w) - e_i^{2k+2}(w)}{e_i^n(w)\prod_{j \neq i}(e_j(w) - e_i(w))}.
\end{align}
Let $q_i(w)$ denote all terms of $g_n(w)$ that do not depend on $n$:
\begin{equation} \label{q(w)}
q_i(w) \ := \ \frac{w^{-b}(1 - e_i(w)) + e_i^{k+2}(w) - e_i^{2k+2}(w)}{\prod_{j \neq i}(e_j(w) - e_i(w))}.
\end{equation}
Setting $\alpha_i := 1/e_i$, we can write $g_n(w) = \sum_{i=1}^{4k+3} q_i(w)\alpha_i^n(w)$. We want to apply Theorem \ref{thm_generalGaussian} to $X_n$. All the notation is the same, with $\kappa:=4k+3$.
Indeed, by part (c) of Proposition \ref{prop:mainres}, the $e_i(w)$ are infinitely differentiable for $i=1,\dots,4k+3$. Since $0$ is not a root of $A_w(z)$, for sufficiently small $\epsilon$ we have $e_i(w)\neq 0$ for all $w\in I_\epsilon$. Therefore $\alpha_i$ and $q_i$, as rational functions of $e_1,\dots,e_{4k+3}$, are also infinitely differentiable; in particular, they are three times differentiable, and thus satisfy condition $(i)$ in Theorem \ref{thm_generalGaussian}. By part (b) of Proposition \ref{prop:mainres}, $|e_1(w)|<1$ and $|e_1(w)|/|e_i(w)|<\lambda<1$ for $i\geq 2$. This implies $|\alpha_1(w)|>1$ and $|\alpha_i(w)|/|\alpha_1(w)|<\lambda<1$ for $i\geq 2$, so $g_n$ satisfies condition $(ii)$ in Theorem \ref{thm_generalGaussian}. The following lemma, whose proof is given in Appendix \ref{sec:proof_lem_variance_grow}, verifies the last condition.
\begin{lem}\label{lem_variance_grow} Given conditions as above:
\begin{equation}\label{nonzero_mean}
\frac{\alpha_1'(1)}{\alpha_1(1)}\ = \ \frac{-e'_1(1)}{e_1(1)}\ \neq \ 0.
\end{equation}
\begin{equation}\label{nonzero_variance}
\frac{d}{dw}\left[\frac{w\alpha_1'(w)}{\alpha_1(w)}\right] \Big|_{w=1}\ = \ -
\frac{d}{dw}\left[\frac{we_1'(w)}{e_1(w)}\right] \Big|_{w=1}\ \neq \ 0.
\end{equation}
\end{lem}
We can now apply Theorem \ref{thm_generalGaussian} to conclude that $X_n$ converges to a Gaussian as $n$ approaches infinity. Moreover, we have formulas for the mean and variance of $X_n=a\mathcal{K}_n+b\mathcal{L}_n$ for each $(a,b)$ non-negative and not both zero. We have
\begin{equation} \label{E[K+L]}
\mathbb{E}[a\mathcal{K}_n+b\mathcal{L}_n] \ = \ A_{a,b}n + B_{a,b} + o(1),\end{equation}
where $A_{a,b}=\alpha'_1(1)/\alpha_1(1)$ and $B_{a,b}=q_1'(1)/q_1(1)$, which depend only on our choice of $a$ and $b$. Further,
\begin{equation} \label{Var[K+L]}
{\rm Var}(a\mathcal{K}_n + b\mathcal{L}_n) \ = \ C_{a,b}n + D_{a,b} + o(1),
\end{equation} where $C_{a,b}\ = \ \left(\frac{w\alpha_1'(w)}{\alpha_1(w)}\right)'\Big|_{w=1}$ and
$D_{a,b}\ = \ \left(\frac{wq_1'(w)}{q_1(w)}\right)'\Big|_{w=1}$,
which depend only on $a$ and $b$. By Lemma \ref{lem_variance_grow}, $A_{a,b}$ and $C_{a,b}$ are non-zero, thus the mean and variance of $X_n$ always grow linearly with $n$.
As proved above, $X_n=a\mathcal{K}_n+b\mathcal{L}_n$ converges to a Gaussian distribution as $n\to\infty$. Taking $(a,b)=(1,0)$ and $(0,1)$, we see that $\mathcal{K}_n$ and $\mathcal{L}_n$ individually converge to a Gaussian. By \eqref{E[K+L]}, their means both grow linearly with $n$:
\begin{equation}
\mathbb{E}[\mathcal{K}_n]=A_{1,0}n+B_{1,0}+o(1)
\end{equation}
\begin{equation}
\mathbb{E}[\mathcal{L}_n]=A_{0,1}n+B_{0,1}+o(1)
\end{equation}
Moreover, $A_{a,b}=A_{b,a}$ because $A_{a,b}=\frac{\alpha_1'(1)}{\alpha_1(1)}=\frac{-e_1'(1)}{e_1(1)}$ where $e_1(1)$ is a constant and $e'_1(1)$ is symmetric between $a$ and $b$ as shown in \eqref{eprime}. In particular $A_{1,0}=A_{0,1}$, hence $\mathbb{E}[\mathcal{K}_n]-\mathbb{E}[\mathcal{L}_n]$ converges to a constant as $n\to\infty$. This implies the average number of positive and negative summands differ by a constant.
Equation \eqref{Var[K+L]} gives us a way to calculate variance of any joint density of $\mathcal{K}_n$ and $\mathcal{L}_n$. We can furthermore calculate the covariance and correlation of any two joint densities as a function of $e_1$ and $q_1$.
In particular, we prove that $\mathcal{K}_n+\mathcal{L}_n$ and $\mathcal{K}_n-\mathcal{L}_n$ have correlation decaying to zero with $n$. Indeed, from \eqref{Var[K+L]}:
\begin{equation}
{\rm Var}[\mathcal{K}_n]\ = \ C_{1,0}n+D_{1,0}+o(1).
\end{equation}
\begin{equation}
{\rm Var}[\mathcal{L}_n]\ = \ C_{0,1}n+D_{0,1}+o(1).
\end{equation}
\noindent Note that $C_{0,1}=C_{1,0}$ because again we have \be C_{a,b}\ = \ \left(\frac{w\alpha_1'(w)}{\alpha_1(w)}\right)'\Bigg|_{w=1}\ = \ - \left(\frac{we_1'(w)}{e_1(w)}\right)'\Big|_{w=1}\ee
where $e_1(w)$ does not depend on $a,b$ and $e'_1(w)$ is symmetric between $a,b$. Therefore,
\begin{equation}
{\rm Cov}[\mathcal{K}_n+\mathcal{L}_n,\mathcal{K}_n-\mathcal{L}_n] \ = \ {\rm Var}[\mathcal{K}_n]-{\rm Var}[\mathcal{L}_n]\ = \ (C_{1,0}-C_{0,1})n+O(1) \ = \ O(1).
\end{equation}
Therefore
\begin{equation}
{\rm Corr}[\mathcal{K}_n+\mathcal{L}_n,\mathcal{K}_n-\mathcal{L}_n]\ = \ \frac{{\rm Cov}[\mathcal{K}_n+\mathcal{L}_n,\mathcal{K}_n-\mathcal{L}_n]}{\sqrt{{\rm Var}[\mathcal{K}_n+\mathcal{L}_n]\,{\rm Var}[\mathcal{K}_n-\mathcal{L}_n]}}\ = \ \frac{O(1)}{\theta(n)}\ =\ o(1)
\end{equation} (where $\theta(n)$ represents a function which is on the order of $n$). This implies $\mathcal{K}_n+\mathcal{L}_n$ and $\mathcal{K}_n-\mathcal{L}_n$ are asymptotically uncorrelated as $n\to\infty$. This completes the proof of Theorem \ref{thm:Gaussianity[MW]}. \hfill $\Box$
\subsection{Proof of Theorem \ref{thm_generalGaussian}}
We now collect the pieces. The argument here is different from the one used in \cite{MW1}, and leads to a conceptually simpler proof (though we do have to wade through a good amount of algebra). The rest of this section mimics the standard proof of the Central Limit Theorem, while at the same time isolating the values of the mean and variance.\\
To prove part $(a)$, we use the generating function $g_n(x)$ to calculate $\mu_n$ and $\sigma^2_n$ as follows:
\begin{equation}
\mu_n\ = \ \mathbb{E}[X_n]\ = \ \frac{\sum_{i=1}^n \rho_{i;n}\cdot i}{\sum_{i=1}^n \rho_{i;n}}\ = \ \frac{g_n'(1)}{g_n(1)}
\end{equation}
\begin{equation}
\sigma_n^2\ = \ \mathbb{E}[X_n^2]-\mu_n^2\ = \ \frac{\sum_{i=1}^n \rho_{i;n}\cdot i^2}{\sum_{i=1}^n \rho_{i;n}}-\mu_n^2 \ = \ \frac{[xg'_n(x)]'\big|_{x=1}}{g_n(1)}-\left(\frac{g_n'(1)}{g_n(1)}\right)^2.
\end{equation}
The calculations are then straightforward:
\begin{equation}
g_n'(x)\ = \ \sum_{i=1}^\kappa [q_i(x)\alpha_i^n(x)]'\ = \ \sum_{i=1}^\kappa [q_i'(x)\alpha_i^n(x)+q_i(x)n\alpha_i^{n-1}(x)\alpha'_i(x)]
\end{equation}
\begin{align}\label{variance_formula}
[xg'_n(x)]' & \ = \ \sum_{i=1}^\kappa \left(x[q_i'(x)\alpha_i^n(x)+q_i(x)n\alpha_i^{n-1}(x)\alpha'_i(x)]\right)'\nonumber\\
&\ = \ \sum_{i=1}^\kappa \left( q_i'(x)\alpha_i^n(x)+q_i(x)n\alpha_i^{n-1}(x)\alpha_i'(x)+\right.\nonumber\\
& \left. x\left[ q_i''(x)\alpha_i^n(x)+2q_i'(x)n\alpha_i^{n-1}(x)\alpha_i'(x)+q_in\alpha_i^{n-1}\alpha_i''(x)+q_i(x)n(n-1)\alpha_i^{n-2}(\alpha_i'(x))^2\right]\right).
\end{align}
Since $|\alpha_i(1)/\alpha_1(1)|<\lambda<1$ for each $i\geq 2$, we have
\begin{equation}
\sum_{i=2}^\kappa q_i(1)\alpha_i^n(1)\ = \ \alpha_1^n(1)\sum_{i=2}^\kappa q_i(1)\left(\frac{\alpha_i(1)}{\alpha_1(1)}\right)^n\ = \ o(\lambda^n)\alpha_1^n(1).
\end{equation}
Similarly,
\begin{equation}
\sum_{i=2}^\kappa [q_i(x)\alpha_i^n(x)]'\Big|_{x=1}\ = \ \alpha_1^n(1)\sum_{i=2}^\kappa \left[q'_i(1)+\frac{nq_i(1)\alpha'_i(1)}{\alpha_i(1)}
\right]\left(\frac{\alpha_i(1)}{\alpha_1(1)}\right)^n\ = \ o(\lambda^n)\alpha_1^n(1)
\end{equation} and
\begin{equation}
\sum_{i=2}^\kappa \Big(x[q_i(x)\alpha_i^n(x)]'\Big)'\Big|_{x=1}\ = \ o(\lambda^n)\alpha_1^n(1).
\end{equation}
Hence
\begin{align}
\mu_n\ = \ \frac{g'_n(1)}{g_n(1)}& \ = \ \frac{[q_1'(1)\alpha_1^n(1)+q_1(1)n\alpha_1^{n-1}(1)\alpha'_1(1)]+ o(\lambda^n) \alpha_1^n(1)}{q_1(1)\alpha_1^n(1)+o(\lambda^n) \alpha_1^n(1)}\nonumber\\
&\ = \ \frac{q_1'(1)+q_1(1)n\frac{\alpha'_1(1)}{\alpha_1(1)}+o(\lambda^n)}{q_1(1)+o(\lambda^n)}
\ = \ \frac{q_1'(1)}{q_1(1)}+n\frac{\alpha_1'(1)}{\alpha_1(1)}+o(1).
\end{align}
Similarly,
\begin{align}
\sigma_n^2 & \ = \ \frac{[xg'_n(x)]'\big|_{x=1}}{g_n(1)}-\mu_n^2\nonumber\\ &\ = \ \frac{\big[x(q_1(x)\alpha_1^n(x))'\big]'\Big|_{x=1}+o(\lambda^n)\alpha_1^n(1)}{q_1(1)\alpha_1^n(1)+o(\lambda^n)\alpha_1^n(1)}-\mu_n^2\nonumber\\
&\ = \ \frac{q_1'}{q_1}+\frac{n\alpha_1'}{\alpha_1}+\frac{q''_1}{q_1}+\frac{2q'_1}{q_1}\cdot\frac{n\alpha_1'}{\alpha_1}+\frac{n\alpha_1''}{\alpha_1}+\frac{n(n-1)(\alpha'_1)^2}{\alpha_1^2}-\left(\frac{\alpha'_1}{\alpha_1}n+\frac{q'_1}{q_1}+o(1)\right)^2\nonumber\\
& \ = \ \frac{\alpha_1(\alpha_1'+\alpha_1'')-(\alpha_1')^2}{\alpha_1^2}\cdot n+\frac{q_1(q_1'+q_1'')-(q_1')^2}{q_1^2}+o(1).
\end{align}
Here we apply \eqref{variance_formula} and use $q_1,\alpha_1$ short for $q_1(1),\alpha_1(1)$. The last things we need are
\be \frac{\alpha_1(1)[\alpha_1'(1)+\alpha_1''(1)]-\alpha_1'(1)^2}{\alpha_1(1)^2}
\ = \ \left(\frac{x\alpha_1'(x)}{\alpha_1(x)}\right)'\Bigg|_{x=1}\ee
and
\be\frac{q_1(1)[q_1'(1)+q_1''(1)]-q_1'(1)^2}{q_1(1)^2}\ = \ \left(\frac{xq_1'(x)}{q_1(x)}\right)'\Bigg|_{x=1},\ee
which are simple enough to check directly. This completes the proof of part $(a)$ of Theorem \ref{thm_generalGaussian}.\\ \
To prove part $(b)$ of the theorem, we use the method of moment generating functions, showing that moment generating function of $X_n$ converges to that of a Gaussian distribution as $n\to\infty$. (We could use instead the characteristic functions, but the moment generating functions have good convergence properties here.) The moment generating function of $X_n$ is
\be M_{X_n}(t)=\mathbb{E}[e^{tX_n}]\ = \ \frac{\sum_i \rho_{i;n} e^{ti}}{\sum_i {\rho_{i;n}}}\ = \ \frac{g_n(e^t)}{g_n(1)}\ = \ \frac{\sum_{i=1}^\kappa q_i(e^t)\alpha_i^n(e^t)}{\sum_{i=1}^\kappa q_i(1)\alpha_i^n(1)}.\ee
Since $|\alpha_i(e^t)|<|\alpha_1(e^t)|$ for any $i\geq 2$, the main term of $g_n(e^t)$ is $q_1(e^t)\alpha_1^n(e^t)$. We thus write
\begin{align}
M_{X_n}(t) &\ = \ \frac{\sum_{i=1}^\kappa q_i(e^t)\alpha_i^n(e^t)}{\sum_{i=1}^\kappa q_i(1)\alpha_i^n(1)}
\ = \ \frac{q_1(e^t)\alpha_1^n(e^t)\left[1+\sum_{i=2}^\kappa \frac{q_i(e^t)}{q_1(e^t)}\left(\frac{\alpha_i(e^t)}{\alpha_1(e^t)}\right)^n\right]}{q_1(1)\alpha_1^n(1)\left[1+\sum_{i=2}^\kappa \frac{q_i(1)}{q_1(1)}\left(\frac{\alpha_i(1)}{\alpha_1(1)}\right)^n\right]}\nonumber\\
&\ = \ \frac{q_1(e^t)\alpha_1^n(e^t)[1+ O(\kappa Q\lambda^n)]}{q_1(1)\alpha_1^n(1)[1+ O(\kappa Q\lambda^n)]}
\ = \ \frac{q_1(e^t)}{q_1(1)}\left(\frac{\alpha_1(e^t)}{\alpha_1(1)}\right)^n\left(1+O(\kappa Q\lambda^n)\right),
\end{align}
where $Q=\max_{i\geq 2} \sup_{t\in [-\delta,\delta]} \left|\frac{q_i(e^t)}{q_1(e^t)}\right|$. As $0<\lambda<1$, the term $\kappa Q\lambda^n$ decays rapidly as $n$ grows. Taking the logarithm of both sides yields
\begin{equation}
\log M_{X_n}(t)\ = \ \log \frac{q_1(e^t)}{q_1(1)}+n\log\frac{\alpha_1(e^t)}{\alpha_1(1)}+\log\left(1+O(\kappa Q\lambda^n)\right)\ = \ \log \frac{q_1(e^t)}{q_1(1)}+n\log\frac{\alpha_1(e^t)}{\alpha_1(1)}+o(1).
\end{equation}
Let
$Y_n=\frac{X_n-\mu_n}{\sigma_n}$, then the moment generating function of $Y_n$ is
\begin{equation}
M_{Y_n}(t)\ = \ \mathbb{E}[e^{t(X_n-\mu_n)/\sigma_n}]\ = \ M_{X_n}(t/\sigma_n) e^{-t\mu_n/\sigma_n}.
\end{equation}
Therefore
\begin{equation}\label{log_M_Yn}
\log M_{Y_n}(t)\ = \ \frac{-t\mu_n}{\sigma_n}+ \log \frac{q_1(e^{t/\sigma_n})}{q_1(1)}+n\log\frac{\alpha_1(e^{t/\sigma_n})}{\alpha_1(1)}+o(1).
\end{equation}
Since $\sigma_n=\theta (\sqrt{n})$, $t/\sigma_n\to 0$ as $n\to\infty$. Hence \begin{equation}\label{log_q}
\lim_{n\to\infty} \log \frac{q_1(e^{t/\sigma_n})}{q_1(1)}\ = \ \log 1\ = \ 0.
\end{equation}
Using the Taylor expansion of degree two at 1, we can write $\alpha_1(x)$ as
\begin{equation}
\alpha_1(x)=\alpha_1(1)+\alpha_1'(1)(x-1)+\frac{\alpha_1''(1)}{2} (x-1)^2+O((x-1)^3).
\end{equation}
Substituting $x=e^{t/\sigma_n}=1+\frac{t}{\sigma_n}+\frac{t^2}{2\sigma_n^2}+O(\frac{t^3}{\sigma_n^3})$ and noting that $\sigma_n=\theta(n^{1/2})$, we get
\begin{equation}
\alpha_1(e^{t/\sigma_n})\ = \ \alpha_1(1)+\alpha_1'(1)\left(\frac{t}{\sigma_n}+\frac{t^2}{2\sigma_n^2}+O(n^{-3/2})\right)+\frac{\alpha_1''(1)}{2} \left[\frac{t^2}{\sigma^2_n}+O(n^{-3/2})\right]+O(n^{-3/2}).
\end{equation}
Taking the logarithm and using the Taylor expansion $\log(1+x)=x-x^2/2+O(x^3)$ gives us:
\begin{align}\label{log_alpha}
\log \frac{\alpha_1(e^{t/\sigma_n})}{\alpha_1(1)} & \ = \ \log \left( 1+\frac{\alpha_1'(1)}{\alpha_1(1)}\frac{t}{\sigma_n}+\frac{\alpha_1'(1)+\alpha''_1(1)}{\alpha_1(1)}\frac{t^2}{2\sigma^2_n}+O(n^{-3/2})\right)\nonumber\\
& \ = \ \frac{\alpha_1'(1)}{\alpha_1(1)}\frac{t}{\sigma_n}+\frac{\alpha_1'(1)+\alpha''_1(1)}{\alpha_1(1)}\frac{t^2}{2\sigma^2_n}-\left(\frac{\alpha_1'(1)}{\alpha_1(1)}\right)^2\frac{t^2}{2\sigma_n^2}+O(n^{-3/2}).
\end{align}
Substituting \eqref{log_q} and \eqref{log_alpha} into \eqref{log_M_Yn}:
\begin{align}
\log M_{Y_n}(t) & \ = \ -\frac{t\mu_n}{\sigma_n}+n\left(\frac{\alpha_1'(1)}{\alpha_1(1)}\frac{t}{\sigma_n}+\frac{\alpha_1'(1)+\alpha''_1(1)}{\alpha_1(1)}\frac{t^2}{2\sigma^2_n}-\left(\frac{\alpha_1'(1)}{\alpha_1(1)}\right)^2\frac{t^2}{2\sigma_n^2}+O(n^{-3/2}) \right)+o(1)\nonumber\\
& \ = \ \left(n\frac{\alpha'_1(1)}{\alpha_1(1)} - \mu_n\right)\frac{t}{\sigma_n} + n\frac{\alpha_1(1)[\alpha_1'(1)+\alpha_1''(1)] - \alpha_1'(1)^2}{\alpha_1(1)^2}\frac{t^2}{2\sigma_n^2}+o(1).
\end{align}
Using the same notations $A,B,C,D$ as in Theorem \ref{thm_generalGaussian}:
\begin{align}
\log M_{Y_n}(t) & \ = \ \frac{An-\mu_n}{\sigma_n}\cdot t+\frac{Cn}{\sigma_n^2}\cdot \frac{t^2}{2}+o(1)\nonumber\\
& \ = \ \frac{-B+o(1)}{\sqrt{Cn+D+o(1)}}\cdot t+\frac{Cn}{Cn+D+o(1)}\cdot \frac{t^2}{2}+o(1)\nonumber\\
& \ = \ \frac{t^2}{2}+o(1).
\end{align}
This implies that the moment generating function of $Y_n$ converges to that of the standard normal distribution; hence $Y_n$, and thus the suitably normalized $X_n$, converges in distribution to a Gaussian as $n\to\infty$.
\hfill $\Box$
\section{Distribution of Gaps}\label{sec:distrgaps}
\subsection{Notation and Counting Lemmas}
In this section we prove our results about gaps between summands arising from $k$-Skipponacci far-difference representations. Specifically, we are interested in the probability of finding a gap of size $j$ among all gaps in the decompositions of integers $x \in [R_k(n),R_k(n+1)]$. In this section, we adopt the notation used in \cite{BBGILMT}. If $\epsilon_i \in \{-1, 1\}$ and
\be x \ = \ \epsilon_j S^{(k)}_{i_j} + \epsilon_{j-1} S^{(k)}_{i_{j-1}} + \cdots + \epsilon_1 S^{(k)}_{i_1} \ee
is a legal far-difference representation (which implies that $i_j = n$), then the gaps are
\be i_j - i_{j-1}, \ \ \ \ i_{j-1} - i_{j-2}, \ \ \ \ \dots, \ \ \ \ i_2 - i_1. \ee
Note that we do not consider the `gap' from the beginning up to $i_1$, though if we wished to include it there would be no change in the limit of the gap distributions. Thus in any $k$-Skipponacci far-difference representations, there is one fewer gap than summands. The greatest difficulty in the subject is avoiding double counting of gaps, which motivates the following definition.
\begin{defn}[Analogous to Definition 1.4 in \cite{BBGILMT}] \label{GapNotation} \
\begin{itemize}
\item Let $X_{i,i+j}(n)$ denote the number of integers $x \in [R_k(n),R_k(n+1)]$ that have a gap of length $j$ that starts at $S^{(k)}_i$ and ends at $S^{(k)}_{i+j}$.
\item Let $Y(n)$ be the total number of gaps in the far-difference decomposition for \\ $x \in [R_k(n), R_k(n+1)]$:
\begin{equation} \label{Y(n)}
Y(n) \ := \ \sum_{i=1}^n \sum_{j=0}^n X_{i,i+j}(n).
\end{equation}
Notice that $Y(n)$ is equivalent to the total number of summands in all decompositions for all $x$ in the given interval \emph{minus} the number of integers in that interval. The main term is thus the total number of summands, which is
\be \left[A_{1,1}n + B_{1,1} + o(1)\right] \cdot [R_k(n+1)-R_k(n)] \ = \ A_{1,1} n [R_k(n+1)-R_k(n)], \ee
as we know from \S\ref{sec:subsecgaussianity} that $\mathbb{E}[\mathcal{K}_n+\mathcal{L}_n]=A_{1,1}n + B_{1,1} + o(1)$.
\item Let $P_n(j)$ denote the proportion of gaps from decompositions of $x \in [R_k(n), R_k(n+1)]$ that are of length $j$:
\begin{equation} \label{P_n(j)}
P_n(j) \ := \ \frac{\sum_{i=1}^{n-j} X_{i,i+j}(n)}{Y(n)},
\end{equation}
and let
\begin{equation} \label{P(j)}
P(j) \ := \ \lim_{n\to\infty} P_n(j)
\end{equation} (we will prove this limit exists).
\end{itemize}
\end{defn}
Our proof of Theorem \ref{thm:gapresult} starts by counting the number of gaps of constant size in the $k$-Skipponacci far-difference representations of integers. To accomplish this, it is useful to adopt the following notation.
\begin{defi} Notation for counting integers with particular $k$-Skipponacci summands. \label{defi:N(S)notation}
\begin{itemize}
\item Let $N(\pm S^{(k)}_i,\pm S^{(k)}_j)$ denote the number of integers whose decomposition begins with $\pm S^{(k)}_i$ and ends with $\pm S^{(k)}_j$.
\item Let $N(\pm S^{(k)}_i)$ be the number of integers whose decomposition ends with $\pm S^{(k)}_i$.
\end{itemize}
\end{defi}
The following results, which are easily derived using the counting notation in Definition \ref{defi:N(S)notation}, are also useful.
\begin{lem} \label{lem:counting}
\begin{equation} \label{N(S^{(k)}_n)shift}
N(\pm S^{(k)}_i,\pm S^{(k)}_j) \ = \ N(\pm S^{(k)}_1,\pm S^{(k)}_{j-i+1}).
\end{equation}
\begin{equation} \label{N(S^{(k)}_n)in-exclusion}
N(-S^{(k)}_1, +S^{(k)}_j) + N(+S^{(k)}_1, +S^{(k)}_j) \ = \ N(+S^{(k)}_j) - N(+S^{(k)}_{j-1}).
\end{equation}
\begin{equation} \label{N(S^{(k)}_n)cardinality}
N(+S^{(k)}_i) \ = \ R_k(i) - R_k(i-1).
\end{equation}
\end{lem}
\begin{proof} First, note that \eqref{N(S^{(k)}_n)shift} describes a shift of indices, which doesn't change the number of possible decompositions. For \eqref{N(S^{(k)}_n)in-exclusion}, we can apply inclusion-exclusion to get
\bea
& & N(-S^{(k)}_1, +S^{(k)}_j) + N(+S^{(k)}_1, +S^{(k)}_j)
\nonumber\\ & & \ \ \ \ \ = \ N(+S^{(k)}_j) - \left[N(+S^{(k)}_2, +S^{(k)}_j) + N(+S^{(k)}_3, +S^{(k)}_j) + \cdots\right] \nonumber\\ & & \ \ \ \ \ = \ N(+S^{(k)}_j) - \left[N(+S^{(k)}_1, +S^{(k)}_{j-1}) + N(+S^{(k)}_2, +S^{(k)}_{j-1}) + \cdots\right] \nonumber\\ & & \ \ \ \ \ = \ N(+S^{(k)}_j) - N(+S^{(k)}_{j-1}).
\eea
Finally, for \eqref{N(S^{(k)}_n)cardinality}, recall that the $k$-Skipponaccis partition the integers into intervals of the form $[S^{(k)}_n-R_k(n-k-2), R_k(n)]$, where $S^{(k)}_n$ is the main term of all of the integers in this range. Thus $N(+S^{(k)}_i)$ is the size of this interval, which is just $R_k(i) - R_k(i-1)$, as desired. \end{proof}
\subsection{Proof of Theorem \ref{thm:gapresult}}
We take a combinatorial approach to proving Theorem \ref{thm:gapresult}. We derive expressions for $X_{i,i+c}(n)$ and $X_{i,i+j}(n)$ by counting, and then we use the Generalized Binet's Formula for the $k$-Skipponaccis in Lemma \ref{Binet-Skipponacci} to reach the desired expressions for $P_n(j)$, and then take the limit as $n\to\infty$.
\begin{proof}[Proof of Theorem \ref{thm:gapresult}] We first consider gaps of length $j$ for $k+2 \le j < 2k+2$, then show that the case with gaps of length $j \ge 2k+2$ follows from a similar calculation. It is important to separate these two intervals as there are sign interactions that must be accounted for in the former that do not affect our computation in the latter. From Theorem \ref{Thm:Far-Diff}, we know that there are no gaps of length $k+1$ or smaller. Using Lemma \ref{lem:counting}, we find a nice formula for $X_{i,i+j}(n)$. For convenience of notation, we write $R_m$ for $R_k(m)$ in the following equations:
\begin{align} \label{X(i,i+c)}
X_{i,i+j}(n)
\;\ = \ &\; N(+S^{(k)}_i)\left[N(+S^{(k)}_{n-i-j+1}) - N(+S^{(k)}_{n-i-j})\right] \nonumber \\
\;\ = \ &\; (R_i - R_{i-1})\left[(R_{n-i-j+1} - R_{n-i-j}) - (R_{n-i-j} - R_{n-i-j-1})\right] \nonumber \\
\;\ = \ &\; R_{i-k-1} \cdot (R_{n-i-j-k} - R_{n-i-j-k-1}) \nonumber \\
\;\ = \ &\; R_{i-k-1} \cdot R_{n-i-j-2k-1}.
\end{align}
To continue, we need a tractable expression for $R_k(n)$. Using the results from the Generalized Binet's Formula in Lemma \ref{Binet-Skipponacci}, we can express $R_k(n)$ as
\begin{align} \label{R_nBinet}
R_k(n)
\;\ = \ &\; S^{(k)}_n + S^{(k)}_{n-2k-2} + S^{(k)}_{n-4k-4} + S^{(k)}_{n-6k-6} + \cdots \nonumber \\
\;\ = \ &\; a_1\lambda_1^n + a_1\lambda_1^{n-2k-2} + a_1\lambda_1^{n-4k-4} + a_1\lambda_1^{n-6k-6} + \cdots \nonumber \\
\;\ = \ &\; a_1\lambda_1^n\left[1 + \lambda_1^{-2k-2} + \lambda_1^{-4k-4} + \lambda_1^{-6k-6} + \cdots\right] \nonumber \\
\;\ = \ &\; a_1\lambda_1^n\left[1 + \left(\lambda_1^{-2k-2}\right) + \left(\lambda_1^{-2k-2}\right)^2 + \left(\lambda_1^{-2k-2}\right)^3 + \cdots\right] \nonumber \\
\;\ = \ &\; \frac{a_1\lambda_1^n}{1-\lambda_1^{-2k-2}} + O_k(1)
\end{align} (where the $O_k(1)$ error depends on $k$ and arises from extending the finite geometric series to infinity). We substitute this expression for $R_k(n)$ into the formula from \eqref{X(i,i+c)} for $X_{i,i+j}(n)$, and find
\bea \label{X(i,i+c)Binet}
X_{i,i+j}(n)
& \ = \ & R_{i-k-1} \cdot R_{n-i-j-2k-1} \nonumber\\ & = & \frac{a_1\lambda_1^{i-k-1}\left(1 + O_k(\lambda_1^{-i})\right)}{1-\lambda_1^{-2k-2}} \cdot \frac{a_1\lambda_1^{n-i-j-2k-1}\left(1 + O_k(\lambda_1^{-(n-i-j)})\right)}{1-\lambda_1^{-2k-2}} \nonumber\\
& \ = \ & \frac{a_1^2\lambda_1^{n-j-3k-2}\left(1 + O_k(\lambda_1^{-i} + \lambda_1^{-n+i+j})\right)}{\left(1-\lambda_1^{-2k-2}\right)^2}.
\eea
We then sum $X_{i,i+j}(n)$ over $i$. Note that almost all $i$ satisfy $\log\log n \ll i \ll n - \log \log n$, which means the error terms above are of significantly lower order (we have to be careful, as if $i$ or $n-i$ is of order 1 then the error is of the same size as the main term). Using our expression for $Y(n)$ from Definition \ref{GapNotation} we find
\begin{align} \label{P_n(c)proof}
P_n(j)
\;\ = \ &\; \frac{\sum_{i=1}^{n-j} X_{i,i+j}(n)}{Y(n)} \nonumber \\
\;\ = \ &\; \frac{a_1^2\lambda_1^{n-j-3k-2}(n-j)(1 + o_k(1))}{\left[A_{1,1}n+B_{1,1} + o(1)\right] \cdot \left(1-\lambda_1^{-2k-2}\right)^2 \cdot a_1\lambda_1^n(\lambda_1-1) + O(\lambda_1^n)}.
\end{align}
Taking the limit as $n\to\infty$ yields
\begin{align} \label{P(c)proof}
P(j) \ = \ \lim_{n\to\infty} P_n(j) \ = \ \frac{a_1\lambda_1^{-3k-2}}{A_{1,1} \left(1-\lambda_1^{-2k-2}\right)^2 (\lambda_1-1)}\lambda_1^{-j}.
\end{align}
For the case where $j \ge 2k+2$, the calculation is even easier, as we no longer have to worry about sign interactions across the gap (that is, $S^{(k)}_i$ and $S^{(k)}_{i+j}$ no longer have to be of opposite sign). Thus the calculation of $X_{i,i+j}(n)$ reduces to
\begin{align} \label{X(i,i+j)}
X_{i,i+j}(n)
\;\ = \ &\; N(+S^{(k)}_i)N(+S^{(k)}_{n-i-j})\nonumber \\
\;\ = \ &\; (R_i - R_{i-1})(R_{n-i-j} - R_{n-i-j-1}) \nonumber \\
\;\ = \ &\; R_{i-k-1} \cdot R_{n-i-j-k-1}.
\end{align}
We again use \eqref{R_nBinet} to get
\begin{equation}
X_{i,i+j}(n)
\;\ = \ \; R_{i-k-1} \cdot R_{n-i-j-k-1}
\;\ = \ \; \frac{a_1^2\lambda_1^{n-j-2k-2}\left(1 + O_k(\lambda_1^{-i} + \lambda_1^{-n+i+j})\right)}{\left(1-\lambda_1^{-2k-2}\right)^2}.
\end{equation}
By a similar argument as before, this gives
\begin{equation} \label{P(j)proof}
P(j)
\;\ = \ \; \frac{a_1\lambda_1^{-2k-2}}{A_{1,1} \left(1-\lambda_1^{-2k-2}\right)^2 (\lambda_1-1)}\lambda_1^{-j},
\end{equation} completing the proof.\end{proof}
\section{Generalized Far-Difference Sequences}\label{sec:genfardiffseq}
The $k$-Skipponaccis give rise to unique far-difference representations where same-signed indices are at least $k + 2$ apart and opposite-signed indices are at least $2k+2$ apart. We consider the reverse problem: given a pair $(s,d)$ of positive integers, when does there exist a sequence $\{a_n\}$ such that every integer has a unique far-difference representation where same-signed indices are at least $s$ apart and opposite-signed indices are at least $d$ apart? We call such representations $(s,d)$ far-difference representations.
\subsection{Existence of Sequences}
\begin{proof}[Proof of Theorem \ref{farDiffRec}]
Define
\be
R^{(s,d)}_n \ = \ \sum_{i=0}^{\lfloor n/s\rfloor}a_{n-is}\ = \ a_n+a_{n-s}+a_{n-2s}+\cdots.
\ee
Here and below we use the convention that $a_m=0$ and $R^{(s,d)}_m=0$ for $m\leq 0$. For each $n$, the largest number that can be decomposed using $a_n$ as the largest summand is $R^{(s,d)}_n$, while the smallest one is $a_n-R^{(s,d)}_{n-d}$. It is therefore natural to break our analysis up into intervals $I_n=[a_n-R^{(s,d)}_{n-d},R^{(s,d)}_n]$.
We first prove by induction that
\begin{equation}\label{condition1}
a_n\ = \ R^{(s,d)}_{n-1}+R^{(s,d)}_{n-d}+1,
\end{equation} or equivalently, $a_n-R^{(s,d)}_{n-d}=R^{(s,d)}_{n-1}+1$ for all $n$, so that these intervals $\{I_n\}_{n=1}^\infty$ are disjoint and cover $\mathbb{Z}^+$.
Indeed, direct calculation proves \eqref{condition1} is true for $n=1,\dots,\max(s,d)$. For $n>\max(s,d)$, assume it is true for all positive integers up to $n-1$. We have
\begin{align}
a_{n-s}
\ = \ R^{(s,d)}_{n-s-1}+R^{(s,d)}_{n-s-d}+1
\ =& \ (R^{(s,d)}_{n-1}-a_{n-1})+(R^{(s,d)}_{n-d}-a_{n-d})+1 \nonumber \\
\Rightarrow \ R^{(s,d)}_{n-1}+R^{(s,d)}_{n-d}+1
\ =& \ a_{n-s}+a_{n-1}+a_{n-d}\ = \ a_n.
\end{align}
This implies that \eqref{condition1} is true for $n$ and thus true for all positive integers.\\
We prove that every integer is uniquely represented as a sum of $\pm a_n$'s in which every two terms of the same sign are at least $s$ apart in index and every two terms of opposite sign are at least $d$ apart in index. We prove by induction that any number in the interval $I_n$ has a unique $(s,d)$ far-difference representation with main term (the largest term) equal to $a_n$.
It is easy to check for $n\leq \max(s,d)$. For $n>\max(s,d)$, assume it is true up to $n-1$. Let $x$ be a number in $I_n$, where $a_n-R^{(s,d)}_{n-d}\leq x\leq R^{(s,d)}_n$. There are two cases to consider.
\begin{enumerate}
\item If $a_n\leq x\leq R^{(s,d)}_n$, then either $x=a_n$ or $1\leq x-a_n\leq R^{(s,d)}_n-a_n=R^{(s,d)}_{n-s}$. By the induction assumption, we know that $x-a_n$ has a far-difference representation with main term at most $a_{n-s}$. It follows that $x=a_n+(x-a_n)$ has a legal decomposition.
\item If $a_n-R^{(s,d)}_{n-d}\leq x<a_n$ then $1\leq a_n-x\leq R^{(s,d)}_{n-d}$. By the induction assumption, we know that $a_n-x$ has a far-difference representation with main term at most $a_{n-d}$. It follows that $x=a_n-(a_n-x)$ has a legal decomposition.
\end{enumerate}
To prove uniqueness, assume that $x$ has two different decompositions $\sum_i \pm a_{n_i}=\sum_i \pm a_{m_i}$, where $n_1>n_2>\dots$ and $m_1>m_2>\dots$. Then $x$ belongs to both $I_{n_1}$ and $I_{m_1}$. However, these intervals are disjoint, so we must have $n_1=m_1$. Uniqueness follows by induction.
\end{proof}
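The induction above is constructive and easy to mechanize. The following minimal Python sketch (our own illustration; the function names are ours) generates the standard $(s,d)$ sequence via \eqref{condition1} and decomposes a positive integer by following the two cases of the proof.
\begin{verbatim}
# Sketch (illustration only): standard (s,d) sequence via
# a_n = R_{n-1} + R_{n-d} + 1, where R[n] stores R^{(s,d)}_n (R_0 = 0),
# together with the constructive decomposition from the induction.

def standard_sequence(s, d, N):
    a, R = [], [0]
    for n in range(1, N + 1):
        an = R[n - 1] + (R[n - d] if n - d >= 0 else 0) + 1
        a.append(an)
        R.append(an + (R[n - s] if n - s >= 0 else 0))
    return a, R

def far_difference(x, a, R):
    """Signed decomposition of 0 < x <= R[-1]; returns (index, sign) pairs."""
    out, sign = [], +1
    while x > 0:
        n = next(m for m in range(1, len(a) + 1) if x <= R[m])  # x lies in I_n
        out.append((n, sign))
        if x >= a[n - 1]:
            x = x - a[n - 1]               # case 1: keep the current sign
        else:
            x, sign = a[n - 1] - x, -sign  # case 2: flip sign for the rest
    return out

a, R = standard_sequence(4, 3, 20)
print(a[:6])                      # [1, 2, 3, 5, 8, 13]: the Fibonacci numbers
print(far_difference(100, a, R))  # [(10, 1), (6, 1), (2, -1)]: 100 = 89+13-2
\end{verbatim}
Note that the output indices $10, 6, 2$ respect the $(4,3)$ gap conditions, in agreement with the corollary below.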
\begin{remark}
As the recurrence relation of $a_n$ is symmetric between $s$ and $d$, it is the initial terms that define whether a sequence has an $(s,d)$ or a $(d,s)$ far-difference representation.
\end{remark}
\begin{cor}
The Fibonacci numbers $\{1,2,3,5,8,\dots\}$ have a $(4,3)$ far-difference representation.
\end{cor}
\begin{proof}
We can rewrite the Fibonacci sequence as $F_1=1, F_2=2, F_3=3$, $F_4=F_3+F_1+1$, and $F_n=F_{n-1}+F_{n-2} = F_{n-1} + (F_{n-3}+F_{n-4})$ for $n\geq 5$, which is the recurrence $a_n=a_{n-1}+a_{n-4}+a_{n-3}$ of Theorem \ref{farDiffRec}; the initial terms agree with the standard $(4,3)$ sequence.
\end{proof}
\begin{cor}
The $k$-Skipponacci numbers, which are defined as $a_n=n$ for $n\leq k$ and $a_{n+1}=a_n+a_{n-k}$ for $n>k$, have a $(2k+2,k+2)$ far-difference representation.
\end{cor}
\begin{proof}
This follows from writing the recurrence relation as $a_n=a_{n-1}+a_{n-k-1}=a_{n-1}+a_{n-k-2}+a_{n-2k-2}$ and using the same initial conditions.
\end{proof}
\begin{cor}
Every positive integer can be represented uniquely as a sum of $\pm 3^n$ for $n=0,1,2,\dots$.
\end{cor}
\begin{proof}
The sequence $a_n=3^{n-1}$ satisfies $a_n=3a_{n-1}=a_{n-1}+a_{n-1}+a_{n-1}$, the recurrence of our theorem with $s=d=1$, so it has a $(1,1)$ far-difference representation.
\end{proof}
\begin{cor}
Every positive integer can be represented uniquely as $\sum_i \pm 2^{n_i}$ where $n_1>n_2>\dots$ and $n_{i-1}\geq n_{i}+2$, so any two indices differ by at least two.
\end{cor}
\begin{proof}
The sequence $a_n=2^n $ satisfies $a_n=a_{n-1}+2a_{n-2}$, which by our theorem has a $(2,2)$ far-difference representation.
\end{proof}
\subsection{Non-uniqueness}
We consider the inverse direction of Theorem \ref{farDiffRec}. Given positive integers $s$ and $d$, how many increasing sequences are there that have an $(s,d)$ far-difference representation?
The following argument suggests that any sequence $a_n$ that has an $(s,d)$ far-difference representation should satisfy the recurrence relation $a_n=a_{n-1}+a_{n-s}+a_{n-d}$. If we want the intervals $[a_n-R_{n-d},R_n]$ to be disjoint, which is essential for the unique representation, we must have
\begin{equation}
a_n-R_{n-d}\ = \ R_{n-1}+1.
\end{equation}
Replacing $n$ by $n-s$ gives us
\begin{equation}
a_{n-s}-R_{n-d-s}\ = \ R_{n-1-s}+1.
\end{equation}
When we subtract those two equations and note that $R_k-R_{k-s}=a_k$, we get
\begin{equation}
a_n-a_{n-s}-a_{n-d}\ = \ a_{n-1}
\end{equation}
or $a_n=a_{n-1}+a_{n-s}+a_{n-d}$, as desired. What complicates this problem is the choice of initial terms for this sequence. Ideally, we want to choose the starting terms so that we can guarantee that every integer will have a unique far-difference representation. We have shown this to be the case for the initial terms defined in Theorem \ref{farDiffRec}, which we refer to as the \emph{standard} $(s,d)$ sequence. However, it is not always the case that the initial terms must follow the standard model to have a unique far-difference representation. In fact, it is not even necessary that the sequence starts with $1$.
In other types of decompositions where only positive terms are allowed, it is often obvious that a unique increasing sequence with initial terms starting at $1$ is the desired sequence. However, in far-difference representations where negative terms are allowed, it may happen that a small number (such as 1) arises through subtraction of terms that appear later in the sequence. Indeed, if $(s,d)=(1,1)$, we find several examples where the sequence need not start with 1.
\begin{exa}\label{exa:one}
The following sequences have a $(1,1)$ far-difference representation.
\begin{itemize}
\item $a_1=2,a_2=6$ and $a_n=3^{n-1}$ for $n\geq 3$
\item $a_1=3,a_2=4$ and $a_n=3^{n-1}$ for $n\geq 3$
\item $a_1=1,a_2=9,a_3=12$ and $a_n=3^{n-1}$ for $n\geq 4$
\end{itemize}
\end{exa}
\begin{exa}\label{exa:two} For each positive integer $k$, the sequence $B_{k}$, defined by $B_{k,i}= 2 \cdot 3^{i-1}$ for $i=k+1$ and $B_{k,i}= 3^{i-1}$ otherwise, has a $(1,1)$ far-difference representation. \end{exa}
\noindent
We prove this by showing that there is a bijection between a decomposition using the standard sequence $b_n=3^{n-1}$ (with coefficients $\pm 1$) and a decomposition using $B_{k}$. First we give an example: for $k=2$, the sequence is $1,3,2\cdot 3^2,3^3,3^4,\dots$
\begin{align*}
763 &\ = \ 1-3+3^2+3^3+3^6\nonumber\\
&\ = \ 1-3+(3^3-2\cdot 3^2)+3^3+3^6\nonumber\\
&\ = \ 1-3-2\cdot 3^2+2\cdot 3^3+3^6\nonumber\\
&\ = \ 1-3-2\cdot 3^2+3^4-3^3+3^6\nonumber\\
&\ = \ B_{2,1}-B_{2,2}-B_{2,3} - B_{2,4} + B_{2,5} + B_{2,7}.
\end{align*}
\noindent Conversely,
\begin{align*}
763 &\ = \ B_{2,1}-B_{2,2}-B_{2,3} - B_{2,4} + B_{2,5} + B_{2,7} \nonumber\\
&\ = \ 1-3-2\cdot 3^2-3^3+3^4+3^6\nonumber\\
&\ = \ 1-3- (3^3-3^2)-3^3+3^4+3^6\nonumber\\
&\ = \ 1-3+3^2-2\cdot 3^3+3^4+3^6\nonumber\\
&\ = \ 1-3+3^2-(3^4-3^3)+3^4+3^6\nonumber\\
&\ = \ 1-3+3^2+3^3+3^6.
\end{align*}
\noindent To prove the first direction, assume $x=\sum_{i\in I} 3^i-\sum_{j\in J}3^j$ where $I,J$ are disjoint finite subsets of the nonnegative integers. If $k$ is not in $I\cup J$, this representation is automatically a representation of $x$ using $B_{k}$. Otherwise, assume $k\in I$; we replace the term $3^k$ by $3^{k+1}-2 \cdot 3^k=B_{k,k+2}-B_{k,k+1}$. If $k+1\notin I$, again $x$ has a $(1,1)$ far-difference representation using $B_{k}$. Otherwise, $x$ has the term $2 \cdot 3^{k+1}$ in its representation, and we can replace this term by $3^{k+2}-3^{k+1}$. Continue this process, stopping if $k+2\notin I$ and replacing the extra term if $k+2\in I$. Hence we can always decompose $x$ by $\pm B_{k,i}$.
Conversely, suppose $x=\sum_{i\in I} B_{k,i}-\sum_{j\in J} B_{k,j}$. If $k+1\notin I\cup J$, this representation is automatically a representation of $x$ using $\pm 3^n$. If not, assume $k+1\in I$; we replace $B_{k,k+1}=2\cdot 3^k$ by $3^{k+1}-3^k$. If $k+2\notin I$ we are done; if not, $x$ has a term $2\cdot 3^{k+1}$, which we replace by $3^{k+2}-3^{k+1}$, and continuing in this way we always obtain a decomposition using $\pm 3^n$. Since there is only one such decomposition, the decomposition using $\pm B_{k,i}$ must also be unique. \hfill $\Box$
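As a brute-force sanity check on Examples \ref{exa:one} and \ref{exa:two} (our own sketch; the range $1,\dots,100$ is chosen conservatively so that truncating each sequence after six terms cannot interfere), one can enumerate all signed subsets of the first six terms and confirm that every integer in the range arises exactly once.
\begin{verbatim}
# Brute-force check (illustration only): for a truncation of a sequence
# with a (1,1) far-difference representation, every small positive
# integer should arise exactly once as a signed-subset sum.
from itertools import product

def representation_counts(seq):
    counts = {}
    for signs in product((-1, 0, 1), repeat=len(seq)):
        total = sum(s * v for s, v in zip(signs, seq))
        counts[total] = counts.get(total, 0) + 1
    return counts

B2 = [1, 3, 2 * 3**2, 3**3, 3**4, 3**5]    # the sequence B_2 from above
E1 = [2, 6] + [3**i for i in range(2, 6)]  # first sequence of Example exa:one
for seq in (B2, E1):
    c = representation_counts(seq)
    print(all(c.get(x, 0) == 1 for x in range(1, 101)))   # True, True
\end{verbatim}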
\begin{remark}
From Example \ref{exa:two}, we know that there is at least one infinite family of sequences that have $(1,1)$ far-difference representations. Example \ref{exa:one} suggests that there are many other sequences with that property and, in all examples we have found to date, there exists a number $k$ such that the recurrence relation $a_n=3a_{n-1}$ holds for all $n\geq k$.
\end{remark}
\section{Conclusions and Further Research}
In this paper we extend the results of \cite{Al, MW1, BBGILMT} on the Fibonacci sequence to all $k$-Skipponacci sequences. Furthermore, we prove there exists a sequence that has an $(s,d)$ far-difference representation for any positive integer pair $(s,d)$. This new sequence definition further generalizes the idea of far-difference representations by focusing on the index restrictions that allow for unique decompositions. Still, many open questions remain that we would like to investigate in the future. A few that we believe to be the most important and interesting include:
\begin{itemize}
\item[(1)]
Can we characterize all sequences that have $(1,1)$ far-difference representations? Does every such sequence satisfy the recurrence $a_n=3a_{n-1}$ after the first few terms?
\item[(2)] For $(s,d)\neq (1,1)$, are there any \emph{non-standard} increasing sequences that have an $(s,d)$ far-difference representation? If there is such a sequence, does it satisfy the recurrence relation stated in Theorem \ref{farDiffRec} after the first few terms?
\item[(3)] Will the results for Gaussianity in the number of summands still hold for any sequence that has an $(s,d)$ far-difference representation?
\item[(4)] How are the gaps in a general $(s,d)$ far-difference representation distributed?
\end{itemize}
| {
"timestamp": "2014-05-13T02:03:53",
"yymm": "1309",
"arxiv_id": "1309.5600",
"language": "en",
"url": "https://arxiv.org/abs/1309.5600",
"abstract": "A natural generalization of base B expansions is Zeckendorf's Theorem: every integer can be uniquely written as a sum of non-consecutive Fibonacci numbers $\\{F_n\\}$, with $F_{n+1} = F_n + F_{n-1}$ and $F_1=1, F_2=2$. If instead we allow the coefficients of the Fibonacci numbers in the decomposition to be zero or $\\pm 1$, the resulting expression is known as the far-difference representation. Alpert proved that a far-difference representation exists and is unique under certain restraints that generalize non-consecutiveness, specifically that two adjacent summands of the same sign must be at least 4 indices apart and those of opposite signs must be at least 3 indices apart. We prove that a far-difference representation can be created using sets of Skipponacci numbers, which are generated by recurrence relations of the form $S^{(k)}_{n+1} = S^{(k)}_{n} + S^{(k)}_{n-k}$ for $k \\ge 0$. Every integer can be written uniquely as a sum of the $\\pm S^{(k)}_n $'s such that every two terms of the same sign differ in index by at least 2k+2, and every two terms of opposite signs differ in index by at least k+2. Additionally, we prove that the number of positive and negative terms in given Skipponacci decompositions converges to a Gaussian, with a computable correlation coefficient that is a rational function of the smallest root of the characteristic polynomial of the recurrence. The proof uses recursion to obtain the generating function for having a fixed number of summands, which we prove converges to the generating function of a Gaussian. We next explore the distribution of gaps between summands, and show that for any k the probability of finding a gap of length $j \\ge 2k+2$ decays geometrically, with decay ratio equal to the largest root of the given k-Skipponacci recurrence. We conclude by finding sequences that have an (s,d) far-difference representation for any positive integers s,d.",
"subjects": "Number Theory (math.NT)",
"title": "A Generalization of Fibonacci Far-Difference Representations and Gaussian Behavior",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9937100982356567,
"lm_q2_score": 0.7956581097540519,
"lm_q1q2_score": 0.7906534984056959
} |
https://arxiv.org/abs/2110.02543 | A logical approach for temporal and multiplex networks analysis | Many systems generate data as a set of triplets (a, b, c): they may represent that user a called b at time c or that customer a purchased product b in store c. These datasets are traditionally studied as networks with an extra dimension (time or layer), for which the fields of temporal and multiplex networks have extended graph theory to account for the new dimension. However, such frameworks detach one variable from the others and allow to extend one same concept in many ways, making it hard to capture patterns across all dimensions and to identify the best definitions for a given dataset. This extended abstract overrides this vision and proposes a direct processing of the set of triplets. In particular, our work shows that a more general analysis is possible by partitioning the data and building categorical propositions that encode informative patterns. We show that several concepts from graph theory can be framed under this formalism and leverage such insights to extend the concepts to data triplets. Lastly, we propose an algorithm to list propositions satisfying specific constraints and apply it to a real world dataset. | \section{Introduction} \vspace{-10pt}
Many systems generate data as a set of triplets $(a, b, c)$: they may represent that user $a$ called $b$ at time $c$ or that customer $a$ purchased product $b$ in store $c$. These datasets are traditionally studied as networks with an extra dimension (time or layer), for which the fields of temporal and multiplex networks have extended graph theory to account for the new dimension \cite{Kivela}. However, such frameworks detach one variable from the others and allow one and the same concept to be extended in many ways, making it hard to capture patterns across all dimensions and to identify the best definitions for a given dataset. This work overrides this vision and proposes a direct processing of the set of triplets. While \cite{Cerf} also approaches triplets directly, it focuses on specific patterns and applications. Our work shows that a more general analysis is possible by partitioning the data and building categorical propositions (CPs) that encode informative patterns. We show that several concepts from graph theory can be framed under this formalism and leverage such insights to extend the concepts to data triplets. Lastly, we propose an algorithm to list CPs satisfying specific constraints and apply it to a real-world dataset. \vspace{-10pt}
\section{Results}\vspace{-10pt}
{\bf Analysis via propositions}. We consider the most general case where all the triplet entries come from arbitrary sets $A, B, C$. We thus define a triplet space as $\mathcal{S} = \{(a,b,c) | a \in A, b \in B, c \in C \}$ and a dataset as $\mathcal{D} \subseteq \mathcal{S}$. We also define the sub-dataset induced by $\alpha \subseteq A$, $\beta \subseteq B$, $\gamma \subseteq C$ as $\mathcal{D}_{(\alpha, \beta, \gamma)} = \{(a,b,c) \in \mathcal{D} | a \in \alpha, b \in \beta, c \in \gamma \}$. Our main observation is that given $\alpha \subseteq A$, $\beta \subseteq B$, $\gamma \subseteq C$, we can partition $\mathcal{D}$ into eight disjoint regions (or bins) according to whether a triplet has its entries in $\alpha$, $\beta$ and $\gamma$. Then, we can capture how triplets in $\mathcal{D}$ distribute across these bins via CPs. This process is illustrated in Fig.~\ref{fig_prop}-Left: the large square depicts the eight possible partition bins, while the smaller squares illustrate how the triplets (crosses) may distribute and how CPs may be constructed to capture the distribution pattern. In a nutshell, a CP asserts or denies that all or some of the members of one group (the subject) possess the attributes of another group (the predicate), using an expression of the form: `Q S are P', where S refers to the subject, P to the predicate, and Q to a quantification word which can be `All', `Some', or `No' \cite{Copi}. The expression `All S are P' is a typical example. In our case, we form CPs using $\alpha$, $\beta$, or $\gamma$ as S and the other two as P, such that the following expression holds: `Q (triplets with elements in) S are (in relation with at least one element from) P'. For simplicity, we omit the words in parenthesis. In Fig. \ref{fig_prop}-Left we notice that all the triplets with elements in $\alpha$ also have elements in $\beta$ and $\gamma$, thus forming: `All $\alpha$ are $\beta$ and $\gamma$'. These are informative patterns: if $\alpha$ represents customers, $\beta$ products and $\gamma$ stores, then `All $\alpha$ are $\beta$ and $\gamma$' indicates that customers in $\alpha$ buy only products from $\beta$ and only in stores from $\gamma$. It is thus of interest to list informative CPs. We notice that (i) universal quantifiers (All, No) are more informative than particular ones (Some), yet particular propositions may be close to a universal one; and (ii) propositions above do not express how dense the relations between S and P are. We therefore extend propositions to: $x \%$ S are $y \%$ P, where $x$ is the fraction of triplets in S in relation to P and $y$ is the density of relationships between S and P. This allows us to state the algorithmic challenge of listing all propositions satisfying constraints on $x$ and $y$ without needing to explore the full space.
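For illustration, evaluating such an extended proposition on a concrete dataset is straightforward. The minimal Python sketch below is our own; in particular, taking the density $y$ to be the fill ratio of the induced sub-dataset $\mathcal{D}_{(\alpha, \beta, \gamma)}$ inside $\alpha \times \beta \times \gamma$ is an assumption on our part.
\begin{verbatim}
# Sketch (not the authors' implementation): evaluate the extended
# proposition "x% alpha are y% (beta and gamma)" on a set of triplets.
def proposition(D, alpha, beta, gamma):
    with_subject = [t for t in D if t[0] in alpha]
    hits = [t for t in with_subject if t[1] in beta and t[2] in gamma]
    x = len(hits) / len(with_subject) if with_subject else 0.0
    y = len(hits) / (len(alpha) * len(beta) * len(gamma))  # assumed density
    return x, y

D = {(1, 'a', 10), (1, 'b', 10), (2, 'a', 20), (3, 'c', 30)}
print(proposition(D, alpha={1, 2}, beta={'a', 'b'}, gamma={10, 20}))
# (1.0, 0.375): all triplets of alpha fall in beta x gamma, at density 3/8
\end{verbatim}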
\begin{figure}[t!]
\includegraphics[width=0.5\linewidth]{Statements_fig1} \hspace{50pt}
\includegraphics[width=0.28\linewidth]{Statements_fig2}
\caption{ {\bf Left}: Partition bins (big square) and distribution of $\mathcal{D}$ (crosses) into the bins with associated categorical propositions (small squares). {\bf Right}: Framework applied to a graph composed of two clique components.} \vspace{-20pt}
\label{fig_prop}
\end{figure}
\newline
{\bf Relation to graph theory and extensions}. Our formalism can also be used to study tuples $(a, b)$. By setting $A = B = \mathcal{V}$ we address the particular case of graphs, where $\mathcal{V}$ is the vertex set of a graph $\mathcal{G}$. Several concepts from graph theory may be re-formulated in terms of propositions satisfying specific constraints. An illustration is given in Fig. \ref{fig_prop}-Right. Going further, we use this re-formulation to generalize the concepts to data triplets. Our results are listed in Table \ref{table_prop}. It can be seen that some patterns, like XOR predicates, may not be easily derived from pure graph extensions.
\newline
{\bf Listing propositions}. We propose Algorithm \ref{algo_prop} to list propositions of type $x \geq x_{min} \%$ $\alpha$ are $y \geq y_{min} \%$ $\beta$ and $\gamma$, where $x_{min}$ and $y_{min}$ are user-defined parameters. It uses the fact that disjoint subjects satisfying a predicate also satisfy it when merged. Thus, the algorithm searches valid predicates for singleton subjects and merges all those sharing predicates. We find predicates via a constructive approach where each triplet forms a region iteratively grown until the constraints are no longer satisfied. While this approach does not in general retrieve all propositions, it identifies a significant number of non-trivial patterns, and it may be improved in further work.
\newline
{\bf Application to real-world data}. We apply Algorithm \ref{algo_prop} to a contact network in a hospital \cite{Vanhems}. Sets $A = B$ consist of 29 patients and 46 healthcare workers (27 nurses, 11 doctors, 8 admin), and set $C$ represents time (1890 minutes of data). We use time as the subject set and $x_{min} = 0.7, y_{min} = 0.5$. The algorithm finds 1456 predicates from which it forms patterns like: (i) a group of 7 minutes where 85\% of the activity corresponds to 3 nurses and 1 admin interacting with 64\% density; (ii) a group of 3 minutes where 84\% of the activity corresponds to 4 doctors interacting with 66\% density; (iii) a group of 7 minutes where 80\% of the activity corresponds to 3 nurses interacting with 66\% density. Clearly, the patterns found are representative of the typical activity in a hospital.
\newline
{\bf Acknowledgements}. This work is funded in part by the ANR (French National Agency of Research) under the Limass (ANR-19-CE23-0010) and FiT LabCom grants. \vspace{-10pt}
| {
"timestamp": "2021-10-07T02:14:42",
"yymm": "2110",
"arxiv_id": "2110.02543",
"language": "en",
"url": "https://arxiv.org/abs/2110.02543",
"abstract": "Many systems generate data as a set of triplets (a, b, c): they may represent that user a called b at time c or that customer a purchased product b in store c. These datasets are traditionally studied as networks with an extra dimension (time or layer), for which the fields of temporal and multiplex networks have extended graph theory to account for the new dimension. However, such frameworks detach one variable from the others and allow to extend one same concept in many ways, making it hard to capture patterns across all dimensions and to identify the best definitions for a given dataset. This extended abstract overrides this vision and proposes a direct processing of the set of triplets. In particular, our work shows that a more general analysis is possible by partitioning the data and building categorical propositions that encode informative patterns. We show that several concepts from graph theory can be framed under this formalism and leverage such insights to extend the concepts to data triplets. Lastly, we propose an algorithm to list propositions satisfying specific constraints and apply it to a real world dataset.",
"subjects": "Social and Information Networks (cs.SI); Data Structures and Algorithms (cs.DS); Logic in Computer Science (cs.LO); Signal Processing (eess.SP)",
"title": "A logical approach for temporal and multiplex networks analysis",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9783846703886662,
"lm_q2_score": 0.8080672066194946,
"lm_q1q2_score": 0.7906005676003044
} |
https://arxiv.org/abs/1202.3493 | Probability calculations under the IAC hypothesis | We show how powerful algorithms recently developed for counting lattice points and computing volumes of convex polyhedra can be used to compute probabilities of a wide variety of events of interest in social choice theory. Several illustrative examples are given. | \section{Introduction}
\label{sec:intro}
Much research has been undertaken in recent decades with the aim of
quantifying the probability of occurrence of certain types of election
outcomes for a given voting rule under fixed assumptions on the
distribution of voter preferences. Most prominent among these outcomes
of interest are the so-called voting paradoxes, which have been shown to
be unavoidable, hence the interest in how commonly they may occur. The
survey \cite{GeLe2004} discusses these questions
and gives a summary of results up to 2002.
In very many cases, particularly under the IAC hypothesis on voter
preferences, the calculations involved amount simply to counting integer
lattice points inside convex polytopes. In the social choice literature,
two main methods have been used to carry out such computations. The
first, dating back several decades, decomposes the polytope into smaller
pieces each of which can be treated by elementary methods involving
simplification of multiple sums. This method works fairly well for
simple problems but requires considerable ingenuity and perseverance to
carry out even for moderately complicated ones. More recently, more
powerful methods have been introduced in \cite{HuCh2000,
Gehr2002b} but there are several recent instances where even these
methods did not suffice to solve natural questions about 3-candidate
elections.
The purpose of the present paper is to point out that there is an
established mathematical theory of counting lattice points in convex
polytopes (and the closely related issue of computing the volume of such
a region), which has been partially rediscovered by workers in social
choice theory. The area has recently been the subject of active research
(see \cite{Delo2005} for a good summary). Several more efficient
new algorithms have been devised and implemented in publicly available
software.
We aim to apply these new methods to answer questions in voting theory
that have proven beyond the reach of previous authors. In addition we
corroborate, correct, and unify the derivation of some previously
published results by using this methodology. We believe that the solution
of many hitherto difficult problems can now be relegated to a trivial
computation. This should open the way for social choice theorists to
tackle more difficult and realistic problems. We note that
Lepelley, Louichi and Smaoui \cite{LLS2006} have recently, and independently of us,
circulated a preprint with a similar goal, which covers very similar ground. The
fact that two groups of researchers discovered this approach almost
simultaneously shows that the time has indeed come for these methods to
be assimilated by the social choice community.
The basic idea is that many sets of voting situations that are of
interest can be characterized by linear equations and inequalities. The
variables are usually the numbers of voters with each of the $m!$
possible preference orders, where $m$ is the number of alternatives. The
set of such (in)equalities defines a convex polytope in $\mathbb{R}^d$
for some $d$, given by $Ax \leq b$ for some matrix $A$ (here $d \leq m!$
and the inequality may be strict, because we may first use equality
relations to eliminate variables and reduce dimension). Each lattice
point will correspond to a voting situation in the desired set. The
probability that a randomly chosen situation has the property under consideration
is therefore a straightforward ratio of lattice point counts. Dividing through by
$n$, the total number of voters, yields a convex polytope $P$, independent of $n$,
in $\mathbb{R}^d$. For a given number $n$ of voters, the
dilation $nP$ describes the set of lattice points that we wish to
enumerate.
\section{Counting lattice points in convex polytopes}
\label{sec:convex}
We give only a brief description here. For more information we recommend
\cite{Delo2005}.
The \Em{Ehrhart series} of the rational polytope $P$ is a rational
generating function $F(t)= P(t)/Q(t) = \sum_n a_n t^n$ whose $n$th
Maclaurin coefficient $a_n$ gives the number of lattice points inside
the dilation $nP$. The function $f:n \mapsto a_n$ is known to be a
polynomial of degree $d$ if all the vertices of $P$ are integral;
otherwise it is a quasipolynomial of some minimal period $e$. That is,
the restriction of $f$ to each fixed congruence class modulo $e$ is a
polynomial.
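A toy example makes this behaviour concrete. For $P = \{x \in \mathbb{R}^2 : x \geq 0, \; x_1 + 2x_2 \leq 1\}$, whose vertex $(0,1/2)$ is not integral, brute-force counting (a short Python sketch of ours, independent of the packages discussed below) exhibits a quasipolynomial of minimal period $e = 2$ whose leading coefficient $1/4$ is the area of $P$.
\begin{verbatim}
# Brute-force f(n) = #(lattice points in nP) for the toy polytope
# P = {x >= 0, x1 + 2*x2 <= 1}; f has period 2:
# f(n) = (n/2 + 1)^2 for even n, (n+1)(n+3)/4 for odd n.
def f(n):
    return sum(1 for x1 in range(n + 1)
                 for x2 in range(n // 2 + 1)
                 if x1 + 2 * x2 <= n)

print([f(n) for n in range(9)])   # [1, 2, 4, 6, 9, 12, 16, 20, 25]
\end{verbatim}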
It is known that $e$ is a divisor of $m$, where $m$ is an integer such
that all coordinates of vertices of $mP$ are integers. The
least such $m$ is the least common multiple of the denominators of the
coordinates of the vertices of $P$ when each coordinate is written in
reduced terms. However there are examples where $e < m$ \cite{McWo2005}.
A method for determining $e$ was presented in \cite{HuCh2000}.
Many questions in voting theory are of most interest in the asymptotic
case where $n \to \infty$. For small $n$, issues such as the method of
tiebreaking used assume great importance, whereas in the limit such
issues disappear (the situations in which ties occur correspond in the
limit to the boundary of $P$). We focus on limiting results in the
present paper.
The leading coefficient of the quasipolynomial $f$ is the same for all
congruence classes: only the lower degree terms differ. It is well known
that this leading coefficient is precisely the volume of $P$. For many
purposes, knowledge of this coefficient is sufficient. The limiting
probability under IAC as $n \to \infty$ is simply the volume of $P$
divided by the volume of $X$ where $X$ is the analogously defined
polytope that describes all possible voting situations.
To compute the number of lattice points in $nP$, if that amount of
detail is desired, we may use one of several algorithms. An attractive
approach pioneered by Barvinok makes heavy use of rational generating
functions; this is implemented in the software {\tt LattE}
\cite{DHTY2004, latte}. There are also several algorithms available
for volume computation; see \cite{BEF2000} for a survey of
algorithms, a hybrid of which has been implemented in {\tt vinci}
\cite{vinci} for floating point computation only. One of these
algorithms has been used in the Maple package {\tt Convex}
\cite{convex-package}, and uses exact rational arithmetic.
The software {\tt LattE} gives the Ehrhart series as standard output. In
order to extract the quasipolynomial formula for $f(n)$ from the Ehrhart
series, we may use interpolation. On each congruence class modulo $e$,
we must evaluate $f(n)$ at $d+1$ distinct values of $n$ in this class.
Given the explicit expression $F(t) = P(t)/Q(t)$ and a computer algebra
system, such evaluations are trivially obtained (the $a_n$ satisfy a
linear recurrence relation with constant coefficients). The Lagrange
interpolation formula then yields the desired formula for the particular
polynomial that is applicable for the given congruence class.
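To illustrate the interpolation step, the Ehrhart series of the toy polytope above is $F(t) = 1/\big((1-t)^2(1-t^2)\big)$; a handful of series coefficients and a Lagrange interpolation recover the polynomial on the class $n \equiv 0 \pmod 2$ (a short sympy sketch of ours).
\begin{verbatim}
# Recover the Ehrhart quasipolynomial on one congruence class by
# interpolation (sympy sketch; F is the toy series from above).
from sympy import symbols, interpolate

t, n = symbols('t n')
F = 1 / ((1 - t)**2 * (1 - t**2))
coeffs = F.series(t, 0, 14).removeO().as_poly(t).all_coeffs()[::-1]
points = [(m, coeffs[m]) for m in (0, 2, 4)]  # degree 2 needs 3 points
print(interpolate(points, n))                 # n**2/4 + n + 1
\end{verbatim}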
Another (generally less efficient) method of extraction is to decompose $F(t)$
into partial fractions. Note that $F(0) = 1$ and we can arrange so that $Q(t)$
factors as $\prod_j (1 - \alpha_j t)$ for some complex numbers
$\alpha_j$, possibly not distinct. We then have the partial fraction decomposition
$$F(t) = \sum_\alpha \sum_k c_{\alpha,k} (1 - \alpha t)^{-k}$$
where $\alpha$ runs over the roots of $Q$ and $k$ runs from $1$ to the multiplicity of $\alpha$.
This shows how the periodicity occurs: the
factorization of $Q(t)$ will introduce complex roots of unity and the
terms corresponding to powers of these will simplify on each congruence class.
In fact, on extracting the coefficient of $t^n$ we obtain
$$
[t^n] F(t) = \sum_{\alpha, k} \alpha^n c_{\alpha,k} \binom{n+k-1}{k-1}
$$
and the terms $\alpha^n$ simplify on each equivalence class modulo $e$.
Note that $e = 1$ (that is, $f(n)$ is a single polynomial) if and only if $Q$
factors completely over the rationals.
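On the toy series the partial fraction route makes the periodicity explicit (again a sympy sketch of ours): the pole at $t = -1$ contributes a $(-1)^n$ term, giving $f(n) = n^2/4 + n + 7/8 + (-1)^n/8$, which is constant on each class modulo $2$.
\begin{verbatim}
# Partial fractions of the toy Ehrhart series (sympy sketch); the pole
# at t = -1 yields the (-1)^n term behind the period-2 behaviour.
from sympy import symbols, apart

t = symbols('t')
print(apart(1 / ((1 - t)**2 * (1 - t**2)), t))
\end{verbatim}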
Note that since we know \textit{a priori} that the coefficient of $t^n$ is
polynomially growing, all $\alpha$ with $|\alpha| \neq 1$ can be
ignored, since their contribution must cancel (otherwise we would obtain
terms exponentially growing or decreasing in $n$). Unfortunately this
observation does not help in the present case, because the Ehrhart
series has a denominator of the form $\prod_i (1 - t^{a_i})$, so all the
$\alpha_j$ above are in fact roots of unity.
In summary, the Ehrhart series contains all information required to
solve the problem of counting lattice points in polytopes parametrized
by a single parameter $n$. The hardest step is usually determining the
minimal period $e$.
\section{Examples}
\label{sec:examples}
In this section we compute, using the recipe above, a few probabilities
under IAC that have been considered in the recent social choice
literature. We emphasize problems where older methods have not yielded
an answer, but also check results obtained by previous authors using
older methods. Some of these earlier results appear to be incorrect. The
use of a computer algebra system such as Maple \cite{maple} is essential for
some of the more complicated examples.
\subsection{Manipulability}
\label{ss:manip}
We first consider the probability under IAC that a voting situation in a
$3$-candidate election is manipulable by some coalition. Counterthreats
are not considered --- we assume that some group of voters with incentive
to manipulate will not be opposed by the other, naive, voters. See
\cite{PrWi2005, FaLe2006} for more discussion of these
(standard) assumptions.
For the classical rules plurality and antiplurality, the answer is
known: 7/24 and 14/27 respectively \cite{LeMb1987,
LeMb1994}. These results were derived by the earliest methods
described above and required considerable hand computation.
However, for the Borda rule, no such result has been
derived even using more sophisticated methods. A good numerical
approximation to the limit has been obtained. In
\cite{FaLe2006} the authors used the method of
\cite{HuCh2000} to obtain bounds on the solution but could not carry
out the full computation. Using their method requires interpolation,
hence computing the first $6e$ coefficients of the Ehrhart series, where
$e$ is the minimal period of the quasipolynomial. They showed that $e >
48$, and since they computed these coefficients by exhaustive
enumeration, it was not possible to carry out the computation to the end
(the number of voting situations is of order $n^5$). They estimated a
value of $0.5025$ for the limit.
However with more powerful tools the answers are easily obtained. We let $n_1, \dots
,n_6$ denote the number of voters with sincere preference order $abc,
acb, bac, bca, cab, cba$ respectively, and let $x_i = n_i/n$. Then
$\sum_i x_i = 1$ and $x_i \geq 0$. We use the linear systems derived for
general positional rules in \cite{PrWi2005}. As shown in \cite{PrWi2005} we may assume
without
loss of generality that $a$ wins, $b$ is second, and $c$ last in the
election (this assumption will only affect lower order terms in our resulting quasipolynomial,
and this is inevitable when different tie-breaking assumptions are made).
Thus we must multiply our final answer by $6$ since we are only considering
one of the $3!$ equally likely permutations of the candidates.
\subsubsection*{Plurality}
We first consider the plurality rule. We define polytopes $P_b, P_c, P_{bc}$ as follows.
Consider the inequalities
\begin{align}
\label{eq:plur a-b}
0 & \leq x_1 + x_2 - x_3 - x_4 \qquad \text{($a$ beats $b$ (sincere))} \\
\label{eq:plur b-c}
0 & \leq x_3 + x_4 - x_5 - x_6 \qquad \text{($b$ beats $c$ (sincere))} \\
\label{eq:plur-str b-a}
0 & \leq -x_1 -x_2 + x_3 + x_4 + x_6 \qquad \text{($b$ beats $a$ (strategic))}\\
\label{eq:plur-str b-c}
0 & \leq x_3 + x_4 - x_5 + x_6 \qquad \text{($b$ beats $c$ (strategic))}.
\end{align}
The polytope $P_b$ (the region where manipulation in favour of $b$ is possible) is defined by
the inequalities
\eqref{eq:plur a-b} -- \eqref{eq:plur-str b-c}, the equality $\sum_i x_i = 1$, and the condition
that all $x_i$ are nonnegative. Polytope $P_c$ is obtained by
applying the permutation $b \leftrightarrow c$, which induces the
permutation $x_1 \leftrightarrow x_2, x_3 \leftrightarrow x_5, x_4
\leftrightarrow x_6$, and $P_{bc} = P_b \cap P_c$ is just given by the union of the two
sets of inequalities defining $P_b$ and $P_c$.
The software {\tt LattE} readily computes the Ehrhart series of each polytope. They are
\begin{align*}
H_b & = {\frac {12\,{t}^{12}+24\,{t}^{11}+44\,{t}^{10}+56\,{t}^{9}+66\,{t}^{8}+64\,{t}^{
7}+63\,{t}^{6}+44\,{t}^{5}+30\,{t}^{4}+14\,{t}^{3}+6\,{t}^{2}+2\,t+1}{ \left( 1
-t \right) ^{2} \left( 1-{t}^{3} \right) ^{4} \left( 1+t \right) ^{4} \left( 1+{t
}^{2} \right) ^{3}}}
\\
H_c & = {\frac {8\,{t}^{12}+16\,{t}^{11}+26\,{t}^{10}+34\,{t}^{9}+38\,{t}^{8}+40\,{t}^{7
}+41\,{t}^{6}+30\,{t}^{5}+20\,{t}^{4}+10\,{t}^{3}+4\,{t}^{2}+2\,t+1}{ \left( 1
-{t}^{4} \right) ^{3} \left( 1-t \right) ^{2} \left( 1-{t}^{2} \right) \left(
1+t+{t}^{2} \right) ^{4}}}
\\
H_{bc} & =
{\frac {4\,{t}^{8}+5\,{t}^{6}+4\,{t}^{5}+4\,{t}^{4}+4\,{t}^{3}+2\,{t}^{2}+1}{
\left( 1-t \right) ^{4} \left( 1-{t}^{4} \right) ^{2} \left( 1+t+{t}^{2}
\right) ^{4}}}
\end{align*}
The series we require is therefore
\begin{align*}
H& :=H_b + H_c - H_{bc} \\
& =
{\frac {16\,{t}^{12}+32\,{t}^{11}+57\,{t}^{10}+68\,{t}^{9}+78\,{t}^{8}+74\,{t}^{
7}+73\,{t}^{6}+50\,{t}^{5}+33\,{t}^{4}+14\,{t}^{3}+6\,{t}^{2}+2\,t+1}{ \left( 1
-{t}^{4} \right) ^{3} \left( 1-t \right) ^{2} \left( 1-{t}^{2} \right)
\left( 1+t+{t}^{2} \right) ^{4}}}.
\end{align*}
Note that in order to factor the denominator of $H$ completely we require both a cube root and
a fourth root of $1$, that is, a primitive 12th root of unity. Thus we expect the period of
the quasipolynomial $f(n):=[t^n]H(t)$ to be 12. We may determine the polynomial formula for $f$
on each congruence class in more than one way, as described in section~\ref{sec:convex}.
First, we try interpolation. Consider the polynomial expression valid for $f(n)$ when
$n \equiv 0 \mod 12$. This is a polynomial of degree 5 in $n$. We compute the values $f(12j)$
for $j=0, \dots, 5$ and then determine the unique interpolating polynomial of degree $5$
determined by these points, via, say, the Lagrange
interpolation formula. The built-in commands in Maple find this polynomial immediately: the answer is
$$
f(n) = {\frac {7}{17280}}\,{n}^{5}+{\frac {1}{108}}\,{n}^{4}+{\frac {3}{32}}\,{n}^{3}+{
\frac {15}{32}}\,{n}^{2}+{\frac {137}{120}}\,n+1 \qquad (n \equiv 0 \mod 12).
$$
As a check, we substitute $n = 96$ into this expression --- the correct answer, namely
$[t^{96}] H(t) = 4176821$, is obtained. Analogous formulae can be obtained in the same way for the
other congruence classes modulo $12$. For example, the result for $n$ congruent to $6$ modulo
$12$ is
$$
f(n) = {\frac {7}{17280}}\,{n}^{5}+{\frac {1}{108}}\,{n}^{4}+{\frac {3}{32}}\,{n}
^{3}+{\frac {15}{32}}\,{n}^{2}+{\frac {61}{60}}\,n+5/8 \qquad (n \equiv 6 \mod 12),
$$
while that for $n$ congruent to $1$ is given by
$$
f(n) = {\frac {7}{17280}}\,{n}^{5}+{\frac {1}{108}}\,{n}^{4}+{\frac {341}{5184}}
\,{n}^{3}+{\frac {5}{36}}\,{n}^{2}-{\frac {917}{17280}}\,n-{\frac {209}{
1296}} \qquad (n \equiv 1 \mod 12).
$$
Note that since the number of voting situations is given by
$\binom{n+5}{5} = (n+1) \cdots (n+5)/120$, and we have only counted
one-sixth of the manipulable situations, the limiting probability of manipulability is 720
times the leading coefficient of $f$, namely $7/24$. This agrees with the results obtained in
\cite{LeMb1987}. Note that the expressions for finite $n$ do not agree, probably because
of different tie-breaking assumptions yielding slightly different sets of manipulable voting
situations. We use random tiebreaking as described in \cite{PrWi2005}, with a winner being
chosen uniformly at random from the
set of those with highest score; the alternative used in many papers
breaks the symmetry by breaking ties in favour of a fixed but arbitrary order on the candidates.
It is clear from the discussion at the beginning of the
proof of \cite[Theorem 2]{LeMb1987} that the latter tiebreaking method is used in that paper.
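As an independent sanity check on the limiting value $7/24 \approx 0.2917$, a quick Monte Carlo experiment is possible (our own sketch; it encodes the coalition model used above, in which every voter preferring a given candidate to the sincere winner votes for that candidate, and it ignores the measure-zero tie sets).
\begin{verbatim}
# Monte Carlo sketch: limiting probability that a random IAC situation
# is manipulable under plurality (expected value 7/24 = 0.29166...).
import random

ORDERS = ['abc', 'acb', 'bac', 'bca', 'cab', 'cba']

def plurality_scores(votes):
    s = {c: 0.0 for c in 'abc'}
    for xi, top in votes:
        s[top] += xi
    return s

def manipulable(x):
    s = plurality_scores(zip(x, (o[0] for o in ORDERS)))
    w = max(s, key=s.get)              # sincere winner (ties: measure zero)
    for c in 'abc':
        if c == w:
            continue
        tops = [c if o.index(c) < o.index(w) else o[0] for o in ORDERS]
        s2 = plurality_scores(zip(x, tops))
        if max(s2, key=s2.get) == c:
            return True
    return False

random.seed(0)
T = 200000   # normalization is unneeded: all comparisons are homogeneous
print(sum(manipulable([random.expovariate(1) for _ in range(6)])
          for _ in range(T)) / T)
\end{verbatim}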
As mentioned in section~\ref{sec:convex}, another method would be to compute the full partial
fraction decomposition of $H$ over the extension field of $\mathbb{Q}$
generated by a primitive 12th root of $1$. This can
be done easily by Maple. However the result is somewhat messy and the ensuing
computation involving binomial coefficients is certainly
no easier than using interpolation, so we omit it.
\subsubsection*{Borda}
We now consider the Borda rule. We can attempt an analysis similar to the above (the polytopes
are defined in a similar manner, and all coefficients lie in $\{0, \pm 1, \pm 2, \pm 3\}$),
but we run into serious complexity issues in this case.
The Ehrhart series $F_b, F_c, F_{bc}$
given by {\tt LattE} are such that when $F:=F_b + F_c - F_{bc}$ is
simplified, its denominator is a product of cyclotomic polynomials
(minimal polynomials for roots of unity). The corresponding roots of
unity required are of orders whose least common multiple is $2520$.
So we are still faced with the major task of computing $e$. It is still
an open problem as to whether there exists an algorithm to determine $e$
which runs in polynomial time (in the input size) when the dimension is fixed. A
polynomial time algorithm to determine whether an integer $p$ is equal
to $e$ was presented in \cite{Wood2005}, but has not been
implemented in software as far as we are aware.
Of course, we do not need to know the exact value of $e$, and we could
assume it to be $2520$. In order to determine exact formulae for $f(n)$ in all cases by
interpolation, we would require the first $15120$ values of $f(n)$. Trying this in Maple we obtain
an overflow error. However it would be possible in principle to compute these using the
recurrence supplied by the rational form of $F$. We do not proceed further along these lines, but
we indicate how the computation would go. Writing
$P(t) = \sum_k b_k t^k, Q(t) = \sum_k c_k t^k, F(t) = P(t)/Q(t) = \sum_n a_n t^n$
and comparing coefficients, we obtain $b_n = \sum_{0\leq k\leq n} c_k f(n-k)$. This constant
coefficient linear recurrence allows us to
determine sequentially $f(0), \dots, f(r)$ where $r = \deg P$, and for $n > r$ we have the
defining recurrence $\sum_{0\leq k\leq n} c_k f(n-k) = 0$. In the present case
$\deg P = 75$ and $\deg Q = 82$, so the computation would be rather involved.
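The recurrence just described is straightforward to mechanize; the sketch below (ours) demonstrates it on the small toy series used in Section~\ref{sec:convex} rather than on the degree-$82$ denominator arising here.
\begin{verbatim}
# Sketch of the coefficient recurrence b_n = sum_k c_k f(n-k), for
# F(t) = P(t)/Q(t) with ascending coefficient lists P and Q; shown on
# the toy series 1/((1-t)^2(1-t^2)) = 1/(1 - 2t + 2t^3 - t^4).
from fractions import Fraction

def coefficients(P, Q, N):
    f = []
    for n in range(N + 1):
        b = P[n] if n < len(P) else 0
        acc = sum(Q[k] * f[n - k] for k in range(1, min(n, len(Q) - 1) + 1))
        f.append(Fraction(b - acc, Q[0]))
    return f

print(coefficients([1], [1, -2, 0, 2, -1], 8))
# values 1, 2, 4, 6, 9, 12, 16, 20, 25: matches f(n) from the toy example
\end{verbatim}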
However, we can certainly determine the leading term of the quasipolynomial $f$, namely
the volume of a certain region. It is convenient to
eliminate $x_6$ throughout, using the sole equality
constraint $\sum_i x_i = 1$. In other words we look at the projection onto the
subspace $x_6 = 0$. Since we are dividing by the volume of the
projection of the simplex the exact scale factor is unimportant. This
projection is defined by the conditions $x_i \geq 0$ and $\sum_{i=1}^5 x_i
\leq 1$ --- we call these the \Em{standard inequalities}. The volume in
$\mathbb{R}^5$ of this simplex is easily computed to be $1/5! = 1/120$. Recalling the
factor of $6$ mentioned above, we shall therefore multiply the volume
answer obtained below by $720$ to compute the limiting probability.
The volume required is given by inclusion-exclusion as $\vol(R_b) +
\vol(R_c) - \vol(R_b \cap R_c)$ where $R_b, R_c$ respectively denote the
region for which manipulation in favour of $b$ or $c$ is possible.
The conditions describing the sincere outcome reduce, after elimination
of $x_6$, to
\begin{align}
\label{eq:borda a-b}
2x_1 + 3x_2 - x_4 + 2x_5 & \geq 1; \qquad \text{($a$ beats $b$ (sincere))} \\
\label{eq:borda b-c}
2x_1 + 3x_3 + 2x_4 - x_5 & \geq 1; \qquad \text{($b$ beats $c$ (sincere))}
\end{align}
while the conditions describing the outcome after manipulation amount to
\begin{align}
\label{eq:borda-str b-a}
3x_1 + 4x_2 + 3x_5 & \leq 2 \qquad \text{($b$ beats $a$ (strategic))}\\
\label{eq:borda-str b-c}
x_1 + 2x_2 + 2x_5 & \leq 1 \qquad \text{($b$ beats $c$ (strategic))}.
\end{align}
Now $R_b$ is defined by the standard inequalities and those in
\eqref{eq:borda a-b} -- \eqref{eq:borda-str b-c}. Also $R_c$ is obtained by
applying the permutation $b \leftrightarrow c$, which induces the
permutation $x_1 \leftrightarrow x_2, x_3 \leftrightarrow x_5, x_4
\leftrightarrow x_6$, and $R_{bc}$ is given by the union of the two
sets of inequalities defining $R_b$ and $R_c$.
The package \texttt{Convex} \cite{convex-package} immediately yields the
answer when given this input. The respective volumes of $R_b, R_c,
R_{bc}$ are $371/559872, 881/6531840, 170873/1714608000$ and the
required limit is precisely $132953/264600 \approx 0.5024678760$.
The large denominators in the fractions above give a clue to the
difficulty of this problem. \texttt{Convex} also computes the
vertices of the polytope. The least common multiple of the denominators
of the coordinates of the vertices is $72$ for $R_b$, $504$ for $R_c$,
$1260$ for $R_{bc}$. Thus the minimum period $e$ is a divisor of $2^3
\cdot 3^2 \cdot 5 \cdot 7 = 2520$, as we already knew from above.
\subsection{Condorcet phenomena}
\label{ss:cond}
See the two surveys and recent book by Gehrlein \cite{Gehr1997,
Gehr2002a, Gehr2006} for more information about previous work on this
topic.
In \cite{GeLe2004} Gehrlein and Lepelley state ``A
very large number of studies (probably more than $50\%$ of the studies
that have been devoted to probability calculations in social choice
theory) have been conducted to develop representations for the
probability that Condorcet's Paradox will occur, and for the Condorcet
efficiency of various rules, with the assumptions of IC and IAC.''
\subsubsection*{Condorcet's paradox}
\Em{Condorcet's Paradox} occurs in a voting situation when there is no
Condorcet winner --- that is, no one candidate beats all others when
only pairwise comparisons are considered. This occurrence is independent
of the voting rule being used. To compute its likelihood, we compute the
complementary event.
Suppose that we have $3$ alternatives $a, b, c$. Let $C$ be the event
that $a$ is the Condorcet winner. This yields inequalities that boil
down to
\begin{align}
\label{eq:cond a-b}
2x_1 + 2x_2 + 2x_5 & \geq 1 \qquad\text{($a$ beats $b$ pairwise);} \\
\label{eq:cond a-c}
2x_1 + 2x_2 + 2x_3 & \geq 1 \qquad\text{($a$ beats $c$ pairwise).}
\end{align}
Let $P_C$ be the polytope defined by these and the standard
inequalities. Then {\tt Convex} yields $\vol(P_C) = 1/384$, so that
Condorcet's Paradox occurs with asymptotic probability $1 - 3 \cdot
5!/384 = 1/16$ for IAC with 3 alternatives.
This is of course a known result dating back several decades.
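A quick Monte Carlo corroboration of the $1/16$ value is again possible (our own sketch, sampling voting situations uniformly on the simplex).
\begin{verbatim}
# Monte Carlo sketch: probability that no Condorcet winner exists
# under IAC with 3 candidates (expected value 1/16 = 0.0625).
import random

ORDERS = ['abc', 'acb', 'bac', 'bca', 'cab', 'cba']

def has_condorcet_winner(x):
    for w in 'abc':
        if all(sum(xi for xi, o in zip(x, ORDERS) if o.index(w) < o.index(v)) >
               sum(xi for xi, o in zip(x, ORDERS) if o.index(v) < o.index(w))
               for v in 'abc'.replace(w, '')):
            return True
    return False

random.seed(1)
T = 200000
print(sum(not has_condorcet_winner([random.expovariate(1) for _ in range(6)])
          for _ in range(T)) / T)
\end{verbatim}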
\subsubsection*{Condorcet efficiency}
Similarly we may compute the \Em{Condorcet efficiency} of a given rule,
namely the conditional probability that it elects the Condorcet winner
given that this winner exists. For a given scoring rule defined by
weights $(1, \lambda, 0)$, let $X_\lambda$ be the event that $a$ is the
winner when this rule is used. Clearly $\Pr(X_\lambda) = 1/3$.
The conditions describing $X_\lambda$ amount to
\begin{align}
\label{eq:gen a-b}
x_1 + (1 + \lambda) x_2 + (2 \lambda - 1) x_3 + (\lambda - 1) x_4 + 2\lambda x_5 & \geq \lambda
\qquad \text{($a$ beats $b$ with rule $\lambda$)} \\
\label{eq:gen a-c}
2x_1 + (2 - \lambda) x_2 + (1 + \lambda) x_3 + (1 - \lambda) x_4 + \lambda x_5 & \geq 1
\qquad \text{($a$ beats $c$ with rule $\lambda$)}.
\end{align}
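As a mechanical check of \eqref{eq:gen a-b} (a small sketch of ours), one can verify numerically that, once $x_6$ is eliminated via $\sum_i x_i = 1$, the left-hand side minus $\lambda$ coincides exactly with the score of $a$ minus the score of $b$.
\begin{verbatim}
# Sanity check (sketch): for random normalized x and random lambda, the
# LHS of the first condition minus lambda equals score(a) - score(b).
import random

ORDERS = ['abc', 'acb', 'bac', 'bca', 'cab', 'cba']

def score(x, lam, cand):
    weight = [1.0, lam, 0.0]
    return sum(xi * weight[o.index(cand)] for xi, o in zip(x, ORDERS))

random.seed(3)
for _ in range(5):
    lam = random.random()
    x = [random.random() for _ in range(6)]
    s = sum(x)
    x = [xi / s for xi in x]      # normalize so that sum x_i = 1
    x1, x2, x3, x4, x5, x6 = x    # x6 enters only via the normalization
    lhs = x1 + (1 + lam)*x2 + (2*lam - 1)*x3 + (lam - 1)*x4 + 2*lam*x5
    print(abs((lhs - lam) - (score(x, lam, 'a') - score(x, lam, 'b'))) < 1e-12)
\end{verbatim}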
The Condorcet efficiency of rule $\lambda$ is $\Pr(X_\lambda \cap C)/\Pr(C)$, which equals $3 \cdot 5! (16/15) \vol(P_\lambda \cap P_C)$, where $P_\lambda$ denotes the polytope defined by \eqref{eq:gen a-b}, \eqref{eq:gen a-c} and the standard inequalities.
In the special cases $\lambda = 0, 1/2, 1$ of plurality, Borda,
antiplurality, respectively, we obtain $119/135$, $41/45$, $17/27$.
These last three results were obtained long ago by Gehrlein.
We can consider further intersections of such events. For example,
Gehrlein has computed limiting results under IC for the
conditional probability that rule $\lambda$ chooses the Condorcet winner
given that Borda does, that Borda does given that rule $\lambda$ does,
and that both rules choose the Condorcet winner given that it exists.
The answers to these questions are easily found for IAC using the above
methods and are listed in Table~\ref{table:cond}. These have not
previously been published as far as we are aware (numbers in brackets in that table
represent citations). In
Table~\ref{table:cond} we let $A | C$ denote the event that
antiplurality chooses the Condorcet winner given that it exists, $B | (P
\cap C)$ the probability that Borda chooses the Condorcet winner given
that plurality does, etc. These can be computed easily using the events
$C$ and $X_\lambda$ above. For example, the entry $B | (P \cap C)$
corresponds to the probability of the event that Borda and Condorcet
agree given that plurality and Condorcet agree. This is simply the
volume of the polytope $P_{1/2} \cap P_C \cap P_0$ divided by the volume
of $P_0 \cap P_C$ (the factor of $3$ cancels out because we are
computing conditional probabilities via $P(E_1 | E_2) = P(E_1 \cap
E_2)/P(E_2)$).
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
$P | C$ & $A | C$ & $B | C$ & $(A \cap B) | C$ & $(A \cap P) | C$ & $(B \cap P) | C$ &
$B | (P \cap C)$ & $B| (A \cap C)$ \\
\hline
0.88148 \cite{Gehr1982}& 0.62963 \cite{Gehr1982}& 0.91111\cite{Gehr1992} & 0.61775 & 0.53040 & 0.81821 & 0.92282 & 0.98113 \\
\hline
\end{tabular}
\caption{(Joint) limiting Condorcet efficiencies of the standard positional rules
under IAC, 3 candidates}
\label{table:cond}
\end{table}
We consider even more intersections of such events in the next section.
In \cite{CGZ2005} the value of $\lambda$ for which the
positional rule with weights $(1, \lambda, 0)$ is most Condorcet
efficient was determined. We call this ``rule M'' for brevity.
The optimal value of $\lambda$ is an algebraic irrational number given as
the root of a polynomial of degree $8$ and to $5$ decimal places equals
$0.37228$. The corresponding value of the Condorcet efficiency is
approximately $0.92546$, only slightly more than that for Borda.
To use this particular value of $\lambda$ in computations similar to
those above, it is probably best to switch to software that performs
floating point computations in order to compute volumes. One such is
{\tt vinci}. We obtain for example that the joint Condorcet efficiency of
the optimal rule and the Borda rule equals, to 5 decimal places, $0.89183$.
\subsubsection*{Borda's Paradox}
We finish here by discussing \Em{Borda's Paradox}. Some rules can elect
a Condorcet loser, namely a candidate that is beaten by every other when
pairwise comparisons are made. The probability of this event for
plurality and antiplurality has been studied under IAC in
\cite{Lepe1993}, and it has long been known to be zero for
Borda. The methods in this section can be applied directly, since we
need only replace the Condorcet winner conditions by the same ones with
the direction of the inequality reversed. This shows that Borda's
Paradox occurs for plurality with probability $1/36$, agreeing with
\cite{Lepe1993}. The corresponding results for Borda and
antiplurality are $0$ and $17/576$, corroborating the previous results.
The probability that the most Condorcet efficient rule above elects the
Condorcet loser is, as one might expect, very small. The results are shown in
Table~\ref{table:loser}.
\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline
plurality & rule M & Borda & antiplurality \\
\hline
0.0278 \cite{Lepe1993} & 0.00131 & 0 & 0.0295 \cite{Lepe1993}\\
\hline
\end{tabular}
\caption{Limiting probability of Borda's paradox under IAC, 3 candidates}
\label{table:loser}
\end{table}
\subsection{When do all common rules elect the same winner?}
\label{ss:allsame}
For three-alternative elections, all positional voting rules elect the
same winner in a given situation if and only if both plurality and
antiplurality elect the same winner, since the vector of scores is a
convex combination of those for the two extreme rules. The probability
of this event has been investigated under IC but not under IAC as far as
we are aware. In \cite{MTV2000} Merlin, Tataru and
Valognes also investigated the probability under IC that all positional
rules and all Condorcet efficient rules yield the same winner (in this
case, all scoring runoff rules also yield this same winner).
We again suppose that $a$ is the winner. We want to compute the
probability of the event $P \cap A$ as described in the previous
section. The relevant polytope has $18$ vertices and $m = 12$. Its
volume is $113/77760$ and so the limiting probability that all
positional rules yield the same winner for $3$ alternatives under IAC is
$113/216$ (this confirms a result in \cite{Gehr2002b}).
We could also investigate the relationship between, say, plurality and
Borda. They agree with probability $89/108$, whereas
antiplurality and Borda agree with probability $1039/1512$.
The probability that all Condorcet rules and all positional rules elect
the same winner given that the Condorcet winner exists is obtained
easily via computation of $\Pr(P \cap C \cap A)$ as above. The answer is
$3437/6480$. The polytope involved has $29$ vertices and $m = 12$.
We must also consider the case when no Condorcet winner exists. There
are two cases corresponding to the two cycles $a, b, c, a$ and $a, c, b,
a$. In the first case, \cite{MTV2000} shows that the
rules all agree if and only if all positional rules give the ranking $a,
b, c$, and this occurs if and only if both plurality and antiplurality
give that ordering. The computation is straightforward as above and the
probability of this event is only $5/10368$. The contribution from the
cyclic case is therefore 32 times this, or $5/324$, and the final
result for the probability that all rules agree is $10631/20736$.
We can also consider the probability that two rules agree in their whole
ranking, not just in the choice of winner. This is easily computed
similarly to above: plurality and antiplurality agree on their whole
ranking with probability $8/27$, while Borda and
plurality agree with probability $61/108$. Borda and
antiplurality also agree with probability $61/108$, which is clear by
symmetry in any case.
\begin{table}
\begin{tabular}{|c|c|c|}
\hline
Rules & Elect same winner & Agree whole ranking\\
\hline
Antiplurality and Borda & 0.68717 & 0.56481 \\
\hline
Antiplurality and plurality (hence all scoring rules) & 0.52315 \cite{Gehr2002b} & 0.29630\\
\hline
Plurality and Borda & 0.82407 & 0.56481\\
\hline
All common rules & 0.51268 & \\
\hline
\end{tabular}
\caption{Limiting probability of agreement of various rules under IAC, 3 candidates}
\label{table:agree}
\end{table}
\subsection{Abstention and Participation Paradoxes}
In \cite{LeMe2001} Lepelley and Merlin discuss various
ways in which voters can attempt to manipulate an election by abstaining
from voting. All scoring runoff rules and Condorcet rules suffer from
this problem. Although abstaining turns out to be a dominated strategy
for scoring runoff rules, it is still of interest to compute the
probability that a situation may be manipulated in this way. Lepelley
and Merlin carry this out under IC and IAC for scoring runoff rules
based on plurality, antiplurality and Borda. For the latter (the Nanson
rule) the limiting probability was not computed exactly (Table~5 of
\cite{LeMe2001} refers to results of Monte Carlo
simulation). We compute some exact values here.
We use the linear system given in \cite{LeMe2001}. Suppose
that $c$ is eliminated first and $a$ then beats $b$ in the runoff. The
Positive Participation Paradox occurs when voters ranking $a$ first are
added to the electorate, and yet $a$ then loses. This cannot happen when
plurality is used at the first stage, but for other rules it can happen
that $b$ now loses the first stage, and $a$ subsequently loses the
runoff against $c$. Note that only voters with preference order $acb$
can cause this to occur, and it can only occur when $c$ originally beats
$a$ pairwise.
The system describing this set of voting situations contains the
inequalities stating that $a$ beats $b$ and $b$ beats $c$ using the
given scoring rule, and also that $a$ beats $b$ pairwise. In addition we
have another constraint as described in \cite{LeMe2001}
(note that $n_6$ in the first equation on p.58 of that paper should be
$-n_6$). Carrying out the (by now routine) computation, we obtain
$1/72$, which confirms the simulation result $0.014$ referred to
above. Note that the polytope involved has only $6$ vertices and $6$
facets but $m = 18$; if $e = 18$ (which we have not checked), it would
be difficult to compute the Ehrhart polynomial using the old methods,
which probably explains why only simulation results were obtained for the Nanson
rule in the paper cited above.
Similarly we may compute the result for each of several other participation paradoxes. The results
for the negative participation, positive abstention and negative
abstention paradoxes (see \cite{LeMe2001} for definitions
and characterizations of the polytopes) are respectively $1/48, 1/96,
1/72$, confirming the earlier simulation results $0.020, 0.010, 0.014$.
We can also perform the analogous computations for plurality and antiplurality runoff --- the
results confirm those in \cite{LeMe2001} and are shown in Table~\ref{table:abs}.
\begin{table}
\begin{tabular}{|c|c|c|c|c|}
\hline
Underlying rule & PPP & NPP & PAP & NAP\\
\hline
plurality \cite{LeMe2001} & 0 &0.07292 & 0 & 0.04080\\
Borda & 0.01389 & 0.02083 & 0.01042 & 0.01389 \\
antiplurality \cite{LeMe2001} & 0.03822 &0 &0.04253 & 0\\
\hline
\end{tabular}
\caption{Limiting probability of participation paradoxes for scoring runoff rules under IAC,
3 candidates}
\label{table:abs}
\end{table}
\subsection{The referendum paradox}
\label{ss:ref}
This gives an example where the variables describing our polytopes are
slightly different.
In \cite{FLM2004} the referendum or Compound Majority Paradox
is studied. In the simplest case there are $N$ equal sized districts
each having $n$ voters. There are two candidates $a$ and $b$ and voters
in each district use majority rule to decide which candidate wins each
district. The candidate winning a majority of districts is the winner of
the election; the paradox occurs when this candidate would have lost if
simple majority had been used in the union of all districts.
Among other things, the authors of \cite{FLM2004} derive the
probability of occurrence for $N=3, 4, 5$ under IAC using the older
methods and state that they are not able to extend it to $N\geq 6$.
Using the methods of the present paper it is easy to perform the
computations for at least a few more values of $N$. Let $n_i$ denote the
number of voters voting for $a$ in district $i$. The relevant set turns
out to be described (ignoring ties for simplicity) by the union of
polytopes of the form
\begin{align*}
n_i & \geq n/2 \text{ for $1 \leq i \leq k$}
\qquad\text{($a$ wins $k$ districts)} \\
0 \leq n_i & \leq n/2 \text{ for $k+1 \leq i \leq N$}
\qquad\text{($b$ wins $N-k$ districts)} \\
\sum_i n_i & \leq Nn/2 \qquad\text{($b$ wins overall)}
\end{align*}
for $\lfloor N/2 \rfloor + 1 \leq k \leq N - 1$. The volume of the
polytope $P_n$ corresponding to $k$ must be multiplied by $2 \binom{N}{k}$ to
account for the symmetries of the problem. Note that there are $(n+1)^N$
situations to consider and so the leading term in $n$ of the Ehrhart
polynomial gives the probability required.
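This model is also easy to simulate: in the limit $n \to \infty$, the
vote share for $a$ in each district is an independent uniform draw from
$[0,1]$. The following Python sketch (our own illustration; the trial
count is arbitrary) produces a Monte Carlo estimate that can be compared
with the exact values discussed below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, trials = 5, 1_000_000
# Limiting IAC: district vote shares for a are i.i.d. uniform on [0,1].
v = rng.random((trials, N))
a_districts = (v > 0.5).sum(axis=1) > N / 2   # a wins most districts
a_popular = v.mean(axis=1) > 0.5              # a wins the popular vote
print((a_districts != a_popular).mean())      # referendum paradox frequency
\end{verbatim}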
Doing the analogous computation for $N = 3$ and $N = 4$ we obtain (as
$n \to \infty$, in other words computing the volume of $P_n/n$) results
agreeing with \cite{FLM2004}. However already for $N = 5$ we
obtain $61/384$ as opposed to their result $55/384$. For $N = 7$ we have
$9409/46080$. The most complicated corresponding polytope in the last
case has $36$ vertices and $11$ facets, whereas for $N = 9$ it has $91$ vertices and $14$ facets.
We did not attempt to find the
maximum value of $N$ for which our software could obtain an answer;
the answer was given essentially instantaneously for $N = 9$. The conjecture in \cite{FLM2004} that
the probability tends to a limit of around $0.165$ as $N$ (odd) goes to infinity seems unlikely
in the light of these results.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
number of districts & 3 & 4 & 5 & 6 & 7 & 9\\
\hline
probability &0.125 \cite{FLM2004} &0.02083 \cite{FLM2004} &0.15885 \cite{FLM2004}
&0.04063 & 0.20419 & 0.26954 \\
\hline
\end{tabular}
\caption{Limiting probability of referendum paradox under IAC,
2 candidates}
\label{table:ref}
\end{table}
\section{Summary and discussion of future work}
We have shown that a wide variety of natural probabilistic questions for
$3$-alternative elections under IAC can be answered by applying standard
algorithms for counting lattice points in, and computing volumes of,
convex polytopes. For $4$ or more alternatives the computations are
conceptually the same but necessarily more complicated. However, the
scope for extending results in the $3$-candidate case to $4$ or more
candidates is obviously higher than for the older methods, which now
appear to be completely superseded. One important point to notice is
that many algorithms for volume computation have running times that are
very sensitive to the number of defining hyperplanes and the number of
vertices. Thus finding the most efficient description of the input
system is important. It is certainly clear that further progress in this
area will require researchers in social choice theory to understand in
some detail how the fastest algorithms for lattice point counting and
volume computation actually work. This may even lead to proofs for
larger (or general) numbers of candidates when the polytopes concerned
have a particularly nice structure.
Many questions naturally arise from our work here. One obvious line of
attack is to try to find the optimal parameter for $3$-alternative scoring
rules that minimizes the probability of a certain undesirable behaviour
occurring. The present authors are already engaged in carrying this out
for the case of (naive, coalitional) manipulability. Numerical results
obtained in \cite{PrWi2005} show that the answer may well be
plurality, but this has never been proved. An attack on this problem
along the lines of the approach in the present paper would require
computation of volumes of a polytope whose defining constraints depend
linearly on a parameter $\lambda$, and this requires considerable work
as shown in \cite{CGZ2005}. Understanding of how to
carry out such a computation would help in understanding the variation
between positional rules. For example, the probability of electing a
Condorcet loser is of order $0.03$ for both plurality and antiplurality,
but an order of magnitude smaller near the Borda rule, and as a function
of $\lambda$ is very flat there. Quantifying this type of variation
analytically may show, for example, that it is not worth the trouble of
replacing Borda by the Condorcet-optimal positional rule.
Another direction is to consider other probability models. For
simplicity here we have not considered some common assumptions such as
single-peaked preferences and the Maximal Culture Condition. Many
computations in these cases reduce to ones identical in spirit to those
we have undertaken here. More general P{\'o}lya-Eggenberger distributions
would lead to the more difficult issue of integrals of nonconstant
probability densities over polytopes, but some results may be
forthcoming there.
\bibliographystyle{plain}
| {
"timestamp": "2012-02-17T02:00:50",
"yymm": "1202",
"arxiv_id": "1202.3493",
"language": "en",
"url": "https://arxiv.org/abs/1202.3493",
"abstract": "We show how powerful algorithms recently developed for counting lattice points and computing volumes of convex polyhedra can be used to compute probabilities of a wide variety of events of interest in social choice theory. Several illustrative examples are given.",
"subjects": "Combinatorics (math.CO)",
"title": "Probability calculations under the IAC hypothesis",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9843363545048391,
"lm_q2_score": 0.8031737987125612,
"lm_q1q2_score": 0.790593169058526
} |
https://arxiv.org/abs/1812.08485 | First-order algorithms converge faster than $O(1/k)$ on convex problems | It is well known that both gradient descent and stochastic coordinate descent achieve a global convergence rate of $O(1/k)$ in the objective value, when applied to a scheme for minimizing a Lipschitz-continuously differentiable, unconstrained convex function. In this work, we improve this rate to $o(1/k)$. We extend the result to proximal gradient and proximal coordinate descent on regularized problems to show similar $o(1/k)$ convergence rates. The result is tight in the sense that a rate of $O(1/k^{1+\epsilon})$ is not generally attainable for any $\epsilon>0$, for any of these methods. | \section{Introduction} \label{sec:intro}
Consider the unconstrained optimization problem
\begin{equation}
\min_x \, f(x),
\label{eq:f}
\end{equation}
where $f$ has domain in an inner-product space and is convex and $L$-Lipschitz
continuously differentiable for some $L > 0$. We assume throughout
that the solution set $\Omega$ is non-empty. (Elementary arguments
based on the convexity and continuity of $f$ show that $\Omega$ is a
closed convex set.) Classical convergence theory for gradient descent
on this problem indicates an $O(1/k)$ global convergence
rate in the function value. Specifically, if
\begin{equation}
x_{k+1} \coloneqq x_k - \alpha_k \nabla f(x_k), \quad k=0,1,2,\dotsc,
\label{eq:gd}
\end{equation}
and $\alpha_k \equiv \bar\alpha \in (0, 1/L]$, we have
\begin{equation}
f\left( x_k \right) - f^* \leq \frac{\mbox{\rm dist}(x_0, \Omega)^2}{2\bar\alpha
k}, \quad k=1,2,\dotsc,
\label{eq:1k}
\end{equation}
where $f^*$ is the optimal objective value and $\mbox{\rm dist}(x, \Omega)$
denotes the distance from $x$ to the solution set. The proof of
\eqref{eq:1k} relies on showing that
\begin{equation}
k \left(f\left( x_k \right) - f^*\right) \leq \sum_{T=1}^{k}
\left(f\left( x_{T} \right) - f^* \right) \leq \frac{1}{2\bar
\alpha} \mbox{\rm dist}(x_0, \Omega)^2, \quad k=1,2,\dotsc,
\label{eq:sum}
\end{equation}
where the first inequality utilizes the fact that gradient descent is
a descent method (yielding a nonincreasing sequence of function values
$\{ f(x_k) \}$).
We demonstrate in this paper that the bound \eqref{eq:1k} is not
tight, in the sense that $k (f(x_k)-f^*) \to 0$, and thus $f(x_k)-f^*
= o(1/k)$. This result is a consequence of the following technical
lemma.
\begin{lemma}
\label{lemma:techniques}
Let $\{\Delta_k\}$ be a nonnegative sequence satisfying the following
conditions:
\begin{enumerate}
\item $\{\Delta_k\}$ is monotonically decreasing;
\item $\{\Delta_k\}$ is summable, that is, $\sum_{k=0}^{\infty}
\Delta_k < \infty$.
\end{enumerate}
Then $k \Delta_k \to 0$, so that $\Delta_k = o(1/k)$.
\end{lemma}
\begin{proof}
The proof uses simplified elements of the proofs of Lemmas~2 and 9
of Section~2.2.1 from \cite{Pol87a}. Define $s_k \coloneqq k \Delta_k$ and
$u_k \coloneqq s_k + \sum_{i=k}^{\infty} \Delta_i$. Note that
\begin{equation} \label{eq:vp1}
s_{k+1} = (k+1) \Delta_{k+1} \le k \Delta_k + \Delta_{k+1} \leq s_k
+ \Delta_k.
\end{equation}
From \eqref{eq:vp1} we have
\[
u_{k+1} = s_{k+1} + \sum_{i=k+1}^{\infty} \Delta_i \le s_k +
\Delta_k + \sum_{i=k+1}^{\infty} \Delta_i = s_k +
\sum_{i=k}^{\infty} \Delta_i = u_k,
\]
so that $\{ u_k \}$ is a monotonically decreasing nonnegative
sequence. Thus there is $u \ge 0$ such that $u_k \to u$, and since
$\lim_{k \to \infty} \sum_{i=k}^{\infty} \Delta_i = 0$, we have $s_k
\to u$ also.
Assuming for contradiction that $u>0$, there exists $k_0>0$ such
that $s_k \ge u/2>0$ for all $k \ge k_0$, so that $\Delta_k \ge
{u}/{(2k)}$ for all $k \ge k_0$. This contradicts the summability of
$\{ \Delta_k \}$. Therefore we have $u=0$, so that $k \Delta_k =
s_k \to 0$, proving the result.
\qed
\end{proof}
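Note that the lemma guarantees $k \Delta_k \to 0$ but no particular
rate, and indeed none is available in general: for example, $\Delta_k =
1/\big((k+2)\log^2(k+2)\big)$ is monotonically decreasing and summable,
yet $k\Delta_k \approx 1/\log^2 k$ decays arbitrarily slowly. A quick
numerical check of this example (our own illustration) in Python:
\begin{verbatim}
import numpy as np

k = np.arange(1, 10**6 + 1, dtype=float)
delta = 1.0 / ((k + 2) * np.log(k + 2) ** 2)  # decreasing, summable
for i in (10**2, 10**4, 10**6):
    print(i, i * delta[i - 1])  # k * Delta_k -> 0, but only like 1/log^2 k
\end{verbatim}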
Our claim about the fixed-step gradient descent method follows
immediately by setting $\Delta_k = f(x_k)-f^*$ in
Lemma~\ref{lemma:techniques}. We state the result formally as follows,
and prove it at the start of Section~\ref{sec:main}.
\begin{theorem}
\label{thm:main}
Consider \eqref{eq:f} with $f$ convex and $L$-Lipschitz continuously
differentiable and nonempty solution set $\Omega$. If the step sizes
satisfy $\alpha_k \equiv \bar\alpha \in (0, 1 / L]$ for all $k$, then
gradient descent \eqref{eq:gd} generates objective values $f(x_k)$
that converge to $f^*$ at an asymptotic rate of $o(1/k)$.
\end{theorem}
This result shows that the $o(1/k)$ rate for gradient descent with a
fixed short step size is universal on convex problems, without any
additional requirements such as the boundedness of $\Omega$ assumed in
\cite[Proposition~1.3.3]{Ber16a}. In the remainder of the paper, we
show that this faster rate holds for several other smooth optimization
algorithms, including gradient descent with fixed steps in the larger
range $(0,2/L)$, gradient descent with various line-search strategies,
and stochastic coordinate descent with arbitrary sampling
strategies. We then extend the result to algorithms for regularized
convex optimization problems, including proximal gradient and
stochastic proximal coordinate descent.
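As a quick numerical illustration of Theorem~\ref{thm:main}, the
following Python sketch (our own; the quartic objective and all
constants are arbitrary choices satisfying the assumptions above) runs
\eqref{eq:gd} with $\bar\alpha = 1/L$ and prints $k(f(x_k)-f^*)$, which
drifts toward zero rather than levelling off at a positive constant.
\begin{verbatim}
# Minimal sketch: fixed-step gradient descent on f(x) = x^4, f* = 0.
# On the level set of x0 = 1 we can take L = 12 >= |f''(x)| for |x| <= 1.
L = 12.0
alpha = 1.0 / L
x = 1.0
for k in range(1, 100_001):
    x -= alpha * 4.0 * x**3          # gradient step
    if k % 20_000 == 0:
        print(k, k * x**4)           # k * (f(x_k) - f*) decays to 0
\end{verbatim}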
Except for the cases of coordinate descent and proximal coordinate
descent which require a finite-dimensional space so that all the
coordinates can be processed, our results apply to any inner-product
spaces. Assumptions such as bounded solution set, bounded level set,
or bounded distance to the solution set, which are commonly assumed in
the literature, are all unnecessary. We can remove these assumptions
because an implicit regularization property causes the iterates to
stay within a bounded area.
In our description, the Euclidean norm is used for simplicity, but our
results can be extended directly to any norms induced by an inner
product,\footnote{We meant that given an inner product
$<\cdot,\cdot>$, the norm $\|\cdot\|$ is defined as $\|x\|
\coloneqq \sqrt{<x,x>}$.}
provided that Lipschitz continuity of $\nabla f$ is defined
with respect to the corresponding norm and its dual norm.
\paragraph{Related Work.}
Our work was inspired by \cite[Corollary~2]{PenZZ18a} and
\cite[Proposition~1.3.3]{Ber16a}, which improve the convergence rates of
certain algorithms on convex problems in a Euclidean
space from $O(1/k)$ to $o(1/k)$ when the level set is compact. Our
paper develops improved convergence rates of several algorithms on
convex problems without the assumption on the level set, with most of
our results applying to non-Euclidean Hilbert spaces. The main proof
techniques in this work are somewhat different from those in the works
cited here.
For an accelerated version of proximal gradient on convex problems,
it is proved in \cite{AttP16a} that the convergence rate can be
improved from $O(1/k^2)$ to $o(1/k^2)$. Accelerated proximal
gradient is a more complicated algorithm than the nonaccelerated
versions we discuss, and thus \cite{AttP16a} require a more
complicated analysis that is quite different from ours.
\cite{DenLPY17a} have stated a version of
Lemma~\ref{lemma:techniques} with a proof different from the proof
that we present, using it to show the
convergence rate of the quantity $\|x_k - x_{k+1}\|$ of a version of
the alternating-directions method of multipliers (ADMM). Our work
differs in the range of algorithms considered and the nature of the
convergence. We also provide a discussion of the tightness of the
$o(1/k)$ convergence rate.
\section{Main Results on Unconstrained Smooth Problems}
\label{sec:main}
We start by detailing the procedure for obtaining \eqref{eq:sum}, to
complete the proof of Theorem~\ref{thm:main}. First, we define
\begin{equation}
M(\alpha) \coloneqq \alpha - \tfrac12 L\alpha^2.
\label{eq:M}
\end{equation}
From the Lipschitz continuity of $\nabla f$, we have for any
point $x$ and any real number $\alpha$ that
\begin{align}
\label{eq:Lip}
f\left( x - \alpha \nabla f(x) \right)
\leq f( x ) - \nabla f(x)^\top \left( \alpha \nabla f(x) \right)
+ \frac{L}{2}\nabs{\alpha \nabla f(x)}^2
= f( x ) - M( \alpha ) \nabs{\nabla f(x)}^2.
\end{align}
Clearly,
\begin{equation}
\alpha \in \left(0,\frac1L\right] \quad \Rightarrow \quad
M(\alpha) \geq \tfrac12 {\alpha}> 0,
\label{eq:Mbound}
\end{equation}
so in this case, we have by rearranging \eqref{eq:Lip} that
\begin{align}
\| \nabla f(x) \|^2 \le \frac{1}{M(\alpha)} \left( f(x) -
f(x-\alpha \nabla f(x)) \right) \le \frac{2}{\alpha} \left( f(x) - f(x-\alpha
\nabla f(x)) \right).
\label{eq:th2}
\end{align}
Considering any solution $\bar{x} \in \Omega$ and any $T \geq 0$, we have
for gradient descent \eqref{eq:gd} that
\begin{align}
\label{eq:dist}
\nabs{x_{T+1} - \bar{x}}^2 = \nabs{x_T - \alpha_T \nabla f(x_T) - \bar{x}}^2
= \nabs{x_T - \bar{x}}^2 + \alpha_T^2 \|\nabla f(x_T)\|^2 - 2 \alpha_T \nabla f(x_T)^\top
\left( x_T - \bar{x} \right).
\end{align}
Since $\alpha_T \in (0, 1/L]$ in \eqref{eq:dist}, from
\eqref{eq:th2} and the convexity of $f$ (implying $\nabla
f(x_T)^T(\bar{x}-x_T) \le f^* - f(x_T)$), we have
\begin{equation}
\nabs{x_{T+1} - \bar{x}}^2 \leq \nabs{x_T - \bar{x}}^2 + 2\alpha_T \left(
f\left( x_{T} \right) - f\left( x_{T+1} \right)\right) + 2 \alpha_T
\left(f^*- f\left( x_T \right) \right).
\label{eq:rk}
\end{equation}
By rearranging \eqref{eq:rk} and using $\alpha_T \equiv \bar\alpha \in (0,1/L]$,
\begin{align}
f\left( x_{T+1} \right) - f^*
\leq \frac{1}{2\bar\alpha} \left(
\nabs{x_T - \bar{x}}^2 - \nabs{x_{T+1} - \bar{x}}^2\right).
\label{eq:tosum}
\end{align}
We then obtain \eqref{eq:sum} by summing \eqref{eq:tosum} from $T=0$
to $T=k-1$ and noticing that $\bar{x}$ is arbitrary in $\Omega$.
Theorem~\ref{thm:main} applies to step sizes in the range $(0,1/L]$
only, but it is known that gradient descent converges at the rate of
$O(1/k)$ for both the fixed step size scheme with $\bar\alpha \in
(0,2/L)$ and line-search schemes. Next, we show that $o(1/k)$ rates
hold for these variants too. We then extend the result to
stochastic coordinate descent with arbitrary sampling of
coordinates.
\subsection{Gradient Descent with Longer Steps}
\label{subsec:linesearch}
In this subsection, we allow the steplengths $\alpha_k$ for
\eqref{eq:gd} to vary from iteration to iteration, according to the
following conditions, for some $\gamma \in (0,1]$:
\begin{subequations}
\begin{gather}
\label{eq:bdd}
\alpha_k \in [C_2, C_1],\quad C_2 \in \left(0, \frac{ 2 - \gamma
}{L}\right],\quad C_1 \geq C_2,\\
\label{eq:sufficient}
f\left( x_k - \alpha_k \nabla f(x_k) \right) \leq f\left( x_k \right) -
\frac{\gamma \alpha_k}{2} \nabs{\nabla f(x_k)}^2,
\end{gather}
\label{eq:linesearch}
\end{subequations}
Note that these conditions encompass a fixed-steplength strategy with
$\alpha_k \equiv C_2$ as a special case, by setting $C_1 = C_2$, and
noting that condition \eqref{eq:sufficient} is a consequence of
\eqref{eq:Lip}. (Note too that $\alpha_k \equiv C_2 \in (0,
( 2 - \gamma) / L]$ can be almost
twice as large as the bound $1/L$ considered above.)
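A standard way to generate steps satisfying \eqref{eq:linesearch} is
backtracking: since \eqref{eq:Lip} shows that \eqref{eq:sufficient}
holds whenever $\alpha_k \le (2-\gamma)/L$, a backtracking loop
terminates with an accepted step bounded below by a positive constant.
The following Python sketch (our own illustration; the initial step and
shrink factor are arbitrary choices) implements this strategy.
\begin{verbatim}
import numpy as np

def gd_backtracking(f, grad, x0, alpha0=1.0, gamma=0.5,
                    shrink=0.5, iters=100):
    """Gradient descent with backtracking enforcing (eq:sufficient):
    f(x - a g) <= f(x) - (gamma * a / 2) * ||g||^2."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad(x)
        a = alpha0
        while f(x - a * g) > f(x) - 0.5 * gamma * a * (g @ g):
            a *= shrink              # terminates once a <= (2-gamma)/L
        x = x - a * g
    return x

# Illustrative use on a convex quadratic:
f = lambda x: 0.5 * x @ x
grad = lambda x: x
print(gd_backtracking(f, grad, np.array([3.0, -4.0])))
\end{verbatim}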
The main result for this subsection is as follows.
\begin{theorem}
\label{thm:main1}
Consider \eqref{eq:f} with $f$ convex and $L$-Lipschitz continuously
differentiable and nonempty solution set $\Omega$. If the step sizes
$\alpha_k$ satisfy \eqref{eq:linesearch}, then gradient descent
\eqref{eq:gd} generates objective values $f(x_k)$ converging to $f^*$
at an asymptotic rate of $o(1/k)$.
\end{theorem}
We give two alternative proofs of this result to provide different
insights. The first proof is similar to the one we presented for
Theorem \ref{thm:main} at the start of this section. The second proof
holds only for Euclidean spaces. This proof improves the standard
proof of \cite[Section~2.1.5]{Nes04a}.
We start from the following lemma, which verifies that the iterates
remain in a bounded set and is used in both proofs.
\begin{lemma}
\label{lemma:rk}
Consider algorithm \eqref{eq:gd} with any initial point $x_0$, and
assume that $f$ is convex and $L$-Lipschitz-continuously
differentiable for some $L > 0$. Then when the sequence of
steplengths $\alpha_k$ is chosen to satisfy \eqref{eq:linesearch}, all
iterates $x_k$ lie in a bounded set. In particular, for any $\bar{x} \in
\Omega$ and any $k \geq 0$, we have that
\begin{align}
\label{eq:toprove1}
\|x_{k+1} - \bar{x}\|^2 &\le \|x_0 - \bar{x}\|^2 +
\frac{2 C_1}{\gamma}\left( f\left( x_0 \right) - f\left( x_{k+1}
\right) \right)
+ 2 C_2 \sum_{T=0}^k\left( f^* - f\left( x_T \right)
\right)\\
&\le \|x_0 - \bar{x}\|^2 + \frac{2 C_1}{\gamma}\left( f\left( x_0 \right)
- f^* \right).
\label{eq:toprove2}
\end{align}
\end{lemma}
\begin{proof}
By \eqref{eq:sufficient} and the convexity of $f$, \eqref{eq:dist}
further implies that for any $T \ge 0$,
\begin{align}
\label{eq:bound}
\nabs{x_{T+1} - \bar{x}}^2 - \nabs{x_{T} - \bar{x}}^2
\le \frac{2\alpha_T}{\gamma}\left( f\left( x_T \right) - f\left(
x_{T+1}
\right)\right) + 2 \alpha_T \left( f^* - f\left( x_T \right) \right).
\end{align}
We know that the first term is nonnegative from \eqref{eq:sufficient},
while the second term is nonpositive from the optimality of $f^*$.
Therefore, \eqref{eq:bound} implies
\begin{align}
\label{eq:bound2}
\nabs{x_{T+1} - \bar{x}}^2 - \nabs{x_{T} - \bar{x}}^2
\le \frac{2 C_1}{\gamma}\left( f\left( x_T \right) - f\left(
x_{T+1}
\right)\right) + 2 C_2 \left( f^* - f\left( x_T \right) \right).
\end{align}
We then obtain \eqref{eq:toprove1}
by summing \eqref{eq:bound2} for $T=0,1,\dotsc,k$ and telescoping.
By noting that $f(x_k) \geq f^*$ for all $k$,
\eqref{eq:toprove2} follows.
\qed
\end{proof}
The first proof of Theorem \ref{thm:main1} is as follows.
\begin{proof}[First Proof of Theorem \ref{thm:main1}]
We again consider Lemma~\ref{lemma:techniques} with $\Delta_k
\coloneqq f(x_k) - f^*$, which is always nonnegative from the
optimality of $f^*$. Monotonicity is clear from
\eqref{eq:sufficient}, so we just need to show summability. By
rearranging \eqref{eq:toprove1} and noting $f(x_{k+1}) \geq f^*$, we
obtain
\begin{align*}
2C_2\sum_{T=0}^k \Delta_T \le \nabs{x_0 - \bar{x}}^2 - \nabs{x_{k+1} - \bar{x}}^2 +
\frac{2 C_1}{\gamma}\Delta_0
\le \nabs{x_0 - \bar{x}}^2+ \frac{2 C_1}{\gamma}\Delta_0.
\tag*{\qed}
\end{align*}
\end{proof}
For the second proof of Theorem~\ref{thm:main1}, we first outline the
analysis from \cite[Section~2.1.5]{Nes04a} and then show how it can
be modified to produce the desired $o(1/k)$ rate. Denote by
$\bar{x}_T$ the projection of $x_T$ onto $\Omega$ (which is well defined
because $\Omega$ is nonempty, closed, and convex). We can utilize the
convexity of $f$ to obtain
\begin{equation*}
\Delta_T \leq \nabla f(x_T)^\top \left( x_T - \bar{x}_T \right) \leq \|\nabla f(x_T)\| \mbox{\rm dist}\left(
x_T,\Omega \right),
\end{equation*}
so that
\begin{equation} \label{eq:r0bound}
\| \nabla f(x_T) \| \ge \frac{\Delta_T}{\mbox{\rm dist} ( x_T,\Omega)}.
\end{equation}
By subtracting $f^*$ from both sides of
\eqref{eq:sufficient} and using $\alpha_k \ge C_2$ and
\eqref{eq:r0bound}, we obtain
\[
\Delta_{T+1} \leq \Delta_T -
\frac{C_2 \gamma\Delta_T^2}{2 \mbox{\rm dist}\left( x_T,\Omega \right)^2}.
\]
By dividing both sides of this expression by $\Delta_T \Delta_{T+1}$
and using $\Delta_{T+1} \le \Delta_T$, we obtain
\begin{equation}
\frac{1}{\Delta_{T+1}} \geq \frac{1}{\Delta_T} + \frac{C_2 \gamma
\Delta_T }{2 \mbox{\rm dist}\left( x_T, \Omega \right)^2 \Delta_{T+1}}
\ge
\frac{1}{\Delta_T} + \frac{C_2 \gamma}{2 \mbox{\rm dist}\left( x_T, \Omega \right)^2}.
\label{eq:rate}
\end{equation}
By summing \eqref{eq:rate} over $T=0,1,\dotsc,k-1$, we obtain
\begin{equation}
\frac{1}{\Delta_{k}} \geq \frac{1}{\Delta_0} +
\sum_{T=0}^{k-1}
\frac{C_2 \gamma}{2 \mbox{\rm dist}\left( x_T,\Omega \right)^2}
\quad
\Rightarrow
\quad
\Delta_{k} \leq \frac{1}{\sum_{T=0}^{k-1}
\frac{C_2 \gamma}{2 \mbox{\rm dist}\left( x_T,\Omega \right)^2}}.
\label{eq:fromhere}
\end{equation}
An $O(1/k)$ rate is obtained by noting from Lemma~\ref{lemma:rk} that
$\mbox{\rm dist}(x_T, \Omega) \le R_0$ for some $R_0>0$ and all $T$, so that
\begin{equation}
\sum_{T=0}^{k-1} \frac{1}{\mbox{\rm dist}\left( x_T,\Omega \right)^2} \geq
\frac{k}{R_0^2}.
\label{eq:toimprove}
\end{equation}
Our alternative proof uses the fact that \eqref{eq:toimprove} is a
loose bound for Euclidean spaces and that an improved result can be
obtained by working directly with \eqref{eq:fromhere}. We first use
the Bolzano-Weierstrass theorem (a bounded and closed set is
sequentially compact in a Euclidean space) together with
Lemma~\ref{lemma:rk}, to show that the sequence $\{x_k \}$ approaches
the solution set $\Omega$.
\begin{lemma}
\label{lemma:conv}
Assume the conditions in Lemma~\ref{lemma:rk} and in addition that $f$
is defined on a Euclidean space, $f: \Re^n \rightarrow \Re$. We have
\begin{equation}
\lim_{k \rightarrow \infty}\, \mbox{\rm dist}\left( x_k,\Omega \right) = 0.
\label{eq:conv}
\end{equation}
\end{lemma}
\begin{proof}
The proof is similar to \cite[Proposition~1]{PenZZ18a}. Assume for
contradiction that \eqref{eq:conv} does not hold. Then there are
$\epsilon > 0$ and an infinite increasing sequence $\{k_i\}$,
$i=1,2,\dotsc$, such that
\begin{equation}
\mbox{\rm dist}\left( x_{k_i}, \Omega \right) \geq \epsilon, \quad i=1,2,\dotsc.
\label{eq:contradict}
\end{equation}
From Lemma~\ref{lemma:rk} and the fact that $\{x_{k_i}\} \subset \Re^n$,
we conclude that the sequence $\{x_{k_i}\}$ lies in a compact set and therefore has an
accumulation point $x^*$. From \eqref{eq:rate}, we have
$$\frac{1}{\Delta_{k_{i+1}}} \ge \frac{1}{\Delta_{k_i}} +
\frac{C_2\gamma}{2\epsilon^2},$$ so
that $1/\Delta_k \uparrow \infty$ and hence $\Delta_k \downarrow
0$. By continuity of $f$, it follows that $f(x^*) = f^*$, so that $x^*
\in \Omega$ by definition, contradicting \eqref{eq:contradict}.
\qed
\end{proof}
We note that a result similar to Lemma \ref{lemma:conv} has been given
in \cite{BurGIS95a} using a more complicated argument with more
restricted choices of $\alpha$.
\begin{proof}[Second Proof of Theorem~\ref{thm:main1}, for
Euclidean Spaces]
We start with \eqref{eq:fromhere} and show that
\begin{equation*}
\lim_{k\rightarrow \infty} \frac{\frac{1}{\frac{C_2 \gamma}{2}
\sum_{T=0}^{k-1}
\frac{1}{\mbox{\rm dist}(x_T,\Omega)^2}}}{\frac{1}{k}} = 0,
\end{equation*}
or, equivalently,
\begin{equation}
\lim_{k \rightarrow \infty} \frac{k}{ \sum_{T = 0}^{k-1}
\frac{1}{\mbox{\rm dist}(x_T,\Omega)^2}} = 0.
\label{eq:inf}
\end{equation}
From the arithmetic-mean / harmonic-mean inequality,\footnote{ This
inequality says that for any real numbers $a_1,\dotsc, a_n > 0$,
their harmonic mean does not exceed their arithmetic mean. Namely,
\begin{equation*}
\frac{n}{\sum_{i=1}^na_i^{-1}} \leq \frac{\sum_{i=1}^n a_i}{n}.
\end{equation*}
}
we have that
\begin{equation}
0 \le \frac{k}{ \sum_{T = 0}^{k-1} \frac{1}{\mbox{\rm dist}(x_T,\Omega)^2}}
\leq \frac{\sum_{T=0}^{k-1} \mbox{\rm dist}(x_T,\Omega)^2}{k}.
\label{eq:amhm}
\end{equation}
Lemma \ref{lemma:conv} shows that $\mbox{\rm dist}(x_T, \Omega) \to 0$, so by
the Stolz-Ces\`aro theorem (see, for example, \cite{Mur09a}), the
right-hand side of \eqref{eq:amhm} converges to $0$. Therefore, from
the sandwich lemma, \eqref{eq:inf} holds. \qed
\end{proof}
\subsection{Coordinate Descent}
\label{subsec:cd}
We now extend Theorem~\ref{thm:main} to the case of randomized
coordinate descent. Our results extend immediately to
block-coordinate descent with fixed blocks. The analysis for
coordinate descent requires a finite-dimensional Euclidean space, so
that the method can visit every coordinate.
The standard short-step coordinate descent procedure requires
knowledge of coordinate-wise Lipschitz constants. Denoting by $e_i$
the $i$th unit vector, we denote by $L_i \geq 0$ the constants such
that:
\begin{equation}
\left| \nabla_i f(x) - \nabla_i f(x + h e_i)\right | \leq L_i
\left|h\right|, \quad
\mbox{for all $x \in \Re^n$ and all $h \in \Re$},
\label{eq:Ls}
\end{equation}
where $\nabla_i f(\cdot)$ denotes the $i$th coordinate of the gradient.
Note that if $\nabla f(x)$ is $L$-Lipschitz continuous, there always
exist $L_1,\dotsc,L_n \in [0,L]$ such that \eqref{eq:Ls} holds.
Without loss of generality, we assume $L_i > 0$ for all
$i$.
Given parameters $\{\bar L_i\}_{i=1}^n$ such that $\bar L_i
\geq L_i$ for all $i$, the coordinate descent update is
\begin{equation}
x_{k+1} \leftarrow x_k - \frac{\nabla_{i_k} f\left( x_k
\right)}{\bar L_{i_k}} e_{i_k},
\label{eq:cd}
\end{equation}
where $i_k$ is the coordinate selected for updating at the $k$th iteration.
We consider the general case of stochastic coordinate
descent in which the indices $i_k$ are independent and identically distributed
according to a fixed, prespecified probability distribution
$p_1,\dotsc,p_n$ satisfying
\begin{equation}
p_i \ge p_{\min} ,\quad i=1,2,\dotsc,n; \quad \sum_{i=1}^n p_i = 1,
\label{eq:prob}
\end{equation}
for some constant $p_{\min}>0$. Nesterov \cite{Nes12a} proves that
stochastic coordinate descent has an $O(1/k)$ convergence rate (in
the expected value of $f$) on convex problems. We show below that this rate
can be improved to $o(1/k)$.
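A minimal implementation of this scheme might look as follows (our own
Python sketch; the interface, the quadratic example, and the iteration
count are illustrative assumptions, with $L_i$ equal to the $i$th
diagonal entry of the Hessian in this case).
\begin{verbatim}
import numpy as np

def coord_descent(grad_i, Lbar, p, x0, iters=2000, seed=0):
    """Stochastic coordinate descent (eq:cd): coordinate i_k is drawn
    i.i.d. from the distribution p, and coordinate i_k of x is moved
    by -grad_i f(x) / Lbar_i. grad_i(x, i) returns the i-th partial
    derivative (an assumed interface)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        i = rng.choice(x.size, p=p)
        x[i] -= grad_i(x, i) / Lbar[i]
    return x

# Illustrative use: f(x) = 0.5 x^T A x, for which L_i = A_{ii}.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
x = coord_descent(lambda x, i: A[i] @ x, Lbar=np.diag(A),
                  p=[0.5, 0.5], x0=[1.0, -1.0])
print(x)  # approaches the minimizer x* = 0
\end{verbatim}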
\begin{theorem}
\label{thm:cd}
Consider \eqref{eq:f} with $f$ convex and nonempty solution set
$\Omega$, and suppose that the componentwise Lipschitz condition
\eqref{eq:Ls} holds with some
$L_1,\dotsc,L_n > 0$. If we apply coordinate descent \eqref{eq:cd}
and at each iteration, $i_k$ is independently picked at random
following a probability distribution satisfying \eqref{eq:prob}, then
the expected objective $\mathbb{E}_{i_0,i_1,\dotsc,i_{k-1}}[f(x_k)]$ converges to
$f^*$ at an asymptotic rate of $o(1/k)$.
\end{theorem}
\begin{proof}
From \eqref{eq:Ls} and the fact that $\bar L_i \geq L_i$, by holding all other
coordinates fixed, we have that for any $T \geq 0$,
\begin{equation}
f\left( x_T - \frac{\nabla_i f\left( x_T
\right)}{\bar L_i} e_i \right) - f\left( x_T \right) \leq
- \frac{1}{2 \bar L_i} \nabs{\nabla_i f\left( x_T \right)}^2,
i=1,\dotsc,n,
\label{eq:cdsuff}
\end{equation}
showing that the algorithm decreases $f$ at each iteration. Considering
any $\bar{x} \in \Omega$ and defining
\begin{equation}
r_T^2 \coloneqq \sum_{i=1}^n \frac{\bar L_i}{p_i} \nabs{\left( x_T - \bar{x}
\right)_i}^2,
\label{eq:r}
\end{equation}
we have from \eqref{eq:cd} that
\begin{equation*}
r_{T+1}^2 = r_T^2 + \frac{1}{ \bar L_{i_T}p_{i_T}} \left\| \nabla_{i_T}
f\left( x_T \right) \right\|^2 - \frac{2}{p_{i_T}} \nabla_{i_T} f\left(
x_T \right)^\top \left( x_T - \bar{x} \right)_{i_T}.
\end{equation*}
By taking expectation over $i_T$ on both sides of the above expression, we
obtain from the convexity of $f$ and \eqref{eq:cdsuff} that
\begin{align}
\nonumber
\mathbb{E}_{i_T}\left[ r_{T+1}^2 \right] - r_T^2
\stackrel{\eqref{eq:cdsuff}}{\leq}&~ \frac{1}{p_{\min}}
\sum_{i=1}^n 2 p_i \left( f\left( x_T\right) - f\left( x_T -
\frac{\nabla_i f\left( x_T \right)}{\bar L_i} e_i \right) \right) - 2
\nabla f\left( x_T \right)^\top \left( x_T - \bar{x} \right)\\
\leq&~ \frac{2}{p_{\min}} \left(f \left( x_T \right) - \mathbb{E}_{i_T}
\left[f\left( x_{T+1} \right)\right] \right)+ 2 \left( f^* - f\left(
x_T \right) \right).
\label{eq:tosumcd}
\end{align}
By taking expectation over $i_0,i_1,\dotsc,i_{T-1}$ on
\eqref{eq:tosumcd} and summing \eqref{eq:tosumcd} over $T=0,1,
\dotsc,k$, we obtain
\begin{align*}
2 \sum_{T=0}^k \left(\mathbb{E}_{i_0,\dotsc,i_{T-1}}
\left[f(x_T)\right] - f^*\right)
&~\leq r_0^2 - \mathbb{E}_{i_0,\dotsc,i_k}\left[ r_{k+1}^2\right] + \frac{2 \left(
f\left( x_{0} \right) - \mathbb{E}_{i_0,\dotsc,i_k}\left[
f\left( x_{k+1} \right)\right] \right)}{p_{\min}}\\
&~\leq r_0^2 + \frac{2 \left(f\left( x_0 \right) - f^*\right) }{p_{\min}}.
\end{align*}
The result now follows from Lemma~\ref{lemma:techniques}.
\qed
\end{proof}
\section{Regularized Problems}
We turn now to regularized optimization in an inner-product space:
\begin{equation} \label{eq:F}
\min_{x} \, F(x) \coloneqq f(x) + \psi(x),
\end{equation}
where both terms are convex, $f$ is $L$-Lipschitz-continuously
differentiable, and $\psi$ is extended-valued, proper, and closed, but
possibly nondifferentiable. We also assume that $\psi$ is such that the
prox-operator can be applied easily, by solving the following
problem for any given $y$ and any $\lambda>0$:
\begin{equation*}
\min_x\, \psi\left( x \right) + \frac{1}{2 \lambda} \nabs{x - y}^2.
\label{eq:prox}
\end{equation*}
We assume further that the solution set $\Omega$ of \eqref{eq:F} is
nonempty, and denote by $F^*$ the value of $F$ for all $x\in\Omega$.
We discuss two algorithms to show how our techniques can
be extended to regularized problems. They are proximal gradient (both
with and without line search) and stochastic proximal coordinate
descent with arbitrary sampling.
\subsection{Short-Step Proximal Gradient}
\label{subsec:shortprox}
Given $\bar L \geq L$, the $k$th step of the proximal gradient
algorithm is defined as follows:
\begin{equation}
x_{k+1} \leftarrow x_k + d_k,\quad d_k \coloneqq \arg\min_{d}\,
\nabla f(x_k)^\top
d + \frac{\bar{L}}{2}\|d\|^2 + \psi\left( x_k+d \right).
\label{eq:proxgrad}
\end{equation}
Note that $d_k$ is uniquely defined here, since the subproblem is
strongly convex. It is shown in \cite{BecT09a,Nes13a} that $F(x_k)$
converges to $F^*$ at a rate of $O(1/k)$ for this algorithm, under our
assumptions. We prove that an $o(1/k)$ rate can be attained.
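The update \eqref{eq:proxgrad} can be written equivalently as $x_{k+1} =
\operatorname{prox}\big(x_k - \nabla f(x_k)/\bar L\big)$, where the prox
problem above is solved with $\lambda = 1/\bar L$. The following Python
sketch (our own illustration; the least-squares data, the $\ell_1$
regularizer, and all constants are assumptions) instantiates this with
$\psi = \mu\|\cdot\|_1$, whose prox operator is soft-thresholding.
\begin{verbatim}
import numpy as np

# Proximal gradient (eq:proxgrad) for f(x) = 0.5 * ||Ax - b||^2 and
# psi(x) = mu * ||x||_1, with Lbar = ||A||_2^2 >= L.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
mu = 0.5
Lbar = np.linalg.norm(A, 2) ** 2
x = np.zeros(5)
for _ in range(2000):
    y = x - A.T @ (A @ x - b) / Lbar                       # gradient step
    x = np.sign(y) * np.maximum(np.abs(y) - mu / Lbar, 0)  # prox step
print(x)
\end{verbatim}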
\begin{theorem}
\label{thm:proxgrad}
Consider \eqref{eq:F} with $f$ convex and $L$-Lipschitz continuously
differentiable, $\psi$ convex, and nonempty solution set $\Omega$.
Given any $\bar L \geq L$, the proximal gradient method
\eqref{eq:proxgrad} generates iterates whose objective value converges
to $F^*$ at an $o(1/k)$ rate.
\end{theorem}
\begin{proof}
The method \eqref{eq:proxgrad} can be shown to be a descent method
from the Lipschitz continuity of $\nabla f$ and the fact that
$\bar L \geq L$. From the optimality of the solution to
\eqref{eq:proxgrad} and that $x_{k+1} = x_k + d_k$,
\begin{equation}
-\left(\nabla f(x_k) + \bar L d_k\right) \in \partial \psi\left(
x_{k+1} \right),
\label{eq:opt}
\end{equation}
where $\partial \psi$ denotes the subdifferential of $\psi$. Consider
any $\bar{x} \in \Omega$. We have from \eqref{eq:proxgrad} that for any
$T \geq 0$, the following chain of relationships holds:
\begin{align}
\nonumber
&~\nabs{x_{T+1} - \bar{x}}^2 - \nabs{x_T - \bar{x}}^2\\
\nonumber
= &~ 2 d_T^\top \left( x_T - \bar{x}
\right) + \nabs{d_T}^2\\
\nonumber
=&~ 2 d_T^\top \left( x_T +d_T - \bar{x}\right) -
\nabs{d_T}^2\\
\nonumber
=&~ 2 \left(d_T + \frac{\nabla f(x_T)}{\bar{L}}\right)^\top
\left( x_{T+1} - \bar{x}\right) - \frac{2}{\bar{L}} \nabla f(x_T)^\top \left( x_T + d_T
- \bar{x} \right) - \nabs{d_T}^2\\
\nonumber
\stackrel{\eqref{eq:opt}}{\leq}&~ 2 \frac{\psi\left( \bar{x} \right) -
\psi\left( x_{T+1} \right)}{\bar L} - \frac{2}{\bar L} \nabla f(x_T)^\top
\left( x_T - \bar{x} \right) - \frac{2}{\bar L} \nabla f(x_T)^\top d_T -
\nabs{d_T}^2\\
\nonumber
\leq&~ \frac{2}{\bar L}\left(\left(\psi\left( \bar{x} \right) - \psi\left(
x_{T+1} \right)\right) + f\left( \bar{x} \right) - \left( f\left( x_T
\right) + \nabla f(x_T)^\top d_T + \frac{\bar L \nabs{d_T}^2}{2}\right)\right)\\
\leq&~ \frac{2 \left(F^* - F\left( x_{T+1} \right)\right)}{\bar L},
\label{eq:rkprox}
\end{align}
where in the last inequality, we have used
\begin{equation} \label{eq:hs9}
f(x+d) \leq f(x) + \nabla f(x)^\top d + \frac{L}{2} \|d\|^2 \leq
f(x) + \nabla f(x)^\top d + \frac{\bar{L}}{2} \|d\|^2.
\end{equation}
By rearranging \eqref{eq:rkprox} we obtain
\[
F(x_{T+1}) - F^* \le \frac{\bar{L}}{2} \left( \| x_T - \bar{x} \|^2 - \| x_{T+1} - \bar{x} \|^2 \right).
\]
The result follows by summing both sides of this expression over
$T=0,1,\dotsc,k-1$ and applying Lemma~\ref{lemma:techniques}. \qed
\end{proof}
\subsection{Proximal Gradient with Line Search}
\label{subsec:proxlinesearch}
We discuss a line-search variant of proximal gradient, where the
update is defined as follows:
\begin{equation}
x_{k+1} \leftarrow x_k + d_k,\quad d_k \coloneqq \arg\min_{d}\,
\nabla f(x_k)^\top
d + \frac{1}{2\alpha_k}\|d\|^2 + \psi\left( x_k+d \right),
\label{eq:sparsa}
\end{equation}
where $\alpha_k$ is chosen such that for given $\gamma \in (0,1]$ and
$C_1 \ge C_2 > 0$ defined as in \eqref{eq:bdd}, we have
\begin{equation}
\alpha_k \in [C_2, C_1],\quad
F\left( x_k + d_k \right) \leq F\left( x_k \right) -
\frac{\gamma}{2 \alpha_k}\|d_k\|^2.
\label{eq:alpha}
\end{equation}
This framework is a generalization of that in Section
\ref{subsec:linesearch}, and includes the SpaRSA algorithm
of \cite{WriNF09a}, which obtains an initial choice of $\alpha_k$ from a
Barzilai-Borwein approach and adjusts it until \eqref{eq:alpha} holds.
The approach of the previous subsection can also be seen as a special
case of \eqref{eq:sparsa}-\eqref{eq:alpha} through the following
elementary result, whose proof is omitted.
\begin{lemma}
\label{lemma:suff}
Consider a convex function $\psi$, a positive scalar $a >0$ and two
vectors $b$ and $x$. If $d$ is the unique solution of the strictly
convex problem
\begin{equation*}
\min_d\, b^\top d + \frac{a}{2} \nabs{d}^2 + \psi(x+d),
\end{equation*}
then
\begin{equation}
b^\top d + \frac{a}{2} \nabs{d}^2 + \psi(x+d) - \psi(x) \leq
-\frac{a}{2} \nabs{d}^2.
\label{eq:suff}
\end{equation}
\end{lemma}
By setting $b = \nabla f(x)$, $1/\alpha_k \equiv a = \bar L > 0$
(where $\bar{L} \ge L$), this lemma together with \eqref{eq:hs9}
implies that \eqref{eq:alpha} holds for any $\gamma \in (0,1]$.
Moreover, it also implies that for any $k \ge 0$,
\begin{align*}
F\left( x_{k+1} \right) - F\left( x_k \right)
= &~ f\left( x_k + d_k \right) - f\left( x_k \right) + \psi\left( x_k + d_k \right) -
\psi \left( x_k \right)
\\
\stackrel{\eqref{eq:hs9}}{\leq}
&~ \nabla f\left( x_k \right)^\top d_k +
\frac{L}{2}\left\|d_k\right\|^2 + \psi\left( x_k + d_k \right) -
\psi \left( x_k \right)\\
=&~ \nabla f\left( x_k \right)^\top d_k +
\frac{1}{2\alpha_k}\left\|d_k\right\|^2 + \psi\left( x_k + d_k
\right) - \psi \left( x_k \right)
+ \left(\frac{L}{2} -
\frac{1}{2\alpha_k}\right) \left\|d_k\right\|^2\\
\stackrel{\eqref{eq:suff}}{\le}&~ - \frac{1}{2\alpha_k}
\left\|d_k\right\|^2 +
\left(\frac{L}{2} -
\frac{1}{2\alpha_k}\right)\left\|d_k\right\|^2\\
=& -\left(\frac{1}{\alpha_k} - \frac{L}{2}
\right)\left\|d_k\right\|^2.
\end{align*}
Therefore, for any $\gamma \in (0,1]$, \eqref{eq:alpha} holds
whenever
\[
\alpha_k > 0, \qquad
-\frac{\gamma}{2 \alpha_k} \geq - \left(\frac{1}{\alpha_k} -
\frac{L}{2}\right),
\]
or equivalently $$\alpha_k \in \left(0,\frac{2 - \gamma}{L}\right],$$ which is how the
upper bound for $C_2$ is set.
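One step of this scheme can be sketched as follows (in Python; the
trial step, which might come from a Barzilai--Borwein estimate, the
shrink factor, and the safeguards $C_1, C_2$ are illustrative
assumptions, and \texttt{prox(y, a)} is an assumed routine returning
$\arg\min_x \psi(x) + \|x - y\|^2/(2a)$).
\begin{verbatim}
def sparsa_step(x, F, grad, prox, alpha0, gamma=0.5,
                C1=1e6, C2=1e-8, shrink=0.5):
    """One step of (eq:sparsa)-(eq:alpha): shrink the trial step until
    the acceptance test F(x+d) <= F(x) - gamma/(2a) * ||d||^2 holds
    (guaranteed once a <= (2 - gamma)/L)."""
    a = min(max(alpha0, C2), C1)
    g = grad(x)
    while True:
        d = prox(x - a * g, a) - x   # solves the subproblem in (eq:sparsa)
        if F(x + d) <= F(x) - gamma / (2 * a) * (d @ d) or a <= C2:
            return x + d, a
        a *= shrink
\end{verbatim}
The subproblem in \eqref{eq:sparsa} is solved here via the prox
operator, using the same change of variables as in the previous
subsection.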
We show now that this approach also has an $o(1/k)$ convergence rate
on convex problems.
\begin{theorem}
\label{thm:proxline}
Consider \eqref{eq:F} with $f$ convex and $L$-Lipschitz continuously
differentiable, $\psi$ convex, and nonempty solution set $\Omega$.
Given some $\gamma \in (0,1]$ and $C_2$ and $C_1$
such that $C_1 \geq C_2$ and $C_2 \in (0, (2 - \gamma)/L]$, then the
algorithm \eqref{eq:sparsa} with $\alpha_k$ satisfying
\eqref{eq:alpha} generates iterates $\{ x_k \}$ whose objective values
converge to $F^*$ at a rate of $o(1/k)$. Moreover, the
sequence of iterates is bounded.
\end{theorem}
\begin{proof}
From the optimality conditions of \eqref{eq:sparsa}, we have
\begin{equation}
-\left(\nabla f(x_T) + \frac{1}{\alpha_T} d_T \right)\in \partial \psi\left(
x_{T+1} \right).
\label{eq:opt2}
\end{equation}
Now consider any $\bar{x} \in \Omega$. We have from \eqref{eq:sparsa}
that for any $T \geq 0$, the following chain of relationships holds:
\begin{align}
\nonumber
&~\nabs{x_{T+1} - \bar{x}}^2 - \nabs{x_T - \bar{x}}^2\\
\nonumber
=&~ 2 d_T^\top \left( x_T +d_T - \bar{x}\right) -
\nabs{d_T}^2\\
\nonumber
=&~ 2 \left(d_T + \alpha_T \nabla f(x_T)\right)^\top
\left( x_{T+1} - \bar{x}\right) - {2}{\alpha_T} \nabla f(x_T)^\top \left( x_T + d_T
- \bar{x} \right) - \nabs{d_T}^2\\
\nonumber
\stackrel{\eqref{eq:opt2}}{\leq}&~ 2 \alpha_T \left( {\psi\left( \bar{x} \right) - \psi\left( x_{T+1}
\right)} \right) - {2}{\alpha_T} \nabla f(x_T)^\top \left( x_T - \bar{x} \right) -
{2}{\alpha_T} \nabla f(x_T)^\top d_T - \|d_T \|^2 \\
\nonumber
\leq&~ 2 \alpha_T \left( {\psi\left( \bar{x} \right) - \psi\left( x_{T+1}
\right)} \right) - {2}{\alpha_T} \nabla f(x_T)^\top \left( x_T - \bar{x} \right) -
{2}{\alpha_T} \nabla f(x_T)^\top d_T\\
\nonumber
= &~ 2 \alpha_T \left( {\psi\left( \bar{x} \right) - \psi\left( x_{T+1}
\right)} \right) - {2}{\alpha_T} \nabla f(x_T)^\top \left( x_T - \bar{x} \right) -
{2}{\alpha_T} \nabla f(x_T)^\top d_T +
\alpha_T L \nabs{d_T}^2 - \alpha_T L \nabs{d_T}^2\\
\nonumber
\leq&~ 2\alpha_T \left(\psi\left( \bar{x} \right) - \psi\left(
x_{T+1}\right) + f\left( \bar{x} \right) - \left(f\left( x_T \right) +
\nabla f(x_T)^\top d_T + \frac{L}{2}\nabs{d_T}^2\right) \right)+
\alpha_T L \nabs{d_T}^2\\
\nonumber
\stackrel{\eqref{eq:alpha}}{\leq}&~ {2 \alpha_T \left(F^* - F\left(
x_{T+1} \right)\right)} + \frac{2 L \alpha_T^2}{\gamma} \left(
F(x_T) - F(x_{T+1}) \right)\\
\leq&~{2}{C_2}\left( F^* - F\left( x_{T+1} \right)
\right) + \frac{2LC_1^2}{\gamma} \left( F\left( x_T \right)
- F\left( x_{T+1} \right) \right).
\label{eq:rkprox2}
\end{align}
By rearranging this inequality, we obtain
\begin{align*}
F(x_{T+1})-F^*
\le \frac{L C_1^2}{\gamma C_2}(F(x_T)-F(x_{T+1}))
+ \frac{1}{2 C_2} \left( \| x_{T}-\bar{x}\|^2 - \| x_{T+1}-\bar{x}\|^2\right),
\end{align*}
and by summing both sides and using telescoping sums, we find that
$\sum_{T=0}^\infty (F(x_{T+1})-F^*) < \infty$, thus the conditions of
Lemma~\ref{lemma:techniques} are satisfied by $\Delta_T :=
F(x_T)-F^*$, and the $o(1/k)$ rate follows.
By summing the inequality above over $T=0,1,\dotsc,k-1$, we obtain
\begin{align*}
0 \le \sum_{T=0}^{k-1} (F(x_{T+1})-F^*) \le \frac{L
C_1^2}{\gamma C_2} (F(x_0)-F^*)
+ \frac{1}{2 C_2} \left( \|x_0 - \bar{x}\|^2 - \|x_k - \bar{x}\|^2 \right).
\end{align*}
By rearranging this inequality, we obtain a uniform upper bound on
$\|x_k - \bar{x} \|$, thus showing that the sequence $\{ x_k \}$ is
bounded.
\qed
\end{proof}
\subsection{Proximal Coordinate Descent}
We now discuss the extension of coordinate descent to \eqref{eq:F},
with the assumption \eqref{eq:Ls} on $f$, Euclidean domain of dimension $n$,
sampling weighted according to \eqref{eq:prob} as in
Section~\ref{subsec:cd}, and the additional assumption of separability
of the regularizer $\psi$, that is,
\begin{equation}
\psi(x) = \sum_{i=1}^n \psi_i(x_i),
\label{eq:separable}
\end{equation}
where each $\psi_i$ is convex, extended valued, and possibly
nondifferentiable. As in our discussion of Section~\ref{subsec:cd},
the results in this subsection can be extended directly to the case of
block-coordinate descent.
Given the component-wise Lipschitz constants $L_1,L_2,\dotsc,L_n$ and
algorithmic parameters $\bar L_1, \bar{L}_2,\dotsc,\bar L_n$ with $\bar
L_i \geq L_i$ for all $i$, proximal coordinate descent updates have
the form
\begin{equation}
x_{k+1} \leftarrow x_k + d^k_{i_k} e_{i_k},\quad
d^k_{i_k} \coloneqq \arg\min_{d \in \Re}\, \nabla_{i_k} f(x_k) d +
\frac{\bar{L}_{i_k}}{2}d^2 + \psi_{i_k}\left( (x_k)_{i_k} +
d\right).
\label{eq:proxcd}
\end{equation}
With $p_i \equiv 1 / n$ for all $i$,
\cite{LuX15a} showed that the expected objective value converges to
$F^*$ at an $O(1/k)$ rate. When arbitrary sampling \eqref{eq:prob} is
considered, \eqref{eq:proxcd} is a special case of the general
algorithmic framework described in \cite{LeeW18b}. The latter paper
shows the same $O(1/k)$ rate for convex problems under the additional
assumption that for any $x_0$, we have
\begin{equation}
\max_{x: F\left( x \right) \leq F\left( x_0 \right)}\, \mbox{\rm dist}\left(
x, \Omega \right) < \infty.
\label{eq:bddset}
\end{equation}
We show here that with arbitrary sampling according to
\eqref{eq:prob}, \eqref{eq:proxcd} produces $o(1/k)$ convergence rates
for the expected objective on convex problems, without the assumption
\eqref{eq:bddset}.
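A minimal Python sketch of \eqref{eq:proxcd} follows (our own
illustration; \texttt{grad\_i} and \texttt{prox\_i} are assumed
interfaces for the partial derivatives of $f$ and the scalar prox
operators of the $\psi_i$).
\begin{verbatim}
import numpy as np

def prox_cd(grad_i, prox_i, Lbar, p, x0, iters=5000, seed=0):
    """Stochastic proximal coordinate descent (eq:proxcd) for
    separable psi: draw i i.i.d. from p, take a gradient step in
    coordinate i, then apply the scalar prox of psi_i, where
    prox_i(y, t, i) = argmin_z psi_i(z) + (z - y)^2 / (2 t)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        i = rng.choice(x.size, p=p)
        y = x[i] - grad_i(x, i) / Lbar[i]
        x[i] = prox_i(y, 1.0 / Lbar[i], i)
    return x
\end{verbatim}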
The following result makes use of the quantity $r_k$ defined in
\eqref{eq:r}.
\begin{theorem}
\label{thm:proxcd}
Consider \eqref{eq:F} with $f$ and $\psi$ convex and nonempty
solution set $\Omega$. Assume further that
\eqref{eq:separable} is true, and that \eqref{eq:Ls}
holds with some $L_1,L_2,\dotsc,L_n > 0$. Given $\{\bar
L_i\}_{i=1}^n$ with $\bar L_i \geq L_i$ for all $i$, suppose that
proximal coordinate descent defines iterates according to
\eqref{eq:proxcd}, with $i_k$ chosen i.i.d. according to a
probability distribution satisfying \eqref{eq:prob}. Then
$\mathbb{E}_{i_0,i_1,\dotsc,i_{k-1}}[F(x_k)]$ converges to $F^*$ at an
asymptotic rate of $o(1/k)$.
Moreover, given any $\bar{x} \in \Omega$, the sequence
$\{\mathbb{E}_{i_0,\dotsc, i_{k-1}} [r_k^2]\}$ is bounded.
\end{theorem}
\begin{proof}
From \eqref{eq:Ls}, we first notice that in the update
\eqref{eq:proxcd},
\begin{align}
F\left( x_k + d^k_{i_k} e_{i_k} \right) - F\left( x_k \right) \leq &~
\nabla_{i_k} f(x_k) d^k_{i_k} +
\frac{\bar{L}_{i_k}}{2} \left(d^k_{i_k}\right)^2+
\psi_{i_k}\left( \left(x_k\right)_{i_k} + d^k_{i_k}\right) -
\psi_{i_k}\left( \left(x_k\right)_{i_k} \right).
\label{eq:suffCD2}
\end{align}
From Lemma~\ref{lemma:suff}, the method defined by \eqref{eq:proxcd}
is a descent method. Optimality of the subproblem in
\eqref{eq:proxcd} yields
\begin{equation}
-\left(\nabla_{i_T} f\left( x_T \right) + \bar L_{i_T} d^T_{i_T}
\right) \in \partial \psi_{i_T} \left( \left( x_T \right)_{i_T} +
d^T_{i_T} \right).
\label{eq:partial}
\end{equation}
By taking any $\bar{x} \in \Omega$, and using the definition \eqref{eq:r}, we
have:
\begin{align}
\nonumber
r_{T+1}^2
=&~ r_T^2 + \frac{2 \bar L_{i_T}}{p_{i_T}} d_{i_T}^T
\left( x_T + d_{i_T}^T - \bar{x} \right)_{i_T} - \frac{\bar
L_{i_T}}{p_{i_T}} \left(d_{i_T}^T\right)^2\\
\nonumber
=&~ r_T^2 + \frac{2}{p_{i_T}} \left(\nabla_{i_T} f\left( x_T
\right) + \bar L_{i_T}d_{i_T}^T\right)^\top \left( x_T +
d_{i_T}^T - \bar{x} \right)_{i_T}
- \frac{\bar L_{i_T}}{p_{i_T}} \left( d_{i_T}^T \right)^2 \\
\nonumber
&\quad - \frac{2}{p_{i_T}} \nabla_{i_T} f\left(
x_T \right)^\top \left( x_T - \bar{x} \right)_{i_T} - \frac{2}{p_{i_T}}
\nabla_{i_T} f\left( x_T \right)^\top d^T_{i_T}\\
\nonumber
\stackrel{\eqref{eq:partial}}{\leq}&~ r_T^2 + \frac{2}{p_{i_T}}
\left(\psi_{i_T} \left( \bar{x}_{i_T} \right) - \psi_{i_T} \left(
\left( x_T\right)_{i_T} + d^T_{i_T} \right) - \nabla_{i_T}
f\left( x_T \right)^\top \left( x_T - \bar{x} \right)_{i_T}\right)\\
\nonumber
&\quad -\frac{2}{p_{i_T}} \left(\nabla_{i_T} f\left( x_T \right)^\top
d^T_{i_T} + \frac{\bar L_{i_T}}{2} \nabs{d_{i_T}^T}^2 \right)\\
\label{eq:rkcd}
\leq&~ r_T^2 + \frac{2}{p_{i_T}} \left(\psi_{i_T} \left( \bar{x}_{i_T}
\right) - \psi_{i_T} \left(\left(x_T\right)_{i_T} \right) -
\nabla_{i_T} f\left( x_T \right)^\top \left(x_T - \bar{x}
\right)_{i_T}\right)\\
\nonumber
&\quad -\frac{2}{p_{i_T}} \left(\nabla_{i_T} f\left( x_T \right)^\top
d^T_{i_T} + \frac{\bar L_{i_T}}{2} \nabs{d_{i_T}^T}^2 +
\psi_{i_T} \left( \left( x_T\right)_{i_T} + d^T_{i_T} \right) -
\psi_{i_T} \left(\left(x_T\right)_{i_T} \right) \right).
\end{align}
By taking expectation over $i_T$ on both sides of \eqref{eq:rkcd} and
using the convexity of $f$ together with \eqref{eq:suffCD2}, we obtain
\begin{subequations}
\begin{align}
\nonumber
&~\mathbb{E}_{i_T}\left[ r_{T+1}^2 \right] - r_T^2\\
\nonumber
\leq &~ 2\left( \psi\left( \bar{x} \right) - \psi\left(x_T\right) +
f\left( \bar{x} \right) - f\left( x_T \right)\right)
+ 2 \sum_{i=1}^n \left( F\left( x_T \right) - F\left( x_T + d^T_i e_i
\right) \right)\\
\label{eq:toexplain}
\leq&~ 2 \left( F^* - F\left( x_T \right) \right)
+ \frac{2}{p_{\min}} \sum_{i=1}^n p_i \left( F\left( x_T
\right) - F\left( x_T + d^T_i e_i \right) \right)\\
\label{eq:tosumcd3}
=&~ 2 \left( F^* - F\left( x_T \right) \right) + \frac{2}{p_{\min}} \left(F\left( x_T \right) - \mathbb{E}_{i_T} \left[F\left( x_{T+1}
\right)\right]\right),
\end{align}
\end{subequations}
where in \eqref{eq:toexplain} we used the fact that \eqref{eq:proxcd}
is a descent method. By taking expectation over $i_0,\dotsc,i_k$
on \eqref{eq:tosumcd3}, summing over $T=0,\dotsc,k$, and applying
Lemma~\ref{lemma:techniques}, we obtain the result.
Boundedness of $\mathbb{E}_{i_0,\dotsc,i_{k-1}} [r_k^2]$ follows from the same
telescoping sum and the fact that $F(x_k)$ decreases monotonically
with $k$.
\qed
\end{proof}
Our result shows that, similar to gradient descent and proximal
gradient, proximal coordinate descent and coordinate descent also
provide a form of implicit regularization in that the expected value
of $r_k$ is bounded. Since $r_k$ can be viewed as a weighted
Euclidean norm, this observation implies that, in expectation, the
iterates remain within a bounded region.
Our analysis here improves the rates in
\cite{LuX15a,LeeW18b} in terms of the dependency on $k$ and removes
the assumption \eqref{eq:bddset} in \cite{LeeW18b}. Even aside from
the improvement from $O(1/k)$ to $o(1/k)$, Theorem~\ref{thm:proxcd} is
the first time that a convergence rate for proximal stochastic
coordinate descent with arbitrary sampling for the coordinates is
proven without additional assumptions such as \eqref{eq:bddset}. By
manipulating \eqref{eq:tosumcd3}, one can also observe how different
probability distributions affect the upper bound, and it might also be
possible to get better upper bounds by using norms different from
\eqref{eq:r}.
\section{Tightness of the $o(1/k)$ Estimate}
We demonstrate that the $o(1/k)$ estimate of convergence of $\{f(x_k)
\}$ is tight by showing that for any $\epsilon \in (0,1]$, there is a
convex smooth function for which the sequence of function values
generated by gradient descent with a fixed step size converges
more slowly than $O(1/k^{1+\epsilon})$. The example problem we provide
is a simple one-dimensional function, so it also serves as a special
case of stochastic coordinate descent and of the proximal methods
(with $\psi \equiv 0$). Thus, this example shows tightness
of our analysis for all methods without line search considered in
this paper.
Consider the one-dimensional real convex function
\begin{equation}
f(x) = x^p,
\label{eq:xp}
\end{equation}
where
$p$ is an even integer greater than $2$. The minimizer of this
function is clearly at
$x^*=0$, for which $f(0)=f^*=0$. Suppose that the gradient descent method
is applied starting from $x_0=1$. For any descent method, the
iterates $x_k$ are confined to $[-1,1]$ and we have
\[
\| \nabla^2 f(x) \| \le p(p-1) \;\; \mbox{for all $x$ with
$|x| \le 1$,}
\]
so we set $L = p(p-1)$. Suppose that $\bar\alpha \in
(0,2/L)$ as above. Then the iteration formula is
\begin{equation}
\label{eq:update}
x_{k+1} = x_k - \bar\alpha \nabla f(x_k) = x_k \left( 1- p \bar\alpha x_k^{p-2} \right),
\end{equation}
and by Lemma \ref{lemma:rk}, all iterates lie in a bounded set: the
level set $[-1,1]$ defined by $x_0$. In fact, since $p \ge 4$ and
$\bar\alpha \in (0,2/L)$, we have that
\begin{align*}
x_k \in (0,1] \; \Rightarrow \; 1-p\bar\alpha x_k^{p-2} & \in \left(
1-\frac{2}{p-1} x_k^{p-2}, 1 \right)
\subseteq \left( 1-\frac{2}{p-1},1 \right) \subseteq \left( \frac13,
1 \right),
\end{align*}
so that $x_{k+1} \in \left(\tfrac13 x_k,x_k \right)$ and the value of $L$
remains valid for all iterates.
We show by an informal argument that there exists a constant $C$ such that
\begin{equation} \label{eq:jw0}
f(x_k) \approx \frac{C}{k^{p/(p-2)}}, \quad \mbox{for all $k$ sufficiently large.}
\end{equation}
From \eqref{eq:update} we have
\begin{equation} \label{eq:yj8}
f(x_{k+1}) = x_{k+1}^p = x_k^p \left( 1- p \bar\alpha x_k^{p-2} \right)^p =
f(x_k) \left( 1- p \bar\alpha f(x_k)^{(p-2)/p} \right)^p.
\end{equation}
By substituting the hypothesis \eqref{eq:jw0} into \eqref{eq:yj8}, and
taking $k$ to be large, we obtain the following sequence of equivalent
approximate equalities:
\begin{alignat*}{2}
&& \frac{C}{(k+1)^{p/(p-2)}} & \approx \frac{C}{k^{p/(p-2)}}
\left( 1- p \bar\alpha \frac{C^{(p-2)/p}}{k} \right)^p \\
& \Leftrightarrow & \;\; \left( \frac{k}{k+1} \right)^{p/(p-2)} & \approx
\left( 1- p \bar\alpha \frac{C^{(p-2)/p}}{k} \right)^p \\
& \Leftrightarrow &\;\; \left( 1- \frac{1}{k+1} \right)^{p/(p-2)} & \approx
1- p^2 \bar\alpha \frac{C^{(p-2)/p}}{k} \\
& \Leftrightarrow & \;\; 1- \frac{p}{p-2} \frac{1}{k+1} & \approx
1- p^2 \bar\alpha \frac{C^{(p-2)/p}}{k}.
\end{alignat*}
Since $1/(k+1) \approx 1/k$ for large $k$, this last expression is
approximately satisfied if $C$ satisfies
\[
\frac{p}{p-2} = p^2 \bar\alpha C^{(p-2)/p}.
\]
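As a quick numerical sanity check of this informal argument (our own
illustration, not part of the analysis; all parameter choices below are
ours), one can iterate \eqref{eq:update} and compare
$f(x_k)\,k^{p/(p-2)}$ with the predicted constant $C$:
\begin{verbatim}
# Sketch: check f(x_k) ~ C / k^(p/(p-2)) for f(x) = x^p (illustrative values).
p = 6                      # even integer greater than 2
L = p * (p - 1)            # bound on |f''| over [-1, 1]
alpha = 1.0 / L            # fixed step size in (0, 2/L)
x = 1.0                    # x_0 = 1
K = 10**6
for k in range(1, K + 1):
    x -= alpha * p * x ** (p - 1)     # x_{k+1} = x_k (1 - p*alpha*x_k^(p-2))
# Predicted constant from p/(p-2) = p^2 * alpha * C^((p-2)/p):
C = (1.0 / (p * (p - 2) * alpha)) ** (p / (p - 2))
print(x ** p * K ** (p / (p - 2)), C)   # the two values should be close
\end{verbatim}
The agreement improves (slowly) as $K$ grows, consistent with
\eqref{eq:jw0} holding only asymptotically.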
Stated another way, our result \eqref{eq:jw0} indicates that a
convergence rate faster than $O(1/k^{1+\epsilon})$ is not possible
when steepest descent with fixed steplength is applied to the function
$f(x) =x^p$ provided that
\[
\frac{p}{p-2} \le 1+\epsilon,
\]
that is,
\[
p \ge 2 \frac{1+\epsilon}{\epsilon} \;\; \mbox{and $p$ is a positive even integer.}
\]
We follow \cite{AttCPR18a} to provide a continuous-time analysis of
the same objective function, using a gradient flow argument. For the
function $f$ defined by \eqref{eq:xp}, consider the following
differential equation:
\begin{equation}
x'(t) = -\alpha \nabla f(x(t)).
\label{eq:gradflow}
\end{equation}
Suppose that
\begin{equation}
x(t) = t^{-\theta}
\label{eq:xt}
\end{equation}
for some $\theta > 0$, so that starting from any $t > 0$,
$x(t)$ remains in a bounded region. Substituting \eqref{eq:xt} into
\eqref{eq:gradflow}, we obtain
\begin{equation*}
-\theta t^{-\theta - 1} = -\alpha p t^{-\theta (p-1)},
\end{equation*}
which holds true if and only if the following equations are satisfied:
\begin{equation*}
\begin{cases}
\theta &= \alpha p,\\
-\theta - 1 &= -\theta p + \theta,
\end{cases}
\end{equation*}
from which we obtain
\begin{equation*}
\begin{cases}
\theta &= \frac{1}{p-2},\\
\alpha &= \frac{1}{p(p-2)}.
\end{cases}
\end{equation*}
Since $x$ decreases monotonically to zero, for all $ t \geq (p-1) / (p-2)$,
\[
L = p \left( p-1 \right)
\left(\frac{p -1}{p-2}\right)^{-\theta (p-2)} = p(p-2)
\]
is an appropriate value for a bound on $\| \nabla^2 f(x) \|$. These
values of $\alpha$ and $L$ satisfy $0<\alpha \leq \frac{1}{L}$, making
$\alpha$ a valid step size.
The objective value is $f(x(t)) = t^{-p / (p-2)}$, matching the rate
of \eqref{eq:jw0}.
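The closed-form flow solution is also easy to verify symbolically; a
minimal sketch (ours) using Python's sympy:
\begin{verbatim}
import sympy as sp

t, p = sp.symbols('t p', positive=True)
theta = 1 / (p - 2)
alpha = 1 / (p * (p - 2))
x = t ** (-theta)
# Residual of the gradient flow x'(t) = -alpha * p * x(t)^(p-1):
residual = sp.diff(x, t) + alpha * p * x ** (p - 1)
print(sp.simplify(residual))    # prints 0
\end{verbatim}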
\section*{Acknowledgment}
The authors thank Yixin Tao for a discussion that helped us to improve
the clarity of this work.
\bibliographystyle{spmpsci}
| {
"timestamp": "2019-05-15T02:07:05",
"yymm": "1812",
"arxiv_id": "1812.08485",
"language": "en",
"url": "https://arxiv.org/abs/1812.08485",
"abstract": "It is well known that both gradient descent and stochastic coordinate descent achieve a global convergence rate of $O(1/k)$ in the objective value, when applied to a scheme for minimizing a Lipschitz-continuously differentiable, unconstrained convex function. In this work, we improve this rate to $o(1/k)$. We extend the result to proximal gradient and proximal coordinate descent on regularized problems to show similar $o(1/k)$ convergence rates. The result is tight in the sense that a rate of $O(1/k^{1+\\epsilon})$ is not generally attainable for any $\\epsilon>0$, for any of these methods.",
"subjects": "Optimization and Control (math.OC)",
"title": "First-order algorithms converge faster than $O(1/k)$ on convex problems",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9843363503693295,
"lm_q2_score": 0.8031737940012418,
"lm_q1q2_score": 0.7905931610994701
} |
https://arxiv.org/abs/1305.3113 | Hypergeometric type functions and their symmetries | We give a systematic and unified discussion of various classes of hypergeometric type equations: the hypergeometric equation, the confluent equation, the F_1 equation (equivalent to the Bessel equation), the Gegenbauer equation and the Hermite equation. In particular, we discuss recurrence relations of their solutions, their integral representations and discrete symmetries. | \section{Introduction}
Following \cite{NU}, we adopt the following terminology. Equations of the form
\begin{equation}\left(\sigma(z) \partial_z^2+\tau(z) \partial_z+
\eta\right) f(z)=0,\ \ \
\label{req}\end{equation}
where
$\sigma$ is a polynomial of degree $\leq2$,
\hspace{2.5ex} $\tau$ is a polynomial of degree $\leq1$,
\hspace{2.5ex} $\eta$ is a number,\\
will be called {\em hypergeometric type equations}, and their solutions ---{\em hypergeometric type functions}. Differential operators
of the form $\sigma(z) \partial_z^2+\tau(z) \partial_z+
\eta$ will be called {\em hypergeometric type operators}.
The theory of hypergeometric type functions is one of the oldest and most useful chapters of mathematics. In usual presentations it appears complicated and messy. The main purpose of this paper is an attempt to present its basics in a way that shows clearly its internal structure and beauty.
\subsection{Classification}
Let us start with a short review of basic classes of hypergeometric type equations.
We will always assume that $\sigma(z)\neq0$.
Every class, except for (9),
will be simplified
by dividing by a constant and by an affine change of the complex variable $z$.
\begin{arabicenumerate}
\item
\noindent
{\bf The ${}_2F_1$
or hypergeometric equation}
\[\left(z(1-z) \partial_z^2+(c-(a+b+1)z) \partial_z-ab\right)f(z)=0.\]
\item
\noindent
{\bf The ${}_2F_0$ equation}
\[\left(z^2 \partial_z^2+(-1+(1+a+b)z) \partial_z+ab\right)f(z)=0.\]
\item
\noindent
{\bf The ${}_1F_1$
or confluent equation}
\[
(z \partial_z^2+(c-z) \partial_z-a)f(z)=0.\]
\item
\noindent
{\bf The ${}_0F_1$ equation
}
\[
(z \partial_z^2+c \partial_z-1)f(z)=0.\]
\item
\noindent
{\bf The
Gegenbauer equation}
\[\left((1-z^2) \partial_z^2-(a+b+1)z \partial_z-ab\right)f(z)=0.\]
\item
\noindent
{\bf The Hermite equation}
\[
( \partial_z^2-2 z \partial_z-2a)f(z)=0.\]
\item
\noindent
{\bf 2nd order Euler equation}
\[
\left(z^2 \partial_z^2+bz \partial_z+a\right)f(z)=0.\]
\item
\noindent
{\bf 1st order Euler equation for the derivative}
\[
(z \partial_z^2+c \partial_z)f(z)=0.\]
\item
\noindent
{\bf 2nd order equation with constant coefficients}
\[
( \partial_z^2+c \partial_z+a)f(z)=0.\]
\end{arabicenumerate}
One can divide these classes into 3 families:
\begin{enumerate}\item (1), (2), (3), (4);
\item (5), (6);
\item (7), (8), (9).
\end{enumerate}
Each equation in the first family has a solution equal to the hypergeometric function ${}_pF_q$ with appropriate $p,q$. This function gives a name to the corresponding class of equations.
The second family consists of reflection invariant equations.
The third family consists of equations solvable in elementary functions. Therefore, it will not be considered in what follows.
The ${}_2F_0$ and ${}_1F_1$ equations are equivalent under a simple substitution; therefore, they can be discussed together.
Up to an affine
transformation, (5) is
a subclass of (1). However, it has additional properties, therefore it is useful to discuss it separately.
The main part of our paper consists of 5 sections corresponding to the classes (1), (2)-(3), (4), (5) and (6).
The discussion will be divided into two levels:
\begin{enumerate} \item Properties of the operator that defines the equation.
\item Properties of functions solving the equation.
\end{enumerate}
\subsection{Properties of hypergeometric type operators}
\label{Properties of hypergeometric type operators}
We will discuss the following types of properties of hypergeometric type operators:
\begin{romanenumerate}
\item equivalence between various classes,
\item integral representations of solutions,
\item discrete symmetries,
\item factorizations,
\item commutation relations.
\end{romanenumerate}
Let us give some examples of these properties. All these examples will be related to the ${}_1F_1$ equation.
We have
\begin{eqnarray}\label{cono1}
&&(-w)^{a+1}\left(w^2 \partial_w^2+(-1+(1+a+b)w) \partial_w+ab\right)(-w)^{-a}\\
\ \ &=&z \partial_z^2+(c-z) \partial_z-a,\ \ \ \ \ \ \ w=-z^{-1},\ \ c=1+a-b.\label{cono0}
\end{eqnarray}
Therefore the ${}_1F_1$ operator, appearing in (\ref{cono0}), is equivalent to the
${}_2F_0$ operator, which is inside the brackets of (\ref{cono1}). This is an example of (i).
As an example of (ii) we quote the following fact:
The integral
\begin{equation} \int_\gamma t^{a-c}{\rm e}^t(t-z)^{-a}{\rm d} t\label{cono3}\end{equation}
is a solution of the ${}_1F_1$ equation provided that the values of the function \begin{equation}
t\mapsto
t^{a-c+1}{\rm e}^t(t-z)^{-a-1}\label{cono2}\end{equation}
at the endpoints of
the curve $\gamma$ are equal to one another.
Note that the integrand of (\ref{cono3}) is an elementary function. The condition on the curve $\gamma$ can often be satisfied in a number of non-equivalent ways, giving rise to distinct natural solutions.
An example of (iii) is the following identity:
\begin{eqnarray}\nonumber&&
w \partial_w^2+(c-w) \partial_w-a\\&=&-{\rm e}^{-z}\left(z \partial_z^2+(c-z) \partial_z-c+a\right){\rm e}^z,\ \
w=-z.\label{cono4}\end{eqnarray}
Thus the ${}_1F_{1}$ operator is transformed into a ${}_1F_{1}$ operator with different parameters.
Here is a pair of examples of (iv):
\begin{eqnarray}\nonumber
&&z(z\partial_z^2+(c-z) \partial_z-a)\\\label{facto1}
&=&
\big(z \partial_z+a-1\big)\big(z \partial_z+c-a-z\big)+(a-1)(a-c)\\
\label{facto2}&=&
\big(z \partial_z+c-a-1-z\big)\big(z \partial_z+a\big)+a(a+1-c).
\end{eqnarray}
An example of (v) is
\begin{eqnarray}\nonumber&&
\left(z\partial_z+a\right)z\left(z \partial_z^2+(c-z) \partial_z-a\right)\\&&=\ \ \
z\left(z \partial_z^2+(c-z) \partial_z-a-1\right)\left(z\partial_z+a\right).\label{cono5}
\end{eqnarray}
On both sides of the identity we see the ${}_1F_1$ operators whose parameters are contiguous.
The commutation properties can be derived from the factorizations. Let us
show, for example, how (\ref{facto1}) and (\ref{facto2}) imply (\ref{cono5}).
First we rewrite (\ref{facto1}) as
\begin{eqnarray}\nonumber
&&z(z\partial_z^2+(c-z) \partial_z-a-1)\\\label{facto3}
&=&
\big(z \partial_z+a\big)\big(z \partial_z+c-a-1-z\big)+a(a+1-c).
\end{eqnarray}
Then we multiply (\ref{facto2}) from the left and
(\ref{facto3}) from the right by $\left(z\partial_z+a\right)$, obtaining identical right hand sides. This yields (\ref{cono5}).
\subsection{Hypergeometric type functions}
After the analysis of hypergeometric type operators, we discuss hypergeometric type functions, that is, functions annihilated by hypergeometric type operators. In particular, we will distinguish the so-called {\em standard solutions}, which have a simple behavior around a singular point of the equation. For instance, if $z_0$ is a regular singular point, the Frobenius method gives us two solutions behaving as $(z-z_0)^{\lambda_i}$, where $\lambda_1,\lambda_2$ are the indices of $z_0$.
One can often find solutions with a simple behavior also around
irregular singular points.
For reflection invariant classes (5) and (6) one can also define another pair of natural solutions: the even solution $S^+$, which we normalize by $S^+(0)=1$, and the odd solution $S^-$, which we normalize by $(S^-)'(0)=2$.
Discrete symmetries can be used to derive properties of hypergeometric type functions. For instance,
(\ref{cono4}) implies that if $f(z)$ solves the confluent equation for parameters $c-a,c$, then ${\rm e}^zf(-z)$ solves it for the parameters $a,c$. In particular, both functions $F(a;c;z)$ and ${\rm e}^{z}F(c-a;c;-z)$ solve the confluent equation for the parameters $a,c$. Both are analytic around $z=0$ and equal $1$ at $z=0$. By the uniqueness part of the Frobenius method they must coincide. Hence we obtain the identity
\begin{equation}
F(a;c;z)={\rm e}^zF(c-a;c;-z).\label{cono6}\end{equation}
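This identity is easy to test numerically by summing the defining series
of the confluent function; a minimal Python sketch (ours, with
illustrative parameter values):
\begin{verbatim}
from math import exp

def F1(a, c, z, terms=60):
    # Truncated series: sum over j of (a)_j / (c)_j * z^j / j!
    s, term = 1.0, 1.0
    for j in range(terms):
        term *= (a + j) / (c + j) * z / (j + 1)
        s += term
    return s

a, c, z = 0.3, 1.7, 0.5
print(F1(a, c, z), exp(z) * F1(c - a, c, -z))   # the two values agree
\end{verbatim}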
Commutation relations are also useful. For example, it follows immediately from (\ref{cono5}) that $(z\partial_z+a)F(a;c;z)$ is a solution of the confluent equation for the parameters $a+1,c$. At zero it is analytic and its value is $a$. Hence we obtain the recurrence relation
\begin{equation}
(z\partial_z+a)F(a;c;z)=aF(a+1;c;z).\label{cono7}\end{equation}
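The same relation can also be checked directly on the power series:
recalling that $F(a;c;z)=\sum_{j\geq0}\frac{(a)_j}{(c)_j}\frac{z^j}{j!}$
and that $(a+j)(a)_j=a(a+1)_j$, we obtain
\[
(z\partial_z+a)\sum_{j=0}^\infty\frac{(a)_j}{(c)_j}\frac{z^j}{j!}
=\sum_{j=0}^\infty(j+a)\frac{(a)_j}{(c)_j}\frac{z^j}{j!}
=a\sum_{j=0}^\infty\frac{(a+1)_j}{(c)_j}\frac{z^j}{j!}.
\]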
For each class of equations we describe a whole family of recurrence relations. Every such recurrence relation involves an operator of the following form:
a 1st order differential operator with no dependence on the parameters, plus
a multiplication operator depending linearly on the parameters.
We will call them {\em basic recurrence relations}.
Sometimes there also exist more complicated recurrence relations. We do not give their complete list; we only mention some examples. We call them {\em additional recurrence relations}.
Each of the standard solutions has simple integral representations of a form analogous to (\ref{cono3}). Each of these integral representations is associated with a pair of (possibly infinite and possibly coinciding) points where the integrand has a singularity. We will use two basic kinds of contours for standard solutions:
\begin{romanenumerate}\item[(a)] The contour starts at one singularity and ends at the other singularity; we assume that at both singularities the analog of (\ref{cono2}) is zero (hence, trivially, has equal values).
\item[(b)]
The contour starts at the first singularity, goes around the second singularity and returns to the first singularity; we assume that the analog of (\ref{cono2}) is zero at the first singularity.
\end{romanenumerate}
If available, we will always treat the type (a) contour as the basic one.
For instance, under appropriate conditions on the parameters,
the ${}_1F_1$ function has the following two integral representations:
\begin{eqnarray}\hbox{type (a):} \ \ \ \ \ \ \ \ \ \int\limits_{[1,+\infty[}{\rm e}^{\frac{z}{t}}t^{-c}(t-1)^{c-a-1}{\rm d} t
&=&\frac{\Gamma(a)\Gamma(c-a)}{\Gamma(c)}F(a;c;z),\nonumber \\
\hbox{type (b):} \ \ \frac{1}{2\pi{\rm i}}\int\limits_{[1,0^+,1]}{\rm e}^{\frac{z}{t}}(-t)^{-c}(-t+1)^{c-a-1}{\rm d} t
&=& \frac{\Gamma(c-a)}{\Gamma(1-a)\Gamma(c)}F (a;c;z).\nonumber
\end{eqnarray}
($0^+$ means that we bypass $0$ in the counterclockwise direction; in this case it is equivalent to bypassing $\infty$ in the clockwise direction).
There are various natural ways to normalize hypergeometric type functions.
The most obvious normalization for a solution analytic at a given regular singular point is to demand that its value there is $1$. (For the ${}_2F_0$ equation, the point 0 is not regular singular, however there is a natural generalization of this normalization condition).
For equations (1)--(4), this function will be denoted by the letter $F$, consistently with the conventional usage. (Note the use of the italic font).
In the case of reflection symmetric equations
(5) and (6), we will use the letter $S$.
However, it is often preferable to use different normalizations, which involve appropriate values of the Gamma function or products thereof. Such normalizations arise naturally when we consider integral representations. They will be denoted by ${\bf F}$ for equations (1) -- (4) (a similar notation can be found in \cite{NIST}), and ${\bf S}$ for (5) and (6).
(Note the use of the boldface roman font).
Sometimes there will be several varieties of these normalizations denoted by an appropriate superscript, related to various integral representations. The functions with these normalizations often have better properties than
the $F$ and $S$ functions. This is especially visible in recurrence relations, where the coefficient on the right (such as $a$ in (\ref{cono7})) depends on the normalization.
For example, for the ${}_1F_1$ function we introduce the following normalizations:
\begin{eqnarray*}
{\bf F}(a;c;z)&:=&\frac{1}{\Gamma(c)}F(a;c;z),\\
{\bf F}^{\rm\scriptscriptstyle I}(a;c;z)&:=&\frac{\Gamma(a)\Gamma(c-a)}{\Gamma(c)}F(a;c;z),
\end{eqnarray*}
the latter suggested by the type (a) integral representation given above.
\subsection{Degenerate case}
For some values of the parameters, hypergeometric type functions have special properties. This happens in particular when the difference of the indices at a given regular singular point is an integer. Then the two standard solutions related to this point are proportional to one another. We call them {\em degenerate solutions}. (The best-known example of such a situation is provided by the Bessel functions of integer parameters.)
In this case we have a simple generating function and an additional integral representation, which involves integrating over a closed loop.
\subsection{Canonical forms}
\label{s-canon}
Obviously,
hypergeometric type operators coincide with differential operators of the form
\begin{eqnarray}\label{e1}
&&\sigma(z) \partial_z^2+(\sigma'(z)+\kappa(z)) \partial_z+\frac{1}{2}\kappa'+
\lambda\\&=& \partial_z\sigma(z) \partial_z+\frac{1}{2}( \partial_z\kappa(z)+\kappa(z) \partial_z)+\lambda,\nonumber
\ \ \hbox{where}
\end{eqnarray}
$\sigma$ is a polynomial of degree $\leq2$,\\$\kappa$ is a polynomial of degree $\leq1$,\\
$\lambda$ is a number.
One can argue that it is natural to use $\sigma,\kappa,\lambda$ to parametrize the hypergeometric
type operators (more natural than $\sigma,\tau,\eta$). (\ref{e1}) will be denoted
${\cal C}(\sigma,\kappa,\lambda;z, \partial_z)$, or, for brevity,
${\cal C}(\sigma,\kappa,\lambda)$.
Let $\rho(z)$ be a solution of the equation
\begin{equation}(\sigma(z) \partial_z-\kappa(z))\rho(z)=0.\label{e1b}\end{equation}
(Note that equation (\ref{e1b}) is solvable in elementary functions).
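For instance (an illustration of ours): for the confluent operator one has $\sigma(z)=z$ and $\kappa(z)=\alpha-z$ with $\alpha=c-1$, and (\ref{e1b}) becomes $z\rho'(z)=(\alpha-z)\rho(z)$, so that
\[\rho(z)=z^{\alpha}{\rm e}^{-z},\]
the familiar weight of the Laguerre polynomials.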
We have the identity
\begin{equation}
{\cal C}(\sigma,\kappa,\lambda)=
\rho^{-1}(z) \partial_z\sigma(z)\rho(z) \partial_z+\frac{1}{2}\kappa'+\lambda.
\label{e1a}\end{equation}
We will call $\rho$ the {\em natural weight}. To justify this name note that if $\lambda$ is real, $\sigma,\kappa$ are real and $\rho$ is positive and nonsingular on $]a,b[\subset {\mathbb R}$, then ${\cal C}(\sigma,\kappa,\lambda)$ is Hermitian on the weighted space $L^2(]a,b[,\rho)$, with $C_{\rm c}^\infty(]a,b[)$ taken as the domain.
It is sometimes useful to replace the operator ${\cal C}(\sigma,\kappa,\lambda)$ with
\begin{eqnarray}\label{adva}
\rho(z)^{\frac12}{\cal C}(\sigma,\kappa,\lambda)\rho(z)^{-\frac12}
&=&
\partial_z\sigma(z)\partial_z-\frac{\kappa(z)^2}{4\sigma(z)}+\lambda.
\end{eqnarray}
We will call (\ref{adva}) the {\em balanced form of
${\cal C}(\sigma,\kappa,\lambda)$}.
Sometimes one replaces (\ref{req}) by the 1-dimensional Schr\"odinger equation
\begin{equation} \big(\partial_z^2-V(z)\big)f=0,\label{schro}\end{equation}
where
\begin{eqnarray*}
V(z):&=&
\frac12\big(\sigma(z)^{-1}\sigma'(z)\big)'
+\frac14\big(\sigma(z)^{-1}\sigma'(z)\big)^2+\frac{\kappa(z)^2}{4\sigma(z)^2}
-\frac{\lambda}{\sigma(z)}.
\end{eqnarray*}
(\ref{schro}) is equivalent to (\ref{req}), because
\begin{eqnarray}\label{adva1}
&&\sigma(z)^{-\frac12}\rho(z)^{\frac12}{\cal C}(\sigma,\kappa,\lambda)\rho(z)^{-\frac12}
\sigma(z)^{-\frac12}
=\partial_z^2-V(z).\end{eqnarray}
It will be called the {\em Schr\"odinger-type form of the equation}
${\cal C}(\sigma,\kappa,\lambda)f=0$.
Some of the
symmetries of hypergeometric type equations are obvious in
the balanced and Schr\"odinger-type forms. This is
partly due to the fact that these forms do not change when we switch the
sign in front of $\kappa$. This is a serious advantage of these forms.
In the literature various forms of hypergeometric type equations are
used. Instead of
the Gegenbauer equation one usually finds its balanced form, called
the {\em associated Legendre equation}. The {\em modified Bessel equation} and the {\em Bessel equation}, both equivalent to
the rarely used ${}_0F_1$ equation, are balanced forms of a
special case of the ${}_1F_1$ equation. Instead of the ${}_1F_1$
equation one often finds its Schr\"odinger-type form,
the {\em Whittaker equation}. This usage, due
mostly to historical traditions, makes the subject more complicated
than necessary.
We will always use (\ref{req}) as the basic form. Its main advantage is
that
in almost all cases the equation in the form (\ref{req}) has at least
one solution analytic around a given finite singular point. Even in the case of the ${}_2F_0$
equation, all of whose solutions have a branch point at $0$, there exists a distinguished
solution particularly well behaved at zero.
\subsection{Hypergeometric type polynomials}
\label{Hypergeometric type polynomials}
{\em Hypergeometric type polynomials}, that is,
polynomial solutions of hypergeometric type equations, deserve a separate analysis. They have traditional names involving various 19th century mathematicians. Note in particular that the (rarely used) polynomial cases of the ${}_2F_0$ function are called {\em Bessel polynomials}; however, they do not have a direct relation to the better-known Bessel functions.
There exists
a well-known elegant approach to their theory that allows us to derive most of their basic properties in a unified way, see e.g. \cite{NU,R}. Let us sketch this approach.
Fix $\sigma,\kappa,\rho$, as in Subsect. \ref{s-canon}.
For any $n=0,1,2,\dots$ we
define
\begin{equation}
P_n(\sigma,\rho;z)
:=\frac{1}{n!}\rho^{-1}(z) \partial_z^n\rho(z)\sigma^n(z).\label{rodrig}\end{equation}
We will call (\ref{rodrig}) a {\em Rodrigues-type formula}, since it is a generalization of the Rodrigues formula for Legendre polynomials.
One can show that $P_n$ solves the equation
\begin{equation}
\Big(\sigma(z) \partial_z^2+(\sigma'(z)+\kappa(z)) \partial_z
-n(n+1)\frac{\sigma''}{2}-n\kappa'\Big)P_n(\sigma,\rho;z)=0.
\label{pop1}\end{equation}
$P_n$ is a polynomial, typically of degree $n$; more precisely, its
degree is given as follows:
\begin{enumerate} \item If $\sigma''=\kappa'=0$, then $\deg P_n=0$.
\item If $\sigma''\neq0$ and $-\frac{2\kappa'}{\sigma''}-1=m$ is a positive
integer, then
\[\deg P_n=\left\{\begin{array}{ll}
n,& n=0,1,\dots,m;\\n-m-1,&n=m+1,m+2,\dots.\end{array}\right.\]
\item Otherwise, $\deg P_n=n$.
\end{enumerate}
We have a generating function
\[\frac{\rho(z+t\sigma(z))}{\rho(z)}=\sum_{n=0}^\infty t^nP_n(\sigma,\rho\sigma^{-n};z),\]
an integral representation
\begin{equation}
P_n(\sigma,\rho;z)=\frac{1}{2\pi {\rm i}}\rho^{-1}(z)\int\limits_{[z^+]}
\sigma^n(z+t)\rho(z+t)t^{-n-1}{\rm d} t
\label{rod}\end{equation}
and recurrence relations
\begin{eqnarray*}
\big(\sigma(z)\partial_z+\kappa(z)-n\sigma'(z)\big)
P_n(\sigma,\rho\sigma^{-n};z) &=&P_{n+1}(\sigma,\rho\sigma^{-n-1};z),\\
\partial_zP_{n+1}(\sigma,\rho\sigma^{-n-1};z)&=&\Big(-n\frac{\sigma''}{2}+\kappa'\Big)
P_n(\sigma,\rho\sigma^{-n};z).
\end{eqnarray*}
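As a concrete illustration of this machinery (ours, not part of the
general theory): for $\sigma(z)=z$ and $\rho(z)=z^{\alpha}{\rm e}^{-z}$
the Rodrigues-type formula (\ref{rodrig}) reproduces the Laguerre
polynomials, which can be checked symbolically in Python's sympy:
\begin{verbatim}
import sympy as sp

z, alpha = sp.symbols('z alpha', positive=True)
n = 3                                    # any fixed degree
rho = z ** alpha * sp.exp(-z)            # natural weight for sigma(z) = z
# Rodrigues-type formula: P_n = (1/n!) rho^{-1} d^n/dz^n (rho * sigma^n)
P = sp.diff(rho * z ** n, z, n) / (sp.factorial(n) * rho)
print(sp.simplify(P - sp.assoc_laguerre(n, alpha, z)))   # prints 0
\end{verbatim}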
In almost all sections we devote a separate subsection to the
corresponding class of polynomials. Besides the properties that follow
immediately from the unified theory presented above, we describe
additional properties valid in a given class.
The ${}_0F_1$ equation does not have polynomial solutions, hence the corresponding section is the only one without a subsection about polynomials.
Another special situation arises in the case of the Gegenbauer equation. The standard Gegenbauer polynomials found in the literature
do not have the normalization given by the Rodrigues-type formula. The Rodrigues-type formula yields the Jacobi polynomials, which for $\alpha=\beta$ coincide with the Gegenbauer polynomials up to a nontrivial coefficient. Thus for the Gegenbauer equation it is natural to consider two classes of polynomials differing by normalization. This is related to an interesting symmetry called the Whipple transformation, which is responsible for two kinds of integral representations.
\subsection{Parametrization}
Each class (1)--(6) depends on a number of complex parameters, denoted by Latin letters belonging to the set $\{a,b,c\}$. They will be called the {\em classical parameters}. They are convenient when we discuss power series expansions of standard solutions.
Unfortunately, the classical parameters are not convenient to describe discrete symmetries. Therefore, for each class (1)--(6) we introduce an alternative set of parameters, which we will call the {\em Lie-algebraic parameters}. They will be denoted by Greek letters such as $\alpha,\beta,\mu,\theta,\lambda$, and will be given by certain linear (possibly, inhomogeneous) combinations of the classical parameters. Discrete symmetries of hypergeometric type equations will simply involve signed permutations of the Lie algebraic parameters -- in the classical parameters they look much more complicated. Recurrence relations also become simpler in the Lie-algebraic parameters.
For polynomials of hypergeometric type a third kind of parametrization is traditionally used. They are characterized by their degree $n$, which coincides with $-a$, where $a$ is one of the classical parameters. The Lie-algebraic parameters appearing inside the 1st order part of the equation are used as the remaining parameters.
Let us stress that all these parametrizations are natural and useful. Therefore, we sometimes face the dilemma of which parametrization to use for a given set of identities. We usually try to choose the one that gives the simplest formulas.
We sum up the information about various parametrizations in the following table:
\[\begin{array}{ccccc}
\hbox{Equation}& \begin{array}{c}\hbox{classical}\\\hbox{parameters}\end{array}
&\begin{array}{c}\hbox{Lie-algebraic}\\\hbox{parameters}\end{array}&\hbox{Polynomial}&
\begin{array}{c}\hbox{parameters}\\\hbox{for polynomials}\end{array}\\[2ex]\hline
\\[2ex]
{}_2F_1&
a,b,c&\begin{array}{c}\alpha=c-1\\
\beta=a+b-c\\
\gamma=b-a\end{array}
&\hbox{Jacobi}
&\begin{array}{c}\alpha=c-1\\
\beta=a+b-c\\
n=-a\end{array}
\\[5ex]
{}_2F_0&
a,b&\begin{array}{c}\theta=-1+a+b\\\alpha=a-b
\end{array}
&\hbox{Bessel}
&\begin{array}{c}
\theta=-1+a+b\\
n=-a\end{array}
\\[5ex]
{}_1F_1&
a,c&\begin{array}{c}\theta=-c+2a\\
\alpha=c-1\end{array}
&\hbox{Laguerre}
&\begin{array}{c}
\alpha=c-1\\n=-a\end{array}\\[5ex]
{}_0F_1&
c&
\alpha=c-1
&-----
&-----\\[5ex]
\hbox{Gegenbauer}&
a,b&\begin{array}{c}
\alpha=\frac{a+b-1}{2}\\
\lambda=\frac{b-a}{2}
\end{array}
&\begin{array}{c}
\hbox{$\alpha=\beta$ Jacobi}\\\hbox{or Gegenbauer}\end{array}
&\begin{array}{c}
\alpha=\frac{a+b-1}{2}\\
n=-a\end{array}\\[5ex]
\hbox{Hermite}&
a&\lambda =a-\frac12
&\hbox{Hermite}
&n=-a
\end{array}\]
\subsection{Group-theoretical background}
Identities for hypergeometric type operators and functions
have a high degree of symmetry.
Therefore, it is natural to expect that a certain group-theoretical structure is responsible for these identities.
There exists a large literature about the relations between special functions and group theory \cite{V,Wa,M1, VK}. Nevertheless, as far as we know, the arguments found in the literature give a rather incomplete explanation of the properties that we describe. In a separate publication \cite{DM} we would like to present a group-theoretical approach to hypergeometric type functions with, we believe, a more satisfactory justification of their high symmetry.
Below we would like to briefly sketch the main ideas of \cite{DM}.
Each hypergeometric type equation can be obtained by separating the variables of a certain 2nd order PDE of the complex variable with constant coefficients. One can introduce the Lie algebra of generalized symmetries of this PDE. In this Lie algebra we fix
a certain maximal commutative algebra, which we will call the ``Cartan algebra''. Operators whose adjoint action is diagonal in the ``Cartan algebra'' will be called ``root operators''.
Automorphisms of the Lie algebra leaving invariant the ``Cartan algebra'' will be called ``Weyl symmetries''.
(Note that in some cases the Lie algebra of symmetries is simple, and then the names {\em Cartan algebra}, {\em root operators} and {\em Weyl symmetries} correspond to the standard names. In other cases the Lie algebra is non-semisimple, and then the names are less standard -- this is the reason for the quotation marks that we use).
Now the parameters of a hypergeometric type equation can be interpreted as the eigenvalues of elements of the ``Cartan algebra''. In particular, the Lie-algebraic parameters correspond to a certain natural choice of the ``Cartan algebra''.
Each recurrence relation is related to a ``root operator''.
Finally, each symmetry of a hypergeometric type operator corresponds to a Weyl symmetry of the Lie algebra.
We can distinguish 3 kinds of PDE's with constant coefficients:
\begin{enumerate} \item The {\em Helmholtz equation} on ${\mathbb C}^n$ given by $\Delta_n+1$, whose Lie algebra of symmetries is ${\mathbb C}^n\rtimes so(n,{\mathbb C})$;
\item The {\em Laplace equation} on ${\mathbb C}^n$ given by $\Delta_n$, whose Lie algebra of generalized symmetries is $so(n+2,{\mathbb C})$;
\item The {\em heat equation} on ${\mathbb C}^n\oplus{\mathbb C}$ given by $\Delta_n+\partial_s$, whose Lie algebra of generalized symmetries is $sch(n,{\mathbb C})$ (the so-called {\em (complex) Schr\"odinger Lie algebra}).\end{enumerate}
Separating the variables in these equations usually leads to differential equations in several variables. Only in a few cases does it lead to ordinary differential equations, which turn out to be of hypergeometric type. Here is a table of these cases:
\[\begin{array}{ccccc}
\hbox{PDE}&\begin{array}{c}\hbox{Lie}\\ \hbox{algebra}\end{array}
&\begin{array}{c}\hbox{dimension of}\\ \hbox{Cartan algebra}\end{array}
&\begin{array}{c}\hbox{discrete}\\ \hbox{symmetries}\end{array}&
\hbox{equation}\\[1ex]
\hline\\[1ex]
\Delta_2+1&{\mathbb C}^2\rtimes so(2,{\mathbb C})&1&{\mathbb Z}_2&{}_0F_1;
\\[1.5ex]
\Delta_4&so(6,{\mathbb C})&3&\hbox{cube}& {}_2F_1; \\[1.5ex]
\Delta_3&so(5,{\mathbb C})&2&\hbox{square}&\hbox{Gegenbauer};\\[1.5ex]
\Delta_2+ \partial_s&sch(2,{\mathbb C})&2&{\mathbb Z}_2\times
{\mathbb Z}_2&
{}_1F_1\hbox{ or }{}_2F_0;\\[1.5ex]
\Delta_1+ \partial_s&sch(1,{\mathbb C})&1&{\mathbb Z}_4&
\hbox{Hermite}.
\end{array}\]
\subsection{Comparison with the literature}
There exist many works that discuss hypergeometric type functions, e.g.
\cite{NIST,Ho,MOS,AAR,R,WW,Ol,Tr}. Some of them are meant to be encyclopedic collections of formulas, others try to exhibit the mathematical structure that underlies their properties.
In our opinion, this work differs substantially from the existing literature.
In our presentation we try to follow the intrinsic logic of the subject, without too much regard for the traditions.
If possible, we apply the same pattern to each class of hypergeometric type equations.
This sometimes
forces us to introduce unconventional notation.
We believe that the intricacy of usual presentations of hypergeometric type functions can be partly explained by historical reasons. In the literature various classes of these functions are often described with the help of differing conventions.
Sometimes we will give short remarks devoted to the conventions found
in the literature. These remarks will always be clearly separated from
the main text.
Of course, our presentation does not contain all useful identities and properties of hypergeometric functions. Some of them are left out on purpose, e.g. the so-called addition formulas. We restrict ourselves to what we view as the most basic theory. On the other hand, we try to be complete for each type of property that we consider.
Our work is strongly inspired by the book by Nikiforov and Uvarov
\cite{NU}, who tried to develop a unified approach to hypergeometric
type functions. They stressed in particular the role of integral
representations and of recurrence relations.
Another important influence are
the works of Miller \cite{M1,M2} who stressed
the Lie-algebraic structure behind the recurrence relations.
The method of factorization can be traced back at least to \cite{IH}.
\medskip
\noindent{\small{\bf Acknowledgement.} I acknowledge the help of Laurent
Bruneau, Micha\l{} Godli\'{n}ski, and especially Micha\l{} Wrochna and Przemys\l{}aw Majewski, who
proofread parts of previous versions of this work.
The research of the author was supported in part by the National
Science
Center (NCN) grant No. 2011/01/B/ST1/04929.}
\section{Preliminaries}
In this section we fix basic terminology and notation, and collect a number of well known useful facts, mostly from complex analysis. It is supposed to serve as a reference and can be skipped at first reading.
\subsection{Differential equations}
The main objects of our paper are ordinary homogeneous 2nd order
linear differential
equations in the complex domain, that is, equations of the form
\begin{equation} \left(a(z)\partial^2_z+b(z)\partial_z+c(z)\right)\phi(z)=0.\label{equo}\end{equation}
It will be convenient to treat (\ref{equo}) as the problem of
finding the kernel of the operator
\begin{equation}{\mathcal A}(z,\partial_z):=
a(z)\partial^2_z+b(z)\partial_z+c(z).\label{equo2}\end{equation}
We will then say that {\em the equation (\ref{equo}) is given by the operator (\ref{equo2})}. When we do not consider the change of the variable, we will often write ${\mathcal A}$ for ${\mathcal A}(z,\partial_z)$.
\subsection{The principal branch of the logarithm and the power function}
\label{a.1}
The function
\[\{z\in{\mathbb C}\ :\ -\pi< {\rm Im} z<\pi\}\ni z\mapsto {\rm e}^z\in{\mathbb C}\backslash]-\infty,0]\]
is bijective. Its inverse will be called the {\em principal branch of the logarithm}
and will be denoted simply $\log z$.
If $\mu\in{\mathbb C}$ then the {\em principal branch of the power function} is defined
as
\[{\mathbb C}\backslash]-\infty,0]\ni z\mapsto z^\mu:={\rm e}^{\mu\log z}.\]
Consequently, if $\alpha\in{\mathbb C}\backslash\{0\}$, then
the functions $\log(\alpha(z-z_0))$ and $(\alpha(z-z_0))^\mu$
have the domain
${\mathbb C}\backslash(z_0+\alpha^{-1}]-\infty,0])$.
Of course, if needed we will use the analytic continuation to extend the
definition of the logarithm and the power function beyond
${\mathbb C}\backslash]-\infty,0]$ onto the appropriate covering of
${\mathbb C}\backslash\{0\}$.
\subsection{Contours}
\label{a.3}
We will write
\[f(z)\Big|_{z_0}^{z_1}:=f(z_1)-f(z_0).\]
In particular, if $]0,1[\ni t\mapsto \gamma(t)\in{\mathbb C}$ is a curve, then
\begin{equation} f(z)\Big|_{\gamma(0)}^{\gamma(1)}
=\int_\gamma f'(z){\rm d} z.\label{mainth}\end{equation}
In order to avoid making pictures,
we will use special notation for contours of integration.
Broken lines will be denoted as in the following example:
\[[w_0,u,w_1]:=[w_0,u]\cup[u,w_1].\]
\begin{center}
\includegraphics[width=8cm,totalheight=2cm]{curve-0.pdf}
\end{center}
This contour may be inappropriate if the function has a nonintegrable
singularity at $u$. Then we might want
to bypass $u$ with a small arc counterclockwise or clockwise. In such a case we can use the
curves
\begin{eqnarray}&&\label{zn3}
[w_0,u^+,w_1].
\end{eqnarray}
\begin{center}
\includegraphics[width=12cm,totalheight=1.6cm]{curve-3.pdf}
\end{center}
\begin{eqnarray}\label{z4}
&&[w_0,u^-,w_1].
\end{eqnarray}
\begin{center}
\includegraphics[width=12cm,totalheight=1.4cm]{curve-1.pdf}
\end{center}
We may want to bypass a group of points, say $u_0,u_1$. Such contours are denoted by
\[[w_0,(u_0,u_1)^+,w_1],\]
\begin{center}
\includegraphics[width=12cm,totalheight=2cm]{curve-4.pdf}
\end{center}
\[[w_0,(u_0,u_1)^-,w_1].\]
\begin{center}
\includegraphics[width=12cm,totalheight=2cm]{curve-5.pdf}
\end{center}
A small counterclockwise/clockwise loop around $u$ is denoted
\[[u^+],\hskip 26ex [u^-]\]
\begin{center}
\includegraphics[width=7cm,totalheight=1cm]{curve-2.pdf}
\end{center}
A counterclockwise/clockwise loop around a group of points, say, $u_1,u_2$ is denoted
\[[(u_1,u_2)^+],\hskip 16ex [(u_1,u_2)^-].\]
\begin{center}
\includegraphics[width=12cm,totalheight=2cm]{curve-6.pdf}
\end{center}
A half-line starting at $u$ and inclined at the angle $\phi$ is denoted
\begin{equation}[u,{\rm e}^{{\rm i}\phi}\infty[:=\{u+{\rm e}^{{\rm i}\phi}t\ :\ t>0\}:\end{equation}
\vskip 0.2cm
\begin{center}
\includegraphics[width=12cm,totalheight=2cm]{curve-8.pdf}
\end{center}
We will also need slightly more complicated contours:
\begin{eqnarray}
&&[(u+{\rm e}^{{\rm i}\phi}\cdot 0)^+,w]
\nonumber
\end{eqnarray}
\begin{center}
\includegraphics[width=12cm,totalheight=1cm]{curve-9.pdf}
\end{center}
Here,
the contour departs from $u$ at the angle $\phi$, then it bypasses $u$
with a small arc counterclockwise
and then it goes in the direction of $w$.
The following contour has the shape of a kidney:
\begin{eqnarray}&&
[(u+{\rm e}^{{\rm i}\phi}\cdot 0)^+]
\nonumber\end{eqnarray}
\begin{center}
\includegraphics[width=12cm,totalheight=2cm]{curve-7.pdf}
\end{center}
This contour departs from $u$ at the angle $\phi$, then it
goes around $u$ and returns to $u$ again at the angle $\phi$.
Instead of $u+{\rm e}^{{\rm i}0}\cdot 0$ we will write $u+0$. Likewise, instead of
$u+{\rm e}^{{\rm i}\pi}\cdot 0$ we will write $u-0$.
\subsection{Reflection invariant differential equations}
\label{a.5}
Consider a 2nd order differential operator
\begin{equation} \partial_z^2+b(z) \partial_z+c(z).\label{e5}\end{equation}
Assume that (\ref{e5})
is invariant w.r.t. the reflection $z\mapsto -z$. This means that
for some functions $\pi$, $\rho$ we have
\[b(z)=z\pi(z^2),\ \ \ \ c(z)=\rho(z^2).\]
Then it is natural to make a quadratic change of coordinates:
\begin{eqnarray}\nonumber
&& \partial_z^2+b(z) \partial_z+c(z)\\&=&
4u\left( \partial_u^2+\Big(\frac{1}{2u}+\frac{\pi(u)}{2}\Big) \partial_u+\frac{\rho(u)}{4u}\right),\label{refl1}\\[2ex]\nonumber
&&z^{-1}( \partial_z^2+b(z) \partial_z+c(z))z\\&=&
4u\left( \partial_u^2+\Big(\frac{3}{2u}+\frac{\pi(u)}{2}\Big) \partial_u+\frac{\pi(u)+\rho(u)}{4u}\right),\label{refl2}\end{eqnarray}
where
\[u=z^2,\ \ \ z=\sqrt u.\]
Thus if $g_+(u)$, resp. $g_-(u)$ satisfy
\begin{eqnarray*}
\left( \partial_u^2+\Big(\frac{1}{2u}+\frac{\pi(u)}{2}\Big) \partial_u+\frac{\rho(u)}{4u}\right)
g_+(u)&=&0,\\
\left( \partial_u^2+\Big(\frac{3}{2u}+\frac{\pi(u)}{2}\Big) \partial_u+\frac{\pi(u)+\rho(u)}{4u}\right)g_-(u)&=&0,\end{eqnarray*}
then $g_+(z^2)$ is an even solution, resp. $zg_-(z^2)$ is an odd solution of the equation given by (\ref{e5}).
Note that if $\pi,\rho$ are holomorphic, then $0$ is a regular singular point of (\ref{refl1}) with indices $0,\frac{1}{2}$ and of
(\ref{refl2}) with indices $0,-\frac{1}{2}$.
\subsection{Regular singular points}
\label{a4}
In this subsection we recall well known facts about regular singular points
of differential equations.
We will write
\[f(z)\sim (z-z_0)^\lambda\ \ \ \ \hbox{at}\ \ \ z_0\]
if $f(z)(z-z_0)^{-\lambda}$ is analytic at $z_0$ and
$\lim\limits_{z\to z_0}f(z)(z-z_0)^{-\lambda}=1$.
In particular, we write
\[f(z)\sim 1\ \ \ \ \hbox{at}\ \ \ z_0\]
if $f$ is analytic in a neighborhood of $z_0$ and $f(z_0)=1$.
An equation given by the operator
\begin{equation}
\partial_z^2+b(z) \partial_z+c(z)
\label{e9}\end{equation} with meromorphic coefficients $b(z)$,
$c(z)$ has a {\em regular singular point at $z_0$} if
\[b_0:=\lim_{z\to z_0}b(z)(z-z_0),\ \ \ \
c_0:=\lim_{z\to z_0}c(z)(z-z_0)^2\]
exist. The {\em indices $\lambda_1$, $\lambda_2$ of $z_0$} are the solutions of the
indicial equation
\[\lambda(\lambda-1)+b_0\lambda+c_0=0.\]
\begin{theoreme}[The Frobenius method] If
$\lambda_1-\lambda_2\neq-1,-2,\cdots$, then
there exists a unique solution $f(z)$ of the equation given by (\ref{e9}) such that
$f(z)\sim(z-z_0)^{\lambda_1}$ at $z_0$.
\label{fro}\end{theoreme}
The case
$\lambda_1-\lambda_2\in{\mathbb Z}$ is called the {\em degenerate case.} In this case the Frobenius method gives one solution corresponding to the point $z_0$.
Likewise, (\ref{e9}) has a
{\em regular singular point at $\infty$} if
\[\tilde b_0:=\lim_{z\to \infty}b(z)z,\ \ \ \
\tilde c_0:=\lim_{z\to \infty}c(z)z^2\]
exist. The {\em indices $\tilde\lambda_1$, $\tilde\lambda_2$ of $\infty$} are the solutions of the
indicial equation
\[\tilde\lambda(\tilde\lambda+1)-\tilde b_0\tilde\lambda+\tilde c_0=0.\]
\begin{theoreme}[The Frobenius method at infinity] If
$-\tilde\lambda_1+\tilde\lambda_2\neq-1,-2,\cdots$, then
there exists a unique solution $\tilde f_1(z)$ of (\ref{e9}) such that
$\tilde f_1(z)\sim z^{-\tilde\lambda_1}$ at $\infty$.
\end{theoreme}
Note the identity
\begin{eqnarray}\label{shifto}&&
(z-z_0)^{-\theta}\left( \partial_z^2+b(z) \partial_z+c(z)\right)(z-z_0)^\theta\\
&=&\partial_z^2+\big(2\theta(z-z_0)^{-1}+b(z)\big)\partial_z+
(\theta^2-\theta)(z-z_0)^{-2}+\theta b(z)(z-z_0)^{-1}+c(z).
\nonumber\end{eqnarray}
If $z_0$ is a regular singular point, then the corresponding indices of (\ref{shifto})
equal those of (\ref{e9}) $-\theta$. Likewise, if $\infty$ is a regular singular point, then the corresponding indices are shifted by $+\theta$. The indices corresponding to other points are left unchanged.
\subsection{The Gamma function}
In this section we collect basic identities related to {\em Euler's Gamma function} that we will use.
\begin{eqnarray}{\bf Relationship\ to\ factorial}&&\Gamma(n+1)=n!,\ \ n=0,1,2,\dots,\label{g5a}\\
{\bf Recurrence\ \ relation}&&\Gamma(z+1)=z\Gamma(z),\label{g5}\\
{\bf Reflection\ \ formula}&&
\Gamma(z)\Gamma(1-z)=\frac{\pi}{\sin\pi z},\\
{\bf Euler's\ \ second\ \ integral.}&&
\Gamma(z):=\int_0^\infty{\rm e}^{-t}t^{z-1}{\rm d} t,\ \ \ {\rm Re} z>0,
\label{gam}
\\
{\bf Hankel's\ \ formula.}
&&
\frac{1}{\Gamma(-z+1)}=\frac{1}{2\pi {\rm i}}\!\!\!\!\!\!\!\!
\int\limits_{]-\infty,0^+,-\infty[}\!\!\!\!\!\!\!{\rm e}^{t}t^{z-1}{\rm d} t,\\
{\bf Legendre's\ \ formula}&&
2^{2z-1}\Gamma(z)\Gamma\left(z+1\slash2\right)=
\sqrt{\pi}\Gamma(2z).
\label{g17aa}\end{eqnarray}
{\bf Euler's first integral and its consequences.}\begin{eqnarray}
\frac{\Gamma(u)\Gamma(v)}{\Gamma(u+v)}
&=&\int_0^1 t^{u-1}(1-t)^{v-1}{\rm d} t
\ \ {\rm Re} u>0,\ {\rm Re} v>0,\label{b0}
\\[3ex]
\frac{\Gamma(u)\Gamma(v)}{\Gamma(u+v)}\frac{\sin\pi
u}{\sin\pi(u+v)}&&\label{b1}\\
=\
\frac{\Gamma(1-u-v)\Gamma(v)}{\Gamma(1-u)}&
=&\int_1^\infty t^{u-1}(t-1)^{v-1}{\rm d} t
\ \ {\rm Re} v>0,\ {\rm Re} (1-u-v)>0,
\nonumber\\[3ex]
\frac{\Gamma(u)\Gamma(v)}{\Gamma(u+v)}\frac{\sin\pi
u\,\sin\pi v}{\pi\,\sin\pi(u+v)}&&\label{b2}\\
=\ \frac{\Gamma(-u-v+1)}
{\Gamma(-u+1)\Gamma(-v+1)}&=&
\frac{1}{2\pi {\rm i}}
\int\limits_{]-\infty,0^+,-\infty[}t^{u-1}(1-t)^{v-1}{\rm d} t\nonumber\\&=&
\frac{1}{2\pi {\rm i}}
\int\limits_{]\infty,1^-,\infty[}t^{u-1}(1-t)^{v-1}{\rm d} t,\ \ \ {\rm Re}(-u-v+1)>0.\nonumber
\\[3ex]
\frac{\Gamma(u)\Gamma(v)}{\Gamma(u+v)}\frac{\sin\pi u}{\pi}&&\label{put}\\
=\ \frac{\Gamma(v)}{\Gamma(1-u)\Gamma(u+v)}&=&
\frac{1}{2\pi {\rm i}}
\int\limits_{]1,0^+,1]}t^{u-1}(1-t)^{v-1}{\rm d} t,\ \ \ {\rm Re} v>0.\nonumber
\\[3ex]
\frac{\Gamma(u)\sqrt\pi}{\Gamma(u+\frac12)}&=&
\int_{-1}^1(1-s^2)^{u-1}{\rm d} s,\\[3ex]
\frac{\Gamma(u)\sqrt\pi}{2\cos\pi
u\Gamma(u+\frac12)}
&=&\int_1^\infty(s^2-1)^{u-1}{\rm d} s.\end{eqnarray}
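These identities are easy to spot-check numerically; a small sketch
(ours) using Python's mpmath:
\begin{verbatim}
import mpmath as mp

z = mp.mpf('0.3')
# Reflection formula
print(mp.gamma(z) * mp.gamma(1 - z), mp.pi / mp.sin(mp.pi * z))
# Legendre's formula
print(2 ** (2*z - 1) * mp.gamma(z) * mp.gamma(z + mp.mpf('0.5')),
      mp.sqrt(mp.pi) * mp.gamma(2*z))
# Euler's first integral (Beta function), with u = v = 1/4
u = v = mp.mpf('0.25')
print(mp.quad(lambda t: t**(u - 1) * (1 - t)**(v - 1), [0, 1]),
      mp.gamma(u) * mp.gamma(v) / mp.gamma(u + v))
\end{verbatim}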
\subsection{The Pochhammer symbol}
\label{a.2}
If $a\in{\mathbb C}$ and $n\in{\mathbb Z}$, then the so-called {\em Pochhammer symbol}
is defined as follows:
\[\begin{array}{ll}
(a)_0=1,\\[3mm]
(a)_n:=a(a+1)\dots(a+n-1),&\ n=1,2,\dots\\[3mm]
(a)_n:=\frac{1}{(a+n)\dots(a-1)},&\ n=\dots,-2,-1.
\end{array}\]
Note the identities
\begin{eqnarray}
&&(a)_n=\frac{\Gamma(a+n)}{\Gamma(a)}=
(-1)^n\frac{\Gamma(1-a)}{\Gamma(1-a-n)}=(-1)^n(1-n-a)_n,\nonumber\\
&&(1-z)^{-a}=\sum\limits_{n=0}^\infty\frac{(a)_n}{n!}z^n,\ \ \ |z|<1,\label{double1}\\
&&(1/2)_nn!=\frac{(2n)!}{2^{2n}},\ \ \ (3/2)_nn!=\frac{(2n+1)!}{2^{2n}}.\label{double}
\end{eqnarray}
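A direct implementation of this definition (an illustration of ours)
covers both signs of $n$ and makes the first identity explicit:
\begin{verbatim}
from math import gamma

def pochhammer(a, n):
    # (a)_n for integer n; for n < 0 this is 1/((a+n)(a+n+1)...(a-1)).
    prod = 1.0
    if n >= 0:
        for j in range(n):
            prod *= a + j
        return prod
    for j in range(n, 0):
        prod *= a + j
    return 1.0 / prod

a = 0.7
for n in (-3, 4):
    print(pochhammer(a, n), gamma(a + n) / gamma(a))   # pairs agree
\end{verbatim}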
\section{The ${}_2F_1$ or the hypergeometric equation}
\label{s3}
\subsection{Introduction}
Let $a,b,c\in{\mathbb C}$.
Traditionally, the {\em hypergeometric equation} is given by the operator
\begin{equation}
{\cal F}(a,b;c;z, \partial_z):=z(1-z) \partial_z^2+\big(c-(a+b+1)z\big) \partial_z-ab.\label{hy1-tra}\end{equation}
The {\em classical parameters $a,b,c$} will often be replaced by another set of parameters $\alpha,\beta,\mu\in{\mathbb C}$, called
{\em Lie-algebraic}. They are related to one another by
\[\begin{array}{rll}
\alpha:=c-1,&\ \ \beta: =a+b-c,&\ \ \mu:=b-a;\\[2ex]
\label{newnot}
a=\frac{1+\alpha+\beta -\mu}{2},&\ \ b=\frac{1+\alpha+\beta +\mu}{2},&\ \ c=1+\alpha.
\end{array}\]
In the Lie-algebraic parameters the hypergeometric operator (\ref{hy1-tra}) becomes
\begin{eqnarray}&&
{\cal F}_{\alpha,\beta ,\mu}(z, \partial_z)\label{hy1}\\
&=&z(1-z) \partial_z^2+
\big((1+\alpha)(1-z)-(1+\beta )z\big) \partial_z+\frac14\mu^2
-\frac14(\alpha+\beta +1)^2.\nonumber\end{eqnarray}
The Lie-algebraic parameters have an interesting interpretation in terms of the natural basis of the Cartan algebra of the Lie algebra $so(6)$ \cite{DM}.
The singular points of the hypergeometric operator are located at $0,1,\infty$. All of them are regular singular. The indices of these points are
\begin{center}
\begin{tabular}{ccc}
$z=0$&$z=1$&$z=\infty$\\\hline\\
$1-c=-\alpha$ & $c-a-b=-\beta$ &$a=\frac{1+\alpha+\beta-\mu}{2}$\\
$0$&$0$&$ b=\frac{1+\alpha+\beta+\mu}{2}$
\end{tabular}
\end{center}
Thus the Lie-algebraic parameters are the differences of the indices.
The hypergeometric operator remains the same if we interchange $a$ and $b$ (replace $\mu$ with $-\mu$).
\subsection{Integral representations}
\begin{theoreme}
Let $[0,1]\ni t\mapsto\gamma(t)$ satisfy
\[t^{b-c+1}(1-t)^{c-a}(t-z)^{-b-1}\Big|_{\gamma(0)}^{\gamma(1)}=0.\]
Then
\begin{equation}
{\cal F}(a,b;c;z, \partial_z)
\int_\gamma t^{b-c}(1-t)^{c-a-1}(t-z)^{-b}{\rm d} t=0.
\label{f4}\end{equation}
\label{intr}\end{theoreme}
\noindent{\bf Proof.}\ \
We check that for any contour $\gamma$, (\ref{f4}) equals
\[-b \int_\gamma
\Big( \partial_t t^{b-c+1}(1-t)^{c-a}(t-z)^{-b-1}\Big){\rm d} t
,\]
which vanishes by (\ref{mainth}) and the assumption on $\gamma$.
\hfill$\Box$\medskip
Analogous (and nonequivalent)
integral representations can be obtained by interchanging $a$ and $b$
in Theorem \ref{intr}.
\subsection{Symmetries}
\label{s-sym}
To every permutation of the set
of singularities $\{0,1,\infty\}$ we can associate
exactly one homography
$z\mapsto w(z)$.
Using the method described at the end of Subsect. \ref{a4}, with every such homography we can associate 4 substitutions
that preserve the
form of the hypergeometric equation.
Altogether there are $6\times4=24$ substitutions. They form a group isomorphic
to the group of proper symmetries of the cube. If we take into account
the fact that replacing $\mu$ with $-\mu$ is also an obvious symmetry of the
hypergeometric equation, then we obtain a group of $2\times 24=48$
elements, isomorphic to the group of all (proper and improper) symmetries of
a cube, which is the Weyl group of $so(6)$.
Below we describe the table of
symmetries of the hypergeometric operator
except for those obtained by switching the sign of the last
parameter.
We fix the sign of the last parameter by demanding that the number of minus signs is even.
Note that the table looks much simpler in the Lie-algebraic parameters than in the classical parameters.
All the operators below equal ${\cal F}_{\alpha,\beta ,\mu}(w, \partial_w)$ for the
corresponding $w$:
\[\begin{array}{rrcl}
w=z:&&&\\[1ex]
&&{\cal F}_{\alpha,\beta ,\mu}(z, \partial_z),&
\\[1ex]
&(-z)^{-\alpha}(z-1)^{-\beta }&{\cal F}_{-\alpha,-\beta ,\mu}(z, \partial_z)&(-z)^{\alpha}(z-1)^{\beta }
\\[1ex]
&(z-1)^{-\beta }&{\cal F}_{\alpha,-\beta ,-\mu}(z, \partial_z)&(z-1)^{\beta },\\[1ex]
&(-z)^{-\alpha}&{\cal F}_{-\alpha,\beta ,-\mu}(z, \partial_z)&
(-z)^{\alpha};\\
w=1-z:&&&\\
&&{\cal F}_{\beta ,\alpha,\mu}(z, \partial_z),&
\\[1ex]
&(z-1)^{-\alpha}(-z)^{-\beta }&{\cal F}_{-\beta ,-\alpha,\mu}(z, \partial_z)&(z-1)^{\alpha}(-z)^{\beta },
\\[1ex]
&(z-1)^{-\alpha}&{\cal F}_{\beta ,-\alpha,-\mu}(z, \partial_z)&
(z-1)^{\alpha},\\[1ex]
&(-z)^{-\beta }&{\cal F}_{-\beta ,\alpha,-\mu}(z, \partial_z)
&(-z)^{\beta };\\
w=\frac{1}{z}:&&&\\
&(-z)^{\frac{1}{2}(\alpha+\beta +\mu +1)}&(-z){\cal F}_{\mu ,\beta ,\alpha}(z, \partial_z)&
(-z)^{\frac{1}{2}(-\alpha-\beta -\mu -1)},\\[1ex]
&(-z)^{\frac{1}{2}(\alpha+\beta -\mu +1)}(z-1)^{-\beta }&
(-z){\cal F}_{-\mu ,-\beta ,\alpha}(z, \partial_z) &(-z)^{\frac{1}{2}(-\alpha-\beta +\mu -1)}(z-1)^{\beta },
\\[1ex]
&(-z)^{\frac{1}{2}(\alpha+\beta +\mu +1)}(z-1)^{-\beta }&
(-z)
{\cal F}_{\mu ,-\beta ,-\alpha}(z, \partial_z)& (-z)^{\frac{1}{2}(-\alpha-\beta -\mu -1)}(z-1)^{\beta }
,\\[1ex]
&(-z)^{\frac{1}{2}(\alpha+\beta -\mu +1)}&(-z){\cal F}_{-\mu ,\beta ,-\alpha}(z, \partial_z)
& (-z)^{\frac{1}{2}(-\alpha-\beta +\mu -1)}
;\\
w=1-\frac{1}{z}:&&&\\
&(-z)^{\frac{1}{2}(\alpha+\beta +\mu +1)}&
(-z){\cal F}_{\mu ,\alpha,\beta }(z, \partial_z)&(-z)^{\frac{1}{2}(-\alpha-\beta -\mu -1)},\\[1ex]
&(-z)^{\frac{1}{2}(\alpha+\beta -\mu +1)}(z-1)^{-\alpha}&
(-z){\cal F}_{-\mu ,-\alpha,\beta }(z, \partial_z) &(-z)^{\frac{1}{2}(-\alpha-\beta +\mu -1)}(z-1)^{\alpha},\\[1ex]
&(-z)^{\frac{1}{2}(\alpha+\beta +\mu +1)}(z-1)^{-\alpha}&
(-z) {\cal F}_{\mu ,-\alpha,-\beta }(z, \partial_z)& (-z)^{\frac{1}{2}(-\alpha-\beta -\mu -1)}(z-1)^{\alpha},
\\[1ex]
&(-z)^{\frac{1}{2}(\alpha+\beta -\mu +1)}&(-z){\cal F}_{-\mu ,\alpha,-\beta }(z, \partial_z)&
(-z)^{\frac{1}{2}(-\alpha-\beta +\mu -1)};\\
w=\frac{1}{1-z}:&&&\\
& (z-1)^{\frac{1}{2}(\alpha+\beta +\mu +1)}&
(z-1){\cal F}_{\beta ,\mu ,\alpha}(z, \partial_z)& (z-1)^{\frac{1}{2}(-\alpha-\beta -\mu -1)},\\[1ex]
& (-z)^{-\beta }(z-1)^{\frac{1}{2}(\alpha+\beta -\mu +1)}
&(z-1){\cal F}_{-\beta ,-\mu ,\alpha}(z, \partial_z) & (-z)^{\beta }(z-1)^{\frac{1}{2}(-\alpha-\beta +\mu -1)},\\[1ex]
& (z-1)^{\frac{1}{2}(\alpha+\beta -\mu +1)}
&(z-1){\cal F}_{\beta ,-\mu ,-\alpha}(z, \partial_z) & (z-1)^{\frac{1}{2}(-\alpha-\beta +\mu -1)},\\[1ex]
&(-z)^{-\beta }(z-1)^{\frac{1}{2}(\alpha+\beta +\mu +1)}&(z-1)
{\cal F}_{-\beta ,\mu ,-\alpha}(z, \partial_z)&
(-z)^{\beta }(z-1)^{\frac{1}{2}(-\alpha-\beta -\mu -1)};
\\
w=\frac{z}{z-1}:&&\\
&(z-1)^{\frac{1}{2}(\alpha+\beta +\mu+1)}&(z-1){\cal F}_{\alpha,\mu,\beta }(z, \partial_z)&
(z-1)^{\frac{1}{2}(-\alpha-\beta -\mu-1)}
,\\[1ex]
& (-z)^{-\alpha}(z-1)^{\frac{1}{2}(\alpha+\beta -\mu+1)}&(z-1){\cal F}_{-\alpha,-\mu,\beta }(z, \partial_z) &
(-z)^{\alpha}(z-1)^{\frac{1}{2}(-\alpha-\beta +\mu-1)}
,\\[1ex]
& (z-1)^{\frac{1}{2}(\alpha+\beta -\mu+1)}&
(z-1){\cal F}_{\alpha,-\mu,-\beta }(z, \partial_z) & (z-1)^{\frac{1}{2}(-\alpha-\beta +\mu-1)}
,\\[1ex]
& (-z)^{-\alpha}(z-1)^{\frac{1}{2}(\alpha+\beta +\mu +1)}
&(z-1){\cal F}_{-\alpha,\mu ,-\beta }(z, \partial_z)&
(-z)^{\alpha}(z-1)^{\frac{1}{2}(-\alpha-\beta -\mu -1)}.
\end{array}\]
\subsection{Factorization and commutation relations}
\label{commu}
The hypergeometric operator can be factorized in several ways:
\begin{eqnarray*}
{\cal F}_{\alpha,\beta,\mu}&=&
\Big(z(1-z)\partial_z+\big((1+\alpha)(1-z)-(1+\beta) z\big)\Big)\partial_z\\&&-\frac14(\alpha+\beta+\mu+1)(\alpha+\beta-\mu+1),\\
&=&
\partial_z\Big(z(1-z)\partial_z+\big(\alpha(1-z)-\beta z\big)\Big)\\&&-\frac14(\alpha+\beta+\mu-1)(\alpha+\beta-\mu-1),\\
&=&
\Big((1-z)\partial_z-\beta-1\Big)\Big(z\partial_z+\alpha\Big)\\
&&-\frac14(\alpha+\beta+\mu+1)(\alpha+\beta-\mu+1),\\
&=&
\Big(z\partial_z+\alpha+1\Big)\Big((1-z)\partial_z-\beta\Big)\\
&&-\frac14(\alpha+\beta+\mu+1)(\alpha+\beta-\mu+1);
\end{eqnarray*}\begin{eqnarray*}
z{\cal F}_{\alpha,\beta,\mu}&=&
\Big(z\partial_z+\frac12(\alpha+\beta+\mu-1)\Big)
\Big(z(1-z)\partial_z+\frac12(1-z)(\alpha+\beta-\mu+1)-\beta\Big)\\
&&-\frac14(\alpha+\beta+\mu-1)(\alpha-\beta-\mu+1),\\
&=&
\Big(z(1-z)\partial_z+\frac12(1-z)(\alpha+\beta-\mu+1)-\beta-1\Big)
\Big(z\partial_z+\frac12(\alpha+\beta+\mu+1)\Big)
\\
&&-\frac14(\alpha+\beta+\mu+1)(\alpha-\beta-\mu-1),\\
&=&
\Big(z\partial_z+\frac12(\alpha+\beta-\mu-1)\Big)
\Big(z(1-z)\partial_z+\frac12(1-z)(\alpha+\beta+\mu+1)-\beta\Big)\\
&&-\frac14(\alpha+\beta-\mu-1)(\alpha-\beta+\mu+1),\\
&=&
\Big(z(1-z)\partial_z+\frac12(1-z)(\alpha+\beta+\mu+1)-\beta-1\Big)
\Big(z\partial_z+\frac12(\alpha+\beta-\mu+1)\Big)
\\
&&-\frac14(\alpha+\beta-\mu+1)(\alpha-\beta+\mu-1);
\end{eqnarray*}\begin{eqnarray*}
(z-1){\cal F}_{\alpha,\beta,\mu}&=&
\Big((z-1)\partial_z+\frac12(\alpha+\beta+\mu-1)\Big)
\Big(z(1-z)\partial_z+\frac12z(-\alpha-\beta+\mu-1)+\alpha\Big)\\
&&-\frac14(\alpha+\beta+\mu-1)(\alpha-\beta+\mu-1),\\
&=&
\Big(z(1-z)\partial_z+\frac12z(-\alpha-\beta+\mu-1)+\alpha+1\Big)
\Big((z-1)\partial_z+\frac12(\alpha+\beta+\mu+1)\Big)
\\
&&-\frac14(\alpha+\beta+\mu+1)(\alpha-\beta+\mu+1),\\
&=&
\Big((z-1)\partial_z+\frac12(\alpha+\beta-\mu-1)\Big)
\Big(z(1-z)\partial_z+\frac12z(-\alpha-\beta-\mu-1)+\alpha\Big)\\
&&-\frac14(\alpha+\beta-\mu-1)(\alpha-\beta-\mu-1),\\
&=&
\Big(z(1-z)\partial_z+\frac12z(-\alpha-\beta-\mu-1)+\alpha+1\Big)
\Big((z-1)\partial_z+\frac12(\alpha+\beta-\mu+1)\Big)
\\
&&-\frac14(\alpha+\beta-\mu+1)(\alpha-\beta-\mu+1).
\end{eqnarray*}
One way of showing the above factorizations is as follows: we
start by deriving the first one,
and then apply the symmetries of Subsect. \ref{s-sym}.
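These identities can also be verified mechanically by applying both
sides to a generic function; a short sympy sketch (our own check of the
first factorization):
\begin{verbatim}
import sympy as sp

z, alpha, beta, mu = sp.symbols('z alpha beta mu')
f = sp.Function('f')(z)

def F(g):   # the hypergeometric operator in Lie-algebraic parameters
    return (z*(1 - z)*sp.diff(g, z, 2)
            + ((1 + alpha)*(1 - z) - (1 + beta)*z)*sp.diff(g, z)
            + (mu**2 - (alpha + beta + 1)**2)/4 * g)

df = sp.diff(f, z)
rhs = (z*(1 - z)*sp.diff(df, z)
       + ((1 + alpha)*(1 - z) - (1 + beta)*z)*df
       - (alpha + beta + mu + 1)*(alpha + beta - mu + 1)/4 * f)
print(sp.simplify(F(f) - rhs))   # prints 0
\end{verbatim}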
The factorizations can be used to derive the following commutation relations:
\[\begin{array}{rrl}
&
\partial_z&{\cal F}_{\alpha,\beta ,\mu }\\[1ex]
&=\ \ \ \ \ \ {\cal F}_{\alpha+1,\beta +1,\mu } & \partial_z,\\[3ex]
&(z(1-z) \partial_z+(1-z)\alpha-z\beta )&{\cal F}_{\alpha,\beta ,\mu }\\[1ex]
&=\ \ \ \ \ \ {\cal F}_{\alpha-1,\beta -1,\mu }&(z(1-z) \partial_z+(1-z)\alpha-z\beta ),\\[3ex]
& ((1-z) \partial_z -\beta )&{\cal F}_{\alpha,\beta ,\mu }\\[1ex]
&=\ \ \ \ \ \ {\cal F}_{\alpha+1,\beta -1,\mu }& ((1-z) \partial_z -\beta ),
\\[3ex]&(z \partial_z+\alpha)&{\cal F}_{\alpha,\beta ,\mu }
\\[1ex]
&=\ \ \ \ \ \ {\cal F}_{\alpha-1,\beta +1,\mu }&(z \partial_z+\alpha);
\end{array}\]
\[\begin{array}{rrl}
&(z \partial_z+\frac{1}{2} (\alpha+ \beta +\mu +1))&z{\cal F}_{\alpha,\beta ,\mu }
\\[1ex]
&=\ \ \ \ \ \ z{\cal F}_{\alpha,\beta +1,\mu +1}&(z \partial_z+\frac{1}{2} (\alpha+ \beta +\mu +1)),
\\[3ex]
&(z(1{-}z) \partial_z{+}\frac12(1{-}z)(\alpha{+}\beta {-}\mu {+}1){-}\beta )&z{\cal F}_{\alpha,\beta ,\mu }\\[1ex]
&=\ \ \ \ \ \ z{\cal F}_{\alpha,\beta {-}1,\mu {-}1}&
(z(1{-}z) \partial_z{+}\frac12(1{-}z)(\alpha{+}\beta {-}\mu {+}1){-}\beta ),\\[3ex]
&( z \partial_z{+}\frac{1}{2}(\alpha{+}\beta {-}\mu {+}1))&z{\cal F}_{\alpha,\beta ,\mu }\\[1ex]
&=\ \ \ \ \ \ z{\cal F}_{\alpha,\beta {+}1,\mu {-}1}&( z \partial_z{+}\frac{1}{2}(\alpha{+}\beta {-}\mu {+}1)),
\\[3ex]
&
(z(z{-}1) \partial_z{-}\frac{1}{2}(1{-}z)(\alpha{+}\beta {+}\mu {+}1)
{+}\beta )&z{\cal F}_{\alpha,\beta ,\mu }\\[1ex]
&=\ \ \ \ \ \ z{\cal F}_{\alpha,\beta -1,\mu +1}&(z(z{-}1) \partial_z{-}\frac{1}{2}(1{-}z)(\alpha{+}\beta {+}\mu {+}1)
{+}\beta );
\end{array}\]
\[\begin{array}{rrl}
&((z-1) \partial_z+\frac{1}{2}(\alpha+\beta +\mu +1))&(1-z){\cal F}_{\alpha,\beta ,\mu }\\[1ex]
&=\ \ \ \ \ \ (1-z){\cal F}_{\alpha+1,\beta ,\mu +1}& ((z-1) \partial_z+\frac{1}{2}(\alpha+\beta +\mu +1)),
\\[3ex]
&(z(1{-}z) \partial_z{-}\frac12 z(\alpha{+}\beta {-}\mu {+}1)
{+}\alpha)&(1-z){\cal F}_{\alpha,\beta ,\mu }
\\[1ex]
&=\ \ \ \ \ \ (1-z){\cal F}_{\alpha-1,\beta ,\mu -1}&(z(1{-}z) \partial_z{-}\frac12 z(\alpha{+}\beta {-}\mu {+}1)
{+}\alpha),
\\[3ex]
&((z-1) \partial_z+\frac{1}{2}(\alpha+\beta -\mu +1))&
(1-z){\cal F}_{\alpha,\beta ,\mu }
\\[1ex]&=\ \ \ \ \ \
(1-z){\cal F}_{\alpha+1,\beta ,\mu -1}&((z-1) \partial_z+\frac{1}{2}(\alpha+\beta -\mu +1)),
\\[3ex]
&(z(z{-}1) \partial_z{+}\frac12 z(\alpha{+}\beta {+}\mu {+}1)
-\alpha)&(1-z){\cal F}_{\alpha,\beta ,\mu }
\\[1ex]
&=\ \ \ \ \ \ (1-z){\cal F}_{\alpha-1,\beta ,\mu +1}&(z(z{-}1) \partial_z{+}\frac12 z(\alpha{+}\beta {+}\mu {+}1)
{-}\alpha).
\end{array}\]
Each of these commutation relations corresponds to a
root of the Lie
algebra $so(6)$.
\subsection{Canonical forms}
The natural weight of the
hypergeometric operator is $z^\alpha(1-z)^\beta$, so that
\[{\cal F}_{\alpha,\beta,\mu}=
z^{-\alpha}(1-z)^{-\beta}\partial_zz^{\alpha+1}(1-z)^{\beta+1}\partial_z
+\frac{\mu^2}{4}-\frac{(\alpha+\beta+1)^2}{4}.\]
The balanced form of the
hypergeometric operator is
\begin{eqnarray*}
&&z^{\frac{\alpha}{2}}(1-z)^{\frac{\beta}{2}}{\cal F}_{\alpha,\beta,\mu}
z^{-\frac{\alpha}{2}}(1-z)^{-\frac{\beta}{2}}\\
&=&
\partial_zz(1-z)\partial_z-\frac{\alpha^2}{4z}-\frac{\beta^2}{4(1-z)}
+\frac{\mu^2-1}{4}.
\end{eqnarray*}
Note that the symmetries $\alpha\to-\alpha$, $\beta\to-\beta$ and $\mu\to-\mu$ are obvious in the balanced form.
\begin{remark} In the literature, the balanced form of the hypergeometric equation is sometimes called the {\em generalized associated Legendre equation}. Its standard form according to \cite{NIST} is
\begin{equation}
(1-w^2)\partial_w^2-2w\partial_w+\nu(\nu+1)-\frac{\mu_1^2}{2(1-w)}-\frac{\mu_2^2}{2(1+w)}.
\end{equation} Thus $z=\frac{w+1}{2}$, moreover, $\mu_1$, $\mu_2$ and $\nu$ correspond to $\beta,\alpha$ and $\frac{\mu}{2}-\frac12$.
\end{remark}
\subsection{The hypergeometric function}
\label{ss-hyp}
$0$ is a regular singular point of
the hypergeometric equation. Its indices are
$0$ and $1-c$.
The Frobenius method implies that, for $c\neq 0,-1,-2,\dots$, the unique solution of
the hypergeometric equation equal to $1$
at $0$ is given by the series
\[F(a,b;c;z)=\sum_{j=0}^\infty
\frac{(a)_j(b)_j}{
(c)_j}\frac{z^j}{j!},\]
convergent for $|z|<1$. The function extends to the whole complex plane cut at $[1,\infty[$ and is
called the {\em hypergeometric function}.
Sometimes it is more convenient to consider the function
\[ {\bf F} (a,b;c;z):=\frac{F(a,b;c;z)}{\Gamma(c)}
=\sum_{j=0}^\infty
\frac{(a)_j(b)_j}{
\Gamma(c+j)}\frac{z^j}{j!}\]
defined for all $a,b,c\in{\mathbb C}$.
Another useful function proportional to ${}_2F_1$ is
\[ {\bf F}^{\rm\scriptscriptstyle I} (a,b;c;z):=\frac{\Gamma(a)\Gamma(c-a)}{\Gamma(c)}
F(a,b;c;z)
=\sum_{j=0}^\infty
\frac{\Gamma(a+j)\Gamma(c-a)(b)_j}{
\Gamma(c+j)}\frac{z^j}{j!}.
\]It has
the integral representation
\begin{eqnarray}\label{eqa1}
&&\int_1^\infty t^{b-c}(t-1)^{c-a-1}(t-z)^{-b}{\rm d} t\\
&=&
{\bf F}^{\rm\scriptscriptstyle I} (a,b;c;z),\ \ \ \ {\rm Re}(c-a)>0,\ {\rm Re} a>0,\ \ \ z\not\in[1,\infty[.
\nonumber\end{eqnarray}
Indeed, by Theorem \ref{intr} the left hand side of (\ref{eqa1})
is annihilated by
the hypergeometric operator
(\ref{hy1-tra}). Besides, by (\ref{b1}) it equals
$\frac{\Gamma(a)\Gamma(c-a)}{\Gamma(c)}$ at $0$. So does
the right hand side. Therefore, Equation (\ref{eqa1}) follows from the uniqueness of the Frobenius solution.
Another, closely related
integral representation is
\begin{equation}
\frac{\sin\pi a}{\pi}{\bf F}^{{\rm\scriptscriptstyle I}}(a,b;c;z)=\frac{1}{2\pi{\rm i}}\int\limits_{[1,(z,0)^+,1]} (-t)^{b-c}(1-t)^{c-a-1}(z-t)^{-b}{\rm d} t.
\label{bequ}\end{equation}
It is proven essentially in the same way as (\ref{eqa1}), except that instead of
(\ref{b1}) we use (\ref{put}).
We have the identities
\begin{eqnarray}\nonumber
&&F(a,b;c;z)\\\nonumber
&=&(1-z)^{c-a-b}F\left(c-a,c-b;c;z\right)\\\nonumber
&=&(1-z)^{-a}F\left(a,c-b;c;\frac{z}{z-1}\right)
\\\label{stan}
&=&(1-z)^{-b}F\left(c-a,b;c;\frac{z}{z-1}\right).
\end{eqnarray}
In fact, by the 3rd, 9th and 11th symmetry of Subsect. \ref{s-sym} all
these functions are annihilated by the hypergeometric operator. All of
them are $\sim1$ at $0$. Hence, by the uniqueness of the Frobenius method they coincide, at least for $c\neq0,-1,\dots$. By continuity, the identities hold for all $c\in{\mathbb C}$.
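These identities are also easy to test numerically. A minimal sketch in Python with \texttt{mpmath} (our illustration; a real $z\in(0,1)$ is used so that all principal branches agree):
\begin{verbatim}
import mpmath

a, b, c, z = 0.3, 1.2, 0.8, 0.4
lhs = mpmath.hyp2f1(a, b, c, z)
print(lhs - (1 - z)**(c - a - b)*mpmath.hyp2f1(c - a, c - b, c, z))
print(lhs - (1 - z)**(-a)*mpmath.hyp2f1(a, c - b, c, z/(z - 1)))
print(lhs - (1 - z)**(-b)*mpmath.hyp2f1(c - a, b, c, z/(z - 1)))
# all three differences vanish up to rounding
\end{verbatim}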
Let us introduce new notation for various varieties of the
hypergeometric function involving the Lie-algebraic parameters instead of the
classical parameters.
\begin{eqnarray*}
F_{\alpha,\beta ,\mu }(z)&=&F\Bigl(
\frac{1+\alpha+\beta -\mu}{2},\frac{1+\alpha+\beta +\mu}{2};1+\alpha;z\Bigr),\\
{\bf F}_{\alpha,\beta ,\mu }(z)&=&{\bf F} \Bigl(
\frac{1+\alpha+\beta -\mu}{2},\frac{1+\alpha+\beta +\mu}{2};1+\alpha;z\Bigr)\\
&=&
\frac{1}{\Gamma(\alpha+1)} F_{\alpha,\beta ,\mu }(z),\\
{\bf F}_{\alpha,\beta ,\mu }^{{\rm\scriptscriptstyle I}}(z)&=&{\bf F}^{{\rm\scriptscriptstyle I}}\Bigl(
\frac{1+\alpha+\beta -\mu}{2},\frac{1+\alpha+\beta +\mu}{2};1+\alpha;z\Bigr)\\
&=&
\frac{\Gamma\big(\frac{1+\alpha+\beta-\mu}{2}\big)\Gamma\big(\frac{1+\alpha-\beta+\mu}{2}\big)}{\Gamma(\alpha+1)}
F_{\alpha,\beta ,\mu }(z).
\end{eqnarray*}
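In computations it is convenient to have the dictionary between the two sets of parameters in executable form. A small sketch (the function names are ours):
\begin{verbatim}
import mpmath

def classical_to_lie(a, b, c):
    # alpha = c - 1,  beta = a + b - c,  mu = b - a
    return (c - 1, a + b - c, b - a)

def lie_to_classical(al, be, mu):
    return ((1 + al + be - mu)/2, (1 + al + be + mu)/2, 1 + al)

def F_lie(al, be, mu, z):
    a, b, c = lie_to_classical(al, be, mu)
    return mpmath.hyp2f1(a, b, c, z)

al, be, mu = 0.25, -0.5, 1.5
assert classical_to_lie(*lie_to_classical(al, be, mu)) == (al, be, mu)
print(F_lie(al, be, mu, 0.3))
\end{verbatim}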
\subsection{Standard solutions -- Kummer's table}
To each of the singular points $0,1,\infty$ we can associate two solutions corresponding to its indices. Thus we obtain $3\times2=6$ solutions, which we will call {\em standard solutions}. Using the identities (\ref{stan}), each solution can be written in 4 distinct ways (not counting the trivial change of the sign in front of the last parameter). Thus we obtain a list of $6\times 4=24$ expressions for solutions of the hypergeometric equation, sometimes called {\em Kummer's table}.
We describe the standard solutions to the hypergeometric equation in this section.
We will use consistently the Lie-algebraic parameters, which give much simpler expressions.
It follows from Thm \ref{intr} that for appropriate contours $\gamma$ integrals of the form
\begin{equation}
\int_\gamma t^{\frac{-1-\alpha+\beta +\mu }{2}}
(t-1)^{\frac{-1+\alpha-\beta +\mu }{2}}(t-z)^{\frac{-1-\alpha-\beta -\mu }{2}}{\rm d} t\label{inte1}\end{equation}
are solutions of the hypergeometric equation. The integrand
has four singularities: $\{0,1,\infty,z\}$. It is natural to choose
$\gamma$ as the interval joining a pair of singularities. This choice leads to $6$ standard solutions with the $\rm I$-type normalization.
\subsubsection{Solution $\sim1$ at $0$}
\label{ss-1}
If $\alpha\neq-1,-2,\dots$, then the following function is the unique solution $\sim1$ at $0$:
\begin{eqnarray*}
&& F_{\alpha,\beta ,\mu }(z)\\
&=&(1-z)^{-\beta } F_{\alpha,-\beta ,-\mu }(z)\\
&=&(1-z)^{\frac{-1-\alpha-\beta +\mu }{2}}
F_{\alpha,-\mu ,-\beta }(\frac{z}{z-1})\\
&=&(1-z)^{\frac{-1-\alpha-\beta -\mu }{2}} F_{\alpha,\mu ,\beta }(\frac{z}{z-1}).
\end{eqnarray*}
An integral representation for ${\rm Re}(1+\alpha)> |{\rm Re}(\beta -\mu )|$:
\begin{eqnarray*}
\int_1^\infty t^{\frac{-1-\alpha+\beta +\mu }{2}}
(t-1)^{\frac{-1+\alpha-\beta +\mu }{2}}(t-z)^{\frac{-1-\alpha-\beta -\mu }{2}}{\rm d} t
&=& {\bf F}^{\rm\scriptscriptstyle I} _{\alpha,\beta ,\mu }(z),\\&&
\ \ \ z\not\in[1,\infty[.\nonumber
\end{eqnarray*}
Note that all the identities of this subsubsection are the transcriptions of identities of Subsect.
\ref{ss-hyp} to the Lie-algebraic parameters.
\subsubsection{Solution $\sim z^{-\alpha}$ at $0$}
\label{ss-2}
If $\alpha\neq 1,2,\dots$, then the following function is the unique solution behaving
as $z^{-\alpha}$ at $0$:
\begin{eqnarray*}
&&z^{-\alpha} F_{-\alpha,\beta ,-\mu }(z)\\
&=&z^{-\alpha}(1-z)^{-\beta } F_{-\alpha,-\beta ,\mu }(z)\\
&=&z^{-\alpha}(1-z)^{\frac{-1+\alpha-\beta +\mu }{2}}
F_{-\alpha,-\mu ,\beta }(\frac{z}{z-1})\\
&=&z^{-\alpha}(1-z)^{\frac{-1+\alpha-\beta -\mu }{2}} F_{-\alpha,\mu ,-\beta }(\frac{z}{z-1}).
\end{eqnarray*}
Integral representations for ${\rm Re}(1-\alpha)> |{\rm Re}(\beta -\mu )|$:
\begin{eqnarray*}
\int_0^z t^{\frac{-1-\alpha+\beta +\mu }{2}}
(1-t)^{\frac{-1+\alpha-\beta +\mu }{2}}(z-t)^{\frac{-1-\alpha-\beta -\mu }{2}}{\rm d} t
&=&
z^{-\alpha} {\bf F}^{\rm\scriptscriptstyle I} _{-\alpha,\beta ,-\mu }(z),\\&&z\not\in]{-}\infty,0]{\cup}[1,\infty[
;\nonumber\\
\int_z^0 (-t)^{\frac{-1-\alpha+\beta +\mu }{2}}
(1-t)^{\frac{-1+\alpha-\beta +\mu }{2}}(t-z)^{\frac{-1-\alpha-\beta -\mu }{2}}{\rm d} t
&=&
(-z)^{-\alpha} {\bf F}^{\rm\scriptscriptstyle I} _{-\alpha,\beta ,-\mu }(z),\\&&z\not\in[0,\infty[
.\nonumber
\end{eqnarray*}
To check these identities, we first note that the integrals are solutions of the hypergeometric equation. By substituting $t=zs$ we easily check that they have the correct behavior at zero.
Of course, it is elementary to pass from the first identity, which is adapted to the region on the right of the singularity $z=0$, to the second, adapted to the region on the left of the singularity. For convenience we give both identities.
\subsubsection{Solution $\sim1$ at $1$}
\label{ss-3}
If $\beta \neq-1,-2,\dots$, then the following function is the unique solution
$\sim1$ at $1$:
\begin{eqnarray*}
&& F_{\beta ,\alpha,\mu }(1-z)\\
&=&z^{-\alpha} F_{\beta ,-\alpha,-\mu }(1-z)\\
&=&z^{\frac{-1-\alpha-\beta +\mu }{2}}
F_{\beta ,-\mu ,-\alpha}(1-z^{-1})\\
&=&z^{\frac{-1-\alpha-\beta -\mu }{2}}
F_{\beta ,\mu ,\alpha}(1-z^{-1}).
\end{eqnarray*}
Integral representation for ${\rm Re}(1+\beta )> |{\rm Re}(\alpha-\mu )|$:
\begin{eqnarray*}
\int_{-\infty}^0 (-t)^{\frac{-1-\alpha+\beta +\mu }{2}}
(1-t)^{\frac{-1+\alpha-\beta +\mu }{2}}(z-t)^{\frac{-1-\alpha-\beta -\mu }{2}}{\rm d} t
&=&
{\bf F}^{\rm\scriptscriptstyle I} _{\beta ,\alpha ,\mu }(1-z),\\
&&z\not\in]-\infty,0].
\end{eqnarray*}
\subsubsection{Solution $\sim(1-z)^{-\beta }$ at $1$}
\label{ss-4}
If $\beta \neq1,2,\dots$, then the following function is the unique solution of the
hypergeometric equation $\sim(1-z)^{-\beta }$ at $1$:
\begin{eqnarray*}
&&(1-z)^{-\beta } F_{-\beta ,\alpha,-\mu }(1-z)\\
&=&z^{-\alpha}(1-z)^{-\beta } F_{-\beta ,-\alpha,\mu }(1-z)\\
&=&z^{\frac{-1-\alpha+\beta -\mu }{2}}(1-z)^{-\beta }
F_{-\beta ,\mu ,-\alpha}(1-z^{-1})\\
&=&z^{\frac{-1-\alpha+\beta +\mu }{2}}(1-z)^{-\beta }
F_{-\beta ,-\mu ,\alpha}(1-z^{-1}).
\end{eqnarray*}
Integral representations for ${\rm Re}(1-\beta )> |{\rm Re}(\alpha+\mu )|$:
\begin{eqnarray*}
\int_z^1 t^{\frac{-1-\alpha+\beta +\mu }{2}}
(1-t)^{\frac{-1+\alpha-\beta +\mu }{2}}(t-z)^{\frac{-1-\alpha-\beta -\mu }{2}}{\rm d} t
&=&(1-z)^{-\beta } {\bf F}^{\rm\scriptscriptstyle I} _{-\beta ,\alpha,-\mu }(1-z),\\
&&z\not\in]-\infty,0]\cup[1,\infty[;\\
\int_1^z t^{\frac{-1-\alpha+\beta +\mu }{2}}
(t-1)^{\frac{-1+\alpha-\beta +\mu }{2}}(z-t)^{\frac{-1-\alpha-\beta -\mu }{2}}{\rm d} t
&=&(z-1)^{-\beta } {\bf F}^{\rm\scriptscriptstyle I} _{-\beta ,\alpha,-\mu }(1-z),\\
&&z\not\in]-\infty,1].
\end{eqnarray*}
\subsubsection{Solution $\sim z^{-a}$ at $\infty$}
\label{ss-5}
If $\mu \neq 1,2\dots$, then the following function is the unique solution of the
hypergeometric equation $\sim (-z)^{-a}=(-z)^{\frac{-1-\alpha-\beta +\mu }{2}}$ at $\infty$:
\begin{eqnarray*}
&&(-z)^{\frac{-1-\alpha-\beta +\mu }{2}} F_{-\mu ,\beta ,-\alpha}(z^{-1})\\
&=&(-z)^{\frac{-1-\alpha+\beta +\mu }{2}}(1-z)^{-\beta } F_{-\mu ,-\beta ,\alpha}(z^{-1})\\
&=&(1-z)^{\frac{-1-\alpha-\beta +\mu }{2}} F_{-\mu ,\alpha,-\beta }((1-z)^{-1})\\
&=&(-z)^{-\alpha}(1-z)^{\frac{-1+\alpha-\beta +\mu }{2}} F_{-\mu ,-\alpha,\beta }((1-z)^{-1}).
\end{eqnarray*}
Integral representations for ${\rm Re}(1-\mu )> |{\rm Re}(\alpha+\beta )|$:
\begin{eqnarray*}
\int_z^\infty t^{\frac{-1-\alpha+\beta +\mu }{2}}
(t-1)^{\frac{-1+\alpha-\beta +\mu }{2}}(t-z)^{\frac{-1-\alpha-\beta -\mu }{2}}{\rm d} t
&=&
z^{\frac{-1-\alpha-\beta +\mu }{2}} {\bf F}^{\rm\scriptscriptstyle I} _{-\mu ,\beta ,-\alpha}(z^{-1})
,\\
&&z\not\in]-\infty,1];\\
\int_{-\infty}^z (-t)^{\frac{-1-\alpha+\beta +\mu }{2}}
(1-t)^{\frac{-1+\alpha-\beta +\mu }{2}}(z-t)^{\frac{-1-\alpha-\beta -\mu }{2}}{\rm d} t
&=&
(-z)^{\frac{-1-\alpha-\beta +\mu }{2}} {\bf F}^{\rm\scriptscriptstyle I} _{-\mu ,\beta ,-\alpha}(z^{-1})
,\\
&&z\not\in[0,\infty[.
\end{eqnarray*}
\subsubsection{Solution $\sim z^{-b}$ at $\infty$}
\label{ss-6}
If $\mu \neq-1,-2,\dots$, then the following function is the unique solution of the
hypergeometric equation
$\sim(-z)^{-b}=(-z)^{\frac{-1-\alpha-\beta -\mu }{2}}$
at $\infty$:
\begin{eqnarray*}
&&(-z)^{\frac{-1-\alpha-\beta -\mu }{2}} F_{\mu ,\beta ,\alpha}(z^{-1})\\
&=&(-z)^{\frac{-1-\alpha+\beta -\mu }{2}}(1-z)^{-\beta } F_{\mu ,-\beta ,-\alpha}(z^{-1})\\
&=&(1-z)^{\frac{-1-\alpha-\beta -\mu }{2}} F_{\mu ,\alpha,\beta }((1-z)^{-1})\\
&=&(-z)^{-\alpha}(1-z)^{\frac{-1+\alpha-\beta -\mu }{2}} F_{\mu ,-\alpha,-\beta }((1-z)^{-1})
\end{eqnarray*}
Integral representations for ${\rm Re}(1+\mu )> |{\rm Re}(\alpha-\beta )|$:
\begin{eqnarray*}
\int_0^1 t^{\frac{-1-\alpha+\beta +\mu }{2}}
(1-t)^{\frac{-1+\alpha-\beta +\mu }{2}}(t-z)^{\frac{-1-\alpha-\beta -\mu }{2}}{\rm d} t
&=&
(-z)^{\frac{-1-\alpha-\beta -\mu }{2}} {\bf F}^{\rm\scriptscriptstyle I} _{\mu ,\beta ,\alpha}(z^{-1}),\\
&&z\not\in[0,\infty[;\\
\int_0^1 t^{\frac{-1-\alpha+\beta +\mu }{2}}
(1-t)^{\frac{-1+\alpha-\beta +\mu }{2}}(z-t)^{\frac{-1-\alpha-\beta -\mu }{2}}{\rm d} t
&=&
z^{\frac{-1-\alpha-\beta -\mu }{2}} {\bf F}^{\rm\scriptscriptstyle I} _{\mu ,\beta ,\alpha}(z^{-1}),\\
&&z\not\in]-\infty,1].
\end{eqnarray*}
\subsection{Connection formulas}
We use the solutions $\sim 1$ and $\sim z^{-\alpha}$ at $0$
as the basis. We show how the other solutions decompose in this basis.
For the first pair of relations we assume that $
z\not\in]-\infty,0]{\cup}[1,\infty[$:
\begin{eqnarray*}
{\bf F} _{\beta ,\alpha,\mu }(1-z)
&=&
\frac{\pi}{\sin\pi(-\alpha)
\Gamma\left(\frac{1-\alpha+\beta -\mu }{2}\right)
\Gamma\left(\frac{1-\alpha+\beta +\mu }{2}\right)} {\bf F} _{\alpha,\beta ,\mu }(z)\\
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!+\frac{\pi}{\sin\pi \alpha
\Gamma\left(\frac{1+\alpha+\beta -\mu }{2}\right)
\Gamma\left(\frac{1+\alpha+\beta +\mu }{2}\right)}z^{-\alpha} {\bf F} _{-\alpha,\beta ,-\mu }(z),\\[3ex]
(1-z)^{-\beta } {\bf F} _{-\beta ,\alpha,-\mu }(1-z)
&=&
\frac{\pi}{\sin\pi(-\alpha)
\Gamma\left(\frac{1-\alpha-\beta +\mu }{2}\right)
\Gamma\left(\frac{1-\alpha-\beta -\mu }{2}\right)} {\bf F} _{\alpha,\beta ,\mu }(z)\\
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!+\frac{\pi}{\sin\pi \alpha
\Gamma\left(\frac{1+\alpha-\beta +\mu }{2}\right)
\Gamma\left(\frac{1+\alpha-\beta -\mu }{2}\right)}z^{-\alpha} {\bf F} _{-\alpha,\beta ,-\mu }(z)
.\end{eqnarray*}
For the second pair we assume that $ z\not\in[0,\infty[$:
\begin{eqnarray*}
(-z)^{\frac{-1-\alpha-\beta +\mu }{2}} {\bf F} _{-\mu ,\beta ,-\alpha}(z^{-1})
&=&
\frac{\pi}{\sin\pi(-\alpha)
\Gamma\left(\frac{1-\alpha-\beta -\mu }{2}\right)
\Gamma\left(\frac{1-\alpha+\beta -\mu }{2}\right)} {\bf F} _{\alpha,\beta ,\mu }(z)\\
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!+\frac{\pi}{\sin\pi \alpha
\Gamma\left(\frac{1+\alpha+\beta -\mu }{2}\right)
\Gamma\left(\frac{1+\alpha-\beta -\mu }{2}\right)}(-z)^{-\alpha}
{\bf F} _{-\alpha,\beta ,-\mu }(z),
\\(-z)^{\frac{-1-\alpha-\beta -\mu }{2}} {\bf F} _{\mu ,\beta ,\alpha}(z^{-1})
&=&
\frac{\pi}{\sin\pi(-\alpha)
\Gamma\left(\frac{1-\alpha-\beta +\mu }{2}\right)
\Gamma\left(\frac{1-\alpha+\beta +\mu }{2}\right)} {\bf F} _{\alpha,\beta ,\mu }(z)\\
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!+\frac{\pi}{\sin\pi \alpha
\Gamma\left(\frac{1+\alpha+\beta +\mu }{2}\right)
\Gamma\left(\frac{1+\alpha-\beta +\mu }{2}\right)}(-z)^{-\alpha} {\bf F} _{-\alpha,\beta ,-\mu }(z)
.\end{eqnarray*}
The connection formulas are easily derived from the integral representations by looking at the behavior around $0$.
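As a concrete example, the first connection formula can be verified numerically as follows (a sketch; \texttt{bF} implements ${\bf F}_{\alpha,\beta,\mu}$ through \texttt{mpmath}'s ${}_2F_1$ and the Lie-algebraic dictionary above):
\begin{verbatim}
import mpmath

def bF(al, be, mu, z):
    # bold F_{al,be,mu}(z) = F(a,b;c;z)/Gamma(c) in classical parameters
    a, b, c = (1+al+be-mu)/2, (1+al+be+mu)/2, 1+al
    return mpmath.hyp2f1(a, b, c, z)/mpmath.gamma(c)

al, be, mu, z = [mpmath.mpf(s) for s in ('0.3', '0.45', '0.2', '0.35')]
pi, sin, G = mpmath.pi, mpmath.sin, mpmath.gamma

lhs = bF(be, al, mu, 1 - z)
rhs = (pi/(sin(-pi*al)*G((1-al+be-mu)/2)*G((1-al+be+mu)/2))*bF(al, be, mu, z)
       + pi/(sin(pi*al)*G((1+al+be-mu)/2)*G((1+al+be+mu)/2))
         *z**(-al)*bF(-al, be, -mu, z))
print(lhs - rhs)   # ~ 0 up to rounding
\end{verbatim}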
\subsection{Recurrence relations}
\label{s3.15}
The following recurrence relations follow easily from the commutation
relations of Subsect. \ref{commu}:
\begin{eqnarray*}
\partial_z {\bf F}^{\rm\scriptscriptstyle I} _{\alpha,\beta ,\mu }(z)&=&
\frac{1{+}\alpha{+}\beta {+}\mu }{2} {\bf F}^{\rm\scriptscriptstyle I} _{\alpha+1,\beta +1,\mu }(z),\\[2ex]
(z(1{-}z) \partial_z{+}\alpha(1{-}z){-}\beta z) {\bf F}^{\rm\scriptscriptstyle I} _{\alpha,\beta ,\mu }(z)&=&
\frac{{-}1{+}\alpha{+}\beta{+}\mu}{2}{\bf F}^{\rm\scriptscriptstyle I} _{\alpha-1,\beta -1,\mu }(z),\\[3ex]
((1-z) \partial_z-\beta ) {\bf F}^{\rm\scriptscriptstyle I} _{\alpha,\beta ,\mu }(z)&=&\frac{1{+}\alpha{-}\beta {-}\mu }{2}{\bf F}^{\rm\scriptscriptstyle I} _{\alpha{+}1,\beta {-}1,\mu }(z),\\[2ex]
(z \partial_z+\alpha) {\bf F}^{\rm\scriptscriptstyle I} _{\alpha,\beta ,\mu }(z)&=& \frac{1{+}\alpha{-}\beta{+}\mu}{2}{\bf F}^{\rm\scriptscriptstyle I} _{\alpha-1,\beta +1,\mu }(z),\\[4ex]
\left(z \partial_z+\frac{1+\alpha+\beta +\mu }{2}\right) {\bf F}^{\rm\scriptscriptstyle I} _{\alpha,\beta ,\mu }(z)&=&
\frac{1+\alpha+\beta +\mu }{2} {\bf F}^{\rm\scriptscriptstyle I} _{\alpha,\beta +1,\mu +1}(z)
, \\[2ex]
\left(z(1{-}z) \partial_z{-}\beta {+}\frac{1{+}\alpha{+}\beta {-}\mu }{2}(1{-}z)
\right) {\bf F}^{\rm\scriptscriptstyle I} _{\alpha,\beta ,\mu }(z)&=&\frac{1+\alpha-\beta -\mu }{2} {\bf F}^{\rm\scriptscriptstyle I} _{\alpha,\beta -1,\mu -1}(z),\\[3ex]
\left(z \partial_z+\frac{1+\alpha+\beta -\mu }{2}\right) {\bf F}^{\rm\scriptscriptstyle I} _{\alpha,\beta ,\mu }(z)&=&
\frac{1+\alpha+\beta -\mu }{2} {\bf F}^{\rm\scriptscriptstyle I} _{\alpha,\beta +1,\mu -1}(z)
, \\[2ex]
\left(z(1{-}z) \partial_z{-}\beta {+}\frac{1{+}\alpha{+}\beta {+}\mu }{2}(1{-}z)\right) {\bf F}^{\rm\scriptscriptstyle I} _{\alpha,\beta ,\mu }(z)&=&
\frac{1+\alpha-\beta +\mu }{2}
{\bf F}^{\rm\scriptscriptstyle I} _{\alpha,\beta -1,\mu +1}(z)
,\\[4ex]
\left((z-1) \partial_z+\frac{1+\alpha+\beta +\mu }{2}\right) {\bf F}^{\rm\scriptscriptstyle I} _{\alpha,\beta ,\mu }(z)&=&
\frac{1{+}\alpha{+}\beta {+}\mu }{2}
{\bf F}^{\rm\scriptscriptstyle I} _{\alpha{+}1,\beta ,\mu {+}1}(z)
, \\[2ex]
\left(z(1{-}z) \partial_z{+}\alpha{-}\frac{1{+}\alpha{+}\beta {-}\mu }{2}z\right)
{\bf F}^{\rm\scriptscriptstyle I} _{\alpha,\beta ,\mu }(z)&=&\frac{{-}1{+}\alpha{-}\beta{+}\mu}{2} {\bf F}^{\rm\scriptscriptstyle I} _{\alpha-1,\beta ,\mu -1}(z),
\\[3ex]
\left((z-1) \partial_z+\frac{1+\alpha+\beta -\mu }{2}\right)
{\bf F}^{\rm\scriptscriptstyle I} _{\alpha,\beta ,\mu }(z)&=&\frac{1{+}\alpha{-}\beta {-}\mu }{2}
{\bf F}^{\rm\scriptscriptstyle I} _{\alpha{+}1,\beta ,\mu {-}1}(z)
, \\[2ex]
\left(z(1{-}z) \partial_z{+}\alpha{-}\frac{1{+}\alpha{+}\beta {+}\mu }{2}z\right) {\bf F}^{\rm\scriptscriptstyle I} _{\alpha,\beta ,\mu }(z)&=&\frac{{-}1{+}\alpha{+}\beta{-}\mu}{2} {\bf F}^{\rm\scriptscriptstyle I} _{\alpha-1,\beta ,\mu +1}(z)
.
\end{eqnarray*}
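For instance, the first of these relations can be checked numerically (a sketch, with ${\bf F}^{\rm\scriptscriptstyle I}$ implemented directly from its definition):
\begin{verbatim}
import mpmath

def FI(al, be, mu, z):
    # F^I_{al,be,mu} = Gamma(a)Gamma(c-a)/Gamma(c) * F(a,b;c;z)
    a, b, c = (1+al+be-mu)/2, (1+al+be+mu)/2, 1+al
    return (mpmath.gamma(a)*mpmath.gamma(c - a)/mpmath.gamma(c)
            *mpmath.hyp2f1(a, b, c, z))

al, be, mu = mpmath.mpf('0.2'), mpmath.mpf('0.5'), mpmath.mpf('0.1')
z0 = mpmath.mpf('0.3')
lhs = mpmath.diff(lambda z: FI(al, be, mu, z), z0)
rhs = (1 + al + be + mu)/2 * FI(al + 1, be + 1, mu, z0)
print(lhs - rhs)   # ~ 0 up to numerical differentiation error
\end{verbatim}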
\subsection{Additional recurrence relations}
\label{addi}
There exist other, more complicated recurrence relations for hypergeometric functions, for example
\begin{eqnarray}
&&\Big(\frac{(1{+}\alpha{+}\beta {+}\mu )({-}1{-}\alpha{+}\beta {-}\mu )}{4}\nonumber\\
&&+\frac{(1{+}\alpha{+}\beta {+}\mu )(\mu {+}1)}{2}z
-(1{+}\mu )z(1{-}z) \partial_z\Big){\bf F}_{\alpha,\beta ,\mu }\nonumber
\\
&=&\frac{(1{+}\alpha{+}\beta {+}\mu )({-}1{-}\alpha{+}\beta {-}\mu )}{4}{\bf F}_{\alpha,\beta ,\mu +2}(z),
\label{r1}\\[5ex]
&&\Big(\frac{(1{+}\alpha{+}\beta {-}\mu )({-}1{-}\alpha{+}\beta {+}\mu )}{4}\nonumber\\
&&+\frac{(1{+}\alpha{+}\beta {-}\mu )(-\mu {+}1)}{2}z
-(1{-}\mu )z(1{-}z) \partial_z\Big){\bf F}_{\alpha,\beta ,\mu }\nonumber
\\
&=&\frac{(1{+}\alpha{+}\beta {-}\mu )({-}1{-}\alpha{+}\beta {+}\mu )}{4}{\bf F}_{\alpha,\beta ,\mu -2}(z)
\label{r2}.
\end{eqnarray}
Note that (\ref{r1}) follows from the 6th and 7th recurrence relation,
and (\ref{r2}) follows from the 5th and 8th of Subsect.
\ref{s3.15}.
\subsection{Degenerate case}
$\alpha=m\in{\mathbb Z}$ is the degenerate case of the hypergeometric equation at $0$.
We have then
\[{\bf F}
(a,b;1+m;z)=\sum_{n=\max(0,-m)}^\infty\frac{(a)_n(b)_n}{n!(m+n)!}z^n.\]
This easily implies the identity
\begin{equation}(a-m)_m(b-m)_m{\bf F}(a,b;1+m;z)=z^{-m}
{\bf F}(a-m,b-m;1-m;z).
\label{pwr}\end{equation}
Thus the two standard solutions determined by the behavior at zero are proportional to one another.
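The identity (\ref{pwr}) can be tested numerically if ${\bf F}$ is implemented through its regularized series; \texttt{mpmath}'s reciprocal gamma function \texttt{rgamma} vanishes at the poles of $\Gamma$, so nonpositive integer values of $c$ cause no problem. A sketch:
\begin{verbatim}
import mpmath
mpmath.mp.dps = 25

def bF(a, b, c, z, terms=80):
    # regularized series: sum_j (a)_j (b)_j rgamma(c+j) z^j / j!
    # rgamma vanishes at the poles of Gamma, so c = 0,-1,... is allowed
    return mpmath.fsum(mpmath.rf(a, j)*mpmath.rf(b, j)*mpmath.rgamma(c + j)
                       *z**j/mpmath.factorial(j) for j in range(terms))

a, b, m, z = mpmath.mpf('0.3'), mpmath.mpf('1.7'), 2, mpmath.mpf('0.2')
lhs = mpmath.rf(a - m, m)*mpmath.rf(b - m, m)*bF(a, b, 1 + m, z)
rhs = z**(-m)*bF(a - m, b - m, 1 - m, z)
print(lhs - rhs)   # identity (pwr), up to series truncation
\end{verbatim}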
One can also see the degenerate case in the integral representation (\ref{f4}).
If we go around $0,z$, the phase of the integrand changes by ${\rm e}^{2\pi{\rm i} c}={\rm e}^{2\pi{\rm i}\alpha}$. Therefore, if
$\alpha=m\in{\mathbb Z}$, then the loop around $0,z$ is closed on the Riemann surface of the integrand.
We have an additional integral representation and a generating function:
\begin{eqnarray*}
\frac{1}{2\pi {\rm i}}\int\limits_{[(0,z)^+]}(1-t)^{-a}(1-z\slash t)^{-b}
t^{-m-1}{\rm d} t&=&
(a)_m {\bf F}_{m,a+b-1,-a+b-m}(z)
\\&=&
z^{-m}(b)_{-m} {\bf F} _{-m,a+b-1,a-b+m}(z),\\[4ex]
(1-t)^{-a}(1-z\slash t)^{-b}
&=&\sum_{m\in{\mathbb Z}}t^m(a)_m {\bf F} _{m,a+b-1,b-a-m}(z).\nonumber
\end{eqnarray*}
To see the integral representation we note that the integral on the l.h.s. is annihilated by the hypergeometric operator. Then we check that its value at zero equals
\[\frac{1}{2\pi{\rm i}}\int\limits_{[0^+]}(1-t)^{-a}t^{-m-1}{\rm d} t=\frac{(a)_m}{m!},\]
see (\ref{double1}).
The second identity follows from (\ref{pwr}). Another way to see it is
to make the substitution $t=\frac{z}{s}$. Note that $[(0,z)^+]$ becomes $[(\infty,1)^+]$, which coincides with $[(0,z)^-]$. Then we change the sign in front of the integral and the orientation of the contour of integration, obtaining
\[\frac{z^{-m}}{2\pi{\rm i}}\int_{[(0,z)^+]}(1-s)^{-b}(1-z/s)^{-a}s^{-m-1}{\rm d} s.\]
Finally, we apply the first integral representation again.
The generating function follows from the integral representation.
\subsection{Jacobi polynomials}
If $-a=n=0,1,\dots$, then hypergeometric functions are polynomials.
We will call them the {\em Jacobi polynomials}.
Following
Subsect.
\ref{Hypergeometric type polynomials}, the Jacobi polynomials are
defined by the
Rodriguez-type formula
\[
\begin{array}{rl}R^{\alpha,\beta}_n(z):=
\frac{(-1)^n}{n!}z^{-\alpha}(z-1)^{-\beta} \partial_z^nz^{\alpha+n}(z-1)^{\beta+n}
.\end{array}\]
\begin{remark}
In most of the literature, the Jacobi polynomials are slightly different:
\[P_n^{\alpha,\beta}(z):=R_n^{\alpha,\beta}\Big(\frac{1-z}{2}\Big)
=(-1)^nR_n^{\beta,\alpha}\Big(\frac{1+z}{2}\Big).\]
\label{jaco}\end{remark}
The equation:
\begin{eqnarray*}
&&0={\cal F}(-n,1+\alpha+\beta+n;1+\alpha;z, \partial_z)R_n^{\alpha,\beta}(z)\\
&&=\Big(z(1-z) \partial_z^2+
\big((1+\alpha)(1-z)-(1+\beta
)z\big) \partial_z+n(n+\alpha+\beta+1)\Big)R_n^{\alpha,\beta}(z).
\end{eqnarray*}
Generating functions:
\begin{eqnarray*}
(1+t(1-z))^{\alpha}(1-tz)^{\beta}
&=&\sum\limits_{n=0}^\infty t^nR_n^{\alpha-n,\beta-n}(z),\\
(1+zt)^{-1-\alpha-\beta}(1+t)^{\alpha}
&=&\sum\limits_{n=0}^\infty t^nR_n^{\alpha-n,\beta}(z),\\
(1+(z-1)t)^{-1-\alpha-\beta}(1-t)^{\beta}
&=&\sum\limits_{n=0}^\infty t^nR_n^{\alpha,\beta-n}(z).
\end{eqnarray*}
Integral representations:
\begin{eqnarray*}
R_n^{\alpha,\beta}(z)
&=&\frac{1}{2\pi {\rm i}}\int\limits_{[0^+]}
(1+(1-z)t)^{\alpha+n}(1-zt)^{\beta+n}t^{-n-1}{\rm d} t\\
&=&\frac{1}{2\pi {\rm i}}\int\limits_{[0^+]}
(1+zt)^{-\alpha-\beta-n-1}(1+t)^{\alpha+n}t^{-n-1}{\rm d} t
\\
&=&\frac{1}{2\pi {\rm i}}\int\limits_{[0^+]}
(1+(z-1)t)^{-\alpha-\beta-n-1}(1-t)^{\beta+n}t^{-n-1}{\rm d} t.\end{eqnarray*}
Discrete symmetries:
\begin{eqnarray*}
R_n^{\alpha,\beta}(z)
&=&(1-z)^n
R_n^{\alpha,-1-\alpha-\beta-2n}\Big(\frac{z}{z-1}\Big)\\
=\ (-1)^nR_n^{\beta,\alpha}(1-z)
&=& (-z)^nR_n^{\beta,-1-\alpha-\beta-2n}\Big(\frac{z-1}{z}\Big)
\\=\
z^nR_n^{-1-\alpha-\beta-2n,\beta}\Big(\frac{1}{z}\Big)&=&
(z-1)^nR_n^{-1-\alpha-\beta-2n,\alpha}\Big(\frac{1}{1-z}\Big).
\end{eqnarray*}
Recurrence relations:
\begin{eqnarray*}
\partial_zR_n^{\alpha,\beta}(z)&=&-(\alpha+\beta+n+1)
R_{n-1}^{\alpha+1,\beta+1}(z),\\
(z(1-z) \partial_z-\alpha(z-1)-\beta z)R_n^{\alpha,\beta}(z)&=&(n+1)
R_{n+1}^{\alpha-1,\beta-1}(z),\\[4ex]
((1-z) \partial_z-\beta)R_n^{\alpha,\beta}(z)&=&-(\beta+n)
R_n^{\alpha+1,\beta-1}(z),\\
(z \partial_z+\alpha)R_n^{\alpha,\beta}(z)&=&(\alpha+n)R_n^{\alpha-1,\beta+1}(z),\\[6ex]
\left(z \partial_z-n\right)R_n^{\alpha,\beta}(z)&=&-(\alpha+n)
R_{n-1}^{\alpha,\beta+1}(z)
, \\
\left(z(1-z) \partial_z+1+\alpha+n-(1+\alpha+\beta+n)z\right)
R_n^{\alpha,\beta}(z)&=&(n+1)
R_{n+1}^{\alpha,\beta-1}(z)
,\\[4ex]
\left(z \partial_z+1+\alpha+\beta+n\right)R_n^{\alpha,\beta}(z)&=&(1+\alpha+\beta+n)
R_n^{\alpha,\beta+1}(z)
, \\
\left(z(1-z) \partial_z-n-\beta+nz\right)R_n^{\alpha,\beta}(z)&=&
-(\beta+n)R_n^{\alpha,\beta-1}(z),
\\[6ex]
\left((z-1) \partial_z-n\right)R_n^{\alpha,\beta}(z)&=&
(\beta+n)R_{n-1}^{\alpha+1,\beta}(z)
, \\
\left(z(1-z) \partial_z+\alpha-(1+\alpha+\beta+n)z\right)R_n^{\alpha,\beta}(z)
&=&(n+1)R_{n+1}^{\alpha-1,\beta}(z)
,\\[4ex]
\left((z-1) \partial_z+1+n+\alpha+\beta\right)R_n^{\alpha,\beta}(z)
&=&(1+n+\alpha+\beta)R_n^{\alpha+1,\beta}(z)
, \\
\left(z(1-z) \partial_z+\alpha+nz\right)R_n^{\alpha,\beta}(z)&=&
(n+\alpha)R_n^{\alpha-1,\beta}(z).
\end{eqnarray*}
The first, second, resp. third integral representation is
easily seen to be equivalent to the first, second, resp. third
generating function. The first follows immediately from the
Rodriguez-type formula.
The symmetries can be interpreted as a subset of Kummer's table. The first line corresponds to the symmetries of the solution regular at $0$, see
(\ref{stan}) (or Subsubsect. \ref{ss-1}). Note that from 4 expressions
in (\ref{stan}) only the first and the third survive, since $n=-a$ should not change. The second line corresponds to the solution regular at
$1$ (Subsubsect. \ref{ss-3}{}), finally the third line to the solution $\sim z^{-a}=z^n$ (Subsubsect. \ref{ss-5}{}).
The differential equation, the Rodriguez-type formula, the
first generating function, the first integral representation and the
first pair of recurrence relations are special cases of the corresponding formulas of Subsect.
\ref{Hypergeometric type polynomials}.
Note that Jacobi polynomials are regular at $0$ and $1$, and behave as $z^n$ at infinity. Thus (up to coefficients) they coincide with the corresponding $3$ standard solutions.
They have the following
values at $0$, $1$ and the behavior at $\infty$:
\begin{eqnarray*}R_n^{\alpha,\beta}(0)&=&\frac{(\alpha+1)_n}{n!},\ \ \
R_n^{\alpha,\beta}(1)\ =\ (-1)^n\frac{(\beta+1)_n}{n!},\\
\lim\limits_{z\to\infty}\frac{R_n^{\alpha,\beta}(z)}{z^n}&=&
(-1)^n\frac{(\alpha+\beta+n+1)_n}{n!}.\end{eqnarray*}
We have several alternative expressions for Jacobi polynomials:
\begin{eqnarray*}
R_n^{\alpha,\beta}(z)&=&\lim\limits_{\nu\to n}
(-1)^n(\nu-n)
{\bf F}_{\alpha,\beta,2\nu+\alpha+\beta+1}^{{\rm\scriptscriptstyle I}}(z)
=\frac{(\alpha+1)_n}{n!}F_{\alpha,\beta,2n+\alpha+\beta+1}(z)\\
&=&\frac{\Gamma(\alpha+1+n)}{\Gamma(\alpha+1)\Gamma(n+1)}F(-n,n+\alpha+\beta+1;\alpha+1;z)\\
&=&\sum\limits_{j=0}^n
\frac{(1+\alpha+j)_{n-j}(1+\alpha+\beta+n)_j}{j!(n-j)!}(-z)^j.
\end{eqnarray*}
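The last sum gives a convenient way to evaluate $R_n^{\alpha,\beta}$ and to cross-check the conventions of Remark \ref{jaco}; the sketch below compares it with \texttt{mpmath}'s Jacobi polynomial $P_n^{\alpha,\beta}(1-2z)$ and with the values at $0$ and $1$ listed above:
\begin{verbatim}
import mpmath

def R(n, al, be, z):
    # explicit sum for R_n^{al,be}(z) from the last line above
    return mpmath.fsum(mpmath.rf(1 + al + j, n - j)*mpmath.rf(1 + al + be + n, j)
                       /(mpmath.factorial(j)*mpmath.factorial(n - j))*(-z)**j
                       for j in range(n + 1))

n = 4
al, be, z = mpmath.mpf('0.3'), mpmath.mpf('-0.4'), mpmath.mpf('0.7')
# Remark above: P_n^{al,be}(w) = R_n^{al,be}((1-w)/2), i.e. R(z) = P(1-2z)
print(R(n, al, be, z) - mpmath.jacobi(n, al, be, 1 - 2*z))
print(R(n, al, be, 0) - mpmath.rf(al + 1, n)/mpmath.factorial(n))
print(R(n, al, be, 1) - (-1)**n*mpmath.rf(be + 1, n)/mpmath.factorial(n))
\end{verbatim}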
One way to derive the first of the above identities is to use integral representation
(\ref{bequ}). Using that $a$ is an integer we can replace the open curve $[1,(0,z)^+,1]$
with a closed loop $[\infty^-]$:
\begin{eqnarray*}&&
\lim\limits_{\nu\to n}
(-1)^n(\nu-n)
{\bf F}_{\alpha,\beta,2\nu+\alpha+\beta+1}^{{\rm\scriptscriptstyle I}}(z)\\
&=&
\lim\limits_{\nu\to n}\frac{\sin \nu\pi}{\pi}{\bf F}_{\alpha,\beta,2\nu+\alpha+\beta+1}^{{\rm\scriptscriptstyle I}}(z)\\
&=&\frac{1}{2\pi{\rm i}}
\int_{[\infty^-]} (-s)^{\beta+n}(1-s)^{\alpha+n}
(z-s)^{-1-\alpha-\beta-n}{\rm d} s.
\end{eqnarray*}
Then, making the substitutions
$s=z-\frac1t$, $s=zt$, resp. $s=(z-1)t$ we obtain the 1st, 2nd, resp. 3rd integral representation.
Additional identities valid in the degenerate case:
\begin{eqnarray*}
R_n^{\alpha,\beta}(z)&=&\frac{(n+1)_\alpha}{(\beta+n+1)_\alpha}(-z)^{-\alpha}
R_{n+\alpha}^{-\alpha,\beta}(z),\ \ \alpha\in{\mathbb Z};\\
R_n^{\alpha,\beta}(z)&=&\frac{(n+1)_\beta}{(\alpha+n+1)_\beta}(1-z)^{-\beta}
R_{n+\beta}^{\alpha,-\beta}(z),\ \ \beta\in{\mathbb Z};\\
R_n^{\alpha,\beta}(z)&=&
(-z)^{-\alpha}
(1-z)^{-\beta}
R_{n+\alpha+\beta}^{-\alpha,-\beta}(z),\ \ \alpha,\beta\in{\mathbb Z}.
\end{eqnarray*}
There is a region of parameters where Jacobi polynomials vanish identically. This happens iff
$\alpha,\beta\in{\mathbb Z}$ and $(\alpha,\beta)$ lies in the triangle
\begin{eqnarray}\nonumber
0&\leq&\alpha+n,\\\nonumber
0&\leq&\beta+n,\\
0&\leq&-\alpha-\beta-n-1.\label{mumu}
\end{eqnarray}
In the analysis of symmetries of Jacobi polynomials it is useful to go back to the Lie-algebraic parameters, more precisely, to set $\mu:=-\alpha-\beta-2n-1$.
Then (\ref{mumu}) acquires a more symmetric form, since we can replace its last condition by
\[0 \leq\mu+n.\]
One can distinguish 3 strips where Jacobi polynomials have special properties. Note that the intersection of the strips below is precisely the triangle described in (\ref{mumu}).
\begin{enumerate}
\item $\mu\in{\mathbb Z}$ and $-n\leq\mu\leq-1$ or, equivalently, $\alpha+\beta\in{\mathbb Z}$ and $-2n\leq\alpha+\beta\leq-n-1$. Then $R_n^{\alpha,\beta}=0$ or
\[\deg R_n^{\alpha,\beta}=\mu+n=-\alpha-\beta-n-1.\]
\item $\alpha\in{\mathbb Z}$ and $-n\leq\alpha\leq-1$. Then $R_n^{\alpha,\beta}=0$ or
\[R_n^{\alpha,\beta}=z^{-\alpha} W,\ \ \ \ W\ \ \hbox{not divisible by} \ z.\]
\item $\beta\in{\mathbb Z}$ and $-n\leq\beta\leq-1$. Then $R_n^{\alpha,\beta}=0$ or
\[R_n^{\alpha,\beta}=(z-1)^{-\beta} V,\ \ \ \ V\ \ \hbox{not divisible by} \ z-1.\]
\end{enumerate}
These regions are presented in the following picture:
\begin{center}\includegraphics[width=7cm,totalheight=7cm]{jacobi.pdf}
\end{center}
Finally Jacobi polynomials satisfy some identities related to Subsect. \ref{addi}.
An additional generating function:
\begin{eqnarray}\nonumber
2^{\alpha+\beta}r^{-1}(1-t+r)^{-\alpha}(1+t+r)^{-\beta}
&=&\sum\limits_{n=0}^\infty t^nR_n^{\alpha,\beta}(z),\\
\hbox{where}\ \ \ r=\sqrt{(1-t)^2+4zt}.&&
\label{gg1}\end{eqnarray}
Additional recurrence relations:
\begin{eqnarray*}
\Big((n+\alpha+\beta+1)\big((n+\beta+1)-
(2n+\alpha+\beta+2)z\big) \hspace{-10ex} &&\\
\!\!\!\!\!+(2n+\alpha+\beta+2)z(1-z) \partial_z\Big)R_n^{\alpha,\beta}(z)
&=&(n+\alpha+\beta+1)(n+1)R_{n+1}^{\alpha,\beta}(z),\\[3ex]
\Big(n\big((n+\alpha)-(2n+\alpha+\beta)z\big)\ \ \ \ \ \ \ \ \ &&\\-(2n+\alpha+\beta)z(1-z) \partial_z\Big)
R_n^{\alpha,\beta}(z)&=&(n+\alpha)(n+\beta)R_{n-1}^{\alpha,\beta}(z).
\end{eqnarray*}
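The generating function (\ref{gg1}) can be checked by Taylor expansion in $t$; a sketch (reusing the explicit sum for $R_n^{\alpha,\beta}$):
\begin{verbatim}
import mpmath

al, be, z = mpmath.mpf('0.2'), mpmath.mpf('0.6'), mpmath.mpf('0.3')

def G(t):
    r = mpmath.sqrt((1 - t)**2 + 4*z*t)
    return 2**(al + be)/r*(1 - t + r)**(-al)*(1 + t + r)**(-be)

def R(n):
    return mpmath.fsum(mpmath.rf(1 + al + j, n - j)*mpmath.rf(1 + al + be + n, j)
                       /(mpmath.factorial(j)*mpmath.factorial(n - j))*(-z)**j
                       for j in range(n + 1))

coeffs = mpmath.taylor(G, 0, 5)    # Taylor coefficients at t = 0
print([coeffs[n] - R(n) for n in range(6)])   # all ~ 0
\end{verbatim}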
\subsection{Special cases}
Beside the polynomial and degenerate cases, the hypergeometric equation has a number of other special cases. In their description most of the time we will use the Lie-algebraic parameters,
which are here more convenient than the classical parameters.
\subsubsection{Gegenbauer equation through an affine transformation}
Consider a
hypergeometric equation whose two parameters coincide up to a sign.
After applying an appropriate symmetry we can assume that they are at the first and second place, and that they are equal to one another. In other words,
$\alpha=\beta$. A simple affine transformation (\ref{ha2}) can be then applied to obtain a reflection invariant equation called the Gegenbauer equation. We study it separately in Sect. \ref{s7}.
\subsubsection{Gegenbauer equation through a quadratic transformation}
Hypergeometric equations with one of the parameters equal to
$\frac12$ or $-\frac12$ also enjoy special properties. After applying,
if needed, one of the symmetries, we can assume that $\mu=\pm\frac12$.
Then identity (\ref{1/2}) or (\ref{-1/2}) leads to the Gegenbauer equation.
\subsubsection{Chebyshev equation}
Equations with a pair of parameters equal to $\pm\frac12$ have even more special properties. After applying one of the symmetries we can assume that $\alpha=\beta=\frac12$. Thus we are reduced to the Chebyshev equation of the first kind; see (\ref{cheb1}). Another option is to reduce it to the Chebyshev equation of the second kind, which corresponds to $\alpha=\beta=-\frac12$;
see (\ref{cheb2}).
\subsubsection{Legendre equation}
Let ${\mathcal L}$ be the sublattice of ${\mathbb Z}^3$ consisting of points whose
sum of
coordinates is even. It is a sublattice of index $2$.
By using recurrence relations of
Subsect. \ref{s3.15} we can pass from hypergeometric functions with
given Lie-algebraic parameters $(\alpha,\beta,\mu)$ to parameters from
$(\alpha,\beta,\mu)+{\mathcal L}$.
This is especially useful in the degenerate
case, when some of the parameters are integers. In particular, if two
of the parameters are integers, by applying recurrence relations we
can make both of them zero. By applying an appropriate symmetry
we can assume that $\alpha=\beta=0$. Thus we obtain the
Legendre equation, see (\ref{legen}).
\subsubsection{Elementary solutions}
One can easily check that
\[F(a,b;b;z)=F_{b-1,a,b-a}(z)=(1-z)^{-a}.\]
Therefore, using Kummer's table and recurrence relations we see that if
\begin{equation}\epsilon_1\alpha+\epsilon_2\beta+\epsilon_3\mu\ \ \hbox{ is an odd
integer for some}\ \ \
\epsilon_1,\epsilon_2,\epsilon_3\in\{-1,1\}\label{condi}\end{equation}
then $F_{\alpha,\beta,\mu}$ is an elementary function involving power
functions, but not logarithms.
\subsubsection{Fully degenerate case}
An interesting situation arises if $\alpha,\beta,\mu\in{\mathbb Z}$,
that is, we have the degenerate case at all singular points. We can distinguish two situations:
\begin{enumerate}
\item If $\alpha+\beta+\mu$ is even, by walking on the lattice ${\mathcal L}$ we can reduce ourselves to the
equation for the {\em complete elliptic integral}, which corresponds to $\alpha=\beta=\mu=0$.
\item If $\alpha+\beta+\mu$ is odd, by walking on the lattice ${\mathcal L}$
we can reduce ourselves to the equation for the Legendre polynomial of degree $0$, which corresponds to
$\alpha=\beta=0$,
$\mu=1$. This equation is solved by
\begin{eqnarray*}F_{0,0,1}(z)&=F(0,1;1;z)&=1,\\
z^{-1}F_{1,0,0}(z^{-1})&=z^{-1}F(1,1;2;z^{-1})&=\log z-\log(z-1),
\end{eqnarray*}
where we used Kummer's table and
\[F(1,1;2;w)=-w^{-1}\log(1-w).\]
\end{enumerate}
\section{The ${}_1F_1$ and ${}_2F_0$ equation}
\label{s4}
\subsection{The ${}_1F_1$ equation }
Let $a,c\in{\mathbb C}$.
The {\em confluent} or the {\em ${}_1F_1$ equation} is given by the operator
\begin{equation}
{\cal F}(a;c;z, \partial_z):=z \partial_z^2+(c-z) \partial_z-a.
\label{f1c}\end{equation}
This equation is a limiting case of the hypergeometric
equation:
\[\lim_{b\to\infty}
\frac1b{\cal F}(a,b;c;z/b, \partial_{z/b})=
{\cal F}(a;c;z, \partial_z).\]
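This limit is easy to observe at the level of the functions as well; in the sketch below the differences shrink like $O(1/b)$:
\begin{verbatim}
import mpmath

a, c, z = 0.4, 1.3, 0.8
exact = mpmath.hyp1f1(a, c, z)
for b in (10, 100, 1000):
    print(b, mpmath.hyp2f1(a, b, c, z/b) - exact)
# the differences shrink like O(1/b)
\end{verbatim}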
\subsection{The ${}_2F_0$ equation}
Parallel to the ${}_1F_1$ equation we will consider
the {\em ${}_2F_0$ equation}, given by the operator
\begin{equation}
{\cal F}(a,b;-;z, \partial_z):=z^2 \partial_z^2+(-1+(1+a+b)z) \partial_z+ab,
\label{g8}\end{equation}
where $a,b\in {\mathbb C}$.
This
equation is another limiting case of the hypergeometric
equation:
\[\lim_{c\to\infty}
{\cal F}(a,b;c;cz, \partial_{(cz)})=
-{\cal F}(a,b;-;z, \partial_z).\]
\subsection{Equivalence of the ${}_1F_1$ and ${}_2F_0$ equation}
Note that
\[{\cal F}(a,b;-;z, \partial_z)=w^2 \partial_w^2+(-w^2+(1-a-b)w) \partial_w+ab\,\]
where $w=-z^{-1}$, $z=-w^{-1}$.
Moreover,
\begin{equation}
(-z)^{a+1}{\cal F}(a,b;-;z, \partial_z)(-z)^{-a}
={\cal F}(a;1+a-b;w, \partial_w).
\label{g8a}\end{equation}
Hence the ${}_2F_0$ equation is equivalent to the ${}_1F_1$
equation. We will treat the ${}_1F_1$ equation as the principal one.
The relationship between the parameters is
\[c=1+a-b,\ \ \ \ b=1+a-c.\]
\subsection{Lie-algebraic parameters}
Instead of the classical
parameters we usually prefer the Lie-algebraic parameters $\alpha,\theta$:
\[\begin{array}{lll}
\alpha:=c-1=a-b,&& \theta: =-c+2a=-1+a+b;\\[2ex]
a=\frac{1+\alpha+\theta }{2},& b=\frac{1 -\alpha+\theta}{2},\ \ \ &
c=1+\alpha.
\end{array}
\]
In these parameters the ${}_1F_1$ operator (\ref{f1c}) becomes
\begin{eqnarray*}
{\cal F}_{\theta ,\alpha}(z, \partial_z)
&=&z \partial_z^2+(1+\alpha-z) \partial_z-\frac{1}{2}(1+\theta +\alpha)
,\end{eqnarray*}
and the ${}_2F_0$ operator (\ref{g8}) becomes
\begin{eqnarray*}
\tilde{\cal F}_{\theta,\alpha}(z, \partial_z)&=&
z^2 \partial_z^2+(-1+(2+\theta)z) \partial_z+\frac14(1+\theta)^2-\frac14\alpha^2.
\end{eqnarray*}
The Lie-algebraic parameters have an interesting interpretation in terms of a natural basis of a ``Cartan algebra'' of the Lie algebra $sch(2)$ \cite{DM}.
\subsection{Integral representations}
Two kinds of integral representations of solutions to the
${}_1F_1$ equation
are described below:
\begin{theoreme}\begin{enumerate} \item
Let $[0,1]\ni t\mapsto\gamma(t)$ satisfy
\[t^{a-c+1}{\rm e}^t(t-z)^{-a-1}\Big|_{\gamma(0)}^{\gamma(1)}=0.\]
Then
\begin{equation}
{\cal F}(a;c;z, \partial_z)
\int_\gamma t^{a-c}{\rm e}^t(t-z)^{-a}{\rm d} t=0.\label{dad1}\end{equation}
\item
Let $[0,1]\ni t\mapsto\gamma(t)$ satisfy
\[{\rm e}^{\frac{z}{t}}t^{-c}(1-t)^{c-a}\Big|_{\gamma(0)}^{\gamma(1)}=0.\]
Then
\begin{equation}
{\cal F}(a;c;z, \partial_z)\int_\gamma{\rm e}^{\frac{z}{t}}t^{-c}(1-t)^{c-a-1}{\rm d} t
=0.
\label{dad}
\end{equation}
\end{enumerate}\label{dad4}\end{theoreme}
\noindent{\bf Proof.}\ \
We check that for any contour $\gamma$ the left hand sides of (\ref{dad1}) and (\ref{dad}) equal
\begin{eqnarray*}
&&
-a\int_\gamma\Big( \partial_t
t^{a-c+1}{\rm e}^t(t-z)^{-a-1}\Big){\rm d} t,\\
&&-\int_\gamma\Big( \partial_t
{\rm e}^{\frac{z}{t}}t^{-c}(1-t)^{c-a}\Big){\rm d} t
\end{eqnarray*}
respectively. \hfill$\Box$\medskip
For solutions of the ${}_2F_0$ equation we also have two kinds of
integral representations:
\begin{theoreme}
Let $[0,1]\ni t\mapsto\gamma(t)$ satisfy
\[{\rm e}^{-\frac{1}{t}}t^{b-a-1}(t-z)^{-b-1}
\Big|_{\gamma(0)}^{\gamma(1)}=0.\]
Then
\begin{equation}{\cal F}(a,b;-;z, \partial_z)\int_\gamma{\rm e}^{-\frac{1}{t}}t^{b-a-1}(t-z)^{-b}{\rm d} t=0.
\label{dad2}\end{equation}
\end{theoreme}
\noindent{\bf Proof.}\ \
We check that for any contour $\gamma$ (\ref{dad2}) equals
\[-b\int_\gamma\Big( \partial_t{\rm e}^{-\frac{1}{t}}t^{b-a-1}(t-z)^{-b-1}\Big){\rm d} t.\]
\hfill$\Box$\medskip
The second integral representation is obtained if we interchange $a$ and $b$.
\subsection{Symmetries}
\label{symcom1}
The following operators equal ${\cal F}_{\theta ,\alpha}(w, \partial_w)$ for the appropriate $w$:
\[\begin{array}{rrcl}
w=z:&&&\\
&&{\cal F}_{\theta ,\alpha}(z, \partial_z),&\\[1ex]
&z^{-\alpha}&{\cal F}_{\theta ,-\alpha}(z, \partial_z)&z^{\alpha},\\
w=-z:&&&\\
&-{\rm e}^{-z}&{\cal F}_{-\theta ,\alpha}(z, \partial_z)&{\rm e}^z,
\\[1ex]
&-{\rm e}^{-z}z^{-\alpha}&{\cal F}_{-\theta ,-\alpha}(z, \partial_z)&{\rm e}^zz^{\alpha}
.\end{array}\label{newnot1}\]
The third symmetry is sometimes called the {\em 1st Kummer transformation}.
Symmetries of the ${}_1F_1$ operators can be interpreted as the ``Weyl group'' of the Lie algebra $sch(2)$.
\subsection{Factorizations and commutation relations}
\label{symcom1a}
There are several ways of factorizing the ${}_1F_1$ operator.
\begin{eqnarray*}
{\cal F}_{\theta,\alpha}&=&\Big(z\partial_z+1+\alpha-z\Big)\partial_z-\frac12(\theta+\alpha+1),\\
&=&\partial_z\Big(z\partial_z+\alpha-z\Big)-\frac12(\theta+\alpha-1),\\
&=&\Big(z\partial_z+1+\alpha\Big)\Big(\partial_z-1\Big)+\frac12(-\theta+\alpha+1),\\
&=&\Big(\partial_z-1\Big)\Big(z\partial_z+\alpha\Big)+\frac12(-\theta+\alpha-1);
\end{eqnarray*}
\begin{eqnarray*}
z{\cal F}_{\theta,\alpha}&=&\Big(z\partial_z+\frac12(\theta+\alpha-1)\Big)\Big(
z\partial_z+\frac12(-\theta+\alpha+1)-z\Big)
\\&&-\frac14(-\theta+\alpha+1)(\theta+\alpha-1),\\
&=&\Big(
z\partial_z+\frac12(-\theta+\alpha-1)-z\Big)
\Big(z\partial_z+\frac12(\theta+\alpha+1)\Big)
\\&&-\frac14(-\theta+\alpha-1)(\theta+\alpha+1).
\end{eqnarray*}
One can use the factorizations to derive the following commutation relations:
\[\begin{array}{rrl}
& \partial_z&{\cal F}_{\theta ,\alpha}\\[1ex]
&=\ \ \ {\cal F}_{\theta +1,\alpha+1}& \partial_z,\\[3ex]
&(z \partial_z+\alpha-z)&{\cal F}_{\theta ,\alpha}\\[1ex]
&=\ \ \ {\cal F}_{\theta -1,\alpha-1}&(z \partial_z+\alpha-z),\\[3ex]
&(z \partial_z+\alpha)&
{\cal F}_{\theta ,\alpha}\\[1ex]
&=\ \ \ {\cal F}_{\theta +1,\alpha-1}&(z \partial_z+\alpha),\\[3ex]
& ( \partial_z-1)&{\cal F}_{\theta ,\alpha}\\[1ex]
&=\ \ \ {\cal F}_{\theta -1,\alpha+1}&( \partial_z-1);\\[3ex]
&\big( z \partial_z+\frac{1}{2}(\theta + \alpha+1)\big)&z{\cal F}_{\theta ,\alpha}\\[1ex]
&=\ \ \ z{\cal F}_{\theta +2,\alpha}&\big( z \partial_z+\frac{1}{2}(\theta + \alpha+1)\big),\\[3ex]
&\big(z \partial_z+\frac{1}{2}(-\theta +\alpha+1)-z\big)
&z{\cal F}_{\theta ,\alpha}\\[1ex]
&=\ \ \ z{\cal F}_{\theta -2,\alpha}&\big(z \partial_z+\frac{1}{2}(-\theta +\alpha+1)-z\big).
\end{array}\]
Each of these commutation relations can be associated with
a ``root'' of the Lie algebra
$sch(2)$.
\subsection{Canonical forms}
The natural weight of the ${}_1F_{1}$ operator equals $z^{\alpha}{\rm e}^{-z}$, so that
\[{\cal F}_{\theta,\alpha}=
z^{-\alpha}{\rm e}^z\partial_zz^{\alpha+1}{\rm e}^{-z}\partial_z
-\frac12(1+\alpha+\theta).\]
The balanced form of the ${}_1F_{1}$ operator is
\begin{eqnarray*}
z^{\frac{\alpha}{2}}{\rm e}^{-\frac{z}{2}}{\cal F}_{\theta,\alpha}
z^{-\frac{\alpha}{2}}{\rm e}^{\frac{z}{2}}&=&
\partial_zz\partial_z-\frac{z}{4}-\frac{\theta}{2}-\frac{\alpha^2}{4z}.
\end{eqnarray*}
\begin{remark}
We have
\begin{eqnarray*}
4z^{\frac{\alpha}{2}-1}{\rm e}^{-\frac{z}{2}}{\cal F}_{0,\alpha}(z,\partial_z)
z^{-\frac{\alpha}{2}}{\rm e}^{\frac{z}{2}}&=&
\partial_w^2+\frac{1}{w}\partial_w
-1-\frac{\alpha^2}{4w^2},\ \ \ \ z=2w;\\
-4z^{\frac{\alpha}{2}-1}{\rm e}^{-\frac{z}{2}}{\cal F}_{0,\alpha}(z,\partial_z)
z^{-\frac{\alpha}{2}}{\rm e}^{\frac{z}{2}}&=&
\partial_u^2+\frac{1}{u}\partial_u
+1-\frac{\alpha^2}{4u^2},\ \ \ \ z=2{\rm i} u,
\end{eqnarray*}
which are the operators for the {\em modified Bessel} and {\em Bessel equations} of parameter $\frac{\alpha}{2}$.
Thus both these equations essentially coincide with the balanced form
of the ${}_1F_1$ equation with $\theta=0$. We will discuss them further
in Rem. \ref{bessel0}.
\label{bess}
\end{remark}
The Schr\"odinger form of the ${}_1F_1$ equation is
\begin{eqnarray}
z^{\frac{\alpha}{2}-\frac12}{\rm e}^{-\frac{z}{2}}{\cal F}_{\theta,\alpha}
z^{-\frac{\alpha}{2}-\frac12}{\rm e}^{\frac{z}{2}}&=&
\partial_z^2
-\frac{1}{4}-\frac{\theta}{2z}+\Big(\frac14-\frac{\alpha^2}{4}\Big)\frac{1}{z^2}.\label{whitta}
\end{eqnarray}
\begin{remark}
In the literature the equation given by (\ref{whitta}) is often called the {\em Whittaker equation}. Its standard form is
\[ \partial_z^2
-\frac{1}{4}+\frac{\kappa}{z}+\Big(\frac14-\mu^2\Big)\frac{1}{z^2}.\label{witta}
\]
Thus, $\kappa$, $\mu$ correspond to $-\frac{\theta}{2}$, $\frac{\alpha}{2}$.
\end{remark}
The natural weight of the ${}_2F_0$ operator
equals $z^{\theta}{\rm e}^{\frac{1}{z}}$, so that
\[\tilde{\cal F}_{\theta,\alpha}=
z^{-\theta}{\rm e}^{-\frac{1}{z}}\partial_zz^{\theta+2}{\rm e}^{\frac{1}{z}}\partial_z
+\frac{(1+\theta)^2}{4}-\frac{\alpha^2}{4}.\]
The balanced form of the ${}_2F_0$ operator is
\begin{eqnarray}
z^{\frac{\theta}{2}}{\rm e}^{\frac{1}{2z}}\tilde{\cal F}_{\theta,\alpha}
z^{-\frac{\theta}{2}}{\rm e}^{-\frac{1}{2z}}
&=&
\partial_zz^2\partial_z-\frac{1}{4z^2}+\frac{\theta}{2z}+\frac{1-\alpha^2}{4}.
\label{besslo}\end{eqnarray}
The symmetries $\alpha\mapsto-\alpha$, as well as $(z,\theta)\mapsto(-z,-\theta)$ are obvious in both balanced forms and in the Whittaker equation.
\subsection{The ${}_1F_1$ function}
Equation (\ref{f1c}) has a regular singular point at $0$.
Its indices at $0$ are equal to $0$ and $1-c$.
For $c\neq 0,-1,-2,\dots$, the unique solution of the confluent equation analytic
at $0$ and equal to 1 at 0 is called
the ${}_1F_1$ hypergeometric function or
the confluent function. It is equal to
\[F(a;c;z):=\sum_{n=0}^\infty
\frac{(a)_n}{
(c)_n}\frac{z^n}{n!}.\]
It is defined for
$c\neq0,-1,-2,\dots$.
Sometimes it is more convenient to consider
the function
\[ {\bf F} (a;c;z):=\frac{F(a;c;z)}{\Gamma(c)}=
\sum_{n=0}^\infty
\frac{(a)_n}{
\Gamma(c+n)}\frac{z^n}{n!}.\]
Another useful function proportional to ${}_1F_1$ is
\begin{eqnarray*}
{\bf F}^{\rm\scriptscriptstyle I} (a;c;z)&:=&\frac{\Gamma(a)\Gamma(c-a)}{\Gamma(c)}F(a;c;z).
\end{eqnarray*}
The confluent function can be obtained as the limit of the hypergeometric
function:
\[F(a;c;z)=\lim_{b\to\infty}F(a,b;c;z/b).\]
It satisfies the so-called {\em Kummer's identity}:
\begin{equation} F(a;c;z)={\rm e}^z F\left(c-a;c;-z\right).
\end{equation}
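A one-line numerical check of Kummer's identity (a sketch):
\begin{verbatim}
import mpmath

a, c, z = 0.7, 1.9, -2.5
print(mpmath.hyp1f1(a, c, z) - mpmath.exp(z)*mpmath.hyp1f1(c - a, c, -z))
# ~ 0 up to rounding
\end{verbatim}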
Integral representations: for all parameters,
\begin{eqnarray*}
\frac{1}{2\pi {\rm i}}\int\limits
_{]-\infty,(0,z)^+,-\infty[} t^{a-c}{\rm e}^t(t-z)^{-a}{\rm d} t
&=& {\bf F} (a;c;z),\end{eqnarray*}
for ${\rm Re} a>0,\ {\rm Re} (c-a)>0$
\begin{eqnarray*}\int\limits_{[1,+\infty[}{\rm e}^{\frac{z}{t}}t^{-c}(t-1)^{c-a-1}{\rm d} t
&=& {\bf F}^{\rm\scriptscriptstyle I} (a;c;z),\end{eqnarray*}
and for ${\rm Re} (c-a)>0$
\begin{eqnarray}\frac{1}{2\pi{\rm i}}\int\limits_{[1,0^+,1]}{\rm e}^{\frac{z}{t}}(-t)^{-c}(-t+1)^{c-a-1}{\rm d} t
&=& \frac{\sin\pi a}{\pi}{\bf F}^{{\rm\scriptscriptstyle I}} (a;c;z).
\label{kumme}\end{eqnarray}
In the Lie-algebraic parameters:
\begin{eqnarray*}
F_{\theta ,\alpha}(z)&:=&F\Bigl(\frac{1+\alpha+\theta }{2};1+\alpha;z\Bigr)
,\\
{\bf F} _{\theta ,\alpha}(z)&:=&
{\bf F} \Bigl(\frac{1+\alpha+\theta }{2};1+\alpha;z\Bigr)\\
&=&
\frac{1}{\Gamma(\alpha+1)}F_{\theta ,\alpha}(z),\\
{\bf F}^{\rm\scriptscriptstyle I} _{\theta ,\alpha}(z)&:=&
{\bf F}^{\rm\scriptscriptstyle I} \Bigl(\frac{1+\alpha+\theta }{2};1+\alpha;z\Bigr)\\&=&
\frac{\Gamma(\frac{1+\alpha+\theta}{2})
\Gamma(\frac{1+\alpha-\theta}{2})}{\Gamma(\alpha+1)}
F_{\theta ,\alpha}(z).\end{eqnarray*}
\begin{remark} In the literature the ${}_1F_1$ function is often called {\em Kummer's function} and denoted
\[M(a,c,z):=F(a;c;z).\]
One also uses the {\em Whittaker function of the 1st kind}
\[M_{\kappa,\mu}(z):=\exp(-z/2)z^{\mu+1/2}M\Big(\mu-\kappa+\frac12,1+2\mu,z\Big),\]
which solves the Whittaker equation. \end{remark}
\subsection{The ${}_2F_0$
function}
We define, for $z\in{\mathbb C}\backslash[0,+\infty[$,
\[F(a,b;-;z):=\lim_{c\to\infty}F(a,b;c;cz),\]
where $|\arg c-\pi|<\pi-\epsilon$, $\epsilon>0$.
It extends to an analytic function on the universal cover of
${\mathbb C}\backslash\{0\}$
with a branch point of infinite order at $0$.
It has the following asymptotic expansion:
\[
F(a,b;-;z)\sim\sum_{n=0}^\infty\frac{(a)_n(b)_n}{n!}z^n,
\ |\arg z-\pi|<\pi-\epsilon.
\]
Sometimes instead of ${}_2F_0$ it is useful to consider the function
\begin{eqnarray*}
{\bf F}^{\rm\scriptscriptstyle I} (a,b;-;z)&:=&\Gamma(a)F(a,b;-;z).
\end{eqnarray*}
We have an integral representation for ${\rm Re} a>0$
\[\int_0^\infty
{\rm e}^{-\frac{1}{t}}t^{b-a-1}(t-z)^{-b}{\rm d} t
= {\bf F}^{\rm\scriptscriptstyle I} (a,b;-;z), \ \ z\not\in[0,\infty[,
\]and without a restriction on parameters
\[\frac{1}{2\pi{\rm i}}\int\limits_{[0,z^+,0]}
{\rm e}^{-\frac{1}{t}}t^{b-a-1}(t-z)^{-b}{\rm d} t
= \frac{\sin\pi a}{\pi}{\bf F}^{\rm\scriptscriptstyle I} (a,b;-;z), \ \ z\not\in[0,\infty[.
\]
When we use the Lie-algebraic parameters, we denote the ${}_2F_0$ function by
$\tilde F$ and $\tilde {\bf F}$. The tilde is needed to avoid the confusion with the ${}_1F_1$ function:
\begin{eqnarray*}
\tilde F_{\theta ,\alpha}(z)&:=&F\Bigl(\frac{1+\alpha+\theta }{2},\frac{1-\alpha+\theta }{2};-;z\Bigr),\\
\tilde {\bf F}^{\rm\scriptscriptstyle I} _{\theta ,\alpha}(z)&:=& {\bf F}^{\rm\scriptscriptstyle I} \Bigl(\frac{1 -\alpha+\theta}{2},
\frac{1+\alpha+\theta }{2};-;z\Bigr)\\
&=& \Gamma\Big(\frac{1-\alpha+\theta}{2}\Big)\tilde F_{\theta ,\alpha}(z).
\end{eqnarray*}
\begin{remark} In the literature the ${}_2F_0$ function is seldom used.
Instead one uses {\em Tricomi's function}
\[U(a,c,z):=z^{-a}F(a,1+a-c;-;-z^{-1}).\]
It is one of the solutions of the ${}_1F_1$ equation, which we will discuss in
Subsubsect. \ref{s4.6-}.
One also uses the {\em Whittaker function of the 2nd kind}
\[W_{\kappa,\mu}(z):=\exp(-z/2)z^{\mu+1/2}U\Big(\mu-\kappa+\frac12;1+2\mu;z\Big),\]
which solves the Whittaker equation. \end{remark}
\subsection{Standard solutions}
The ${}_1F_1$ equation has two singular points. $0$ is a regular
singular point and with each of its two indices we can associate the
corresponding solution. $\infty$ is not a regular singular point. However, we can define two solutions with a simple behavior around $\infty$. Altogether we obtain 4 {\em standard solutions}, which we will describe in this subsection.
It follows by Thm \ref{dad4} that, for appropriate contours $\gamma_1$, $\gamma_2$, the integrals
\begin{eqnarray*}
&&\int\limits
_{\gamma_1}
t^{\frac{-1+\theta -\alpha}{2}}{\rm e}^t(t-z)^{\frac{-1-\theta -\alpha}{2}}{\rm d} t,
\\
&&
\int\limits_{\gamma_2}{\rm e}^{\frac{z}{t}}t^{-1-\alpha}(t-1)^{\frac{-1-\theta +\alpha}{2}}
{\rm d} t
\end{eqnarray*}
solve the
${}_1F_1$ equation.
In the first integral the natural candidates for the endpoints of the intervals of integration are $\{-\infty,0,z\}$. We will see that all 4 standard solutions can be obtained as such integrals.
In the second integral the natural candidates for endpoints are
$\{1,0-0,\infty\}$. (Recall from Subsect. \ref{a.3} that $0-0$ denotes $0$ approached from the left.) The 4 standard solutions can also be obtained from the integrals with these endpoints.
\subsubsection{Solution $\sim1$ at $0$}
For $\alpha\neq-1,-2,\dots$, the only solution $\sim 1$ around $0$
is
\begin{eqnarray*}
F_{\theta ,\alpha}(z)&=&{\rm e}^zF_{-\theta ,\alpha}(-z).\end{eqnarray*}
The first integral representation is valid for all parameters:
\begin{eqnarray*}
\frac{1}{2\pi {\rm i}}\int\limits
_{]-\infty,(0,z)^+,-\infty[}
t^{\frac{-1+\theta -\alpha}{2}}{\rm e}^t(t-z)^{\frac{-1-\theta -\alpha}{2}}{\rm d} t
&=& {\bf F} _{\theta ,\alpha}(z).
\end{eqnarray*}
The second is valid for ${\rm Re}(1+\alpha)>|{\rm Re} \theta |$:
\begin{eqnarray*}
\int\limits_{[1,+\infty[}{\rm e}^{\frac{z}{t}}t^{-1-\alpha}(t-1)^{\frac{-1-\theta +\alpha}{2}}
{\rm d} t
&=& {\bf F}^{\rm\scriptscriptstyle I} _{\theta ,\alpha}(z).\end{eqnarray*}
\subsubsection{Solution $\sim z^{-\alpha}$ at $0$}
\label{s4.6}
If $\alpha\neq1,2,\dots$, then the
only solution of the confluent equation behaving as
$z^{-\alpha}$ at
$0$ is equal to
\begin{eqnarray*}
z^{-\alpha} F _{\theta ,-\alpha}(z)&=&z^{-\alpha}
{\rm e}^z F _{-\theta ,-\alpha}(-z).\end{eqnarray*}
Integral representation
for ${\rm Re}(1-\alpha)>|{\rm Re} \theta |$:
\begin{eqnarray*}
\int_0^z
t^{\frac{-1+\theta -\alpha}{2}}{\rm e}^t(z-t)^{\frac{-1-\theta -\alpha}{2}}{\rm d} t
&=&
z^{-\alpha} {\bf F}^{\rm\scriptscriptstyle I} _{\theta ,-\alpha}(z),\ \ z\not\in]-\infty,0];\\
\int_z^0
(-t)^{\frac{-1+\theta -\alpha}{2}}{\rm e}^t(t-z)^{\frac{-1-\theta -\alpha}{2}}{\rm d} t
&=&
(-z)^{-\alpha} {\bf F}^{\rm\scriptscriptstyle I} _{\theta ,-\alpha}(z),\ \ z\not\in[0,\infty[;
\end{eqnarray*}
and without a restriction on parameters:
\begin{eqnarray*}
\frac{1}{2\pi{\rm i}}
\int\limits_{[(0-0)^+]}
{\rm e}^{\frac{z}{t}}t^{-1-\alpha}(1-t)^{\frac{-1-\theta +\alpha}{2}}
{\rm d} t
&=&z^{-\alpha} {\bf F} _{\theta ,-\alpha}(z),\ \ {\rm Re} z>0.
\end{eqnarray*}
\subsubsection{Solution $\sim z^{-a}$ at $+\infty$ }
\label{s4.6-}
The following solution to the confluent equation
behaves as $\sim z^{-a}=z^{-\frac{1+\theta +\alpha}{2}}$
at $+\infty$ for $|\arg z|<\pi-\epsilon$:
\begin{eqnarray*}
z^{\frac{-1-\theta -\alpha}{2}}\tilde F_{\theta ,\pm \alpha}(-z^{-1}).
&&
\end{eqnarray*}
Integral representations for ${\rm Re}(1+\theta -\alpha)>0$:
\begin{eqnarray*}
\int_{-\infty}^0
(-t)^{\frac{-1+\theta -\alpha}{2}}{\rm e}^t(z-t)^{\frac{-1-\theta -\alpha}{2}}{\rm d} t
&=&
z^{\frac{-1-\theta -\alpha}{2}} \tilde {\bf F}^{\rm\scriptscriptstyle I} _{\theta ,\alpha}(-z^{-1}),\ \ \ \
\ \ z\not\in]-\infty,0];\end{eqnarray*}
and, for $ {\rm Re}(1+\theta +\alpha)>0$:
\begin{eqnarray*}\int_{-\infty}^0
{\rm e}^{\frac{z}{t}}(-t)^{-1-\alpha}(1-t)^{\frac{-1-\theta +\alpha}{2}}
{\rm d} t&=&
z^{\frac{-1-\theta -\alpha}{2}}
\tilde {\bf F}^{\rm\scriptscriptstyle I} _{\theta ,-\alpha}(-z^{-1}),\ \ \ \ {\rm Re} z>0.
\end{eqnarray*}
\subsubsection{Solution $\sim{\rm e}^z(-z)^{a-c}$ at $-\infty$ }
The following solution to the confluent equation
behaves as $\sim {\rm e}^z(-z)^{a-c}= {\rm e}^z(-z)^{\frac{-1+\theta -\alpha}{2}}$
at $\infty$ for $|\arg z-\pi|<\pi-\epsilon$:
\begin{eqnarray*}
{\rm e}^z(-z)^{\frac{-1+\theta -\alpha}{2}}\tilde F_{-\theta ,\pm\alpha}(z^{-1})
.&&
\end{eqnarray*}
Integral representation for ${\rm Re}(1-\theta -\alpha)>0$:
\begin{eqnarray*}
\int_{-\infty}^z
(-t)^{\frac{-1+\theta -\alpha}{2}}{\rm e}^t(z-t)^{\frac{-1-\theta -\alpha}{2}}{\rm d} t
&=&
{\rm e}^z(-z)^{\frac{-1+\theta -\alpha}{2}} \tilde {\bf F}^{\rm\scriptscriptstyle I} _{-\theta ,\alpha}(z^{-1}),\ \ \ \ \ z\not\in[0,\infty[;\end{eqnarray*}
and for ${\rm Re}(1-\theta +\alpha)>0$:
\begin{eqnarray*}\int_0^1
{\rm e}^{\frac{z}{t}}t^{-1-\alpha}(1-t)^{\frac{-1-\theta +\alpha}{2}}
{\rm d} t&=&
{\rm e}^z(-z)^{\frac{-1+\theta -\alpha}{2}}
\tilde {\bf F}^{\rm\scriptscriptstyle I} _{-\theta ,-\alpha}(z^{-1}),\ \ \ {\rm Re} z<0.
\end{eqnarray*}
\subsection{Connection formulas}
We decompose the standard solutions in the pair of solutions with a simple behavior around zero.
\begin{eqnarray*}
z^{\frac{-1-\theta -\alpha}{2}}\tilde F_{\theta ,\pm \alpha}(-z^{-1})
&=&\frac{\pi}{\sin{\pi(-\alpha)}\Gamma\left(\frac{1+\theta -\alpha}{2}\right)}
{\bf F} _{\theta ,\alpha}(z)\\
&&+\frac{\pi}{\sin\pi \alpha\Gamma\left(\frac{1+\theta +\alpha}{2}\right)}
z^{-\alpha} {\bf F} _{\theta ,-\alpha}(z),\\
{\rm e}^z(-z)^{\frac{-1+\theta -\alpha}{2}}\tilde F_{-\theta ,\pm\alpha}(z^{-1})
&=&\frac{\pi}{\sin{\pi(-\alpha)}
\Gamma\left(\frac{1-\theta -\alpha}{2}\right)}
{\bf F} _{\theta ,\alpha}(z)\\
&&+\frac{\pi}
{\sin\pi \alpha\Gamma\left(\frac{1-\theta +\alpha}{2}\right)}
(-z)^{-\alpha} {\bf F}_{\theta ,-\alpha}(z).
\end{eqnarray*}
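In classical terms the left hand side of the first formula is Tricomi's function $U(a,c,z)$, so the decomposition can be verified with \texttt{mpmath}'s \texttt{hyperu} (a sketch):
\begin{verbatim}
import mpmath

th, al, z = mpmath.mpf('0.3'), mpmath.mpf('0.4'), mpmath.mpf('1.5')
a, c = (1 + al + th)/2, 1 + al

def bF(a_, c_, x):   # bold F = 1F1 / Gamma(c)
    return mpmath.hyp1f1(a_, c_, x)/mpmath.gamma(c_)

pi, sin, G = mpmath.pi, mpmath.sin, mpmath.gamma
lhs = mpmath.hyperu(a, c, z)   # = z^{(-1-th-al)/2} tilde F_{th,al}(-1/z)
rhs = (pi/(sin(-pi*al)*G((1 + th - al)/2))*bF(a, c, z)
       + pi/(sin(pi*al)*G((1 + th + al)/2))*z**(-al)*bF(a - al, 1 - al, z))
print(lhs - rhs)   # ~ 0 up to rounding
\end{verbatim}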
\subsection{Recurrence relations}
The following recurrence relations follow easily
from the commutation relations of Subsect. \ref{symcom1a}:
\begin{eqnarray*}
\partial_z {\bf F} _{\theta ,\alpha}(z)&=&\frac{1+\theta +\alpha}{2} {\bf F} _{\theta +1,\alpha+1}(z),
\\
\left(z \partial_z+\alpha-z\right) {\bf F} _{\theta ,\alpha}(z)&=& {\bf F} _{\theta -1,\alpha-1}(z),\\[3ex]
\left(z \partial_z+\alpha\right) {\bf F} _{\theta ,\alpha}(z)&=& {\bf F} _{\theta +1,\alpha-1}(z),\\
\left( \partial_z-1\right) {\bf F} _{\theta ,\alpha}(z)&=&\frac{-1+\theta -\alpha}{2} {\bf F} _{\theta -1,\alpha+1}(z),
\\[3ex]
\left(z \partial_z+\frac{1+\theta +\alpha}{2}\right) {\bf F} _{\theta ,\alpha}(z)&=&\frac{1+\theta +\alpha}{2} {\bf F} _{\theta +2,\alpha}(z),
\\
\left(z \partial_z+\frac{1-\theta +\alpha}{2}-z\right) {\bf F} _{\theta ,\alpha}(z)&
=&\frac{1-\theta +\alpha}{2} {\bf F} _{\theta
-2,\alpha}(z).
\end{eqnarray*}
The recurrence relations for the ${}_2F_0$ functions
are similar:
\begin{eqnarray*}
\left(z \partial_z+\frac{1+\theta +\alpha}{2}\right)\tilde {\bf F}_{\theta ,\alpha}^{\rm\scriptscriptstyle I}(z)&=&\frac{1+\theta +\alpha}{2}
\tilde {\bf F}_{\theta +1,\alpha+1}^{\rm\scriptscriptstyle I}(z),\\
\left(z^2 \partial_z-1+\frac{1+\theta -\alpha}{2}z\right)\tilde {\bf F}_{\theta ,\alpha}^{\rm\scriptscriptstyle I}(z)&=&-\tilde {\bf F}_{\theta -1,\alpha-1}^{\rm\scriptscriptstyle I}(z),\\[2ex]
\left(z \partial_z+\frac{1+\theta -\alpha}{2}\right)\tilde {\bf F}_{\theta ,\alpha}^{\rm\scriptscriptstyle I}(z)&=&
\tilde {\bf F}_{\theta +1,\alpha-1}^{\rm\scriptscriptstyle I}(z),\\
\left(z^2 \partial_z-1+\frac{1+\theta +\alpha}{2}z\right)\tilde {\bf F}_{\theta ,\alpha}^{\rm\scriptscriptstyle I}(z)&=&
\frac{1-\theta+\alpha}{2}\tilde {\bf F}_{\theta -1,\alpha+1}^{\rm\scriptscriptstyle I}(z),\\[2ex]
\partial_z\tilde{\bf F}_{\theta ,\alpha}^{\rm\scriptscriptstyle I}(z)&=&\frac{1{+}\theta {+}\alpha}{2}
\tilde {\bf F}_{\theta +2,\alpha}^{\rm\scriptscriptstyle I}(z),\\
(z^2 \partial_z-1-\theta z)\tilde {\bf F}_{\theta ,\alpha}^{\rm\scriptscriptstyle I}(z)&=&\frac{1-\theta+\alpha}{2}\tilde {\bf F}_{\theta -2,\alpha}^{\rm\scriptscriptstyle I}(z).
\end{eqnarray*}
\subsection{Additional recurrence relations}
There exists an additional pair of recurrence relations:
\begin{eqnarray*}
\left((1{-}\alpha)z^2 \partial_z{+}\frac{(1{-}\alpha)(1-\alpha+\theta)}{2}z{+}\frac{{-}1{-}\theta {+}\alpha}{2}\right)
\tilde F_{\theta ,\alpha}(z)&=&\frac{-1{-}\theta {+}\alpha}{2}\tilde F_{\theta ,\alpha-2}(z),\\[2ex]
\left(
(1{+}\alpha)z^2 \partial_z{+}\frac{(1{+}\alpha)(1+\alpha+\theta)}{2}z{+}\frac{{-}1{-}\theta {-}\alpha}{2}\right)
\tilde F_{\theta ,\alpha}(z)&=&
\frac{{-}1{-}\theta {-}\alpha}{2}\tilde F_{\theta ,\alpha+2}(z).
\end{eqnarray*}
\subsection{Degenerate case}
$\alpha=m\in{\mathbb Z}$ is the degenerate case of the confluent equation at $0$.
We have then
\[{\bf F}
(a;1+m;z)=\sum_{n=\max(0,-m)}^\infty\frac{(a)_n}{n!(m+n)!}z^n.\]
This easily implies the identity
\[(a-m)_m{\bf F}(a;1+m;z)=z^{-m}
{\bf F}(a-m;1-m;z).
\]Thus the two standard solutions determined by the behavior at zero are proportional to one another.
One can also see the degenerate case in the integral representations:
\begin{eqnarray*}\frac{1}{2\pi{\rm i}}\int\limits
_{[(z,0)^+]}{\rm e}^{t}(1-z\slash t)^{-a}
t^{-m-1}{\rm d} t&=&
{\bf F} _{-1+2a-m ,m}(z)\\
&=&
\left(a\right)_{-m} z^{-m} {\bf F} _{-1+2a-m ,-m}(z),\\[4ex]
\frac{1}{2\pi{\rm i}}\int\limits_{[(0,1)^+]}
{\rm e}^{z\slash t}(1-t)^{-a}t^{-m-1}{\rm d} t
&=&\left(a\right)_{m}
{\bf F} _{-1+2a+m ,m}(z)\\
&=&
z^{-m} {\bf F} _{-1+2a+m ,-m}(z)
.\end{eqnarray*}
The corresponding generating functions are
\begin{eqnarray*}
{\rm e}^{t}(1-z\slash t)^{-a}&=&
\sum_{m\in{\mathbb Z}}t^m {\bf F} _{-1+2a-m,m}(z),\\
{\rm e}^{z\slash t}(1-t)^{-a}&=&
\sum_{m\in{\mathbb Z}}t^m(a)_{m} {\bf F} _{-1+2a+m,m}(z).\end{eqnarray*}
\subsection{Laguerre polynomials}
\label{s9.4}
${}_1F_1$ functions for $-a=n=0,1,2,\dots$ are polynomials.
They are
known as {\em Laguerre polynomials}.
Following
Subsect.
\ref{Hypergeometric type polynomials}, they can be defined by the following version of the Rodriguez-type formula:
\[
L_n^\alpha(z):=\frac{1}{n!}{\rm e}^zz^{-\alpha} \partial_z^n{\rm e}^{-z}z^{n+\alpha}.
\]
The differential equation:
\begin{eqnarray*}
{\cal F}(-n;\alpha+1;z, \partial_z)L_n^\alpha(z)&&\\
=\ \Big(z\partial_z^2+(1+\alpha-z)\partial_z+n\Big)L_n^\alpha(z)&=&0.
\end{eqnarray*}
Generating functions:
\[\begin{array}{l}
{\rm e}^{-tz}(1+t)^{\alpha}
=\sum\limits_{n=0}^\infty t^n L_n^{\alpha-n}(z),\\[5mm]
(1-t)^{-\alpha-1}
\exp{\frac{tz}{t-1}}=\sum\limits_{n=0}^\infty t^nL_n^{\alpha}(z).
\end{array}\]
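The second generating function can be verified order by order in $t$; a sketch assuming {\tt sympy}:
\begin{verbatim}
from sympy import symbols, exp, series, simplify, assoc_laguerre

z, alpha, t = symbols('z alpha t')
N = 4

gf = series((1 - t)**(-alpha - 1) * exp(t*z/(t - 1)), t, 0, N).removeO().expand()
for n in range(N):
    print(n, simplify(gf.coeff(t, n) - assoc_laguerre(n, alpha, z)))  # 0 for each n
\end{verbatim}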
Integral representations:
\[\begin{array}{rl}
L_n^\alpha(z)&=\frac{1}{2\pi{\rm i}}\int\limits_{[0^+]}
{\rm e}^{-tz}(1+t)^{\alpha+n}t^{-n-1}{\rm d} t\\[5mm]
&=\frac{1}{2\pi{\rm i}}\int\limits_{[0^+]}
(1-t)^{-\alpha-1}\exp(\frac{tz}{t-1})t^{-n-1}{\rm d} t.
\end{array}\]
Expression in terms of the Bessel polynomials (to be defined in the next subsection):
\[\begin{array}{rl}
L_n^\alpha(z)&=z^nB_n^{-2n-\alpha-1}(-z^{-1}).
\end{array}\]
Recurrence relations:
\begin{eqnarray*}
\partial_zL_n^\alpha(z)&=&-L_{n-1}^{\alpha+1}(z),\\
\left(z \partial_z+\alpha-z\right)L_n^\alpha(z)&=&(n+1)L_{n+1}^{\alpha-1}(z),\\[3ex]
\left(z \partial_z+\alpha\right)L_n^\alpha(z)&=&(\alpha+n)L_n^{\alpha-1}(z),\\
\left( \partial_z-1\right)L_n^\alpha(z)&=&-L_n^{\alpha+1}(z),
\\[3ex]
\left(z \partial_z-n\right)L_n^\alpha(z)&=&-(n+\alpha)L_{n-1}^\alpha(z),\\
\left(z \partial_z+n+\alpha+1-z\right)L_n^\alpha(z)&=&(n+1)L_{n+1}^\alpha(z).
\end{eqnarray*}
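A symbolic spot-check of the first and the last of these recurrences for a concrete degree, assuming {\tt sympy}:
\begin{verbatim}
from sympy import symbols, diff, simplify, assoc_laguerre

z, alpha = symbols('z alpha')
n = 3
L = lambda k, a: assoc_laguerre(k, a, z)

# d_z L_n^alpha = -L_{n-1}^{alpha+1}
print(simplify(diff(L(n, alpha), z) + L(n - 1, alpha + 1)))       # 0
# (z d_z + n + alpha + 1 - z) L_n^alpha = (n+1) L_{n+1}^alpha
lhs = z*diff(L(n, alpha), z) + (n + alpha + 1 - z)*L(n, alpha)
print(simplify(lhs - (n + 1)*L(n + 1, alpha)))                    # 0
\end{verbatim}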
The first, resp. second integral representation is easily seen to be
equivalent to the first, resp. second generating function.
The differential equation, the Rodriguez-type formula, the first
generating function, the first integral representation and the first
pair of recurrence relations are
special cases of the corresponding formulas of Subsect.
\ref{Hypergeometric type polynomials}.
We have several alternative expressions for Laguerre polynomials:
\begin{eqnarray*}
L_n^\alpha(z)&=&\lim\limits_{\nu\to n}
(-1)^n(\nu-n)
{\bf F}_{1+\alpha-2\nu,\alpha}^{{\rm\scriptscriptstyle I}}(z)
=\frac{(1+\alpha)_n}{n!}F(-n;1+\alpha;z)\\
&=&z^n\lim\limits_{\nu\to n}(\nu-n)\tilde{\bf F}_{1+\alpha-2\nu,\alpha}^{{\rm\scriptscriptstyle I}}(z)
=\frac{1}{n!}(-z)^nF(-n,-\alpha-n;-;-z^{-1})
\\
&=&\sum\limits_{j=0}^n\frac{(1+\alpha+j)_{n-j}}{j!(n-j)!}(-z)^j.
\end{eqnarray*}
Let us derive the above identity using the integral representation
(\ref{kumme}).
Using that $a$ is an integer we can replace the open curve $[1,0^+,1]$
with a closed loop $[\infty^-]$:
\begin{eqnarray*}&&
\lim\limits_{\nu\to n}
(-1)^n(\nu-n)
{\bf F}_{1+\alpha-2\nu,\alpha}^{{\rm\scriptscriptstyle I}}(z)\\
&=&
\lim\limits_{\nu\to n}\frac{\sin \nu\pi}{\pi}{\bf F}_{1+\alpha-2\nu,\alpha}^{{\rm\scriptscriptstyle I}}(z)\\
&=&\frac{1}{2\pi{\rm i}}
\int_{[\infty^-]}{\rm e}^{\frac{z}{s}} (-s)^{-1-\alpha}(1-s)^{\alpha+n}{\rm d} s.
\end{eqnarray*}
Then we set $s=-\frac{1}{t}$, resp. $s=1-\frac1t$ to obtain the integral representations.
The
value at 0 and behavior at $\infty$:
\[
L_n^\alpha(0)=\frac{(\alpha+1)_n}{n!},\ \ \ \
\lim\limits_{z\to\infty}\frac{L_n^\alpha(z)}{z^n}=\frac{(-1)^n}{n!}.
\]
An additional identity valid in the degenerate case:
\begin{eqnarray*}
L_n^{\alpha}(z)&=&(n+1)_\alpha(-z)^{-\alpha}
L_{n+\alpha}^{-\alpha}(z),\ \ \alpha\in{\mathbb Z}.
\end{eqnarray*}
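A symbolic spot-check of this degenerate identity for concrete integer parameters, assuming {\tt sympy}:
\begin{verbatim}
from sympy import symbols, simplify, assoc_laguerre, rf

z = symbols('z')
n, alpha = 3, 2  # alpha a positive integer

lhs = assoc_laguerre(n, alpha, z)
rhs = rf(n + 1, alpha) * (-z)**(-alpha) * assoc_laguerre(n + alpha, -alpha, z)
print(simplify(lhs - rhs))  # 0
\end{verbatim}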
\subsection{Bessel polynomials}
The ${}_2F_0$ functions for $-a=n=0,1,2,\dots$ are
polynomials. Appropriately normalized they are called {\em Bessel
polynomials}. They are seldom used in the literature, because they
do not form an orthonormal basis in any weighted space and they are
easily expressed in terms of Laguerre polynomials.
Following
Subsect.
\ref{Hypergeometric type polynomials}, they can be defined by the following version of the Rodriguez-type formula:
\[\begin{array}{l}
B_n^\theta(z):=\frac{1}{n!}
z^{-\theta}{\rm e}^{z^{-1}} \partial_z^n{\rm e}^{-z^{-1}}z^{\theta+2n}.
\end{array}\]
Differential equation:
\begin{eqnarray*}
{\cal F}(-n,n+\theta+1;-; \partial_z,z)B_n^\theta(z)&&\\
=\Big(z^2 \partial_z^2+(-1+(2+\theta)z) \partial_z-n(n+\theta+1)\Big)B_n^\theta(z)&=&0.
\end{eqnarray*}
Generating functions:
\[\begin{array}{l}
{\rm e}^{-t}(1-tz)^{-\theta-1}
=\sum\limits_{n=0}^\infty t^nB_n^{\theta-n}(z),\\[3mm]
(1+tz)^{\theta}\exp(\frac{-t}{1+tz})=\sum\limits_{n=0}^\infty
t^nB_n^{\theta-2n}(z).\end{array}\]
Integral representations:
\[\begin{array}{rl}
B_n^\theta(z)&=\frac{1}{2\pi{\rm i}}
\int\limits_{[0^+]}{\rm e}^{t}(1-tz)^{-\theta-n-1}t^{-n-1}{\rm d} t\\[3mm]
&=\frac{1}{2\pi{\rm i}}\int\limits_{[0^+]}
(1+tz)^{\theta+2n}\exp(\frac{-t}{1+tz})t^{-n-1}{\rm d} t.
\end{array}\]
Expression in terms of the Laguerre polynomials:
\[\begin{array}{l}
B_n^\theta(z)=(-z)^nL_n^{-\theta-2n-1}(-z^{-1}).
\end{array}\]
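This relation and the inverse relation of Subsect. \ref{s9.4} are mutually consistent. A sketch assuming {\tt sympy}, taking the displayed formula as the definition of $B_n^\theta$ and verifying the Laguerre-side relation for a concrete degree:
\begin{verbatim}
from sympy import symbols, simplify, assoc_laguerre

z = symbols('z', positive=True)
alpha = symbols('alpha')
n = 3

# B_n^theta(x) := (-x)^n L_n^{-theta-2n-1}(-1/x), as displayed above
B = lambda k, th, x: (-x)**k * assoc_laguerre(k, -th - 2*k - 1, -1/x)

# check: L_n^alpha(z) = z^n B_n^{-2n-alpha-1}(-1/z)
print(simplify(assoc_laguerre(n, alpha, z)
               - z**n * B(n, -2*n - alpha - 1, -1/z)))  # 0
\end{verbatim}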
Recurrence relations:
\begin{eqnarray*}
\left(z \partial_z+n+\theta+1\right)B_n^\theta(z)&=&(n+\theta+1)B_n^{\theta+1}(z),
\\
\left(z^2 \partial_z-1-nz\right)B_n^\theta(z)&=&-B_n^{\theta-1}(z),\\[3ex]
\left( z \partial_z-n\right)B_n^\theta(z)&=&-B_{n-1}^{\theta+1}(z),\\
\left(z^2 \partial_z-1+(n+\theta+1)z\right)B_n^\theta(z)&=&-(n+1)B_{n+1}^{\theta-1}(z),
\\[3ex]
\partial_zB_n^\theta(z)&=&-(n+\theta+1)B_{n-1}^{\theta+2}(z),\\
\left(z^2 \partial_z-1-\theta z\right)B_n^\theta(z)&=&-(n+1)B_{n+1}^{\theta-2}(z).
\end{eqnarray*}
Most of the above identities can be directly obtained from the corresponding identities about Laguerre polynomials.
The differential equation, the Rodriguez-type formula, the second
generating function, the second integral representation and the last
pair of recurrence relations are special cases of the corresponding formulas of Subsect.
\ref{Hypergeometric type polynomials}.
We have several alternative expressions for Bessel polynomials:
\begin{eqnarray*}
B_n^\theta(z)&=&\lim\limits_{\nu\to n}(-1)^n(\nu{-}n)\tilde{\bf F}_{\theta,-1-\theta-2\nu}^{{\rm\scriptscriptstyle I}}(z)=\frac{1}{n!}F(-n,n+\theta+1;-;z)\\
&=&z^n\lim\limits_{\nu\to n}(\nu-n){\bf F}_{\theta,-1-\theta-2\nu}^{{\rm\scriptscriptstyle I}}(-z^{-1})\\&
=&\frac{(1+\theta+n)_n}{n!}(-z)^nF(-n;-\theta-2n;-z^{-1}).
\end{eqnarray*}
The value at zero and behavior at $\infty$:
\[
B_n^\theta(0)=\frac{1}{n!},\ \ \ \
\lim\limits_{z\to\infty}\frac{B_n^\theta(z)}{z^n}=\frac{(-1)^n(n+\theta+1)_n}{n!}.\]
For both Laguerre and Bessel polynomials there exist additional recurrence relations and a generating function. Below we give such a pair of recurrence relations for Bessel polynomials:
\begin{eqnarray*}
\Big((2+2n+\theta)z^2 \partial_z+(2+2n+\theta)(n+\theta+1)z &&\\-(n+\theta+1)\Big)B_n^\theta(z)
&=&-(n+1)(n+\theta+1)B_{n+1}^\theta(z),\\[2ex]
\Big(-(2n+\theta)z^2 \partial_z+(2n+\theta)nz+n\Big)B_n^\theta(z)&=&
B_{n-1}^\theta(z).\end{eqnarray*}
They correspond to an additional generating function
\[\begin{array}{rl}2^\theta r^{-1}(1+r)^{-\theta}\exp(\frac{2t}{1+r})
&=\sum\limits_{n=0}^\infty t^n B_n^\theta(z),\\[3mm]
&\hbox{where}\ \ \ \ r:=\sqrt{1+4zt}.
\end{array}\]
\subsection{Special cases}
Apart from the polynomial case and the degenerate case,
the confluent equation has some other cases with special properties.
\subsubsection{Bessel equation}
If $\theta=0$, the confluent equation is equivalent to the (modified) Bessel equation, as we already remarked in Rem. \ref{bess}. By a square root substitution, it is also equivalent to the ${}_0F_1$ equation; see (\ref{gas}).
\subsubsection{Hermite equation}
If $\alpha=\pm\frac12$, the confluent equation is equivalent to the Hermite equation by the quadratic substitutions (\ref{ha6a}) and (\ref{ha6}).
\section{The ${}_0F_1$ equation}
\label{sa5}
\subsection{Introduction}
Let $c\in{\mathbb C}$. In this section we will consider
the {\em ${}_0F_1$ equation} given by the operator
\[{\cal F}(c;z, \partial_z):=z \partial_z^2+c \partial_z-1.\]
It is a limiting case of the ${}_2F_1$ and ${}_1F_1$ operators:
\[\lim_{a,b\to\infty}\frac1{ab}{\cal F}(a,b;c;z/ab, \partial_{(z/ab)})=\lim_{a\to\infty}\frac1a{\cal F}(a;c;z/a, \partial_{(z/a)})={\cal F}(c;z, \partial_z).\]
Instead of $c$ it is often more natural to use its {\em Lie-algebraic parameter }
\begin{equation} \alpha:=c-1,\ \ \ c=\alpha+1.\label{newnot3}\end{equation}
Thus we obtain the operator
\begin{eqnarray*}
{\cal F}_\alpha (z, \partial_z)&:=&z \partial_z^2+(\alpha +1) \partial_z-1.
\end{eqnarray*}
The Lie-algebraic parameter has a well-known interpretation in terms of the ``Cartan element'' of the Lie algebra $aso(2)$ \cite{V,Wa,DM}.
\subsection{Equivalence with a subclass of the confluent equation}
The ${}_0F_1$ equation can be reduced to a special class of
the confluent equation by the so-called {\em Kummer's 2nd transformation}:
\begin{eqnarray}
{\cal F}(c;z, \partial_z)
=\frac{4}{w}{\rm e}^{-w/2}{\cal F}\Big(c-\frac{1}{2};2c-1;w, \partial_w\Big){\rm e}^{w/2},
\label{gas}\end{eqnarray}
where $w=\pm 4\sqrt{z}$, $z=\frac{1}{16}w^2$.
Using the Lie-algebraic parameters this can be rewritten as
\begin{equation}
{\cal F}_\alpha (z, \partial_z)
=\frac{4}{w}{\rm e}^{-w/2}{\cal F}_{0,2\alpha }(w, \partial_w){\rm e}^{w/2}.\label{gas1}\end{equation}
\subsection{Integral representations}
There are two kinds of integral representations of solutions to the
${}_0F_1$ equation. Thm \ref{schl} describes representations of the first kind, which
will be called {\em
Bessel-Schl\"afli representations}. They will be treated as the main ones.
\begin{theoreme}\label{schl}
Suppose that $[0,1]\ni t\mapsto\gamma(t)$ satisfies
\[{\rm e}^t{\rm e}^{\frac{z}{t}}t^{-c}\Big|_{\gamma(0)}^{\gamma(1)}=0.\]
Then
\begin{equation}{\cal F}(c;z, \partial_z)
\int_\gamma{\rm e}^t{\rm e}^{\frac{z}{t}}t^{-c}{\rm d} t=0.\label{dad5}\end{equation}
\end{theoreme}
\noindent{\bf Proof.}\ \
We check that for any contour $\gamma$ the left-hand side of (\ref{dad5}) equals
\[-\int_\gamma\partial_t\Big(
{\rm e}^t{\rm e}^{\frac{z}{t}}t^{-c}\Big){\rm d} t.\]
\hfill$\Box$\medskip
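The computation behind this proof can be reproduced symbolically; a sketch assuming Python with {\tt sympy}:
\begin{verbatim}
from sympy import symbols, exp, diff, simplify

z, t, c = symbols('z t c')

g = exp(t) * exp(z/t) * t**(-c)            # the integrand
Fg = z*diff(g, z, 2) + c*diff(g, z) - g    # F(c; z, d_z) applied to it
print(simplify(Fg + diff(g, t)))           # 0, i.e. F(c) g = -d_t g
\end{verbatim}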
Integral representations that can be derived from the
representations for the confluent equation by Kummer's 2nd transformation will be called {\em
Poisson-type representations}. They will be treated as secondary ones.
They are described in the following theorem.
\begin{theoreme}\begin{enumerate}\item Let the contour $\gamma$ satisfy
\[(t^2-z)^{-c+3/2}{\rm e}^{2t}\Big|_{\gamma(0)}^{\gamma(1)}=0.\]
Then
\[{\cal F}(c;z, \partial_z)\int_\gamma(t^2-z)^{-c+1/2}{\rm e}^{2t}{\rm d} t=0.\]
\item
Let the contour $\gamma$ satisfy
\[(t^2-1)^{c-1/2}{\rm e}^{2t\sqrt z}\Big|_{\gamma(0)}^{\gamma(1)}=0.\]
Then
\[{\cal F}(c;z, \partial_z)\int_\gamma(t^2-1)^{c-3/2}{\rm e}^{2t\sqrt z}{\rm d} t=0.\]\end{enumerate}
\label{dad7}\end{theoreme}
\noindent{\bf Proof.}\ \ By (\ref{gas}) and (\ref{dad1}), for appropriate contours $\gamma$ and $\gamma'$,
\begin{eqnarray*}
&&{\rm e}^{-2\sqrt z}\int_\gamma{\rm e}^s s^{-c+\frac12}(s-4\sqrt z)^{-c+\frac12}{\rm d} s\\
&=&2^{-2c+2}\int_{\gamma'}{\rm e}^{2t}(t^2-z)^{-c+\frac12}{\rm d} t
\end{eqnarray*}
is annihilated by ${\cal F}(c)$, where we set
$t=\frac{s}{2}-\sqrt z$. This proves 1.
By (\ref{gas}) and (\ref{dad}), for appropriate contours
$\gamma$ and $\gamma'$,
\begin{eqnarray*}
&&{\rm e}^{-2\sqrt z}\int_\gamma{\rm e}^{\frac{4\sqrt z}{s}} s^{-2c+1}(1-s)^{c-\frac32}{\rm d} s\\
&=&-2^{-2c+2}\int_{\gamma'}{\rm e}^{2t\sqrt z}(1-t^2)^{c-\frac32}{\rm d} t
\end{eqnarray*}
is annihilated by ${\cal F}(c)$, where we set
$t=\frac2s-1$. This proves 2.
\hfill$\Box$\medskip
\subsection{Symmetries}
\label{symcom2}
The only nontrivial symmetry is
\[\begin{array}{lcr}z^{-\alpha }\ {\cal F}_{-\alpha }\ z^\alpha &=&{\cal F}_\alpha .\end{array}\]
It can be interpreted as a ``Weyl symmetry'' of $aso(2)$.
\subsection{Factorizations and commutation relations}
\label{symcom2a}
There are two ways to factorize the ${}_0F_1$ operator:
\begin{eqnarray*}
{\cal F}_\alpha&=&
\big(z\partial_z+\alpha+1\big)\partial_z-1\\
&=& \partial_z
\big(z\partial_z+\alpha\big)-1.
\end{eqnarray*}
The factorizations can be used to derive
the following commutation relations:
\[\begin{array}{rl}
\partial_z&{\cal F}_\alpha \\[1ex]
=\ \ {\cal F}_{\alpha +1}& \partial_z,\\[3ex]
(z \partial_z+\alpha )&{\cal F}_\alpha \\[1ex]
=\ \ {\cal F}_{\alpha -1}&(z \partial_z+\alpha ).\end{array}\]
Each commutation relation can be associated with a ``root'' of the Lie
algebra $aso(2)$.
\subsection{Canonical forms}
The natural weight of the ${}_0F_{1}$ operator is $z^\alpha$, so that
\[{\cal F}_{\alpha}=
z^{-\alpha}\partial_zz^{\alpha+1}\partial_z
-1.\]
The balanced form of the ${}_0F_{1}$ operator is
\begin{eqnarray*}
z^{\frac{\alpha}{2}}{\cal F}_{\alpha}
z^{-\frac{\alpha}{2}}&=&
\partial_zz\partial_z-1-\frac{\alpha^2}{4z}.
\end{eqnarray*}
The symmetry $\alpha\to-\alpha$ is obvious in the balanced form.
\begin{remark}\label{bessel0}
In the literature, the
${}_0F_1$ equation is seldom used. Much more frequent is
the {\em modified Bessel equation}, which is equivalent to the ${}_0F_1$ equation:
\begin{eqnarray*}
z^{\frac{\alpha}{2}}{\cal F}_{\alpha}(z,\partial_z)
z^{-\frac{\alpha}{2}}&=&
\partial_w^2+\frac{1}{w}\partial_w-1-\frac{\alpha^2}{w^2},\end{eqnarray*}
where $z=\frac{w^2}{4}$, $w=\pm 2\sqrt z$.
Even more frequent is the {\em Bessel equation}:
\begin{eqnarray*}
-z^{\frac{\alpha}{2}}{\cal F}_{\alpha}(z,\partial_z)
z^{-\frac{\alpha}{2}}&=&
\partial_u^2+\frac{1}{u}\partial_u+1-\frac{\alpha^2}{u^2},
\end{eqnarray*}
where $z=-\frac{u^2}{4}$, $u=\pm 2{\rm i}\sqrt z$.
Clearly, we can pass from the modified Bessel to the Bessel equation by
$w=\pm{\rm i} u$.
\end{remark}
\subsection{The ${}_0F_1$ function}
The ${}_0F_1$ equation
has a regular singular point at $0$.
Its indices at $0$ are equal to $0$, $1-c$.
If $c\neq0,-1,-2,\dots$, then the only solution of the ${}_0F_1$ equation
$\sim1$ at 0
is called
the {\em ${}_0F_1$ hypergeometric function}.
It is
\[F(c;z):=\sum_{j=0}^\infty
\frac{1}{
(c)_j}\frac{z^j}{j!}.\]
It is defined for $c\neq0,-1,-2,\dots$.
Sometimes it is more convenient to consider
the function
\[ {\bf F} (c;z):=\frac{F(c;z)}{\Gamma(c)}=
\sum_{j=0}^\infty
\frac{1}{
\Gamma(c+j)}\frac{z^j}{j!}\]
defined for all $c$.
We can express the ${}_0F_1$ function in terms of the confluent function
\begin{eqnarray*}F(c;z)&=&
{\rm e}^{-2\sqrt{z}}F\Big(\frac{2c-1}{2};
2c-1;4\sqrt{z}\Big)\\&=&
{\rm e}^{2\sqrt{z}}F\Big(\frac{2c-1}{2};
2c-1;-4\sqrt{z}\Big).
\end{eqnarray*}
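These expressions are easy to test numerically; a sketch assuming Python with {\tt mpmath} ({\tt hyp0f1}, {\tt hyp1f1}):
\begin{verbatim}
from mpmath import mp, mpf, exp, sqrt, hyp0f1, hyp1f1

mp.dps = 30
c, z = mpf('1.7'), mpf('0.9')

lhs = hyp0f1(c, z)
rhs = exp(-2*sqrt(z)) * hyp1f1(c - mpf('0.5'), 2*c - 1, 4*sqrt(z))
print(lhs, rhs)  # agree to working precision
\end{verbatim}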
It is also a limit of the confluent function:
\[F(c;z)=\lim_{a\to\infty} F(a;c;z/a).\]
For all parameters we have an integral representation called the {\em
Schl\"afli formula}:
\begin{eqnarray*}
\frac{1}{2\pi {\rm i}}\int\limits_{]-\infty,0^+,-\infty[}
{\rm e}^t{\rm e}^{\frac{z}{t}}t^{-c}{\rm d} t
&=& {\bf F} (c;z),\ \ \ \ {\rm Re} z>0.\end{eqnarray*}
For ${\rm Re} c>\frac12$ we have a representation called the {\em Poisson formula}:
\begin{eqnarray*}
\int_{-1}^1(1-t^2)^{c-\frac32}{\rm e}^{
2t\sqrt z}{\rm d} t&=&\Gamma(c-\frac12)\sqrt\pi {\bf F} (c;z).\end{eqnarray*}
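The Poisson formula can be checked by direct quadrature; a sketch assuming {\tt mpmath}:
\begin{verbatim}
from mpmath import mp, mpf, quad, gamma, sqrt, pi, exp, hyp0f1

mp.dps = 25
c, z = mpf('1.3'), mpf('0.8')  # requires Re c > 1/2

integral = quad(lambda t: (1 - t**2)**(c - mpf('1.5')) * exp(2*t*sqrt(z)),
                [-1, 1])
rhs = gamma(c - mpf('0.5')) * sqrt(pi) * hyp0f1(c, z) / gamma(c)  # bold F = F/Gamma(c)
print(integral, rhs)  # agree
\end{verbatim}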
We will usually prefer to use the Lie-algebraic parameters:
\begin{eqnarray*}
F_\alpha (z)&:=&F(\alpha+1;z),\\
{\bf F} _\alpha (z)&:=& {\bf F} (\alpha+1;z)
.
\end{eqnarray*}
\begin{remark} In the literature the ${}_0F_1$ function is seldom used. Instead,
one uses the {\em modified Bessel function} and, even more frequently,
the {\em Bessel function}:
\begin{eqnarray*}
I_\alpha(w)&=&\Big(\frac{w}{2}\Big)^\alpha {\bf F} _\alpha\Big(\frac{w^2}{4}\Big),\\[3mm]
J_\alpha(w)&=&\Big(\frac{w}{2}\Big)^\alpha {\bf F} _\alpha\Big(-\frac{w^2}{4}\Big).
\end{eqnarray*}
They solve the modified Bessel, resp. the Bessel equation.
\end{remark}
\subsection{Standard solutions}
$z=0$ is a regular singular point. We have two standard solutions corresponding to its two indices.
Besides, we have an additional solution with a special behavior at $\infty$.
We know from Thm \ref{schl} that for appropriate contours $\gamma$ the integrals
\begin{eqnarray*}
\int\limits_{\gamma}
{\rm e}^t{\rm e}^{\frac{z}{t}}t^{-\alpha -1}{\rm d} t
\end{eqnarray*}
solve the ${}_0F_{1}$ equation.
The integrand goes to zero as $t\to-\infty$ and $t\to0-0$ (the latter for ${\rm Re} z>0$). Therefore, contours ending at these points yield solutions. We will see that in this way we can obtain all 3 standard solutions.
Besides, we can use Thm \ref{dad7} to obtain other integral representations, which are essentially special cases of representations for the ${}_1F_1$ and ${}_2F_0$ functions.
\subsubsection{Solution $\sim1$ at 0}
If $\alpha \neq-1,-2,\dots$, then the only solution of the ${}_0F_1$ equation
$\sim1$ at 0
is
\begin{eqnarray*}F_\alpha (z)&=&
{\rm e}^{-2\sqrt{z}}F_{0,2\alpha }\big(4\sqrt{z}\big)\\&=&
{\rm e}^{2\sqrt{z}}F_{0,2\alpha }\big(-4\sqrt{z}\big).
\end{eqnarray*}
For all parameters we have an integral representation
\begin{eqnarray*}
\frac{1}{2\pi {\rm i}}\int\limits_{]-\infty,0^+,-\infty[}
{\rm e}^t{\rm e}^{\frac{z}{t}}t^{-\alpha -1}{\rm d} t
&=& {\bf F} _\alpha (z),\ \ \ \ {\rm Re} z>0;\end{eqnarray*}
and for ${\rm Re} \alpha >-\frac12$ we have another integral representation
\begin{eqnarray*}
\int_{-1}^1(1-t^2)^{\alpha -\frac12}{\rm e}^{
2t\sqrt z}{\rm d} t&=&\Gamma(\alpha +\frac12)\sqrt\pi {\bf F} _\alpha (z),\ \
\ \ z\not\in]-\infty,0]
.\end{eqnarray*}
\subsubsection{Solution $\sim z^{-\alpha }$ at 0}
If $\alpha \neq 1,2,\dots$, then the only solution to the ${}_0F_1$ equation
$\sim z^{-\alpha}$ at 0 is
\begin{eqnarray*}z^{-\alpha }F_{-\alpha} (z)&=&
z^{-\alpha }{\rm e}^{-2\sqrt{z}}F_{0,-2\alpha }\big(4\sqrt{z}\big)\\&=&
z^{-\alpha }{\rm e}^{2\sqrt{z}}F_{0,-2\alpha }\big(-4\sqrt{z}\big).
\end{eqnarray*}
For all parameters we have
\begin{eqnarray*}
\frac{1}{2\pi {\rm i}}\int\limits_{[(0-0)^+]}{\rm e}^t{\rm e}^{\frac{z}{t}}t^{-\alpha -1}{\rm d} t
&=&z^{-\alpha } {\bf F} _{-\alpha }(z),\ \ {\rm Re} z>0;\end{eqnarray*}
and for ${\rm Re}\alpha<\frac12$ we have
\begin{eqnarray*}
\int_{-\sqrt z}^{\sqrt z}(z-t^2)^{-\alpha -\frac12}{\rm e}^{
2t}{\rm d} t&=&\Gamma\Big({-}\alpha +\frac12\Big)\sqrt\pi z^{-\alpha }
{\bf F} _{-\alpha }(z),\ \ \ z\not\in]-\infty,0].\end{eqnarray*}
\subsubsection{Solution
$\sim{\rm exp}(- 2z^\frac{1}{2}) z^{-\frac{ \alpha }2-\frac14}$
for $z\to+\infty$}
\label{s5.8}
The following function is
also a solution of the ${}_0F_1$ equation:
\begin{eqnarray*}
\tilde F_\alpha (z)&:=&{\rm e}^{-2\sqrt z} z^{-\frac{\alpha}{2} -\frac14}
\tilde F_{0,2\alpha }\Big(-\frac{1}{4\sqrt z}\Big).
\end{eqnarray*}
We have the identity
\[\tilde F_\alpha (z)=z^{-\alpha }\tilde F_{-\alpha }(z).\]
Integral representations for all parameters:
\begin{eqnarray*}
\int_{-\infty}^{0}{\rm e}^t{\rm e}^{\frac{z}{t}}(-t)^{-\alpha -1}{\rm d} t
&=&\pi^{\frac{1}{2}}\tilde F_\alpha (z),\ \ \ \ {\rm Re} z>0;\end{eqnarray*}
for $ {\rm Re} \alpha >-\frac12
$:
\begin{eqnarray*}\int_{-\infty}^{-1}(t^2-1)^{\alpha -\frac12}{\rm e}^{
2t\sqrt z}{\rm d} t&=&
\frac12\Gamma\Big(\alpha +\frac12\Big) \tilde F_\alpha (z),\ \ \ z\not\in]-\infty,0]; \end{eqnarray*}
for $ {\rm Re} \alpha <\frac12$:
\begin{eqnarray*}\
\int_{-\infty}^{-\sqrt z}(t^2-z)^{-\alpha -\frac12}{\rm e}^{
2t}{\rm d} t&=&\frac12\Gamma\Big(-\alpha +\frac12\Big) \tilde F_\alpha (z)
,\ \ \ z\not\in]-\infty,0].
\end{eqnarray*}
As $|z|\to\infty$ and
$|\arg z|<\pi/2-\epsilon$, we have
\begin{equation}
\tilde F_\alpha (z)\sim{\rm exp}(- 2z^\frac{1}{2}) z^{-\frac{\alpha }2-\frac14}.
\label{saddle1}\end{equation}
$\tilde F_\alpha$ is the unique solution with this property.
To prove (\ref{saddle1}) we can use the saddle point method. Substituting
$t=-s$ in the first integral representation, we write $\pi^{\frac12}\tilde F_\alpha(z)$ as
\[\int_0^\infty{\rm e}^{\phi(s)}s^{-\alpha-1}{\rm d} s\]
with $\phi(s)=-s-\frac{z}{s}$. We compute:
\[\phi'(s)=-1+\frac{z}{s^2},\ \ \phi''(s)=-2\frac{z}{s^3}.\]
We find the stationary point at $s_0=\sqrt z$ with
$\phi''(s_0)=-2z^{-\frac12}$ and
$\phi(s_0)=-2\sqrt z$. Hence $\pi^{\frac12}\tilde F_\alpha(z)$ can be
approximated by
\[\int_{-\infty}^\infty
{\rm e}^{\phi(s_0)+\frac{1}{2}(s-s_0)^2\phi''(s_0)}s_0^{-\alpha-1}{\rm d} s=
\pi^{\frac12} z^{-\frac{\alpha}{2}-\frac14}{\rm e}^{-2\sqrt z}.\]
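The asymptotics (\ref{saddle1}) can also be observed numerically from the integral representation; a sketch assuming {\tt mpmath}:
\begin{verbatim}
from mpmath import mp, mpf, quad, exp, sqrt, pi, inf

mp.dps = 25
alpha = mpf('0.3')

def Ftilde(z):
    # pi^(-1/2) int_0^infty e^(-s - z/s) s^(-alpha-1) ds
    return quad(lambda s: exp(-s - z/s) * s**(-alpha - 1),
                [0, sqrt(z), inf]) / sqrt(pi)

for z in (mpf(10), mpf(100), mpf(1000)):
    print(z, Ftilde(z) / (exp(-2*sqrt(z)) * z**(-alpha/2 - mpf('0.25'))))
    # the ratio tends to 1 as z grows
\end{verbatim}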
\begin{remark} In the literature, instead of the $\tilde F$ function one uses
the {\em Macdonald function}, solving the modified Bessel equation:
\begin{eqnarray*}
K_\alpha(w)&=&\sqrt\pi\Big(\frac{
w}{2}\Big)^\alpha \tilde
F_\alpha\Big(\frac{w^2}{4}\Big),\end{eqnarray*}
and the Hankel functions of the 1st and 2nd kind, solving the Bessel equation:
\begin{eqnarray*}
H_\alpha^{(1)}(w)&=&\frac{{\rm i}}{\sqrt\pi}\Big(\frac{{\rm e}^{-{\rm i}\pi/2}
w}{2}\Big)^\alpha \tilde F_\alpha\Big({\rm e}^{-{\rm i}\pi}\frac{w^2}{4}\Big),\\
H_\alpha^{(2)}(w)&=&-\frac{{\rm i}}{\sqrt\pi}\Big(\frac{{\rm e}^{{\rm i}\pi/2}
w}{2}\Big)^\alpha{\tilde F}_\alpha\Big({\rm e}^{{\rm i}\pi}\frac{w^2}{4}\Big).\end{eqnarray*}
\end{remark}
\subsection{Connection formulas}
We can use the solutions with a simple behavior at zero as the basis:
\begin{eqnarray*}
\tilde F_\alpha (z)
&=&\frac{\sqrt\pi}{\sin\pi (-\alpha )} {\bf F} _\alpha (z)
+\frac{\sqrt \pi}{\sin\pi \alpha }
z^{-\alpha } {\bf F} _{-\alpha }(z).
\end{eqnarray*}
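This connection formula can be tested numerically, computing $\tilde F_\alpha$ from its integral representation (with $t=-s$); a sketch assuming {\tt mpmath}:
\begin{verbatim}
from mpmath import mp, mpf, quad, exp, sqrt, pi, sin, gamma, hyp0f1, inf

mp.dps = 25
alpha, z = mpf('0.3'), mpf('1.5')

bF = lambda a: hyp0f1(a + 1, z) / gamma(a + 1)  # bold F_a(z)
Ft = quad(lambda s: exp(-s - z/s) * s**(-alpha - 1), [0, inf]) / sqrt(pi)
rhs = (sqrt(pi)/sin(-pi*alpha) * bF(alpha)
       + sqrt(pi)/sin(pi*alpha) * z**(-alpha) * bF(-alpha))
print(Ft, rhs)  # agree
\end{verbatim}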
Alternatively, we can use the $\tilde F$ function and its analytic
continuation around $0$ in the clockwise or anti-clockwise direction
as the basis:
\begin{eqnarray*}
{\bf F}_\alpha(z)&=&\frac{1}{ 2\pi^{\frac{3}{2}}}\left({\rm e}^{-{\rm i}\pi
( \alpha-\frac{1}{2})}
{\tilde F}_\alpha(z)-
{\rm e}^{{\rm i}\pi (\alpha-\frac{1}{2})}{\tilde F}_\alpha({\rm e}^{-{\rm i} 2\pi}z)\right)\\
&=&\frac{1}{ 2\pi^{\frac{3}{2}}}\left({\rm e}^{-{\rm i}\pi (\alpha-\frac{1}{2})}{\tilde F}_\alpha({\rm e}^{{\rm i} 2\pi}z)-
{\rm e}^{{\rm i}\pi (\alpha-\frac{1}{2})}{\tilde F}_\alpha(z)\right),\\[3ex]
z^{-\alpha}{\bf F}_{-\alpha}(z)
&=&\frac{1}{ 2\pi^{\frac{3}{2}}}\left({\rm e}^{{\rm i}\pi (\alpha+\frac{1}{2})}{\tilde F}_\alpha(z)-
{\rm e}^{-{\rm i}\pi( \alpha+\frac{1}{2})}{\tilde F}_\alpha({\rm e}^{-{\rm i} 2\pi}z)\right)
\\
&=&\frac{1}{2\pi^{\frac{3}{2}}}\left({\rm e}^{{\rm i}\pi( \alpha+\frac{1}{2})}
{\tilde F}_\alpha({\rm e}^{{\rm i} 2\pi}z)-
{\rm e}^{-{\rm i}\pi (\alpha+\frac{1}{2})}{\tilde F}_\alpha(z)\right).
\end{eqnarray*}
\subsection{Recurrence relations}
The following recurrence relations easily follow from the commutation
relations of Subsect. \ref{symcom2a}:
\begin{eqnarray*}
\partial_z {\bf F} _\alpha (z)&=& {\bf F} _{\alpha +1}(z),
\\[3mm]
\left(z \partial_z+\alpha\right) {\bf F} _\alpha (z)&=& {\bf F} _{\alpha -1}(z).
\end{eqnarray*}
\subsection{Degenerate case}
$\alpha=m\in{\mathbb Z}$ is the degenerate case of the ${}_0F_1$ equation at $0$.
We have then
\[{\bf F}
(1+m;z)=\sum_{n=\max(0,-m)}^\infty\frac{1}{n!(m+n)!}z^n.\]
This easily implies the identity
\[{\bf F}(1+m;z)=z^{-m}
{\bf F}(1-m;z).
\]Thus the two standard solutions determined by the behavior at zero are proportional to one another.
We have an integral representation, called the {\em Bessel formula}, and a generating function:
\begin{eqnarray*}
\frac{1}{2\pi{\rm i}}\int\limits_{[0^+]}
{\rm e}^{t+z\slash t}t^{-m-1}{\rm d} t&= &{\bf F} _m(z)=z^{-m} {\bf F} _{-m}(z),\\
{\rm e}^{t}{\rm e}^{z\slash t}&
=&\sum_{m\in{\mathbb Z}}t^m {\bf F} _m(z).
\end{eqnarray*}
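Both formulas can be tested against the classical modified Bessel function $I_m$, using ${\bf F}_m(z)=z^{-m/2}I_m(2\sqrt z)$ from the remark above; a sketch assuming {\tt mpmath}:
\begin{verbatim}
from mpmath import mp, mpf, exp, sqrt, besseli

mp.dps = 25
t, z = mpf('0.7'), mpf('1.2')

bFm = lambda m: besseli(m, 2*sqrt(z)) / z**(mpf(m)/2)  # bold F_m(z)
total = sum(t**m * bFm(m) for m in range(-25, 26))     # truncated generating sum
print(total, exp(t + z/t))                             # agree
\end{verbatim}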
\subsection{Special cases}
If $\alpha=\pm\frac{1}{2}$, then the ${}_0F_1$ equation can be reduced to an equation easily solvable in terms of elementary functions:
\begin{eqnarray*}
{\cal F}_{-\frac12}(z,\partial_z)&=&\partial_u^2-1,\\
{\cal F}_{\frac12}(z,\partial_z)&=&u^{-1}(\partial_u^2-1)u
,\end{eqnarray*}
where $u=2\sqrt z$.
They have solutions
\begin{eqnarray*}
F_{-\frac{1}{2}}(z)=\cosh2\sqrt z,&&\tilde F_{-\frac{1}{2}}(z)=\exp(-2\sqrt z),\\
F_{\frac{1}{2}}(z)=\frac{\sinh2\sqrt z}{2\sqrt z},&&\tilde F_{\frac{1}{2}}(z)=\frac{\exp(-2\sqrt z)}{\sqrt z}.
\end{eqnarray*}
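These elementary expressions are immediate to confirm numerically; a sketch assuming {\tt mpmath}:
\begin{verbatim}
from mpmath import mp, mpf, hyp0f1, cosh, sinh, sqrt

mp.dps = 25
z = mpf('0.9')

print(hyp0f1(mpf('0.5'), z), cosh(2*sqrt(z)))              # F_{-1/2}(z)
print(hyp0f1(mpf('1.5'), z), sinh(2*sqrt(z))/(2*sqrt(z)))  # F_{1/2}(z)
\end{verbatim}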
\section{The Gegenbauer equation}
\label{s7}
\subsection{Introduction}
The hypergeometric equation can be moved by an affine transformation
so that its finite singular points are placed at $-1$ and $1$. If in
addition the equation is reflection invariant,
then it will be called the {\em Gegenbauer equation}.
Because of the reflection invariance, the third classical parameter can be obtained from the first two: $c=\frac{a+b+1}{2}$. Therefore, we will use only ${a},{b}\in{\mathbb C}$
as the (classical) parameters of the Gegenbauer equation. It will
be given by the operator
\begin{equation}
{\cal S}({a},{b};z, \partial_z):=(1-z^2) \partial_z^2-({a}+{b}+1)z \partial_z-{a}{b}.
\label{geg}\end{equation}
To describe the symmetries of the Gegenbauer operator it is convenient to use
its Lie-algebraic parameters
\[\begin{array}{rl}
\alpha :=\frac{{a}+{b}-1}{2},& \lambda: =\frac{{b}-{a}}{2},\\[3ex]
{a}=\frac12+\alpha -\lambda ,&{b}=\frac12+\alpha +\lambda .
\end{array}\]
Thus (\ref{geg}) becomes
\begin{eqnarray*}
{\cal S}_{\alpha ,\lambda }(z, \partial_z)
&:=&(1-z^2) \partial_z^2-2(1+\alpha )z \partial_z
+\lambda ^2-\Big(\alpha +\frac{1}{2}\Big)^2.\end{eqnarray*}
The Lie-algebraic parameters have an interesting interpretation in terms of the natural basis of the Cartan algebra of the Lie algebra $so(5)$ \cite{DM}.
\subsection{Equivalence with the hypergeometric equation}
The Gegenbauer equation is equivalent to certain subclasses of the hypergeometric
equation by a number of different substitutions.
First of all, we can reduce the Gegenbauer equation to the hypergeometric equation by
two affine transformations. They move the singular points from $-1$, $1$ to
$0$, $1$ or $1$, $0$:
\begin{equation}\begin{array}{l}
{\cal S}({a},{b};z, \partial_z)={\cal F}({a},{b};\frac{{a}+{b}+1}{2}
;u, \partial_u),\end{array}\label{ha2}\end{equation}
where
\[\begin{array}{rl}&u=\frac{1-z}{2},\ \ \ z=1-2u,\\[3mm]
\hbox{or}&u=\frac{1+z}{2},\ \ \ z=-1+2u.
\end{array}\]
In the Lie-algebraic parameters
\[{\cal S}_{\alpha,\lambda}(z, \partial_z)={\cal F}_{\alpha,\alpha,2\lambda}(u, \partial_u).\]
Another pair of substitutions is a
consequence of the reflection invariance of the Gegenbauer equation (see
Subsect. \ref{a.5}):
\begin{equation}\begin{array}{rl}{\cal S}({a},{b};z, \partial_z)&=
4{\cal F}(\frac{{a}}{2},\frac{{b}}{2};\frac{1}{2};w, \partial_w),\\[2ex]
z^{-1}{\cal S}({a},{b};z, \partial_z)z&=
4{\cal F}(\frac{{a}+1}{2},\frac{{b}+1}{2};\frac32;w, \partial_w),
\end{array}\label{ha1}\end{equation}
where \[w=z^2,\ \ \ \
z=\sqrt w.\]
In the Lie-algebraic parameters
\begin{eqnarray}
\label{1/2}{\cal S}_{\alpha,\lambda}(z, \partial_z)&=&{\cal F}_{-\frac{1}{2},\alpha,\lambda}(w, \partial_w),\\
\label{-1/2}z^{-1}{\cal S}_{\alpha,\lambda}(z, \partial_z)z&=&{\cal F}_{\frac{1}{2},\alpha,\lambda}(w, \partial_w).
\end{eqnarray}
\subsection{Symmetries}
\label{symcom}
All the operators below equal ${\cal S}_{\alpha ,\lambda }(w, \partial_w)$ for an
appropriate $w$:
\[\begin{array}{rrcl}
w=\pm z:&&&\\
&&{\cal S}_{\alpha ,\pm\lambda },&\\[2ex]
w=\pm z:&&&\\
&(z^2-1)^{-\alpha }&{\cal S}_{-\alpha ,\pm\lambda }&
(z^2-1)^{\alpha },\\[2ex]
w=\frac{\pm z}{(z^2-1)^{\frac{1}{2}}}:&&&\\
& (z^2-1)^{\frac{1}{2}(\alpha +\lambda +\frac52)}
&{\cal S}_{\lambda ,\pm\alpha }
& (z^2-1)^{\frac{1}{2}(-\alpha -\lambda -\frac{1}{2})},\\[2ex]
w=\frac{\pm z}{(z^2-1)^{\frac{1}{2}}}:&&&\\
& (z^2-1)^{\frac{1}{2}(\alpha -\lambda +\frac52)}&
{\cal S}_{-\lambda ,\pm\alpha }&
(z^2-1)^{\frac{1}{2}(-\alpha +\lambda -\frac{1}{2})}.
\end{array}\]
The symmetries of the Gegenbauer operator
have an interpretation in terms of the Weyl group of the Lie
algebra $so(5)$.
Note that the first two symmetries from the above table are
inherited from the hypergeometric equation through the substitution
(\ref{ha2}).
The symmetries involving $w=\frac{\pm z}{(z^2-1)^{\frac{1}{2}}}$ go under the name of the {\em Whipple transformation}. To obtain them we first use
the substitution (\ref{ha1}) $z\to z^2$, then
$z^2\to\frac{z^2}{1-z^2}$, which is one of the symmetries from Kummer's table, and finally
the substitution (\ref{ha1}) in the opposite direction
$\frac{z^2}{1-z^2}\to\sqrt{\frac{z^2}{1-z^2}}$.
We will continue our discussion of the Whipple transformation in Subsect.
\ref{The Riemann surface of the Gegenbauer equation}.
\subsection{Factorizations and commutation relations}
\label{symcoma}
There are several ways of factorizing the Gegenbauer operator:
\begin{eqnarray*}{\cal S}_{\alpha,\lambda}&=&\Big((1-z^2)\partial_z
-2(1+\alpha)z\Big)\partial_z\\
&&+\Big(\alpha+\lambda+\frac12\Big)\Big(-\alpha+\lambda-\frac12\Big)\\
&=&\partial_z\Big((1-z^2)\partial_z
-2\alpha z\Big)\\
&&+\Big(\alpha+\lambda-\frac12\Big)\Big(-\alpha+\lambda+\frac12\Big),\\
(1-z^2){\cal S}_{\alpha,\lambda}&=&\Big((1-z^2)\partial_z
+\big(\alpha-\lambda+\frac32\big)z\Big)\Big((1-z^2)\partial_z
+\big(\alpha+\lambda+\frac12\big)z\Big)\\
&&-\Big(\alpha+\lambda+\frac12\Big)\Big(\alpha-\lambda+\frac32\Big)\\
&=&\Big((1-z^2)\partial_z
+\big(\alpha+\lambda+\frac32\big)z\Big)\Big((1-z^2)\partial_z
+\big(\alpha-\lambda+\frac12\big)z\Big)\\
&&-\Big(\alpha-\lambda+\frac12\Big)\Big(\alpha+\lambda+\frac32\Big);
\end{eqnarray*}
\begin{eqnarray*}z^2{\cal S}_{\alpha,\lambda}&=&\Big(z(1-z^2)\partial_z
-\alpha-\lambda-\frac32+\big(-\alpha+\lambda-\frac12\big)z^2\Big)
\Big(z\partial_z+\alpha+\lambda+\frac12\Big)\\
&&+\Big(\alpha+\lambda+\frac12\Big)\Big(\alpha+\lambda+\frac32\Big)\\
&=&
\Big(z\partial_z+\alpha+\lambda-\frac32\Big)
\Big(z(1-z^2)\partial_z
-\alpha-\lambda+\frac12+\big(-\alpha+\lambda-\frac12\big)z^2\Big)\\
&&+\Big(\alpha+\lambda-\frac12\Big)\Big(\alpha+\lambda-\frac32\Big)\\
&=&\Big(z(1-z^2)\partial_z
-\alpha+\lambda-\frac32+\big(-\alpha-\lambda-\frac12\big)z^2\Big)
\Big(z\partial_z+\alpha-\lambda+\frac12\Big)\\
&&+\Big(\alpha-\lambda+\frac12\Big)\Big(\alpha-\lambda+\frac32\Big)\\
&=&
\Big(z\partial_z+\alpha-\lambda-\frac32\Big)
\Big(z(1-z^2)\partial_z
-\alpha+\lambda+\frac12+\big(-\alpha-\lambda-\frac12\big)z^2\Big)\\
&&+\Big(\alpha-\lambda-\frac12\Big)\Big(\alpha-\lambda-\frac32\Big).
\end{eqnarray*}
The following commutation relations can be derived from the factorizations:
\[\begin{array}{rrl}
& \partial_z&
{\cal S}_{\alpha ,\lambda } \\[0.4ex]
&=\ \ \ {\cal S}_{\alpha +1,\lambda }& \partial_z,\\[2ex]
&((1-z^2) \partial_z-2\alpha z)&
{\cal S}_{\alpha ,\lambda }\\[0.4ex]
&=\ \ \ {\cal S}_{\alpha -1,\lambda }&((1-z^2) \partial_z-2\alpha z),\\[2ex]
&((1-z^2) \partial_z-
(\alpha +\lambda +\frac12 )z)&(1-z^2){\cal S}_{\alpha ,\lambda }\\[0.4ex]
&=\ \ \ (1-z^2){\cal S}_{\alpha ,\lambda +1}&
((1-z^2) \partial_z-
(\alpha +\lambda +\frac{1}{2})z),\\[2ex]
&((1-z^2) \partial_z-(\alpha -\lambda +\frac12)z)&
(1-z^2){\cal S}_{\alpha ,\lambda }\\[0.4ex]
&=\ \ \ (1-z^2){\cal S}_{\alpha ,\lambda -1}&((1-z^2) \partial_z-(\alpha -\lambda +\frac{1}{2})z);
\\[3ex]
&(z \partial_z+
\alpha -\lambda +\frac12)&z^2{\cal S}_{\alpha ,\lambda }\\[0.4ex]
&=\ \ \ z^2{\cal S}_{\alpha +1,\lambda -1}&(z \partial_z+
\alpha -\lambda +\frac{1}{2}),\\[2ex]
&(z(1{-}z^2) \partial_z{-}\alpha{+}\lambda{+}\frac12{-}(\alpha{+}\lambda{+}\frac12)z^2)
&z^2{\cal S}_{\alpha ,\lambda }\\[0.4ex]
&=\ \ \ z^2{\cal S}_{\alpha -1,\lambda +1}
&(z(1{-}z^2) \partial_z
{-}\alpha{+}\lambda{+}\frac12{-}(\alpha{+}\lambda{+}\frac12)z^2),
\\[2ex]
&(z \partial_z+
\alpha +\lambda +\frac12)&z^2{\cal S}_{\alpha ,\lambda }\\[0.4ex]
&=\ \ \ z^2{\cal S}_{\alpha +1,\lambda +1}&(z \partial_z+
\alpha +\lambda +\frac{1}{2}),\\[2ex]
&(z(1{-}z^2) \partial_z
{-}\alpha{-}\lambda{+}\frac12{-}(\alpha{-}\lambda{+}\frac12)z^2
)&
z^2{\cal S}_{\alpha ,\lambda }\\[0.4ex]
&=\ \ \ z^2{\cal S}_{\alpha -1,\lambda -1}&(z(1{-}z^2) \partial_z
{-}\alpha{-}\lambda{+}\frac12{-}(\alpha{-}\lambda{+}\frac12)z^2).
\end{array}\]
Each of these commutation relations is associated with a root
of the Lie algebra $so(5)$.
Note that only the first pair of commutation relations is directly inherited from the basic commutation relations of the hypergeometric equation of Subsect.
\ref{commu}. The next pair comes from what we called additional
commutation relations (see Subsect. \ref{addi}), which in the reflection invariant case simplify, so that they can be counted as basic commutation relations (see a discussion in Subsect. \ref{Properties of hypergeometric type operators}).
Note that the Whipple transformation transforms the first pair of the
commutation relations into the second, and the other way around.
The last four commutation relations form a separate class -- they can be obtained by applying consecutively an appropriate pair from the first four commutation relations.
\subsection{The Riemann surface of the Gegenbauer equation}
\label{The Riemann surface of the Gegenbauer equation}
Let us analyze more closely the Whipple symmetry.
First let us make precise the meaning of the holomorphic function involved in this symmetry. If $z\in\Omega_+:={\mathbb C}\backslash[-1,1]$, then
$1-z^{-2}\in{\mathbb C}\backslash]-\infty,0]$. Therefore,
\begin{equation}
\frac{z}{(z^2-1)^{\frac{1}{2}}}:=\frac{1}{(1-z^{-2})^{\frac{1}{2}}}\label{anala}\end{equation}
defines a unique analytic function on
$z\in\Omega_+$ (where on the right we have the principal branch of the square root). Note that, for $z\to\infty$, (\ref{anala}) converges to $1$.
Consider a second copy of $\Omega_+$, denoted $\Omega_-$. Glue them together along $]-1,1[$, so that crossing $]-1,1[$ we go from $\Omega_\pm$ to $\Omega_\mp$. The resulting complex manifold will be called $\Omega$.
The elements of $\Omega_\pm$ corresponding to $z\in
{\mathbb C}\backslash]-1,1[$ will be denoted $z_\pm$.
$\Omega$
is biholomorphic to the sphere with 4
punctures, which correspond to the points $-1,1,\infty_+,\infty_-$.
It is easy to see that $\Omega$ is the Riemann surface of the maximal holomorphic function extending (\ref{anala}).
On $\Omega_-$ it equals
$-\frac{z}{(z^{2}-1)^{\frac{1}{2}}}$.
It is useful to reinterpret this holomorphic function as a biholomorphic function from $\Omega$ into itself:
\begin{eqnarray*}
\tau(z_+)&:=&
\left\{\begin{array}{ll}
\left(\frac{z}{\sqrt{z^2-1}}\right)_+, &{} {\rm Re} z>0,\\
\left(\frac{z}{\sqrt{z^2-1}}\right)_-,& {\rm Re} z<0,
\end{array}\right.\\
\tau(z_-)&:=&
\left\{\begin{array}{ll}
\left(-\frac{z}{\sqrt{z^2-1}}\right)_-, &{} {\rm Re} z>0,\\
\left(-\frac{z}{\sqrt{z^2-1}}\right)_+,& {\rm Re} z<0.
\end{array}\right.\end{eqnarray*}
We also introduce
\begin{eqnarray*}
\epsilon(z_\pm)&:=&z_\mp,\\
(-1)z_\pm&:=&(-z)_\pm
.\end{eqnarray*}
Note that $\tau^2={\rm id}$, $\epsilon^2={\rm id}$, $(-1)^2={\rm id}$, $\tau\epsilon=(-1)\epsilon\tau$.
$\tau$ and $\epsilon$ generate
a group isomorphic to the group of
the symmetries of the square. The vertices of this square can be identified with
$(1,\infty_+,-1,\infty_-)$.
They are permuted by these transformations as follows:
\[\begin{array}{r}
\epsilon(1,\infty_+,-1,\infty_-)=(1,\infty_-,-1,\infty_+),\\[3mm]
(-1)(1,\infty_+,-1,\infty_-)=(-1,\infty_+,1,\infty_-),\\[3mm]
\tau(1,\infty_+,-1,\infty_-)=(\infty_+,1,\infty_-,-1).\end{array}\]
It is useful to view the Gegenbauer equation as defined on $\Omega$.
\subsection{Integral representations}
\begin{theoreme}\begin{enumerate}\item
Let $[0,1]\ni t\mapsto\gamma(t)$ satisfy
\[(t^2-1)^{\frac{{b}-{a}+1}{2}}(t-z)^{-{b}-1}\Big|_{\gamma(0)}^{\gamma(1)}=0.\]
Then
\begin{equation}
{\cal S}({a},{b};z, \partial_z)
\int_\gamma (t^2-1)^{\frac{{b}-{a}-1}{2}}(t-z)^{-{b}}{\rm d} t=0
.\label{dad9}\end{equation}
\item
Let $[0,1]\ni t\mapsto\gamma(t)$ satisfy
\begin{eqnarray*}&&
(t^2+2tz+1)^{\frac{-{b}-{a}}{2}+1}t^{b-2}
\Big|_{\gamma(0)}^{\gamma(1)}=0.\end{eqnarray*}
Then
\begin{equation}
{\cal S}({a},{b};z, \partial_z)
\int_\gamma(t^2+2tz+1)^{\frac{-{b}-{a}}{2}}t^{{b}-1}{\rm d} t=0
.\label{dad8}\end{equation}
\end{enumerate}
\label{gqw}\end{theoreme}
\noindent{\bf Proof.}\ \
We compute that (\ref{dad9}) and (\ref{dad8}) equal
\begin{eqnarray*}
&&{a}
\int_\gamma \Big( \partial_t(t^2-1)^{\frac{{b}-{a}+1}{2}}(t-z)^{-{b}-1}\Big){\rm d} t,\\
&&\int\limits_\gamma\Big(
\partial_t(t^2+2tz+1)^{\frac{-{b}-{a}}{2}+1}t^{b-2}\Big){\rm d} t\end{eqnarray*}
respectively.
Note that
(\ref{dad9}) is essentially a special case of Theorem \ref{intr}.
(\ref{dad8}) can be derived from (\ref{dad9}). In fact, using
the Whipple symmetry
we see that, for an appropriate contour $\tilde \gamma$,
\begin{equation}\begin{array}{l}
(z^2-1)^{-\frac{{a}}{2}}\int\limits_{\tilde\gamma}(s^2-1)^{\frac{-{b}-{a}}{2}}
(s-\frac{z}{\sqrt{z^2-1}})^{{b}-1}{\rm d} s\end{array}\label{sol}\end{equation}
solves the Gegenbauer equation.
Then we change the variables
\[\begin{array}{l}
t=s\sqrt{z^2-1}-z,\ \ \ \ s=\frac{t+z}{\sqrt{z^2-1}},
\end{array}\]
and we obtain that (\ref{sol}) equals
\[\int_\gamma(t^2+2tz+1)^{\frac{-{b}-{a}}{2}}t^{{b}-1}{\rm d} t,\]
with an appropriate contour $\gamma$. \hfill$\Box$\medskip
Note that in the above theorem we can interchange $a$ and $b$. Thus we obtain four kinds of integral representations.
\subsection{Canonical forms}
The natural weight of the Gegenbauer operator equals $(z^2-1)^{\alpha}$,
so that
\[{\cal S}_{\alpha,\lambda}=
-(z^2-1)^{-\alpha}\partial_z(z^2-1)^{\alpha+1}\partial_z
+\lambda^2-\Big(\alpha+\frac{1}{2}\Big)^2.\]
The balanced form of the Gegenbauer operator is
\begin{eqnarray*}
&&
(z^2-1)^{\frac{\alpha}{2}}{\cal S}_{\alpha,\lambda}
(z^2-1)^{-\frac{\alpha}{2}}
\\
&=&
\partial_z(1-z^2)\partial_z-\frac{\alpha^2}{1-z^2}+\lambda^2-\frac{1}{4}.
\end{eqnarray*}
Note that the symmetries $\alpha\to-\alpha$ and $\lambda\to-\lambda$ are obvious in the balanced form.
\begin{remark}\label{Legen}
In the literature the Gegenbauer equation is used mostly in the context of Gegenbauer polynomials, that is for $-a=0,1,2,\dots$. In the general case,
instead of the Gegenbauer equation one usually considers the so-called
{\em associated Legendre equation}. It coincides with the balanced form of the Gegenbauer equation, except that one of its parameters is shifted by $\frac12$. In the standard form it is
\[
(1-z^2) \partial_z^2-2z \partial_z-\frac{m^2}{1-z^2}+l(l+1),\]
so that $m$, $l$ correspond to $\alpha$, $\lambda-\frac12$ according to our convention.
\end{remark}
\subsection{Even solution}
Inserting a power series into the equation, we see that
the Gegenbauer equation possesses an even solution equal to
\begin{eqnarray*}
S_{\alpha,\lambda}^+(z)&:=&\sum_{j=0}^\infty\frac{(\frac{{a}}{2})_j(\frac{{b}}{2})_j}{(2j)!}(2z)^{2j}\\
=\ F\Big(\frac{{a}}{2},\frac{{b}}{2};\frac{1}{2};z^2\Big)
&=&F_{-\frac{1}{2},\alpha,\lambda}(z^2)
.\end{eqnarray*}
It is the unique solution of the Gegenbauer equation satisfying
\begin{equation} S_{\alpha,\lambda}^+(0)=1,\ \ \ \ \ \frac{{\rm d}}{{\rm d} z}S_{\alpha,\lambda}^+(0)=0.\label{in1}\end{equation}
One way to derive the expression in terms of the hypergeometric
function
is to use the transformation (\ref{ha1}).
We have the identities
\begin{eqnarray*}
S_{\alpha ,\lambda }^+(z)&
=&(1-z^2)^{\frac{-1-2\alpha \pm2\lambda }{4}}
S_{\mp\lambda ,\alpha }^+
\Big(\frac{{\rm i} z}{\sqrt{1-z^2}}\Big)
\\
&=&(1-z^2)^{-\alpha }
S_{-\alpha ,\lambda }^+(z),\end{eqnarray*}
besides the obvious ones
\[S_{\alpha,\lambda}^+(z)=
S_{\alpha,-\lambda}^+(z)=S_{\alpha,\lambda}^+(-z)=S_{\alpha,-\lambda}^+(-z).\]
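The second of the identities above can be checked numerically through the hypergeometric expression for $S^+$; a sketch assuming {\tt mpmath}:
\begin{verbatim}
from mpmath import mp, mpf, hyp2f1

mp.dps = 25
alpha, lam, z = mpf('0.4'), mpf('0.25'), mpf('0.3')

def Splus(al, la):
    a, b = mpf('0.5') + al - la, mpf('0.5') + al + la
    return hyp2f1(a/2, b/2, mpf('0.5'), z**2)

print(Splus(alpha, lam), (1 - z**2)**(-alpha) * Splus(-alpha, lam))  # agree
\end{verbatim}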
\subsection{Odd solution}
Similarly, the Gegenbauer equation possesses an odd solution equal to
\begin{eqnarray*}
S_{\alpha,\lambda}^-(z)&:=&\sum
\limits_{j=0}^\infty\frac{(\frac{{a}+1}{2})_j(\frac{{b}+1}{2})_j}{(2j+1)!}
(2z)^{2j+1} \\
=\ 2zF\Big(\frac{{a}+1}{2},\frac{{b}+1}{2};\frac{3}{2};z^2\Big)
&=& 2zF_{\frac{1}{2},\alpha,\lambda}(z^2).
\end{eqnarray*}
It is the unique solution of the Gegenbauer equation satisfying
\begin{equation} S_{\alpha,\lambda}^-(0)=0,\ \ \ \ \ \frac{{\rm d}}{{\rm d} z}S_{\alpha,\lambda}^-(0)=2.\label{in2}\end{equation}
We have the identities
\begin{eqnarray*}
S_{\alpha ,\lambda }^-(z)&
=&-{\rm i}(1-z^2)^{\frac{-1-2\alpha \pm2\lambda }{4}}
S_{\mp\lambda ,\alpha }^-
\Big(\frac{{\rm i} z}{\sqrt{1-z^2}}\Big)\\
&=&(1-z^2)^{-\alpha }
S_{-\alpha ,\lambda }^-(z),\end{eqnarray*}
besides the obvious ones:
\[S_{\alpha,\lambda}^-(z)=
S_{\alpha,-\lambda}^-(z)=-S_{\alpha,\lambda}^-(-z)=-S_{\alpha,-\lambda}^-(-z).\]
\subsection{Standard solutions}
As usual, by standard solutions we mean solutions with a simple behavior around singular points. The singular points of the Gegenbauer equation are at $\{1,-1,\infty\}$. The discussion of the point $-1$ can be easily reduced to that of $1$. Therefore, it is enough to discuss $2\times2$ solutions corresponding to two indices at $1$ and $\infty$.
By Thm \ref{gqw}, for appropriate $\gamma_1$, $\gamma_2$ the integrals
\begin{eqnarray}
\int\limits_{\gamma_1}(t^2-1)^{-\frac12+\lambda }(t-z)^{-\frac12-\alpha -\lambda }{\rm d}
t,
&&
\label{ka1a-}\\
\int\limits_{\gamma_2}(t^2+2tz+1)^{-\alpha-\frac12}(-t)^{-\frac12+\alpha +\lambda }{\rm d} t
&&
\label{ka2a-}\end{eqnarray}
are solutions.
The natural endpoints of $\gamma_1$ are $-1,1,z,\infty$. We will see that all standard solutions can be obtained from such integrals.
The natural endpoints of $\gamma_2$ are $z+\sqrt{z^2-1},z-\sqrt{z^2-1},
0,\infty$. Similarly, all standard solutions can be obtained from the integrals over contours with these endpoints.
It is interesting to note that in some respects the theory of the Gegenbauer equation is more complicated than that of the hypergeometric equation. One manifestation of this is the relatively large number of natural normalizations of solutions. Indeed, let us consider, e.g., integral representations of the type (\ref{ka1a-}).
The natural endpoints fall into two categories: $\{1,-1\}$ and $\{z,\infty\}$.
Therefore, we have 3 kinds of contours joining two of these endpoints:
$[-1,1]$, $[z,\infty[$ and the contours joining two distinct categories. This corresponds to three distinct natural normalizations, which we describe in what follows.
\subsubsection{Solution $\sim1$ at $1$}
\label{subsub1}
If $\alpha \neq-1,-2,\dots$, then the unique solution of the Gegenbauer
equation equal to 1 at $1$ is the following function:
\begin{eqnarray*}
S_{\alpha ,\lambda }(z):&=&F_{\alpha ,\alpha ,2\lambda }
\Big(\frac{1-z}{2}\Big)=F\Big({a},{b};\frac{{a}+{b}+1}{2};\frac{1-z}{2}\Big)\\&=&
F_{\alpha ,-\frac12,\lambda }(1-z^2)=
F\Big(\frac{{a}}{2},\frac{{b}}{2};\frac{{a}+{b}+1}{2};1-z^2\Big).\end{eqnarray*}
We will also introduce several alternatively normalized functions:
\begin{eqnarray*}
{\bf S}_{\alpha,\lambda}(z)&:=&\frac{1}{\Gamma(\alpha+1)}S_{\alpha,\lambda}(z)\\
&=&\frac{1}{\Gamma(\frac{a+b+1}{2})}
F\Big({a},{b};\frac{{a}+{b}+1}{2};\frac{1-z}{2}\Big)=
{\bf F}_{\alpha,\alpha,2\lambda}\Big(\frac{1-z}{2}\Big),\\[4ex]
{\bf S}_{\alpha,\lambda}^{\rm\scriptscriptstyle I}(z)&:=&2^{-\frac12-\alpha+\lambda}\frac{\Gamma(\frac{1+2\alpha-2\lambda}{2})\Gamma(\frac{1+2\lambda}{2})}{\Gamma(\alpha+1)}S_{\alpha,\lambda}
(z)
\\
&=&
2^{-a}\frac{\Gamma(a)\Gamma(\frac{-a+b+1}{2})}{\Gamma(\frac{a+b+1}{2})}
F\Big({a},{b};\frac{{a}+{b}+1}{2};\frac{1-z}{2}\Big)=2^{-\frac12-\alpha+\lambda}
{\bf F}_{\alpha,\alpha,2\lambda}^{\rm\scriptscriptstyle I}\Big(\frac{1-z}{2}\Big),\\[4ex]
{\bf S}_{\alpha,\lambda}^{{\rm\scriptscriptstyle {I}{I}}}(z)&:=&\frac{\Gamma(\frac{1+2\alpha-2\lambda}{2})\Gamma(\frac{1+2\alpha+2\lambda}{2})}{\Gamma(2\alpha+1)}S_{\alpha,\lambda}(z)\\
&=&
\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}
F\Big({a},{b};\frac{{a}+{b}+1}{2};\frac{1-z}{2}\Big),\\[4ex]
{\bf S}_{\alpha,\lambda}^{{\rm\scriptscriptstyle 0}}(z)&:=&2^{2\alpha}\frac{\Gamma(\frac{1+2\alpha}{2})^2}{\Gamma(2\alpha+1)}S_{\alpha,\lambda}(z)=
\sqrt{\pi}
\frac{\Gamma(\frac{1+2\alpha}{2})}{\Gamma(\alpha+1)}S_{\alpha,\lambda}(z)
\\
&=&
2^{a+b-1}\frac{\Gamma(\frac{a+b}{2})^2}{\Gamma(a+b)}
F\Big({a},{b};\frac{{a}+{b}+1}{2};\frac{1-z}{2}\Big).
\end{eqnarray*}
Assuming that $z\not\in]-\infty,-1]$, we have the following
integral representations: for ${\rm Re}\alpha+\frac12>{\rm Re}\lambda>-\frac{1}{2}$
\begin{eqnarray}
\int\limits_{-\infty}^{-1}(t^2-1)^{-\frac12+\lambda }(z-t)^{-\frac12-\alpha -\lambda }{\rm d}
t\nonumber
&=&
{\bf S}^{\rm\scriptscriptstyle I}_{\alpha ,\lambda }(z)
,\label{ka1a}\end{eqnarray}
and for ${\rm Re}\alpha+\frac{1}{2}>|{\rm Re}\lambda|$
\begin{eqnarray}
\int\limits_0^{\infty}(t^2+2tz+1)^{-\alpha-\frac12}t^{-\frac12+\alpha +\lambda }{\rm d} t
\nonumber&=&
{\bf S}_{\alpha ,\lambda }^{{\rm\scriptscriptstyle {I}{I}}}(z).
\label{ka2a}\end{eqnarray}
\subsubsection{Solution $\sim 2^{-\alpha}(1-z)^{-\alpha }$ at $1$}
\label{subsub2}
If $\alpha \neq1,2,\dots$, then the unique solution of the Gegenbauer
equation behaving as
$2^{-\alpha}(1-z)^{-\alpha }$
at $1$ is the following function:
\begin{eqnarray*}
(1-z^2)^{-\alpha }S_{-\alpha ,-\lambda }(z)&=&2^{-\alpha}(1-z)^{-\alpha }F_{-\alpha ,\alpha ,-2\lambda }
\Big(\frac{1-z}{2}\Big)\\&=&
(1-z^2)^{-\alpha }F_{-\alpha ,-\frac12,-\lambda }(1-z^2).\end{eqnarray*}
Assuming that $z\not\in]-\infty,-1]\cup[1,\infty[$, we have the following
integral representations: for $-{\rm Re}\alpha+\frac12>{\rm Re}\lambda>-\frac{1}{2}$
\begin{eqnarray}
\int\limits_{-1}^z(1-t^2)^{-\frac12+\lambda }(z-t)^{-\frac12-\alpha -\lambda }{\rm d}
t\nonumber
&=&
(1-z^2)^{-\alpha } {\bf S}^{\rm\scriptscriptstyle I}_{-\alpha ,\lambda }(z)
,\label{ka1b}
\end{eqnarray}
and for $\frac{1}{2}>{\rm Re}\alpha$
\begin{eqnarray}
\int\limits_{-{\rm i}\sqrt{1-z^2}-z}^{{\rm i}\sqrt{1-z^2}-z}
(t^2+2tz+1)^{-\alpha -\frac12}(-t)^{-\frac12+\alpha +\lambda }{\rm d} t
\nonumber
&=&
(1-z^2)^{-\alpha }
{\bf S}_{\alpha ,\lambda }^{{\rm\scriptscriptstyle 0}}(z).
\label{ka2b}\end{eqnarray}
\subsubsection{Solution $\sim z^{-{a}}$ at $\infty$}
\label{subsub3}
If $2\lambda \neq-1,-2,\dots$, then the unique solution of the Gegenbauer
equation behaving as $z^{-a}=z^{-\frac12-\alpha +\lambda }$ at $\infty$
is the following function:
\begin{eqnarray*}
(z^2-1)^{\frac{-1-2\alpha +2\lambda }{4}}S_{-\lambda ,-\alpha }\Big(\frac{z}{\sqrt{z^2-1}}\Big)
&=&(1+z)^{-\frac12-\alpha +\lambda }F_{-2\lambda ,\alpha ,-\alpha }
\Big(\frac{2}{1+z}\Big)\\&=&
z^{-\frac12-\alpha +\lambda }F_{-\lambda ,\alpha ,\frac12}(z^{-2}).\end{eqnarray*}
Assuming that $z\not\in]-\infty,1]$, we have the following
integral representations: for $\frac12>{\rm Re}\lambda$
\begin{eqnarray}
\int\limits_{-1}^1(t^2-1)^{-\frac12-\lambda }(z-t)^{-\frac12-\alpha +\lambda }{\rm d}
t\nonumber
&=&
(z^2-1)^{\frac{-1-2\alpha +2\lambda }{4}}{\bf S}_{-\lambda ,\alpha }^{{\rm\scriptscriptstyle 0}}\Big(\frac{z}{\sqrt{z^2-1}}\Big)
,\label{ka1c}
\end{eqnarray}
and for $-{\rm Re}\lambda+\frac{1}{2}>-{\rm Re}\alpha>-\frac{1}{2}$
\begin{eqnarray}
\int\limits_{\sqrt{z^2-1}-z}^{0}
(t^2+2tz+1)^{-\alpha-\frac12}(-t)^{-\frac12+\alpha -\lambda }{\rm d} t
\nonumber
&=&(z^2-1)^{\frac{-1-2\alpha +2\lambda }{4}}{\bf S}_{-\lambda ,\alpha }^{\rm\scriptscriptstyle I}\Big(\frac{z}{\sqrt{z^2-1}}\Big).
\label{ka2c}\end{eqnarray}
\subsubsection{Solution $\sim z^{-{b}}$ at $\infty$}
\label{subsub4}
If $2\lambda \neq1,2,\dots$, then the unique solution of the Gegenbauer
equation behaving as $z^{-b}=z^{-\frac12-\alpha -\lambda }$ at $\infty$
is the following function:
\begin{eqnarray*}
(z^2-1)^{\frac{-1-2\alpha -2\lambda }{4}}S_{\lambda ,\alpha }\Big(\frac{z}{\sqrt{z^2-1}}\Big)
&=&(1+z)^{-\frac12-\alpha -\lambda }F_{2\lambda ,\alpha ,\alpha }
\Big(\frac{2}{1+z}\Big)\\&=&
z^{-\frac12-\alpha -\lambda }F_{\lambda ,\alpha ,\frac12}(z^{-2}).\end{eqnarray*}
Assuming that $z\not\in]-\infty,1]$, we have the following
integral representations: for ${\rm Re}\lambda+\frac12>|{\rm Re}\alpha|$
\begin{eqnarray}
\int\limits_z^\infty(t^2-1)^{-\frac12-\lambda }(t-z)^{-\frac12-\alpha +\lambda }{\rm d}
t\nonumber
&=&(z^2-1)^{\frac{-1-2\alpha -2\lambda }{4}}{\bf S}_{\lambda ,\alpha }^{{\rm\scriptscriptstyle {I}{I}}}\Big(\frac{z}{\sqrt{z^2-1}}\Big)
,\label{ka1d}
\end{eqnarray}
and for ${\rm Re}\lambda+\frac{1}{2}>-{\rm Re}\alpha>-\frac{1}{2}$
\begin{eqnarray}
\int\limits_\infty^{-\sqrt{z^2-1}-z}
(t^2+2tz+1)^{-\alpha -\frac12}t^{-\frac12+\alpha -\lambda }{\rm d} t
\nonumber
&=&(z^2-1)^{-\frac14-\frac{\alpha}{2} -\frac{\lambda}{2}}{\bf S}_{\lambda ,\alpha }^{\rm\scriptscriptstyle I}\Big(\frac{z}{\sqrt{z^2-1}}\Big)
.
\label{ka2d}\end{eqnarray}
\begin{remark} As mentioned in Remark \ref{Legen}, in the literature instead of the Gegenbauer equation the associated Legendre equation usually appears.
One class of its standard solutions are the
{\em associated Legendre function of the 1st kind}
\begin{eqnarray*}
{\bf P}_l^m(z)&=&\left(\frac{z+1}{z-1}\right)^{\frac{m}{2}}
{\bf F} \Big(-l,l+1;1-m;\frac{1-z}{2}\Big)\\
&=&\frac{2^m}{(z^2-1)^{\frac{m}{2}}}
{\bf F} \Big(1-m+l,-m-l;1-m;\frac{1-z}{2}\Big)\\
&=&\frac{2^m}{(z^2-1)^{\frac{m}{2}}}
{\bf S}_{-m,l+\frac12}(z),\end{eqnarray*}
which up to a constant are $(z^2-1)^{\frac{m}{2}}$ times
the solutions of Subsubsect. \ref{subsub2}.
Another class of solutions are the {\em associated Legendre function of the 2nd kind}
\begin{eqnarray*}
{\bf Q}_l^m(z)&=&
\frac{(z^2-1)^{\frac{m}{2}}}{2^{l+1}z^{l+m+1}} {\bf F} \Big(\frac{l+m+2}{2},\frac{l+m+1}{2};l+\frac{3}{2};z^{-2}\Big)\\
&=&
2^{-l-1}(z^2-1)^{-\frac{l+1}{2}}{\bf S}_{l+\frac12,m}\Big(\frac{z}{\sqrt{z^2-1}}\Big),\end{eqnarray*}
which up to a constant are $(z^2-1)^{\frac{m}{2}}$ times
the solutions of Subsubsect.
\ref{subsub4}.
(In the literature one can find a couple of other varieties of associated Legendre functions of the 1st and 2nd kind, differing by their normalization, see e.g. \cite{NIST}).
\end{remark}
\subsection{Connection formulas}
We can express the standard solutions in terms of the even and odd solutions:
\begin{eqnarray*}\nonumber
{\bf S}_{\alpha ,\lambda }(z)
&=&\frac{\sqrt\pi}
{\Gamma(\frac{3}{4}+\frac{\alpha}{2} -\frac{\lambda }{2})\Gamma(\frac{3}{4}+\frac{\alpha}{2} +\frac{\lambda }{2})}
S_{\alpha ,\lambda }^+(z)\\[1ex]&&
+\frac{\sqrt{\pi}}
{\Gamma(\frac{1}{4}+\frac{\alpha}{2} -\frac{\lambda}{2})\Gamma(\frac{1}{4}+\frac{\alpha}{2} +\frac{\lambda }{2})}
S_{\alpha ,\lambda }^-(z),\\[2ex]
(1-z^2)^{-\alpha }
{\bf S}_{-\alpha ,-\lambda }(z)
&=&\frac{\sqrt\pi}
{\Gamma(\frac{3}{4}-\frac{\alpha}{2} +\frac{\lambda}{2})\Gamma(\frac{3}{4}-\frac{\alpha}{2} -\frac{\lambda }{2})}
S_{\alpha ,\lambda }^+(z)\\[1ex]
&&+\frac{\sqrt{\pi}}
{\Gamma(\frac{1}{4}-\frac{\alpha}{2} +\frac{\lambda }{2})\Gamma(\frac{1}{4}-\frac{\alpha}{2} -\frac{\lambda }{2})}
S_{\alpha ,\lambda }^-(z),\\[2ex]
(1-z^2)^{-\frac{1}{4}-\frac{\alpha}{2} +\frac{\lambda }{2}}{\bf S}_{-\lambda ,-\alpha } (\frac{z}{\sqrt{z^2-1}})
&=&\frac{\sqrt\pi}
{\Gamma(\frac{3}{4}-\frac{\alpha}{2} -\frac{\lambda }{2})\Gamma(\frac{3}{4}+\frac{\alpha}{2} -\frac{\lambda }{2})}
S_{\alpha ,\lambda }^+(z)\\[1ex]
&&+\frac{{\rm i}\sqrt{\pi}}
{\Gamma(\frac{1}{4}-\frac{\alpha}{2} -\frac{\lambda }{2})\Gamma(\frac{1}{4}+\frac{\alpha}{2} -\frac{\lambda }{2})}
S_{\alpha ,\lambda }^-(z),\\[2ex]
(1-z^2)^{-\frac{1}{4}-\frac{\alpha}{2} -\frac{\lambda }{2}}{\bf S}_{\lambda ,\alpha } \Big(\frac{z}{\sqrt{z^2-1}}\Big)
&=&\frac{\sqrt\pi}
{\Gamma(\frac{3}{4}-\frac{\alpha}{2} +\frac{\lambda }{2})\Gamma(\frac{3}{4}+\frac{\alpha}{2} +\frac{\lambda }{2})}
S_{\alpha ,\lambda }^+(z)\\[1ex]
&&+\frac{{\rm i}\sqrt{\pi}}
{\Gamma(\frac{1}{4}-\frac{\alpha}{2} +\frac{\lambda}{2})\Gamma(\frac{1}{4}+\frac{\alpha}{2} +\frac{\lambda }{2})}
S_{\alpha ,\lambda }^-(z).
\end{eqnarray*}
\subsection{Recurrence relations}
The following recurrence relations can be easily derived from the
commutation properties of Subsect. \ref{symcoma}
\begin{eqnarray*}
\partial_z {\bf S}_{\alpha ,\lambda }(z)&=&-\frac{1}{2}\Big(\frac12+\alpha -\lambda \Big)\Big(\frac12+\alpha +\lambda \Big)
{\bf S}_{\alpha +1,\lambda }(z),\\
\left((1-z^2) \partial_z -2\alpha z\right)
{\bf S}_{\alpha ,\lambda }(z)
&=&-2{\bf S}_{\alpha -1,\lambda }(z) ,\\[2ex]
\left((1-z^2) \partial_z -\Big(\frac12+\alpha +\lambda \Big) z\right){\bf S}_{\alpha ,\lambda }(z)
&=&-\Big(\frac12+\alpha +\lambda \Big) {\bf S}_{\alpha ,\lambda +1}(z),
\\
\left((1-z^2) \partial_z -\Big(\frac12+\alpha -\lambda \Big) z\right){\bf S}_{\alpha ,\lambda }(z)
&=&-\Big(\frac12+\alpha -\lambda \Big){\bf S}_{\alpha ,\lambda -1}(z),
\\[2ex]
\left(z \partial_z+\frac12+\alpha -\lambda \right){\bf S}_{\alpha ,\lambda }(z)
&=&\frac12\Big(\frac12+\alpha -\lambda \Big)\Big(\frac32+\alpha-\lambda\Big) {\bf S}_{\alpha +1,\lambda -1}(z),\\
\left(z(1{-}z^2) \partial_z{+}\Big(\frac12{-}\alpha {+}\lambda \Big)(1{-}z^2){-}2\alpha z^2
\right){\bf S}_{\alpha ,\lambda }(z)
&=&-2{\bf S}_{\alpha -1,\lambda +1}(z),\\[2ex]
\left(z \partial_z+\frac12+\alpha +\lambda \right){\bf S}_{\alpha ,\lambda }(z)
&=&\Big(\frac12+\alpha +\lambda \Big)(\alpha+1) {\bf S}_{\alpha +1,\lambda +1}(z),\\
\left(z(1{-}z^2) \partial_z{+}\Big(\frac12{-}\alpha {-}\lambda \Big)(1{-}z^2){-}2\alpha z^2
\right){\bf S}_{\alpha ,\lambda }(z)
&=&-2{\bf S}_{\alpha -1,\lambda -1}(z).
\end{eqnarray*}
\subsection{Gegenbauer polynomials}
If $-a=n=0,1,2,\dots$, then Gegenbauer functions are polynomials.
We will use two distinct normalizations of these polynomials.
The $C_n^{\rm\scriptscriptstyle I}$ polynomials have a natural Rodriguez-type definition:
\[
C_n^{{\rm\scriptscriptstyle I},\alpha}(z):=\frac{1}{2^nn!}(z^2-1)^{-\alpha} \partial_z^n
(z^2-1)^{n+\alpha}.
\]
The $C_n^{\rm\scriptscriptstyle {I}{I}}$ polynomials are defined as
\[C_n^{{\rm\scriptscriptstyle {I}{I}},\alpha}(z):=\frac{(2\alpha+1)_n}{(\alpha+1)_n}
C_n^{{\rm\scriptscriptstyle I},\alpha}(z).\]
\begin{remark}
The polynomials of the first kind are just the special case of the conventional Jacobi polynomials (see Rem. \ref{jaco}) with $\alpha=\beta$:
\[C_n^{{\rm\scriptscriptstyle I},\alpha}(z)=P_n^{\alpha,\alpha}(z).\]
The polynomials of the second kind are called in the literature the {\em Gegenbauer polynomials}. In the standard notation their parameter is shifted by $\frac12$:
\[C_n^{{\rm\scriptscriptstyle {I}{I}},\alpha}(z)=C_n^{\alpha+\frac12}(z).\]
\end{remark}
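The identification with Jacobi polynomials can be confirmed symbolically from the Rodriguez-type definition; a sketch assuming {\tt sympy}, whose {\tt jacobi} implements $P_n^{\alpha,\beta}$:
\begin{verbatim}
from sympy import symbols, diff, factorial, simplify, jacobi

z, alpha = symbols('z alpha')
n = 3

# (1/(2^n n!)) (z^2-1)^(-alpha) d_z^n (z^2-1)^(n+alpha)
CI = ((z**2 - 1)**(-alpha) * diff((z**2 - 1)**(n + alpha), z, n)
      / (2**n * factorial(n)))
print(simplify(CI - jacobi(n, alpha, alpha, z)))  # 0
\end{verbatim}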
When describing the properties of Gegenbauer polynomials we can choose
between $C_n^{{\rm\scriptscriptstyle I}}$ and $C_n^{{\rm\scriptscriptstyle {I}{I}}}$. We either give properties of
both kinds of polynomials or choose the kind that gives simpler
formulas.
Both kinds of polynomials solve the Gegenbauer equation:
\begin{eqnarray*}
\Big(
(1-z^2)\partial_z^2-2(1+\alpha)z\partial_z+n(n+2\alpha+1)\Big)C_n^{{\rm\scriptscriptstyle I}/{\rm\scriptscriptstyle {I}{I}},\alpha}(z)&&\\
=\ \ {\mathcal S}(-n,n+2\alpha+1;z,\partial_z)C_n^{{\rm\scriptscriptstyle I}/{\rm\scriptscriptstyle {I}{I}},\alpha}(z)&=&0.
\end{eqnarray*}
Generating functions:
\begin{eqnarray*}
(1-2tz+t^2(z^2-1))^{-\alpha}&=&\sum\limits_{n=0}^\infty
(2t)^nC_n^{{\rm\scriptscriptstyle I},-\alpha-n}(z),\\
(1-2zt+t^2)^{-\alpha-\frac12}&
=&\sum\limits_{n=0}^\infty C_n^{{\rm\scriptscriptstyle {I}{I}},\alpha}(z)t^n.
\end{eqnarray*}
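The second generating function can be checked order by order against the classical Gegenbauer polynomials $C_n^{\alpha+\frac12}$; a sketch assuming {\tt sympy}:
\begin{verbatim}
from sympy import symbols, series, simplify, gegenbauer, Rational

z, alpha, t = symbols('z alpha t')
N = 4

gf = series((1 - 2*z*t + t**2)**(-alpha - Rational(1, 2)),
            t, 0, N).removeO().expand()
for n in range(N):
    # C_n^{II,alpha}(z) = C_n^{alpha+1/2}(z)
    print(n, simplify(gf.coeff(t, n)
                      - gegenbauer(n, alpha + Rational(1, 2), z)))  # 0
\end{verbatim}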
Integral representations:
\begin{eqnarray*}C_n^{{\rm\scriptscriptstyle I},\alpha}(z)&=&\frac{1}{2\pi{\rm i}}\int\limits_{[0^+]}
\Big(1-2tz+t^2(z^2-1)\Big)^{\alpha+n}t^{-n-1}{\rm d} t,\\
C_n^{{\rm\scriptscriptstyle {I}{I}},\alpha}(z)&
=&\frac{1}{2\pi{\rm i}}\int\limits_{[0^+]}(1-2zt+t^2)^{-\alpha-\frac12}t^{-n-1}{\rm d} t
.\end{eqnarray*}
We give symmetries for both kinds of polynomials:
\begin{eqnarray*}C_n^{{\rm\scriptscriptstyle I},\alpha}(z)
&=&(- 1)^nC_n^{{\rm\scriptscriptstyle I},\alpha}(- z)\\
&=&\frac
{(2\alpha+1+n)_n}
{(\mp 2)^{n}(\alpha+\frac12)_n}(z^2-1)^{\frac{n}{2}}C_n^{{\rm\scriptscriptstyle I},-\frac{1}{2}-\alpha-n}
\Big(\frac{\pm z}{\sqrt{z^2-1}}\Big)
.\end{eqnarray*}
\begin{eqnarray*}C_n^{{\rm\scriptscriptstyle {I}{I}},\alpha}(z)
&=&(- 1)^nC_n^{{\rm\scriptscriptstyle {I}{I}},\alpha}(- z)\\
&=&\frac{(\mp 2)^n(\alpha+\frac12)_n}
{(2\alpha+1+n)_n}(z^2-1)^{\frac{n}{2}}C_n^{{\rm\scriptscriptstyle {I}{I}},-\frac{1}{2}-\alpha-n}
\Big(\frac{\pm z}{\sqrt{z^2-1}}\Big)
.\end{eqnarray*}
We give recurrence relations only for $C_n^{{\rm\scriptscriptstyle {I}{I}},\alpha}$; those for $C_n^{{\rm\scriptscriptstyle I},\alpha}$ differ by the coefficients on the right-hand sides, but have a comparable level of complexity:
\begin{eqnarray*}
\partial_z C_n^{{\rm\scriptscriptstyle {I}{I}},\alpha}(z)&=&(2\alpha+1) C_{n-1}^{{\rm\scriptscriptstyle {I}{I}},\alpha+1}(z),
\\
\left((1-z^2) \partial_z-2\alpha z\right)C_n^{{\rm\scriptscriptstyle {I}{I}},\alpha}(z)
&=&\frac{-(n+1)(n+2\alpha)}{2\alpha}C_{n+1}^{{\rm\scriptscriptstyle {I}{I}},\alpha-1}(z),
\\[5ex]
\left((1-z^2) \partial_z-(n+2\alpha+1)z\right)C_n^{{\rm\scriptscriptstyle {I}{I}},\alpha}(z)
&=&-(n+1)C_{n+1}^{{\rm\scriptscriptstyle {I}{I}},\alpha}(z),
\\
\left((1-z^2) \partial_z+nz\right)C_n^{{\rm\scriptscriptstyle {I}{I}},\alpha}(z)
&=&(n+2\alpha)C_{n-1}^{{\rm\scriptscriptstyle {I}{I}},\alpha}(z),\\[7ex]
(z \partial_z-n)C_n^{{\rm\scriptscriptstyle {I}{I}},\alpha}(z)&=&(2\alpha+1) C_{n-2}^{{\rm\scriptscriptstyle {I}{I}},\alpha+1}(z),\\
\left(z(1-z^2) \partial_z+1+n-(n+2\alpha+1)z^2\right)C_n^{{\rm\scriptscriptstyle {I}{I}},\alpha}(z)
&=&-\frac{(n+1)(n+2)}{2\alpha-1}C_{n+2}^{{\rm\scriptscriptstyle {I}{I}},\alpha-1}(z),\\[5ex]
\left(z \partial_z+n+2\alpha+1\right)C_n^{{\rm\scriptscriptstyle {I}{I}},\alpha}(z)&=&(2\alpha+1) C_n^{{\rm\scriptscriptstyle {I}{I}},\alpha+1}(z),\\
\left(z(1-z^2) \partial_z-n-2\alpha+nz^2\right)C_n^{{\rm\scriptscriptstyle {I}{I}},\alpha}(z)
&=&-\frac{(2\alpha+n-1)(2\alpha+n)}{2\alpha-1}C_{n}^{{\rm\scriptscriptstyle {I}{I}},\alpha-1}(z).
\end{eqnarray*}
The differential equation, the Rodriguez-type formula, the first generating function and the first integral representation
are special cases of
the corresponding formulas of Subsect.
\ref{Hypergeometric type polynomials}. Thus the polynomials
$C^{\rm\scriptscriptstyle I}$ belong to the scheme of Subsect.
\ref{Hypergeometric type polynomials}.
$C^{{\rm\scriptscriptstyle {I}{I}}}$ do not have a natural Rodriguez-type formula, and do not belong to the scheme of Subsect. \ref{Hypergeometric type polynomials}.
The $C^{{\rm\scriptscriptstyle I}}$ polynomials have simple expressions in terms of the Jacobi polynomials:
\begin{eqnarray*}
C_n^{{\rm\scriptscriptstyle I},\alpha}(z)&
=&(\pm 1)^nR_n^{\alpha,\alpha}\Big(\frac{1\mp z}{2}\Big)\\
&=&
\Big(\frac{\pm 1-z}{2}\Big)^n
R_n^{\alpha,-2\alpha-2n-1}\Big(\frac{2}{1\mp z}\Big)\\[3mm]
&=&
\Big(\frac{z\mp 1}{2}\Big)^nR_n^{-2\alpha-2n-1,\alpha}\Big(\frac{\pm 1+z}{\mp 1+z}\Big).
\end{eqnarray*}
We have several alternative expressions for $C^{\rm\scriptscriptstyle I}$ and $C^{\rm\scriptscriptstyle {I}{I}}$ polynomials:
\begin{eqnarray*}
C_n^{{\rm\scriptscriptstyle I},\alpha}(z)&:=&\lim\limits_{\nu\to n}(-1)^n(\nu{-}n){\bf S}_{\alpha,\nu+\alpha+\frac12}^{{\rm\scriptscriptstyle I}}(z)=
\lim\limits_{\nu\to n}(\nu{-}n){\bf F}_{\alpha,\alpha,2\nu+2\alpha+1}^{{\rm\scriptscriptstyle I}}\Big(\frac{1\mp z}{2}\Big)\\&=&(\pm1)^n\frac{(\alpha+1)_n}{n!}F\Big(-n,n+2\alpha+1;\alpha+1;
\frac{1\mp z}{2}\Big),\\
C_n^{{\rm\scriptscriptstyle {I}{I}},\alpha}(z)&:=&\lim\limits_{\nu\to n}(-1)^n(\nu{-}n){\bf S}_{\alpha,\nu+\alpha+\frac12}^{{\rm\scriptscriptstyle {I}{I}}}(z)\\
&=&(\pm1)^n\frac{(2\alpha+1)_n}{n!}F\Big(-n,n+2\alpha+1;\alpha+1;
\frac{1\mp z}{2}\Big)\\
&=&\sum\limits_{k=0}^{[\frac{n}{2}]}\frac{(-1)^k(\alpha+\frac12)_{n-k}}{k!(n-2k)!}(2z)^{n-2k}
.
\end{eqnarray*}
We give the values at $\pm1$ and the behavior at infinity for both kinds of polynomials:
\begin{eqnarray*}
C_n^{{\rm\scriptscriptstyle I},\alpha}(\pm1)&=\ (\pm 1)^n\frac{(\alpha+1)_n}{n!},\ \ \ \
\lim\limits_{z\to\infty}\frac{C_n^{{\rm\scriptscriptstyle I},\alpha}(z)}{z^n}&=\ \frac{2^{-n}(2\alpha+n+1)_n}{n!},\\
C_n^{{\rm\scriptscriptstyle {I}{I}},\alpha}(\pm1)&=\ (\pm 1)^n\frac{(2\alpha+1)_n}{n!},\ \ \ \
\lim\limits_{z\to\infty}\frac{C_n^{{\rm\scriptscriptstyle {I}{I}},\alpha}(z)}{z^n}&=\ \frac{2^n(\alpha+\frac12)_n}{n!}.
\end{eqnarray*}
The degenerate case has a simple expression in terms of $C^{\rm\scriptscriptstyle I}$
polynomials:
\[C_n^{{\rm\scriptscriptstyle I},\alpha}(z)=\Big(\frac{2}{z^2-1}\Big)^{\alpha}
C_{n+2\alpha}^{{\rm\scriptscriptstyle I},-\alpha}(z),\ \ \alpha\in{\mathbb Z}.\]
The initial conditions at $0$ and the identities for the even and odd case are given only for $C_n^{{\rm\scriptscriptstyle {I}{I}},\alpha}$, since those for $C_n^{{\rm\scriptscriptstyle I},\alpha}$ are more complicated:
\begin{eqnarray*}
C_{2m}^{{\rm\scriptscriptstyle {I}{I}},\alpha}(0)&=\ \frac{(-1)^m(\alpha+\frac12)_m}{m!},\ \
\partial_zC_{2m}^{{\rm\scriptscriptstyle {I}{I}},\alpha}(0)&=\ 0;\\
C_{2m+1}^{{\rm\scriptscriptstyle {I}{I}},\alpha}(0)&=\ 0,\ \ \ \ \ \ \ \ \ \ \ \partial_zC_{2m+1}^{{\rm\scriptscriptstyle {I}{I}},\alpha}(0)
&=\ \frac{(-1)^m2(\alpha+\frac12)_{m+1}}{m!}.\end{eqnarray*}
\begin{eqnarray*}
C_{2m}^{{\rm\scriptscriptstyle {I}{I}},\alpha}(z)&=&(-1)^m\frac{(\alpha+\frac12)_m}{(\alpha+1)_m} R_m^{\alpha,-\frac12}(z^2)\\
&=&(-1)^m\frac{(\alpha+\frac12)_m}{m!}S_{\alpha,2m+\frac12+\alpha}^+(z)\\
&=&(-1)^m\frac{(\alpha+\frac12)_m}{m!}F\Big(-m,m+\frac12+\alpha;\frac12;z^2\Big)
,\\
C_{2m+1}^{{\rm\scriptscriptstyle {I}{I}},\alpha}(z)&=&(-1)^m\frac{(\alpha+\frac12)_{m+1}}{(\alpha+1)_m}
2zR_m^{\alpha,\frac12}(z^2)\\
&=&(-1)^m\frac{(\alpha+\frac12)_{m+1}}{m!}S_{\alpha,2m+\frac32+\alpha}^-(z)\\
&=&(-1)^m\frac{(\alpha+\frac12)_{m+1}}{m!}2zF\Big(-m,m+\frac32+\alpha;\frac32;z^2\Big)
.\end{eqnarray*}
We have the following special cases:
\begin{enumerate}
\item
If $\alpha\in{\mathbb Z}$, $-n\leq\alpha\leq\frac{-n-1}{2}$, then $C_n^{{\rm\scriptscriptstyle I},\alpha}=0$.
\item
If $\alpha\in{\mathbb Z}+\frac12$, $\frac{-n-1}{2}\leq\alpha\leq-\frac{1}{2}$, then $C_n^{{\rm\scriptscriptstyle {I}{I}},\alpha}=0$.
\item
If $\alpha\in{\mathbb Z}$, $\frac{-n+1}{2}\leq\alpha\leq-1$, then $C_n^{{\rm\scriptscriptstyle I}/{\rm\scriptscriptstyle {I}{I}},\alpha}=(1-z^2)^{-\alpha} W$, where $W$ is a polynomial not divisible by $1-z^2$.
\end{enumerate}
\subsection{Special cases}
When describing special cases of the Gegenbauer equation we will primarily use the Lie-algebraic parameters.
\subsubsection{The Legendre equation}
Suppose that one of the parameters is an integer. Using, if necessary, recurrence relations we can assume it is zero. After applying an appropriate symmetry, we can assume that $\alpha=0$. We obtain then the {\em Legendre operator}:
\begin{eqnarray}
{\cal S}_{0,\lambda }(z, \partial_z)
&=&(1-z^2) \partial_z^2-2z \partial_z
+\lambda ^2-\frac14.\label{legen}\end{eqnarray}
For the particular case $\lambda=0$ its solutions can be expressed by the so-called {\em complete elliptic integrals}.
The Legendre operator for polynomials of degree $n$ has the form
\begin{eqnarray*}
&&(1-z^2) \partial_z^2-2z \partial_z
+n(n+1).\end{eqnarray*}
The {\em Legendre polynomials} are special cases of both $C^{\rm\scriptscriptstyle I}$ and $C^{\rm\scriptscriptstyle {I}{I}}$:
\begin{eqnarray*}
P_n(z)&=&C_n^{{\rm\scriptscriptstyle I},0}(z)=C_n^{{\rm\scriptscriptstyle {I}{I}},0}(z)\\&=&\frac{1}{2^nn!} \partial_z^n
(z^2-1)^{n}.
\end{eqnarray*}
Their generating function is a special case of the generating function for $C^{\rm\scriptscriptstyle {I}{I}}$:
\begin{eqnarray*}
(1-2zt+t^2)^{-\frac12}&
=&\sum\limits_{n=0}^\infty P_n(z)t^n.
\end{eqnarray*}
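Expanding the left-hand side in powers of $t$ recovers the first few Legendre polynomials:
\[(1-2zt+t^2)^{-\frac12}=1+zt+\frac{3z^2-1}{2}t^2+O(t^3),\]
so that $P_0(z)=1$, $P_1(z)=z$, $P_2(z)=\frac{3z^2-1}{2}$.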
\subsubsection{Chebyshev equation of the 1st kind}
Suppose that one of the parameters belongs to ${\mathbb Z}+\frac12$. Using, if necessary, recurrence relations, we can assume it equals $-\frac12$. After applying an appropriate symmetry we can assume that $\alpha=-\frac12$. We obtain then the {\em Chebyshev operator of the 1st kind}:
\begin{eqnarray}
{\cal S}_{-\frac12,\lambda }(z, \partial_z)
&=&(1-z^2) \partial_z^2-z \partial_z
+\lambda ^2.\label{cheb1}\end{eqnarray}
After substitution $z=\cos\phi$ it becomes
\[\partial_\phi^2+\lambda^2.\]
Thus the corresponding
equation can be solved in terms of elementary functions.
To obtain an operator that annihilates a polynomial of degree $n$ we simply set $\lambda=n$:
\begin{eqnarray*}
&&(1-z^2) \partial_z^2-z \partial_z
+n^2.\end{eqnarray*}
The {\em Chebyshev polynomials of the 1st kind} are
\begin{eqnarray*}
T_n(z)&=&
\frac{n!}{(1/2)_n}C_n^{{\rm\scriptscriptstyle I},-\frac12}(z)=\frac{n}{2}\,\frac{{\rm d}}{{\rm d}\alpha}C_n^{{\rm\scriptscriptstyle {I}{I}},\alpha}(z)\Big|_{\alpha=-\frac{1}{2}}\\
&=&
\frac12\Big((z+{\rm i}\sqrt{1-z^2})^n+
(z-{\rm i}\sqrt{1-z^2})^n\Big).
\end{eqnarray*}
Note that $C_n^{{\rm\scriptscriptstyle {I}{I}},-\frac12}=0$, therefore the usual generating function for $C^{\rm\scriptscriptstyle {I}{I}}$ cannot be applied for the Chebyshev polynomials of the 1st kind. Instead, we have generating functions
\begin{eqnarray*}
-\frac12\log(1-2zt+t^2)&
=&\sum\limits_{n=1}^\infty T_n(z)\frac{t^n}{n},\\
\frac{1-zt}{1-2zt+t^2}&
=&\sum\limits_{n=0}^\infty T_n(z)t^n.
\end{eqnarray*}
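As a check of the first formula, expanding to second order gives
\[-\tfrac12\log(1-2zt+t^2)=zt+\Big(z^2-\tfrac12\Big)t^2+O(t^3),\]
consistent with $T_1(z)=z$ and $\frac{T_2(z)}{2}=z^2-\frac12$, i.e. $T_2(z)=2z^2-1$.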
\subsubsection{Chebyshev equation of the 2nd kind}
If one of the parameters belongs to ${\mathbb Z}+\frac12$, then instead of $\alpha=-\frac12$ we can reduce to the case
$\alpha=\frac12$. We obtain then the {\em Chebyshev operator of the 2nd kind}:
\begin{eqnarray}\label{cheb2}
{\cal S}_{\frac12,\lambda }(z, \partial_z)
&=&(1-z^2) \partial_z^2-3z \partial_z
+\lambda ^2-1.\end{eqnarray}
After substitution $z=\cos\phi$ it becomes
\[(\sin\phi)^{-1}(\partial_\phi^2+\lambda^2)\sin\phi.\]
Clearly, the corresponding equation can also be solved in elementary functions.
To obtain an operator that annihilates a polynomial of degree $n$ we set $\lambda=n+1$:
\begin{eqnarray*}
&&(1-z^2) \partial_z^2-3z \partial_z
+n(n+2).\end{eqnarray*}
The
{\em Chebyshev polynomials of the 2nd kind} are
\begin{eqnarray*}
U_n(z)&=&
\frac{n!}{(3/2)_n}C_n^{{\rm\scriptscriptstyle I},\frac12}(z)=C_n^{{\rm\scriptscriptstyle {I}{I}},\frac{1}{2}}(z)\\
&=&
\frac{(z+{\rm i}\sqrt{1-z^2})^{n+1}-
(z-{\rm i}\sqrt{1-z^2})^{n+1}}{2{\rm i}\sqrt{1-z^2}}.
\end{eqnarray*}
Their generating function is a special case of the generating function for $C^{\rm\scriptscriptstyle {I}{I}}$:
\begin{eqnarray*}
(1-2zt+t^2)^{-1}&
=&\sum\limits_{n=0}^\infty U_n(z)t^n.
\end{eqnarray*}
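For example, setting $z=\cos\phi$ one has the well-known trigonometric form $U_n(\cos\phi)=\frac{\sin((n+1)\phi)}{\sin\phi}$; at $n=1$ this reads $U_1(\cos\phi)=\frac{\sin2\phi}{\sin\phi}=2\cos\phi$, in agreement with $U_1(z)=2z$ read off from the generating function. This form also explains the conjugated operator $(\sin\phi)^{-1}(\partial_\phi^2+\lambda^2)\sin\phi$ above.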
\section{The Hermite equation}
\label{s8}
\subsection{Introduction}
Let ${a}\in{\mathbb C}$.
In this section we study the {\em Hermite equation}, which is given by
the operator
\[{\cal S}({a},z, \partial_z):= \partial_z^2-2z \partial_z-2{a}.\]
The choice of the parameter $a$ is dictated by the analogy with the
parameters of the Gegenbauer equation. It will be called a {\em classical parameter},
even though it is not the usual one in the literature.
The Hermite operator can be obtained as the limit of the Gegenbauer operator:
\begin{equation}
\lim_{b\to\infty}\frac{2}{b}{\cal S}\Big(a,b;z\sqrt{2/b},\partial_{\big(z\sqrt{2/b}\big)}\Big)
={\cal S}(a;z,\partial_z).\label{psda}\end{equation}
To describe the symmetries it is convenient to use its {\em Lie-algebraic parameter}:
\[\lambda={a}-\frac12,\ \ \ a=\lambda+\frac12.\]
In the new parameter the Hermite operator equals
\begin{eqnarray*}
{\cal S}_\lambda (z, \partial_z)
&=& \partial_z^2-2z \partial_z-2\lambda -1.
\end{eqnarray*}
The Lie-algebraic parameter has an interesting interpretation in terms of a ``Cartan element'' of the Lie algebra $sch(1)$ \cite{DM}.
\subsection{Equivalence with a subclass of the confluent equation}
The Hermite equation is reflection invariant. By using the quadratic
transformation
we can reduce it to a special case of the confluent equation:
\begin{eqnarray}
{\cal S}({a};z, \partial_z)&=&4{\cal F}(\frac{{a}}{2};\frac{1}{2};w, \partial_w),\label{ha6}\\[2ex]
z^{-1}{\cal S}({a};z, \partial_z)z&=&4{\cal F}(\frac{{a}+1}{2};\frac32;w, \partial_w),
\label{ha6a}\end{eqnarray}
where
\[w=z^2,\ \ \ \ z=\sqrt w.\]
In the Lie-algebraic parameters
\begin{eqnarray*}
{\cal S}_\lambda(z, \partial_z)&=&4{\cal F}_{\lambda,-\frac12}(w, \partial_w),\\
z^{-1}{\cal S}_\lambda(z, \partial_z)z&=&4{\cal F}_{\lambda,\frac12}(w, \partial_w).
\end{eqnarray*}
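As a consistency check of (\ref{ha6}) (assuming the standard Kummer form ${\cal F}({a};c;w,\partial_w)=w\partial_w^2+(c-w)\partial_w-{a}$ of the confluent operator), substitute $w=z^2$, so that $\partial_w=\frac{1}{2z}\partial_z$ and
\[w\partial_w^2=\frac14\partial_z^2-\frac{1}{4z}\partial_z,\qquad
\Big(\frac12-w\Big)\partial_w=\frac{1}{4z}\partial_z-\frac{z}{2}\partial_z.\]
Adding these, subtracting $\frac{{a}}{2}$ and multiplying by $4$ yields $\partial_z^2-2z\partial_z-2{a}={\cal S}({a};z,\partial_z)$.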
\subsection{Symmetries}
\label{symcom5}
The following operators equal ${\cal S}_\lambda (w, \partial_w)$ for an
appropriate $w$:
\[\begin{array}{rrcl}
w=\pm z:&&&\\
&&{\cal S}_\lambda (z, \partial_z),&\\[1ex]
w=\pm{\rm i} z:&&&\\
&-\exp(-z^2)&{\cal S}_{-\lambda }(z, \partial_z)&\exp(z^2).
\end{array}\]
The group of symmetries of the Hermite equation is isomorphic to ${\mathbb Z}_4$ and can be interpreted as the ``Weyl group'' of $sch(1)$.
\subsection{Factorizations and commutation properties}
\label{symcom5a}
There are several ways to factorize the Hermite operator:
\begin{eqnarray*}
{\cal S}_\lambda&=&\big( \partial_z-2z\big) \partial_z-2\lambda-1\\
&=& \partial_z\big( \partial_z-2z\big)-2\lambda+1,\\[2ex]
z^2{\cal S}_\lambda&=&
\Big(z \partial_z+\lambda-\frac32\Big)\Big(z \partial_z-\lambda+\frac12-2z^2\Big)\\
&&+\Big(\lambda-\frac32\Big)\Big(\lambda-\frac12\Big)\\
&=&
\Big(z \partial_z-\lambda-\frac32-2z^2\Big)
\Big(z \partial_z+\lambda+\frac12\Big)\\
&&+\Big(\lambda+\frac32\Big)\Big(\lambda+\frac12\Big).
\end{eqnarray*}
The factorizations can be used to derive the following commutation relations:
\[\begin{array}{rl}
\partial_z&{\cal S}_\lambda \\
=\ \ {\cal S}_{\lambda +1}& \partial_z,\\[2ex]
( \partial_z-2z)
&{\cal S}_\lambda \\
=\ \ \ {\cal S}_{\lambda -1}&( \partial_z-2z),\\[3ex]
(z \partial_z+\lambda +\frac12)&z^2{\cal S}_\lambda \\
=\ \ z^2{\cal S}_{\lambda +2}& (z \partial_z+\lambda +\frac{1}{2}),\\[2ex]
(z \partial_z-\lambda +\frac12-2z^2)&z^2{\cal S}_\lambda \\
=\ \ z^2{\cal S}_{\lambda -2}&(z \partial_z-\lambda +\frac{1}{2}-2z^2).
\end{array}\]
Each of these commutation relations is associated with a ``root'' of the Lie algebra $sch(1)$.
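For instance, the first relation follows directly from the two factorizations above: since ${\cal S}_{\lambda+1}=\partial_z(\partial_z-2z)-2\lambda-1$, we get
\[\partial_z{\cal S}_\lambda
=\partial_z\big((\partial_z-2z)\partial_z-2\lambda-1\big)
=\big(\partial_z(\partial_z-2z)-2\lambda-1\big)\partial_z
={\cal S}_{\lambda+1}\partial_z.\]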
\subsection{Convergence of the Gegenbauer equation to the Hermite equation}
It is interesting to describe the transition from the symmetries of the
Gegenbauer equation to the symmetries of the Hermite equation. We
consider the limit (\ref{psda}). We also consider the surface
$\Omega$ described in Subsect.
\ref{The Riemann surface of the Gegenbauer equation}.
Let us look only at the part of $\Omega$ given by the union of
$\Omega_+\cap\{{\rm Im} z>0\}$ and
$\Omega_-\cap\{{\rm Im} z>0\}$ glued along $]-1,1[$.
The scaling involved in the limit (\ref{psda}) transforms this part
of $\Omega$ into ${\mathbb C}$.
$\tau(\Omega_+\cap\{{\rm Im} z>0\})$
is equal to the union of $\Omega_-\cap\{{\rm Im} z>0,{\rm Re} z>0\}$ and
$\Omega_-\cap\{{\rm Im} z<0,{\rm Re} z>0\}$ glued along $]0,1[$. Thus
the limit of $\tau$ on
$\Omega_+\cap\{{\rm Im} z>0\}$ equals the multiplication by $-{\rm i}$.
$-\tau(\Omega_-\cap\{{\rm Im} z>0\})$
is equal to the union of
$\Omega_+\cap\{{\rm Im} z>0,{\rm Re} z<0\}$ and $\Omega_-\cap\{{\rm Im} z<0,{\rm Re} z<0\}$
glued along $]-1,0[$. Thus the limit of $-\tau$ on
$\Omega_-\cap\{{\rm Im} z>0\}$ also equals the multiplication by $-{\rm i}$.
Thus the multiplication by $-{\rm i}$ is not the limit of a single element of the
group of symmetries of the Gegenbauer equation, but a combination of the
limits of two symmetries.
\subsection{Integral representations}
Below we describe two kinds of integral representations
of the Hermite equation.
\begin{theoreme}\begin{enumerate}\item
Let $[0,1]\ni t\mapsto\gamma(t)$ satisfy
\[{\rm e}^{t^2}(t-z)^{-{a}-1}\Big|_{\gamma(0)}^{\gamma(1)}=0.\]
Then
\begin{equation}{\cal S}({a};z, \partial_z)\int_\gamma{\rm e}^{t^2}(t-z)^{-{a}}{\rm d} t=0.\label{dad10}\end{equation}
\item
Let $[0,1]\ni t\mapsto\gamma(t)$ satisfy
\[{\rm e}^{-t^2-2zt}t^{{a}}\Big|_{\gamma(0)}^{\gamma(1)}=0.\]
Then\begin{equation}
{\cal S}({a};z, \partial_z)
\int_\gamma
{\rm e}^{-t^2-2zt}t^{a-1}{\rm d} t=0.\label{dad11}\end{equation}
\end{enumerate}\label{dad12}\end{theoreme}
\noindent{\bf Proof.}\ \
We check that for any contour $\gamma$, (\ref{dad10}) and (\ref{dad11}) equal
\begin{eqnarray*}
&&-{a} \int_\gamma\left( \partial_t
{\rm e}^{t^2}(t-z)^{-{a}-1}\right){\rm d} t,\\
&&-2\int_\gamma
\Big( \partial_t{\rm e}^{-t^2-2zt}t^{{a}}\Big){\rm d} t
\end{eqnarray*}
respectively.
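Explicitly, for the first case,
\begin{eqnarray*}
{\cal S}({a};z, \partial_z)\,{\rm e}^{t^2}(t-z)^{-{a}}
&=&{\rm e}^{t^2}\big({a}({a}+1)(t-z)^{-{a}-2}-2{a} t(t-z)^{-{a}-1}\big)\\
&=&-{a}\, \partial_t\big({\rm e}^{t^2}(t-z)^{-{a}-1}\big),
\end{eqnarray*}
where we used $-2z{a}(t-z)^{-{a}-1}-2{a}(t-z)^{-{a}}=-2{a} t(t-z)^{-{a}-1}$.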
We can also deduce the second representation from the first
by the symmetry involving the multiplication by ${\rm e}^{z^2}$ and the change of variables
$z\mapsto{\rm i} z$.
\hfill$\Box$\medskip
\subsection{Canonical forms}
The natural weight of the Hermite operator equals ${\rm e}^{-z^2}$, so that
\[{\cal S}_{\lambda}=
{\rm e}^{z^2}\partial_z{\rm e}^{-z^2}\partial_z-2\lambda-1
.\]
The balanced (as well as Schr\"odinger-type) form
of the Hermite operator is
\begin{eqnarray*}
{\rm e}^{-\frac{z^2}{2}}{\cal S}_{\lambda}{\rm e}^{\frac{z^2}{2}}
&=&
\partial_z^2-z^2-2\lambda.
\end{eqnarray*}
Note that the symmetry $(z,\lambda)\mapsto({\rm i} z,-\lambda)$ is obvious
in the balanced form.
\begin{remark}
The balanced form of the Hermite equation is known in the literature
as the {\em Weber} or {\em parabolic cylinder equation}.
It is usually written in one of two forms
\begin{eqnarray*}
\partial_z^2-\frac14z^2-k,&&\partial_z^2+\frac14z^2-k.
\end{eqnarray*}
\end{remark}
\subsection{Even solution}
Inserting a power series in the equation we see that the Hermite equation has
an even solution
\begin{eqnarray*}
S_\lambda^+(z)&:=&\sum\limits_{j=0}^\infty\frac{(\frac{{a}}{2})_j}{(2j)!}(2z)^{2j}\\
&=&F\Big(\frac{{a}}{2};\frac{1}{2};z^2\Big)\ =\ F_{-\frac12,\lambda}(z^2).
\end{eqnarray*}
It is the unique solution
satisfying
\begin{equation} S_\lambda^+(0)=1,\ \ \ \ \frac{{\rm d}}{{\rm d} z}S_\lambda^+(0)=0.\label{inh1}\end{equation}
It has the properties
\[S_\lambda ^+(z)=S_\lambda ^+(-z)
={\rm e}^{z^2}S_{-\lambda }^+({\rm i} z)
.\]
\subsection{Odd solution}
The Hermite equation has
an odd solution
\begin{eqnarray*}
S_\lambda^-(z)&:=&
\sum\limits_{j=0}^\infty\frac{(\frac{{a}+1}{2})_j}{(2j+1)!}(2z)^{2j+1}\\
&=&2zF\Big(\frac{{a}+1}{2};\frac32;z^2\Big)\ =\ 2zF_{\frac12,\lambda}(z^2).
\end{eqnarray*}
It is the unique solution of the Hermite
equation satisfying
\begin{equation} S_\lambda^-(0)=0,\ \ \ \ \frac{{\rm d}}{{\rm d} z}S_\lambda^-(0)=2.\label{inh2}\end{equation}
It has the properties
\[S^-_\lambda (z)=-S_\lambda ^-(-z)
=-{\rm i} {\rm e}^{z^2}S_{-\lambda }^-({\rm i} z)
.\]
\subsection{Standard solutions}
The Hermite equation has only one singular point, $\infty$. We will
see that one can define two kinds of solutions with simple asymptotics at $\infty$.
By Thm \ref{dad12}, for appropriate $\gamma_1$ and $\gamma_2$ the following integrals are solutions:
\begin{eqnarray*}
\int\limits_{\gamma_1}{\rm e}^{-t^2-2tz}t^{\lambda -\frac12}{\rm d} t,
&&\\
\int\limits_{\gamma_2}{\rm e}^{t^2}
(z-t)^{-\lambda -\frac12}
{\rm d}
t.
&&
\end{eqnarray*}
In the first case the integrand has a singular point at $0$ and goes to zero as $t\to \pm \infty$. We can thus use $\gamma_1$ with such endpoints. We will see that they give all standard solutions.
In the second case the integrand has a singular point at $z$ and goes
to zero as $t\to \pm {\rm i}\infty$. Using $\gamma_2$ with such
endpoints we will also obtain all standard solutions.
\subsubsection{Solution $\sim z^{-{a}}$ for $z\to+\infty$}
The following function is the solution of the Hermite equation that behaves as
$z^{-{a}}=z^{-\lambda-\frac12}$ for $|z|\to\infty$, $|\arg z|<\pi/2-\epsilon$:
\begin{eqnarray*}
S_\lambda (z)&:=&z^{-\lambda -\frac12}\tilde F_{-\frac12,\lambda }(-z^{-2})
\ = \
z^{-{a}}F\Big(\frac{{a}}{2},\frac{{a}+1}{2};-;-z^{-2}\Big).
\end{eqnarray*}
We will also introduce
alternatively normalized solutions:
\begin{eqnarray*}
{\bf S}_\lambda^{\rm\scriptscriptstyle I} (z)&:=&2^{-\lambda-\frac12}\Gamma\Big(\lambda+\frac12\Big)S_\lambda(z)\\
& = &
2^{-{a}}\Gamma({a})z^{-{a}}F\Big(\frac{{a}}{2},\frac{{a}+1}{2};-;-z^{-2}\Big),\\
{\bf S}_\lambda^{\rm\scriptscriptstyle 0} (z)&:=&\sqrt{\pi}S_\lambda(z).
\end{eqnarray*}
(The normalization of ${\bf S}_\lambda^{\rm\scriptscriptstyle 0}$ is somewhat trivial -- we introduce it to preserve the analogy with the Gegenbauer equation, which had a less trivially normalized solution
${\bf S}_{\alpha,\lambda}^{\rm\scriptscriptstyle 0}$.)
Assuming that $z\not\in]-\infty,0]$,
we have an integral representation valid for $ -\frac12<\lambda$:
\begin{eqnarray*}
\int\limits_0^\infty{\rm e}^{-t^2-2tz}t^{\lambda -\frac12}{\rm d} t
&=&{\bf S}_\lambda ^{\rm\scriptscriptstyle I}(z),
\end{eqnarray*} and for all parameters:
\begin{eqnarray*}
-{\rm i}\int\limits_{]-{\rm i}\infty,z^-,{\rm i}\infty[}{\rm e}^{t^2}
(z-t)^{-\lambda -\frac12}
{\rm d}
t
&=&{\bf S}^{\rm\scriptscriptstyle 0}_\lambda (z).
\end{eqnarray*}
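The normalization of ${\bf S}_\lambda^{\rm\scriptscriptstyle I}$ can be read off from this representation: as $z\to+\infty$ the integral is dominated by small $t$, and
\[\int\limits_0^\infty{\rm e}^{-2tz}t^{\lambda-\frac12}{\rm d} t
=\Gamma\Big(\lambda+\frac12\Big)(2z)^{-\lambda-\frac12}=2^{-{a}}\Gamma({a})z^{-{a}},\]
which matches the prefactor in its definition.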
\subsubsection{Solution $\sim(-{\rm i} z)^{{a}-1}{\rm e}^{z^2}$ for
$z\to+{\rm i}\infty$}
The following function is the solution of the Hermite equation that behaves as
$(-{\rm i} z)^{{a}-1}{\rm e}^{z^2}=(-{\rm i} z)^{\lambda -\frac12}{\rm e}^{z^2}$ for $|z|\to\infty$, $|\arg z-\pi/2|<\pi/2-\epsilon$:
\begin{eqnarray*}
{\rm e}^{z^2}S_{-\lambda }(-{\rm i} z)&=&
(-{\rm i} z)^{\lambda -\frac12}{\rm e}^{z^2}\tilde F_{-\frac12,-\lambda}(z^{-2}).
\end{eqnarray*}
Assuming that $z\not\in[0,\infty[$, we have an
integral representation valid for all parameters:
\begin{eqnarray*}
\int\limits_{]-\infty,0^+,\infty[}{\rm e}^{-t^2-2tz}(-{\rm i} t)^{\lambda -\frac12}{\rm d} t&=&
{\rm e}^{z^2}{\bf S}_{-\lambda }^{\rm\scriptscriptstyle 0}(-{\rm i} z),
\end{eqnarray*} and for $ \lambda<\frac12$:
\begin{eqnarray*}
-{\rm i}\int\limits_{[z,{\rm i}\infty[}{\rm e}^{t^2}(-{\rm i} (t-z))^{-\lambda -\frac12}{\rm d} t
&=& {\rm e}^{z^2}{\bf S}_{-\lambda }^{\rm\scriptscriptstyle I}(-{\rm i} z).
\end{eqnarray*}
\subsection{Connection formulas}
We can decompose the standard solutions into
the even and odd solutions:
\begin{eqnarray*}
S_\lambda (z)&=&\frac{\sqrt\pi}{\Gamma(\frac{2\lambda +3}{4})}S_\lambda ^+(z)
-\frac{\sqrt\pi}{\Gamma(\frac{2\lambda +1}{4})}S_\lambda ^-(z);
\\
{\rm e}^{z^2}S_{-\lambda }(-{\rm i} z)
&=&\frac{\sqrt{\pi}}{\Gamma(\frac{3-2\lambda }{4})}S_\lambda ^+(z)
+{\rm i}\frac{\sqrt{\pi}}{\Gamma(\frac{1-2\lambda }{4})}S_\lambda ^-(z).
\end{eqnarray*}
\subsection{Recurrence relations}
The following recurrence relations follow easily from the commutation properties of Subsect. \ref{symcom5a}:
\begin{eqnarray*}
\partial_z S_\lambda(z)&=&-\Big(\frac12+\lambda \Big) S_{\lambda +1}(z),\\
( \partial_z -2z)S_\lambda (z)&=&-2S_{\lambda -1}(z),\\[3ex]
(z \partial_z+\frac12-\lambda -2z^2) S_\lambda (z)&=&-2
S_{\lambda -2}(z),\\
(z \partial_z+\frac12+\lambda )S_\lambda (z)&=&-\frac12\Big(\frac12+\lambda \Big)
\Big(\frac32+\lambda\Big) S_{\lambda +2}(z).
\end{eqnarray*}
\subsection{Hermite polynomials}
If $-a=n=0,1,2,\dots$, then Hermite functions are polynomials.
Following
Subsect.
\ref{Hypergeometric type polynomials}, they can be defined by the following version of the Rodriguez-type formula:
\[
H_n(z):=\frac{(-1)^n}{n!}{\rm e}^{z^2} \partial_z^n{\rm e}^{-z^2}.
\]
\begin{remark}
The Hermite polynomials usually found in the literature equal
\[n!H_n(z).\]
The advantage of our convention is that the
Rodrigues-type formula has the same form for all classes of
hypergeometric type polynomials.
\end{remark}
The differential equation:
\begin{eqnarray*}
&&\big( \partial_z^2-2z \partial_z+2n\big)H_n(z)\\
&=&{\cal S}(-n;z, \partial_z)H_n(z)=0.
\end{eqnarray*}
The generating function:
\[\begin{array}{l}\exp(2tz-t^2)=
\sum\limits_{n=0}^\infty
t^nH_n(z).
\end{array}\]
The integral representation:
\[\begin{array}{l}H_n(z)
=\frac{1}{2\pi{\rm i}}\int\limits_{[0^+]}\exp(2tz-t^2)t^{-n-1}{\rm d} t.
\end{array}\]
Recurrence relations:
\begin{eqnarray*}
\partial_zH_n(z)&=&2H_{n-1}(z),\\
\left( \partial_z-2z\right)H_n(z)&=&-(n+1)H_{n+1}(z),\\[5ex]
\left(z \partial_z-n\right)H_n(z)&=&2H_{n-2}(z),\\
\left(z \partial_z+n+1-2z^2\right)H_n(z)&=&-(n+1)(n+2)H_{n+2}(z).
\end{eqnarray*}
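These identities are easy to check symbolically. The following is a minimal verification sketch in Python/SymPy, using the Rodrigues-type normalization above:
\begin{verbatim}
import sympy as sp

z = sp.symbols('z')

def H(n):
    # Rodrigues-type normalization used in this text:
    # H_n(z) = (-1)^n/n! * e^{z^2} * d^n/dz^n e^{-z^2}
    return sp.simplify((-1)**n / sp.factorial(n)
                       * sp.exp(z**2) * sp.diff(sp.exp(-z**2), z, n))

for n in range(1, 6):
    # consistency with the usual Hermite polynomials: n! H_n = H_n^{standard}
    assert sp.expand(sp.factorial(n)*H(n) - sp.hermite(n, z)) == 0
    # first recurrence: H_n' = 2 H_{n-1}
    assert sp.expand(sp.diff(H(n), z) - 2*H(n-1)) == 0
    # differential equation: (d^2/dz^2 - 2z d/dz + 2n) H_n = 0
    assert sp.expand(sp.diff(H(n), z, 2) - 2*z*sp.diff(H(n), z) + 2*n*H(n)) == 0
\end{verbatim}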
The differential equation, the Rodrigues-type formula, the generating
function, the integral representation and the first pair of recurrence relations
are special cases of
the corresponding formulas of Subsect.
\ref{Hypergeometric type polynomials}.
We have several alternative expressions for Hermite polynomials:
\begin{eqnarray*}
H_n(z)&=&-\lim_{\nu\to n}(-1)^n(\nu{-}n){\bf S}_{-n-\frac12}^{{\rm\scriptscriptstyle I}}(z)\ =\ \frac{2^n}{n!}S_{-n-\frac12}(z)\\
&=&\frac{2^n}{n!}z^nF\Big(-\frac{n}{2},\frac{-n+1}{2};-;z^{-2}\Big)\\
&=&\sum\limits_{k=0}^{[\frac{n}{2}]}
\frac{(-1)^k(2z)^{n-2k}}{k!(n-2k)!}.
\end{eqnarray*}
Behavior at $\infty$.
\[\begin{array}{l}
\lim\limits_{z\to\infty}\frac{H_n(z)}{z^n}=\frac{2^n}{n!}.\end{array}\]
Initial conditions at $0$.
\[\begin{array}{l}
H_{2m}(0)=\frac{(-1)^m}{m!},\ \ \ \ H'_{2m}(0)=0,\\[3mm]
H_{2m+1}(0)=0,\ \ \ \ H'_{2m+1}(0)=\frac{(-1)^m2}{m!}.\end{array}\]
Identities for even and odd polynomials.
\begin{eqnarray*}
H_{2m}(z)
&=&\frac{(-1)^m2^{2m}m!}{(2m)!}L_m^{-1\slash2}(z^2)
\
=\ \frac{(-1)^m(2z)^{2m}m!}{(2m)!}B_m^{-2m-\frac{1}{2}}(-z^{-2}),\\
&=&\frac{(-1)^m}{m!}S_{-2m-\frac12}^+(z)=\frac{(-1)^m}{m!}F\Big(-m;\frac{1}{2};z^2\Big),\\
H_{2m+1}(z)
&=&\frac{(-1)^m2^{2m+1}m!}{(2m+1)!}zL_m^{1\slash2}(z^2)
\ =\ \frac{(-1)^m(2z)^{2m+1}m!}{(2m+1)!}B_m^{-2m-\frac{3}{2}}(-z^{-2})\\
& =&\ \frac{(-1)^m}{m!}S_{-2m-\frac32}^-(z)= \frac{(-1)^m}{m!}2zF\Big(-m;\frac{3}{2};z^2\Big).
\end{eqnarray*}
https://arxiv.org/abs/1512.04640 | Nil-good and nil-good clean matrix rings | The notions of clean rings and 2-good rings have many variations and have been widely studied. We provide a few results about two new variations of these concepts and discuss the theory that ties these variations to objects and properties of interest to noncommutative algebraists. A ring is called nil-good if each element in the ring is the sum of a nilpotent element and either a unit or zero. We establish that the ring of endomorphisms of a module over a division ring is nil-good, as well as some basic consequences. We then define a new property we call nil-good clean, the condition that every element of a ring is the sum of a nilpotent, an idempotent, and a unit. We explore the interplay between these properties and the notion of clean rings. | \section{Introduction}
In 1977, W. K. Nicholson defined a ring $R$ to be clean if for every $a \in R$ there is $u$ a unit in $R$ and $e$ an idempotent in $R$ such that $a=u+e$ \cite{Nic77}. The interest in the clean property of rings stems from its close connection to exchange rings, since clean is a concise property that implies exchange. Properties of rings related to the clean and exchange properties have been largely expanded and researched, and some generalizations closely relate to other properties of interest to algebraists.
The study of rings generated by their units dates back to the 1950's, when it was proved that any endomorphism of a module over a division ring is equal to the sum of two units, unless the dimension of the module is 1 or the division ring is $\mathbb{F}_2$, as established in \cite{Wol53} and \cite{Zel54}. This motivated algebraists to make extensive study of rings generated by their units. Later, Peter V{\'a}mos defined an element $a$ in $R$ to be $2$-$good$ if it can be expressed as the sum of two units in $R$, and defined a ring $R$ to be $2$-$good$ if every element in $R$ is $2$-$good$ \cite{Vam05}. In general, a ring is $n$-$good$ if each element can be written as the sum of $n$ units, and these properties have distinct applications from those of clean, but have also led to a diverse line of inquiry in ring theory. P. Danchev defined a property in \cite{Dan15} related to $2$-$good$ in the following way: an element $a$ in $R$ is nil-good if $a=n+u$ where $n$ is a nilpotent element of $R$ and $u$ is either $0$ or a unit in $R$. The ring $R$ is called nil-good if every element of $R$ is nil-good.
In this paper, we prove that if $R$ is a division ring, then $\mathbb{M}_n(R)$ is nil-good for all $n \in \mathbb{N}$. We then establish some basic properties of nil-good rings in general. We extend these results to specifically characterize local rings and artinian rings that are nil-good.
We then relate this property to clean rings in a new way by defining the property nil-good clean. We say a ring $R$ is nil-good clean if for all $r \in R$ there is a unit $u$, a nilpotent $n$ and an idempotent $e$ in $R$ such that $r=u+n+e$. This property holds for all clean and all nil-good rings, yet we show it includes a larger class of rings than only those that satisfy one property or the other. Understanding how the nil-good clean property generalizes to include rings that are neither nil-good nor clean may reveal more about the interaction of those two properties within unital rings. We provide an example of a nil-good clean ring that is not exchange, and therefore not clean. This example has properties similar to those of the nonclean exchange ring provided by G. M. Bergman in \cite{Han77}.
Throughout this paper rings are associative with unity. We denote the Jacobson radical $J(R)$ for a ring $R$ and write $\mathbb{M}_n(R)$ for the ring of $n \times n$ matrices over $R$.
\section{Matrices over a Field}
For illustrative purposes, we first prove that $\mathbb{M}_n(R)$ is a nil-good ring when $R$ is a field. The linear algebra required in the case where $R$ is a field is more accessible, and the subsequent proof in the case where $R$ is a division ring is more concise and intuitive as a result. We can write a nil-good decomposition for any element of $\mathbb{M}_n(k)$, where $k$ is a field, by putting all matrices in rational canonical form.
\begin{theorem}{For all $n \geq 1$ the ring $\mathbb{M}_n(k)$ is nil-good if $k$ is a field.}
\begin{proof}
Note that suitable rearrangement of the basis elements allows us to rearrange the companion matrix blocks in a matrix's rational canonical form without altering the nilpotence or invertibility of that matrix.
If the coefficients $-c_i$ of every companion matrix block are all zero, then the rational canonical form is zero everywhere except possibly on the subdiagonal, making it nilpotent; in this case we let $U$ be the zero matrix and let $N$ be the matrix in question.
Similarly, if the matrix $A$ we wish to decompose is the $n \times n$ zero matrix we let both $U$ and $N$ be the zero matrix. If the matrix $A$ in question is a unit, we let $N$ be the zero matrix, and let $U$ be the original matrix, the rational canonical form of $A$.
Now suppose $A$ is a non-nilpotent, non-unit matrix. Then some of its companion matrix blocks have zero as their $-c_0$ coefficient, while others have a nonzero $-c_0$ coefficient. We may choose an arrangement of the companion matrix basis that places the blocks with nonzero $-c_0$ coefficient in the upper left corner, and the blocks whose $-c_0$ coefficient is zero in the lower right corner, ordered amongst themselves by size of block. \\
Thus, we consider a matrix of the form\\
$$\begin{bmatrix}
C_{g_1} & 0 &\cdots & 0 \\
0 & C_{g_2} & \ddots & \vdots \\
\vdots &\ddots & \ddots & 0 \\
0 & \cdots &0 &C_{g_k}
\end{bmatrix}$$ \\
where, for some $j>1$, each $C_{g_i}$ with $i<j$ has some nonzero element. We call an $r \times r$ block consisting of zeros everywhere except the subdiagonal, which consists entirely of ones, an $N_r$ block. Therefore, each $C_{g_i}$ for $i<k$ has some nonzero coefficient $-c_i$ or is an $N_r$ block of size $2 \times 2$ or greater.
If the size $m \times m$ companion matrix block $C_{g_i}$ of $A$ is invertible, then we let the corresponding
diagonal block of $U$ be $C_{g_i}$ and the corresponding diagonal block of $N$ be the $m \times m$ zero block.
If $A$ contains any companion matrix block with zero as the $-c_0$ coefficient, then in the unit $U$ of its decomposition we add a $-1$ in the entry corresponding to the $-c_0$-coefficient of the companion matrix block, so that the block becomes invertible. Correspondingly, we place a $1$ in the entry of the nilpotent matrix corresponding to the entry of the $-c_0$ coefficient of that companion matrix block. Otherwise we leave the corresponding block in the nilpotent summand entirely zero, so that the direct sum of these blocks will be the direct sum of nilpotents, making the overall matrix nilpotent as well.
Note that the method described works for $N_r$ blocks as well as those with nonzero coefficients. In general, the decomposition of the companion matrix block that has zero as its $-c_0$ coefficient looks like:
$$\begin{bmatrix}
0 & \cdots & \cdots & 0 \\
1 &\ddots & \vdots & -c_1 \\
0 &\ddots & 0 &\vdots \\
\vdots &\ddots &1 & -c_{m-1}
\end{bmatrix} = \begin{bmatrix}
0 & \cdots & \cdots & -1 \\
1 &\ddots & \vdots & -c_1 \\
0 &\ddots & 0 &\vdots \\
\vdots &\ddots &1 & -c_{m-1}
\end{bmatrix} + \begin{bmatrix}
0 & \cdots & 0 & 1 \\
\vdots &\ddots & & 0 \\
\vdots &\ddots & \ddots &\vdots \\
0 & \cdots & \cdots & 0
\end{bmatrix}$$
Since these matrices will be useful later, the first matrix on the right side of the equation we will denote $C^*_{g_i}$ and the second matrix on the right we will denote $N^*$. Observe that $C^*_{g_i}$ is invertible and $N^*$ is nilpotent.
Whether the coefficients $-c_1$ through $-c_{m-1}$ are zero or nonzero does not affect the validity of the decomposition, since the addition of a one in the first entry of the last column makes the columns of the $A-N$ block necessarily linearly independent, ensuring that this decomposition is in fact the sum of an invertible matrix and a nilpotent one. The direct sum of such $m \times m$ blocks will respectively be invertible and nilpotent as well.
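As a small concrete instance, the $3\times 3$ companion matrix block with $c_0=0$ decomposes as
$$\begin{bmatrix}
0 & 0 & 0\\
1 & 0 & -c_1\\
0 & 1 & -c_2
\end{bmatrix}
=\begin{bmatrix}
0 & 0 & -1\\
1 & 0 & -c_1\\
0 & 1 & -c_2
\end{bmatrix}
+\begin{bmatrix}
0 & 0 & 1\\
0 & 0 & 0\\
0 & 0 & 0
\end{bmatrix},$$
where the first summand has determinant $-1$ regardless of $c_1$ and $c_2$, and the second squares to zero.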
Suppose the rational canonical form of the matrix is a series of companion matrix blocks followed by a zero block:
$$\begin{bmatrix}
C_{g_1} & 0 & \cdots & 0\\
0 & \ddots & & \vdots\\
\vdots & & C_{g_m} & \vdots \\
0 & 0 & 0 &[0]
\end{bmatrix}$$
We will decompose the zero block of size $n \times n$ augmented with the last two rows and columns of the $C_{g_m}$ block of size $r \times r$, to ensure that there is at least one nonzero entry, namely the $1$ in the last row of $C_{g_m}$ and the $(r-1)^{th}$ column.
Thus the augmented matrix that we then decompose is of the form
$$A = \begin{bmatrix}
0 & -c_{s-2} & 0 & \cdots & 0\\
1 & -c_{s-1} & 0 & \cdots & 0 \\
0 & \vdots & \ddots & \ddots & \vdots \\
\vdots & \vdots & \ddots & \ddots & \vdots \\
0 & 0 & \cdots & \cdots & 0
\end{bmatrix}$$
where $A$ is an $(n+2) \times (n+2)$ matrix.
To find a suitable $(n+2) \times (n+2)$ nilpotent matrix of rank $n+1$ for the decomposition, we will conjugate $N_{n+2}$ by a suitable invertible matrix, since conjugation results in another nilpotent matrix. Choose
$$P = \begin{bmatrix}
0 & \cdots & \cdots & \cdots & 0 & 1\\
0 & 1 & -1 & 0 & \cdots & 0\\
\vdots & 0 & 1 & -1 & \ddots & \vdots \\
\vdots & \vdots & \ddots & \ddots & \ddots & 0\\
0 & & & 0 & 1 & -1\\
1 & -1 & 0 & \cdots & \cdots & 0
\end{bmatrix} \text{ and } {P^{-1}}= \begin{bmatrix}
1 & 1 & 1 & \cdots & 1 & 1\\
1 & 1 & 1 & & \vdots & 0 \\
1 & 0 & 1 & \ddots & \vdots & \vdots\\
\vdots & \vdots & \ddots & \ddots & 1 & \vdots \\
1 & \vdots & & 0 & 1 & \vdots\\
1 & 0 & \cdots & \cdots & \cdots & 0
\end{bmatrix}$$
Then,
$$PN_{n+2}P^{-1} = \begin{bmatrix}
1 & 0 & \cdots & \cdots & 0 & 1 & 0\\
0 & 0 & \cdots & \cdots & 0 & 0 & 1\\
0 & 1 & & & & 0 & 0 \\
\vdots & & \ddots & & \vdots & \vdots\\
\vdots & & & \ddots& & \vdots & \vdots \\
0 & & && 1& 0 & 0\\
-1 & \cdots & \cdots & \cdots & -1 & -1 & -1
\end{bmatrix}$$
will be the nilpotent matrix involved in the decomposition.
Now let
$$U =
\begin{bmatrix}
1 & -c_{s-2} & 0 & \cdots & 0 & 1 & 0\\
1 & -c_{s-1} & 0 & \cdots & 0 & 0 & 1\\
0 & 1 & 0 & \cdots & 0 & 0 & 0 \\
\vdots & & \ddots & & & \vdots & \vdots\\
\vdots & & & \ddots & & \vdots & \vdots\\
\vdots & & & & & \vdots & \vdots \\
0 & & & & 1 & 0 & 0 \\
-1 & \cdots & \cdots & \cdots & -1 & -1 & -1
\end{bmatrix} \text{ and } N=
\begin{bmatrix}
-1 & 0 & \cdots & 0 & -1 & 0\\
0 & 0 & \cdots & 0 & 0 & -1\\
0 & -1 & & & 0 & 0 \\
\vdots & & \ddots & & \vdots & \vdots\\
\vdots & & & -1 & \vdots & \vdots \\
0 & & & & 0 & 0\\
1 & \cdots & \cdots & 1 & 1 & 1
\end{bmatrix}$$
To see why $U$ is invertible, note that regardless of the values of $c_{s-2}$ and $c_{s-1}$, the second through the last column form a linearly independent set of vectors because of the negative ones on different rows. Then we only need to determine whether the first column can be written as a linear combination of vectors from this set. If indeed there were scalars such that the first column could be written as a linear combination of the others in $U$, then having zeros everywhere except the first, second and last row would restrict the coefficients of the columns to be zero, except possibly for the $(n+1)^{th}$ and $(n+2)^{th}$ columns. However, a routine calculation shows such a linear combination is not possible either. Thus the matrix is invertible.
Now we consider $A$ as the direct sum of $C_{g_1}$ through $C_{g_{(m-1)}}$ and $C_{g_m}$ with the zero block, as illustrated:
$$\begin{bmatrix}
0 & \cdots & 0 & -c_{0} & 0 \\
1 & \ddots & \vdots & \vdots & \vdots \\
& \ddots & 0& -c_{s-2} & \vdots\\
& & 1& -c_{s-1} & \vdots\\
\end{bmatrix} \oplus [0]$$
where $C_{g_m} \oplus [0]$ is an $(n+r) \times (n+r)$ matrix.
Then choose
\setcounter{MaxMatrixCols}{20}
$$U= C^*_{g_1} \oplus \cdots \oplus C^*_{g_{(m-1)}} \oplus \begin{bmatrix}
0 & \cdots & 1 & 0 & -c_{0} \\
1 & \ddots & \vdots & \vdots & \vdots \\
&\ddots & 0 & \vdots & \vdots \\
& & 1 & 1 & -c_{s-2} & 0 & \cdots &\cdots & 0& 1& 0 \\
& & & 1& -c_{s-1} & 0&\cdots&\cdots &\cdots & 0&1\\
& & & 0 & 1 & 0 & \cdots & \cdots & 0 & 0 & 0 \\
& & & \vdots & & \ddots & & & & \vdots & \vdots \\
& & & \vdots & & & \ddots & & & \vdots & \vdots \\
& & & \vdots & & & &\ddots & & \vdots & \vdots \\
& & & 0 & & & & & 1 & 0 & 0 \\
& & & -1 & \cdots & \cdots & \cdots & \cdots & -1 & -1 & -1
\end{bmatrix}$$
Then we have
$$N= N^* \oplus \cdots \oplus \begin{bmatrix}
0 & \cdots & -1 & \\
& \ddots & \vdots \\
& & 0 \\
& & & -1 & 0 & \cdots & 0 & -1 & 0\\
& & & 0 & 0 & \cdots & 0 & 0 & -1\\
& & & 0 & -1 & & & 0 & 0 \\
& & & \vdots & & \ddots & & \vdots & \vdots\\
& & & \vdots & & & -1 & \vdots & \vdots \\
& & & 0 & & & & 0 & 0\\
& & & 1 & \cdots & \cdots & 1 & 1 & 1
\end{bmatrix}$$
which is nilpotent. Thus $A=U+N$ where $U$ is a unit and $N$ is a nilpotent.
One can check that in the last direct summand the first $(r-2)$ columns and the last $(n+1)$ columns are linearly independent because of the ones in different rows. Due to the $1$ in the $r^{th}$ row and $(r+1)^{th}$ column, the $(r+1)^{th}$ column cannot be written as a linear combination of the others, thus the only concern is the $(r-1)^{th}$ column. Writing it as a linear combination of other columns requires constructing the linear combination using only the $(n+r-1)^{th}$ and $(r+n)^{th}$ columns, because using the $(r-2)^{th}$ column requires also using the $r^{th}$ column due to the $1$ in the first row and $(r-2)^{th}$ column. However, the $1$ in the $(r+1)^{th}$ row and $r^{th}$ column prevents this. Yet we have observed that the $(r-1)^{th}$, $(n+r-1)^{th}$, and $(r+n)^{th}$ columns are linearly independent. Thus $U$ is invertible.
Having addressed all possible cases for an $n \times n$ matrix's rational canonical form, we again recall that conjugation by an invertible matrix and its inverse preserves both invertibility and nilpotence, so we may conclude that $\mathbb{M}_n(k)$ is nil-good for any field $k$ and dimension $n$.
\end{proof}
\end{theorem}
\section{Matrices over Division Rings}
We now consider linear operators on division ring modules of dimension $n$. Since the modules we consider are finite-dimensional, there exists a basis with respect to which we may express linear operators as $n \times n$ matrices.
\begin{theorem}{For all $n \geq 1$ the ring $\mathbb{M}_n(D)$ is nil-good if $D$ is a division ring.}
\begin{proof}
Given an $n \times n$ matrix $A \in \mathbb{M}_n(D)$ there exists an invertible matrix $Q \in \mathbb{M}_n(D)$ such that $A=QA_dQ^{-1}$ where $A_d$ is a matrix of the form $ U_A \oplus N_A $.
Here $U_A$ is an $m \times m$ invertible block on the diagonal and $N_A$ is an $(n-m) \times (n-m)$ nilpotent block.
Although a matrix over a division ring does not necessarily have a rational canonical form, there exists a primary rational canonical form \cite{Coh73} similar to the standard rational canonical form for certain matrices. Suppose a matrix $A \in \mathbb{M}_n (R)$ is algebraic over the center of the division ring $R$ and that it has a single elementary divisor $\alpha$. P. Cohn proved that if $\alpha = c_1c_2 \dots c_s$ then $A$ may be put into the form
$$\begin{bmatrix} C_1 & 0 &\cdots & 0 \\
N^{*} & C_2 & \ddots & \vdots \\ 0 & \ddots & \ddots & 0\\ \vdots & \dots & N^{*} & C_s \end{bmatrix}$$
where $N^{*}$ is a matrix with a $1$ in the entry in the upper right corner and zeros everywhere else, and $C_i$ is the companion matrix of $c_i$ for each $i$.
We note that the matrix $N_A$ may be put in primary rational canonical form, since it is algebraic over the center of any ring and its minimal polynomial $x^k$ has a single elementary divisor.
Since the companion matrix of any power of $x$ is an $N_r$ block, the primary rational canonical form of $N_A$ is zero everywhere except the subdiagonal, which, as in the case of matrices over a field, makes the primary rational canonical form a direct sum of $N_r$ blocks.
If the $N_r$ blocks have dimension $2$ or higher, then we can write $N_A$ in the form $$\begin{bmatrix}
N_{r_1} & 0 &\cdots & 0 \\
0 & N_{r_2} & \ddots & \vdots \\
\vdots &\ddots & \ddots & 0 \\
0 & \cdots &0 &N_{r_k}
\end{bmatrix}$$
which, by subtracting a nilpotent of the form
$$\begin{bmatrix}
N^{*}_{r_1} & 0 &\cdots & 0 \\
0 & N^{*}_{r_2} & \ddots & \vdots \\
\vdots &\ddots & \ddots & 0 \\
0 & \cdots &0 &N^{*}_{r_k}
\end{bmatrix}$$
(in which the matrix $N^{*}_{r_i}$ is an $N^{*}$ matrix with the dimension of the corresponding $N_{r_i}$ matrix) results in an invertible matrix. The details of this process are outlined more explicitly in the case of a vector space over a field.
If any companion matrix in the primary rational canonical form of $N_A$ has a $k \times k$ zero block in its companion matrix, then we consider the augmented diagonal block given by the zero block and the $k+1$ entries of the column immediately to the left and the $k+1$ entries furthest to the right in row immediately above. If the zero block in the companion matrix is not the first diagonal block in the first companion matrix on the diagonal of the primary rational canonical form of $N_A$, then the augmented block in question will be of the form
$$\begin{bmatrix}
0 & 0 & \cdots & 0 \\
1 &0 & \cdots & 0 \\
0 &\vdots & \ddots &\vdots \\
\vdots &\ddots &\cdots & 0
\end{bmatrix} $$
for which, if we subtract the nilpotent matrix
$$ \begin{bmatrix}
-1 & 0 & \cdots & 0 & -1 & 0\\
0 & 0 & \cdots & 0 & 0 & -1\\
0 & -1 & & & 0 & 0 \\
\vdots & & \ddots & & \vdots & \vdots\\
\vdots & & & -1 & \vdots & \vdots \\
0 & & & & 0 & 0\\
1 & \cdots & \cdots & 1 & 1 & 1
\end{bmatrix}$$
the result is an invertible block.
Just as in the case of matrices over a field, this redistribution of columns and rows into slightly smaller or larger blocks does not change the block matrices used to decompose $A_d$ into the sum of an invertible matrix and a nilpotent one.
In the case that the first $k \times k$ block of the first companion matrix on the diagonal of the primary rational canonical form of $N_A$ is a zero block, we treat the first $(m+k) \times (m+k)$ block, which is comprised of $U_A$ and the zero block in question, as described in the case detailed below, then treat the bottom $(n-m-k) \times (n-m-k)$ block as described above.
In the special case that the $N_A$ block is an $(n-m) \times (n-m)$ zero block, a little more work is required than for the field case to find a decomposition of the matrix since the invertible block is not in a nice normal form. We first consider the case when $a_{mm}$ is nonzero. Then we augment the zero block by the last $n-m$ entries of the last column and row of $U_{A}$.
The augmented matrix
$$A = \begin{bmatrix}
a_{mm} & 0 & \cdots & 0 \\
\vdots & \ddots & \ddots & \vdots \\
\vdots & \ddots & \ddots & \vdots \\
0 & \cdots & \cdots & 0
\end{bmatrix}$$
can be written as the sum of the following:\\
$$U =
\begin{bmatrix}
a_{mm} +1 & 0 & \cdots & \cdots & 0 & 1 & 0\\
0 & 0 & \cdots & \cdots & 0 & 0 & 1 \\
\vdots & 1 & & & & \vdots & \vdots\\
\vdots & & \ddots & & & \vdots & \vdots\\
\vdots & & & \ddots & & \vdots & \vdots \\
0 & & & & 1 & 0 & 0 \\
-1 & \cdots & \cdots & \cdots & -1 & -1 & -1
\end{bmatrix} \text{ and } N=
\begin{bmatrix}
-1 & 0 & \cdots & 0 & -1 & 0\\
0 & 0 & \cdots & 0 & 0 & -1\\
0 & -1 & & & 0 & 0 \\
\vdots & & \ddots & & \vdots & \vdots\\
\vdots & & & & \vdots & \vdots \\
0 & & & -1 & 0 & 0\\
1 & \cdots & \cdots & 1 & 1 & 1
\end{bmatrix}$$
The last $(n-m)$ columns of $U$ are linearly independent because of the ones on exclusively different rows. If writing the first column as a linear combination of the others is possible, only the $(n-m)^{th}$ column can be included. Since $a_{mm} \neq 0$, the first column is not a scalar multiple of the $(n-m)^{th}$ column, and thus $U$ is invertible.
Therefore,
$A_{d}= U+N$, where
\setcounter{MaxMatrixCols}{20}
$$U= \begin{bmatrix}
a_{11} & \cdots & a_{1m} \\
& \ddots & \vdots \\
& & a_{mm}+1 & 0 & \cdots &\cdots & 0& 1& 0\\
& & 0& 0&\cdots&\cdots &\cdots & 0&1\\
& & 0 & 1 & 0 & \cdots & \cdots & 0 & 0 \\
& & \vdots & & \ddots & & & \vdots & \vdots \\
& & \vdots & & & \ddots & & \vdots & \vdots \\
& & 0 & & & & 1 & 0 & 0 \\
& & -1 & \cdots & \cdots & \cdots & -1 & -1 & -1
\end{bmatrix}$$
and
\setcounter{MaxMatrixCols}{20}
$$N= \begin{bmatrix}
0 & \cdots & 0 & \\
& \ddots & \vdots \\
& & 0 \\
& & & -1 & 0 & \cdots & 0 & -1 & 0\\
& & & 0 & 0 & \cdots & 0 & 0 & -1\\
& & & 0 & -1 & & & 0 & 0 \\
& & & \vdots & & \ddots & & \vdots & \vdots\\
& & & \vdots & & & -1 & \vdots & \vdots \\
& & & 0 & & & & 0 & 0\\
& & & 1 & \cdots & \cdots & 1 & 1 & 1
\end{bmatrix}.$$
\\
One can see that the first $(m-1)$ and the last $n$ columns are linearly independent. So if $U$ is not invertible, then the $m^{th}$ column can be written as a linear combination of the other columns. Note that because of the zeros, the coefficients of the last $m$ columns, except possibly the $(m+1)^{th}$ column, have to be zero. Since there is a negative one on the last row of the $m^{th}$ column, the coefficient of the $(n-1)^{th}$ column has to be one. Then the difference of the $m^{th}$ column and the $(n-1)^{th}$ column would be a linear combination of the first $(m-1)$ columns. The existence of such a linear combination would imply that $U_{A}$ is not invertible, a contradiction. Therefore $U$ is invertible.
Now consider the case in which $a_{mm}=0$. If there is a nonzero entry on the diagonal, say $a_{ii}$, then we can conjugate $A_{d}$ by a permutation matrix $P$ that swaps the $i^{th}$ row with the $m^{th}$ row. Then the $(m,m)^{th}$ entry in $PA_{d}P^{-1}$ will be nonzero, and we can apply the above method to decompose the matrix.
If all of the entries on the diagonal are zero, but $a_{(m-1)m}\neq 0$, then we can conjugate $A_{d}$ by an invertible matrix $S$ that subtracts the $(m-1)^{th}$ row from the $m^{th}$ row. Note that conjugation does not change the invertibility of the unit block and the nilpotence of the nilpotent block.
We define $S$ and its inverse $S^{-1}$ as the following $(m+n) \times (m+n)$ matrices:
$$S = \begin{bmatrix}
1 \\
& \ddots \\
& & 1 \\
& & -1 & 1\\
& & & & I_n \\
\end{bmatrix} \text{ and } S^{-1} = \begin{bmatrix}
1 \\
& \ddots \\
& & 1 \\
& & 1 & 1\\
& & & & I_n \\
\end{bmatrix}$$
$$ \text{So } A_{d}' = SA_{d}S^{-1} = \begin{bmatrix}
0 & \cdots & a_{1(m-1)}+a_{1m} & a_{1m} \\
\vdots & \ddots & \vdots & \vdots \\
a_{(m-1)1} & \cdots & a_{(m-1)m} & a_{(m-1)m} \\
a_{m1} - a_{(m-1)1} & a_{m(m-1)} & a_{m(m-1)}-a_{(m-1)m} & -a_{(m-1)m}\\
\end{bmatrix} \oplus [0]$$
Now the $(m,m)^{th}$ entry of $U_{A_{d}'}$ is nonzero, so we can use the first case for decomposition.
If all of the entries on the diagonal are zero but $a_{(m-1)m}= 0$, there is a nonzero entry in the $m^{th}$ column because $U_{A}$ is invertible. Let $a_{km}$ be nonzero. Then we can conjugate $A_{d}$ by a permutation matrix $P$ that swaps the $k^{th}$ row with the $(m-1)^{th}$ row.
$$ \text{Then let } A_{d}'' = PA_{d}P^{-1} = \begin{bmatrix}
0 & \cdots &a_{1(m-1)} &\cdots & a_{1k}&a_{1m} \\
\vdots & & \vdots & \vdots & \vdots & \vdots \\
a_{(m-1)1} & \cdots & 0& \cdots & a_{(m-1)k} & 0 \\
\vdots & & \vdots & \vdots & \vdots & \vdots \\
a_{k1} & \cdots & a_{k(m-1)} & \cdots & 0 & a_{km} \\
a_{m1} & \cdots & a_{m(m-1)} & \cdots & a_{mk}& 0\\
\end{bmatrix} \oplus [0]$$
Now the entry of the $m^{th}$ column on the $(m-1)^{th}$ row is nonzero, so we can conjugate $A_{d}''$ by $S$ as introduced above and follow the above method for decomposition.
Therefore, solely by applying a certain series of invertible linear transformations, one may find a nil-good decomposition of any square matrix over a division ring.
\end{proof}
\end{theorem}
\section{General Properties of Nil-good Rings}
Having established the essential fact that the ring of $n \times n$ matrices over a division ring is nil-good, we may observe some sufficient or necessary conditions for some types of widely used rings to be nil-good. In particular we give a necessary and sufficient condition for artinian rings and matrices over a local ring. For completeness, proofs of other elementary facts are provided.
The following four remarks also appear in \cite{Dan15}, but we briefly provide our own proofs of these elementary results for completeness.
\begin{remark}
A ring $R$ is nil-good if and only if there exists a nil ideal $\mathfrak{A}$ such that $R/ \mathfrak{A}$ is nil-good.
\begin{proof}
The forward direction is trivial: simply consider the ideal $(0)$.
Suppose now that $\mathfrak{A}$ is a nil ideal of $R$ and every element of $R/ \mathfrak{A}$ has a nil-good decomposition. If $\bar{a}$ is nilpotent in $R/\mathfrak{A}$ then $\bar{a}^k = \bar{0}$ in $R/\mathfrak{A}$ for some $k \in \mathbb{N}$, so $a^k \in \mathfrak{A}$. Since $\mathfrak{A}$ is a nil ideal, this implies $a^k$ is nilpotent, and then it is immediate that $a$ is nilpotent. Since any nil ideal is contained in the radical and units lift modulo $J(R)$, we conclude any unit $\bar{u}$ in $R/\mathfrak{A}$ lifts to a unit $u$ in $R$. So the nil-good decomposition of any element of $R/\mathfrak{A}$ lifts to the sum of a unit and a nilpotent in $R$.
\end{proof}
\end{remark}
As a corollary to this, we know a ring $R$ is nil-good if and only if for any nil ideal $\mathfrak{A}$ the quotient ring $R/ \mathfrak{A}$ is nil-good. The essence of the proof is similar to that of the above proposition, with the added observation that if $R$ is a nil-good ring and $\mathfrak{A}$ is a nil ideal, then given $a=n+u$ for a nilpotent $n$ in $R$ and a unit $u$ in $R$ the image $\bar{a}$ in $R/ \mathfrak{A}$ has the decomposition $\overline{u+n} = \bar{u} + \bar{n}$. Since $\mathfrak{A}$ is a nil ideal it is contained in $J(R)$ so $\bar{u}$ is a unit in the quotient ring. Moreover, the fact that $n^k = 0$ for some $k$ means that $n^k \in \mathfrak{A}$ so $\bar{n}^k = \bar{0}$.\\
\begin{remark}
If $R$ is nil-good then $J(R)$ is a nil ideal.
\begin{proof}
If $R$ is nil-good then for all $y \in J(R)$ we know $y=n+u$ where $n$ is nilpotent and $u$ is a unit or zero. Suppose for contradiction that $u$ is a unit. Then if $U(R)$ denotes the set of units in $R$, we know $1-yu^{-1} \in U(R)$ by definition of the Jacobson radical. However $1-yu^{-1} = 1 - nu^{-1} -1$ which implies $-nu^{-1} \in U(R)$, a contradiction. So we must have that $u=0$ which implies $y=n+0$ is nilpotent.
\end{proof}
\end{remark}
If $J(R)$ is a nil ideal, then by the above two remarks we know $R$ is nil-good if and only if $R/J(R)$ is nil-good. Therefore, if $J(R)$ is a nil ideal, we wish to know when $R/J(R)$ is nil-good. The following result will prove useful to that end.
If $J(R)$ is the unique ideal that is maximal both as a left ideal and as a right ideal, then we say $R$ is a local ring.\\
\begin{remark}
A local ring $R$ is nil-good if and only if $J(R)$ is nil.
\begin{proof}
If $R$ is nil-good then $J(R)$ is nil by Remark 4.2. If $J(R)$ is a nil ideal, then since $J(R)$ is equal to the unique maximal ideal of $R$, any element not in $J(R)$ is a unit. Therefore any element $a$ in $R$ either has the decomposition $a=n+0$ or $a=u+0$, which implies $R$ is nil-good.
\end{proof}
\end{remark}
\begin{remark}
If $R$ is nil-good then $R$ has no nontrivial central idempotents.
\begin{proof}
Suppose $R$ is nil-good and $e$ is a central idempotent. Then $e=u+n$ which implies $u=e-n$. If $u$ is a unit then $e-n$ is a unit that commutes with nilpotent $n$, which implies $u+n$ is a unit. This implies $e=1$. If $u=0$ then $e$ is nilpotent, but $e=e^2$ so $e=0$. Therefore $e$ is trivial if it is central.
\end{proof}
\end{remark}
The above remark allows us to conclude that a semisimple ring, which is always isomorphic to the direct product of matrix rings over division rings of various shapes and sizes, is nil-good if and only if it is simple. Observing that if $R$ is a left artinian ring then $R/J(R)$ is semisimple \cite{Lam01}, we arrive at the following result.\\
\begin{proposition}
If $R$ is a left artinian ring, then $R$ is nil-good if and only if $J(R)$ is maximal.
\begin{proof}
Suppose $R$ is left artinian. Then $R/J(R)$ is semisimple, and therefore $R/J(R) \simeq \mathbb{M}_{n_1}(D_1) \times \dots \times \mathbb{M}_{n_r}(D_r)$ for some division rings $D_1,...,D_r$ and positive integers $n_1,...,n_r$.
However, any nontrivial direct product contains central idempotents. So if $R$ is nil-good it must be that $R/J(R)$ is isomorphic to a matrix ring over a division ring, which is a simple semisimple ring. The quotient $R/J(R)$ is simple if and only if the ideal $J(R)$ is maximal.
Conversely, $J(R)$ is nil if $R$ is artinian, and if $J(R)$ is maximal then we know $R/J(R)$ is a simple semisimple ring. As shown for the first direction, we can conclude $R/J(R) \simeq \mathbb{M}_{n}(D)$ for some division ring $D$ and natural number $n$. Then by Remark 4.1, the nil-good decompositions of $R/J(R)$ lift modulo $J(R)$. Thus $R$ is a nil-good ring.
\end{proof}
\end{proposition}
We may also establish a few facts about matrices over nil-good rings. If $R$ is a simple artinian ring, then $\mathbb{M}_n(R)\simeq \mathbb{M}_n(\mathbb{M}_k(D))$ and since $\mathbb{M}_n(\mathbb{M}_k(D))\simeq \mathbb{M}_{nk}(D)$ we conclude that any matrix ring over a simple artinian ring is nil-good.
\begin{corollary}
If $R$ is a local ring such that $J(R)$ is nil, then $\mathbb{M}_n(R)$ is nil-good if and only if $\mathbb{M}_n(J(R))$, the maximal ideal of $\mathbb{M}_n(R)$, is a nil ideal.
\begin{proof}
Since $R$ is local, $R/J(R)$ is a division ring, and thus $\mathbb{M}_n(R/J(R))$ is nil-good. If ${\mathbb{M}_n(J(R))}$, the maximal ideal of $\mathbb{M}_n(R)$, is nil, then by the above remarks $\mathbb{M}_n(R)$ is nil-good.
Conversely, if $\mathbb{M}_n(R)$ is nil-good then by the above proposition its maximal ideal ${\mathbb{M}_n(J(R))}$ is nil.
\end{proof}
\end{corollary}
The above corollary is stated in this form because, although we would like to say that $R$ local and $J(R)$ nil implies that ${\mathbb{M}_n(R)}$ is nil-good, this only holds if the K\"{o}the conjecture does as well.
\section{The Nil-Good Clean Property}
We now define a new property ``nil-good clean,'' which is slightly weaker than clean or nil-good in general. An element is nil-good clean if it can be written as the sum of a nilpotent, idempotent and a unit. A ring is nil-good clean if all its elements are nil-good clean.
Observe that in the commutative case, nil-good clean is equivalent to the property clean. Let $R$ be a commutative ring that is nil-good clean. Then any element $a \in R$ can be written as the sum of a nilpotent $n$, an idempotent $e$ and a unit $u$. Since $u' = u+n$ is always a unit in a commutative ring,
$a = u' + e$. Therefore, $a$ is clean. For the opposite direction, if $R$ is a clean commutative ring, then any element $a$ can be written as the sum of a unit $u$ and an idempotent $e$. It follows that by letting the nilpotent $n$ be zero, we have the nil-good clean decomposition $a = e+ u+ 0$.
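The fact used above, that a unit plus a commuting nilpotent is a unit, comes with an explicit inverse: if $un=nu$ and $n^k=0$, then
$$(u+n)^{-1}=u^{-1}\sum_{j=0}^{k-1}(-nu^{-1})^{j},$$
a finite geometric series, since $(u+n)u^{-1}\sum_{j=0}^{k-1}(-nu^{-1})^j=1-(-nu^{-1})^k=1$.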
We have found one example of a nil-good clean ring that is not clean. It is a subring of the ring of lower-triangular column-finite matrices. We say a matrix is ``diagonal-finite'' if there is some fixed nonnegative integer $n$ such that only the first $n$ subdiagonals below the main diagonal contain nonzero entries. The set of lower-triangular matrices with this property forms a ring. We denote the ring of column-finite matrices by $\mathbb{CFM_N}(R)$ and denote an element of this ring by $A=(a_{ij})^\infty_{i,j=1}$, for which $(a_{ij})_j$ are the rows and $(a_{ij})_i$ are the columns. \\
\begin{definition}
We denote the ring of lower-triangular diagonal-finite matrices over a ring $R$ by $\mathbb{LTDFM_N}(R) = \{A=(a_{ij})\in \mathbb{CFM_N}(R) |$ there exists $n\in \mathbb{N}$ such that $a_{ij}=0$ for all $i\geq j+n, j\geq 1\}$.
\end{definition}
To see that this is indeed a subring, note that if $A$ is a lower triangular matrix with $n$ nonzero subdiagonals and $B$ is a lower triangular matrix with $m$ nonzero subdiagonals, then in the product $AB$ each column $(ab_{ij})_i$ will have $ab_{ij}=0$ above the diagonal, and $ab_{ij}=0$ if $i\geq j+n+m$ for all $j\geq 1$. Therefore $AB$ can have at most $n+m$ nonzero diagonals below the main diagonal, so the set is closed under multiplication. Clearly it is also a group under addition, and it satisfies the usual ring axioms.\\
\begin{lemma}For every idempotent $E$ in $\mathbb{LTDFM_{N}}(R)$ and $ i \in \mathbb{N}$, $e_{ii}$ is idempotent. Moreover, $ e_{ii} = 1$ for all $i$ implies $E = I$.
\begin{proof}
As $E$ is idempotent, $E^2 = E$ and $e_{ii}e_{ii} = e_{ii}$. Thus $e_{ii}$ is idempotent.
We will show by induction on subdiagonals that $e_{ii}=1$ for all $i$ implies that the elements on every subdiagonal are zero. Consider the first subdiagonal as the base case. By the idempotence of $E$, when we consider the $(i-1)^{th}$ element on the $i^{th}$ row, we have the equation $e_{i,i-1}(1)+(1)e_{i,i-1}=e_{i,i-1}$. Then $2e_{i,i-1} = e_{i,i-1}$ and $e_{i,i-1}=0$. This shows that all the elements on the first subdiagonal are zero. Suppose that all elements below the main diagonal on or above the $k^{th}$ subdiagonal are zero. We will now show that all elements on the $(k+1)^{th}$ subdiagonal are also zero. Consider the $(i-k-1)^{th}$ element on the $i^{th}$ row. By the idempotence of $E$, $e_{i,i-k-1}=e_{i,i-k-1}e_{i-k-1,i-k-1}+e_{i,i-k}e_{i-k,i-k-1}+ \cdots +e_{i,i-1}e_{i-1,i-k-1}+e_{i,i}e_{i,i-k-1} = e_{i,i-k-1}(1)+(0)e_{i-k,i-k-1}+ \cdots + (0)e_{i-1,i-k-1}+(1)e_{i,i-k-1}=2 e_{i,i-k-1}$, and thus $e_{i,i-k-1}=0$. Therefore every element on the $(k+1)^{th}$ subdiagonal is zero. By induction, all the elements below the main diagonal are zero and $E = I$.
\end{proof}
\end{lemma}
\begin{lemma}For every unit $U$ in $\mathbb{LTDFM_{N}}(R)$ and $ i \in \mathbb{N}$, $u_{ii}$ is a unit.
\begin{proof}
Since $U$ is invertible, there exists $A\in \mathbb{LTDFM_{N}}(R)$ such that $UA = AU = I$. Because the matrices are lower triangular, the diagonal entries of a product are the products of the diagonal entries, so $u_{ii}a_{ii} = a_{ii}u_{ii} = 1$ for every $ i \in \mathbb{N}$, and thus each $u_{ii}$ is a unit.
\end{proof}
\end{lemma}
\begin{theorem}
The ring $\mathbb{LTDFM_{N}}(R)$ is not exchange.
\begin{proof}
Suppose, for contradiction, that $\mathbb{LTDFM_{N}}(R)$ is exchange. \\
Consider the matrix
$$ A = \begin{bmatrix} 1 &0 & \cdots & & & \\
1 & 1 &0 & \cdots & &\\
0 & 1 & 1 & 0 & \cdots \\
\vdots & 0 & 1 & 1 & 0 & \cdots \\
\vdots & \vdots & 0 & 1 & 1 & \ddots \\
\vdots & \vdots & \vdots & \ddots & \ddots & \ddots
\end{bmatrix}$$
Since the ring is assumed to be exchange, there exists an idempotent matrix $E$ such that $E \in ( \mathbb{LTDFM_{N}}(R) ) A$ and $I-E \in ( \mathbb{LTDFM_{N}}(R) ) (I-A)$.
Note that $I-E=B(I-A)$ for some $B \in \mathbb{LTDFM_{N}}(R)$.
Since
$$I-A = \begin{bmatrix} 0 &0 & \cdots & & & \\
1 & 0 &0 & \cdots & &\\
0 & 1 & 0 & 0 & \cdots \\
\vdots & 0 & 1 & 0 & 0 & \cdots \\
\vdots & \vdots & 0 & 1 & 0 & \ddots \\
\vdots & \vdots & \vdots & \ddots & \ddots & \ddots
\end{bmatrix}$$
and $(I-A)_{ii}=0$ for all $i$, we get $(I-E)_{ii} = B_{ii}\,(I-A)_{ii} = B_{ii}\cdot 0 = 0$. Therefore $E_{ii} = 1$ for all $i$. As proved in the earlier lemma, an idempotent $E$ whose main diagonal consists of ones must equal $I$, and therefore $I \in ( \mathbb{LTDFM_{N}}(R))A$. This means that there exists $C=(c_{ij}) \in \mathbb{LTDFM_{N}}(R)$ such that $CA = I$. But solving $CA=I$ entry by entry forces $c_{ii}=1$ and $c_{ij}=-c_{i,j+1}$ for $j<i$, so any left inverse of $A$ must have ones and negative ones alternating infinitely in each column, and thus is not in $ \mathbb{LTDFM_{N}}(R)$. This is a contradiction, so $ \mathbb{LTDFM_{N}}(R)$ is not exchange.
\end{proof}
\end{theorem}
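The behaviour of the left inverse used at the end of the proof can be observed on finite truncations. Below is a minimal numerical sketch of ours (assuming NumPy; the truncation size is arbitrary): the inverse of the truncated $A$ has entries $(-1)^{i-j}$ on and below the diagonal, so its bandwidth grows without bound.
\begin{verbatim}
import numpy as np

# finite truncation of the matrix A from the proof:
# ones on the main diagonal and on the first subdiagonal
N = 8
A = np.eye(N) + np.diag(np.ones(N - 1), -1)
Ainv = np.round(np.linalg.inv(A)).astype(int)
print(Ainv)
# each column alternates 1, -1, 1, -1, ... below the diagonal
assert all(Ainv[i, j] == (-1) ** (i - j)
           for i in range(N) for j in range(i + 1))
\end{verbatim}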
\begin{corollary}
$\mathbb{LTDFM_{N}}(R)$ is not clean for any unital ring $R$, since every clean ring is exchange.
\end{corollary}
\begin{proposition}
The ring $\mathbb{LTDFM_N}(R)$ is nil-good clean if $R$ is a clean ring.
\begin{proof} Let $A=(a_{ij})\in \mathbb{LTDFM_N}(R)$ be an infinite matrix with at most $n$ nonzero diagonals on or below the main diagonal. We can write $A$ as the sum of two block-diagonal elements of $\mathbb{LTDFM_N}(R)$. We define the first summand $D=\bigoplus D_k$; it is the direct sum of $2n \times 2n$ blocks: for all $k\in \mathbb{N}$ define $D_k=(d_{ij})^{2n(k+1)}_{i,j=1+2kn}$ where $d_{ij}=a_{ij}$ if $2kn<i,j\leq2(k+1)n$. We define the second summand $N$ as the direct sum of the $n \times n$ zero matrix and $\bigoplus N_k$, where for all $k\in \mathbb{N}$ we define $N_k=(n_{ij})^{(2k+3)n}_{i,j=1+(2k+1)n}$ where $n_{ij}=a_{ij}$ if $2(k+1)n<i\leq(2k+3)n$ and $(2k+1)n<j\leq 2(k+1)n$.
$$A =
\left[
\begin{array}{c@{}c@{}c}
2n\left\{ \left[ \begin{array}{cccccccc}
*& & & & & & & \\
*&*& & & & & & \\
*&*&*& & & & & \\
*&*&*&*& & & & \\
*&*&*&*&*& & & \\
&*&*&*&*&*& & \\
& &*&*&*&*&*& \\
& & &*&*&*&*&* \\
\end{array}\right] \right. & & \\
2n \left\{ \left[ \begin{array}{cccccccccc}
& & & & & &*&*&*&* \\
& & & & & & &*&*&* \\
& & & & & & & &*&* \\
& & & & & & & & &* \\
& & & & & & & & & \\
& & & & & & & & & \\
& & & & & & & & & \\
& & & & & & & & & \\
\end{array} \right] \right. &\left[ \begin{array}{cccccccc}
*& & & & & & & \\
*&*& & & & & & \\
*&*&*& & & & & \\
*&*&*&*& & & & \\
*&*&*&*&*& & & \\
&*&*&*&*&*& & \\
& &*&*&*&*&*& \\
& & &*&*&*&*&* \\
\end{array}\right] & \\
& & \ddots \\
\end{array} \right] $$
$$D =
\left[
\begin{array}{c@{}c@{}c}
2n\left\{ \left[ \begin{array}{cccccccc}
*& & & & & & & \\
*&*& & & & & & \\
*&*&*& & & & & \\
*&*&*&*& & & & \\
*&*&*&*&*& & & \\
&*&*&*&*&*& & \\
& &*&*&*&*&*& \\
& & &*&*&*&*&* \\
\end{array}\right] \right. & & \\
&2n\left\{ \left[ \begin{array}{cccccccc}
*& & & & & & & \\
*&*& & & & & & \\
*&*&*& & & & & \\
*&*&*&*& & & & \\
*&*&*&*&*& & & \\
&*&*&*&*&*& & \\
& &*&*&*&*&*& \\
& & &*&*&*&*&* \\
\end{array}\right] \right. & \\
& & \ddots \\
\end{array}\right] $$
$$N =
\left[
\begin{array}{c@{}c@{}c@{}c}
n\left\{ \left[ \begin{array}{ccc}
& & \\
&\mathbf{0}& \\
& & \\
\end{array}\right] \right. & & & \\
&2n\left\{ \left[ \begin{array}{cccccccc}
& & & & & & & \\
& & & & & & & \\
& & & & & & & \\
& & & & & & & \\
*&*&*&*& & & & \\
&*&*&*& & & & \\
& &*&*& & & & \\
& & &*& & & & \\ \end{array}\right] \right. & \\
& & 2n\left\{ \left[ \begin{array}{cccccccc}
& & & & & & & \\
& & & & & & & \\
& & & & & & & \\
& & & & & & & \\
*&*&*&*& & & & \\
&*&*&*& & & & \\
& &*&*& & & & \\
& & &*& & & & \\\end{array}\right] \right. &\\
&&& \ddots\\
\end{array}\right] $$
Note that $N$ is a direct sum of finite nilpotent matrices and is therefore nilpotent itself. Moreover, $D$ is the direct sum of finite matrices over a clean ring, so each $D_k$ has a clean decomposition $D_k=E_k+U_k$ with $E_k$ idempotent and $U_k$ a unit. The direct sum of the $U_k$ is a unit, the direct sum of the $E_k$ is an idempotent, and the sum of these two direct sums is a clean decomposition of $D$. Hence $A=N+D$ is the sum of a nilpotent, an idempotent and a unit, and $\mathbb{LTDFM_N}(R)$ is nil-good clean.
\end{proof}
\end{proposition}
\textbf{Question 1:} Is there a nil-good clean ring that is exchange but not clean? \\
We suspect that there is little overlap between nil-good clean rings that are not clean and exchange rings.\\
\textbf{Acknowledgement}
The authors would like to thank Alexander J. Diesl for his guidance and contributions to our research.\\
\nocite{*}
\bibliographystyle{amsplain}
| {
"timestamp": "2015-12-16T02:03:59",
"yymm": "1512",
"arxiv_id": "1512.04640",
"language": "en",
"url": "https://arxiv.org/abs/1512.04640",
"abstract": "The notion of clean rings and 2-good rings have many variations, and have been widely studied. We provide a few results about two new variations of these concepts and discuss the theory that ties these variations to objects and properties of interest to noncommutative algebraists. A ring is called nil-good if each element in the ring is the sum of a nilpotent element and either a unit or zero. We establish that the ring of endomorphisms of a module over a division is nil-good, as well as some basic consequences. We then define a new property we call nil-good clean, the condition that an element of a ring is the sum of a nilpotent, an idempotent, and a unit. We explore the interplay between these properties and the notion of clean rings.",
"subjects": "Rings and Algebras (math.RA)",
"title": "Nil-good and nil-good clean matrix rings",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9873750529474512,
"lm_q2_score": 0.8006920092299293,
"lm_q1q2_score": 0.7905833150080026
} |
https://arxiv.org/abs/1802.04689 | A bump in the road in elementary topology | We observe a subtle and apparently generally unnoticed difficulty with the definition of the relative topology on a subset of a topological space, and with the weak topology defined by a function. | \section{Relative Topology}
One of the most elementary constructions in general topology is the definition of the relative or
subspace topology on a subset of a topological space. But it turns out it is not quite as elementary to
do this properly as has generally been thought.
If $(X,{\mathcal T})$ is a topological space and $Y\subseteq X$, the {\em relative topology},
or {\em subspace topology}, on $Y$ from ${\mathcal T}$ is
$${\mathcal T}_Y=\{U\cap Y:U\in{\mathcal T}\}$$
i.e.\ the open sets in $Y$ (called the {\em relatively open sets}) are the intersections with $Y$ of the open sets in $X$.
The main issue we discuss is whether ${\mathcal T}_Y$ is really a topology on $Y$. This is generally
considered ``obvious'' or ``trivial.'' We write out the ``obvious'' argument:
\begin{Prop}\label{RelTopProp}
${\mathcal T}_Y$ is a topology on $Y$.
\end{Prop}
\begin{proof}
We have $\emptyset=\emptyset\cap Y$ and $Y=X\cap Y$, so $\emptyset,Y\in{\mathcal T}_Y$.
If $U_1\cap Y,\dots,U_n\cap Y\in{\mathcal T}_Y$, where $U_1,\dots,U_n\in{\mathcal T}$, then
$$(U_1\cap Y)\cap\cdots\cap(U_n\cap Y)=(U_1\cap\cdots\cap U_n)\cap Y\in{\mathcal T}_Y$$
since $U_1\cap\cdots\cap U_n\in{\mathcal T}$.
If $\{U_i\cap Y:i\in I\}$ is a collection of sets in ${\mathcal T}_Y$, where each $U_i\in{\mathcal T}$, then
$$\bigcup_{i\in I}(U_i\cap Y)=\left ( \bigcup_{i\in I}U_i\right ) \cap Y\in{\mathcal T}_Y$$
since $\cup_{i\in I}U_i\in{\mathcal T}$.
\end{proof}
Most standard topology references, e.g.\ \cite{Bourbaki}, \cite{Dugundji}, \cite{Engelking}, \cite{HallS}, \cite{HockingY},
\cite{Kasriel}, \cite{Kelley}, \cite{Munkres}, \cite{Willard}, either give this argument explicitly or state that
the result is ``trivial'' or ``easily verified,'' presumably using this argument.
But actually there is a subtle problem with the last part of the argument: how do we know that
every indexed collection of sets in ${\mathcal T}_Y$ is of the form $\{U_i\cap Y:i\in I\}$ for some $U_i\in{\mathcal T}$?
In fact, the Axiom of Choice (AC) is needed to assert this, since for a given $V\in{\mathcal T}_Y$ there
are in general many $U\in{\mathcal T}$ for which $V=U\cap Y$, and one must somehow be chosen.
(The same comment might apply to the finite intersection argument, but there
only finitely many choices need to be made so the AC is not needed.)
When the AC was first formulated and its nature understood, it was observed that mathematicians
had already been using it extensively without comment and generally without notice. The relative topology
example shows that this is still happening.
But does \ref{RelTopProp} really require the AC? There is, in fact, a simple way to avoid it:
there is a systematic way to choose the $U_i$ (I am indebted to
S.\ Jabuka for this observation). If $V\in{\mathcal T}_Y$, there is a largest open set $U\in{\mathcal T}$ such that
$V=U\cap Y$, namely the union of all $W\in{\mathcal T}$ for which $V=W\cap Y$. A correct phrasing of
the proof would thus be:
\begin{proof}
We have $\emptyset=\emptyset\cap Y$ and $Y=X\cap Y$, so $\emptyset,Y\in{\mathcal T}_Y$.
If $V_1,\dots,V_n\in{\mathcal T}_Y$, then, for each $k$, $V_k=U_k\cap Y$ for some
$U_k\in{\mathcal T}$; so
$$V_1\cap\cdots\cap V_n=(U_1\cap Y)\cap\cdots\cap(U_n\cap Y)=(U_1\cap\cdots\cap U_n)\cap Y\in{\mathcal T}_Y$$
since $U_1\cap\cdots\cap U_n\in{\mathcal T}$.
If $\{V_i:i\in I\}$ is a collection of sets in ${\mathcal T}_Y$, for each $i\in I$ let $U_i$ be the union of all $W\in{\mathcal T}$ such that
$V_i=W\cap Y$. Then $U_i\in{\mathcal T}$ and $V_i=U_i\cap Y$ for each $i$, so
$$\bigcup_{i\in I}V_i=\bigcup_{i\in I}(U_i\cap Y)=\left ( \bigcup_{i\in I}U_i\right ) \cap Y\in{\mathcal T}_Y$$
since $\cup_{i\in I}U_i\in{\mathcal T}$.
\end{proof}
There is an alternate argument which avoids
the AC in
\cite{Kuratowski} (which is the only topology book I have found with a complete correct proof of
\ref{RelTopProp}). Recall that a {\em Kuratowski closure operation} on a set $Y$ is an assignment
$A\mapsto\bar A$ for each subset $A$ of $Y$, with the properties $\bar\emptyset=\emptyset$,
$A\subseteq\bar A=\bar{\bar A}$ for all $A$, and $\overline{A\cup B}=\bar A\cup\bar B$ for all $A,B$.
It is easy to show (the argument is in many standard references and can be found in
\cite{Blackadar}, and does not use the AC) that any
Kuratowski closure operation defines closure with respect to a unique
topology for which the closed sets are precisely the sets $A$ for which $\bar A=A$.
To prove \ref{RelTopProp}, for $A\subseteq Y$ define $\tilde A=\bar A \cap Y$, where $\bar A$
is the closure of $A$ in $X$. It is nearly trivial to check (without using the AC) that $A\mapsto
\tilde A$ is a Kuratowski closure operation on $Y$, and that the closed sets with respect to the
corresponding topology are precisely the complements of the sets in ${\mathcal T}_Y$. It follows that
the topology defined by this closure operation is ${\mathcal T}_Y$, and in particular ${\mathcal T}_Y$ is a topology.
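As an informal illustration, the Kuratowski route can be verified mechanically on a small finite example; in the following Python sketch (ours, not from the original note) the space $X$, the topology, and the subspace $Y$ are ad hoc choices.
\begin{verbatim}
from itertools import combinations

X = {0, 1, 2, 3}
T = [set(), {0}, {0, 1}, X]      # a topology on X
Y = {1, 2}

def closure(A):
    # smallest closed set of (X, T) containing A
    closed = [X - U for U in T]
    return set.intersection(*[C for C in closed if A <= C])

def tilde(A):
    return closure(A) & Y        # the candidate closure operation on Y

def powerset(S):
    return [set(c) for r in range(len(S) + 1) for c in combinations(S, r)]

# Kuratowski axioms for A -> tilde(A)
assert tilde(set()) == set()
for A in powerset(Y):
    assert A <= tilde(A) and tilde(tilde(A)) == tilde(A)
    for B in powerset(Y):
        assert tilde(A | B) == tilde(A) | tilde(B)

# fixed points of tilde = complements of the relatively open sets
fixed = {frozenset(A) for A in powerset(Y) if tilde(A) == A}
assert fixed == {frozenset(Y - (U & Y)) for U in T}
print("closure operation and relative topology agree")
\end{verbatim}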
\section{The Weak Topology Defined by a Function}
If $(X,{\mathcal T})$ is a topological space, $Y$ a set, and $f:Y\to X$ a function, there is a weakest topology
on $Y$ making $f$ continuous. It should be
$${\mathcal T}_Y=\{f^{-1}(U):U\in{\mathcal T}\}\ .$$
But is ${\mathcal T}_Y$ actually a topology? If $f$ is surjective, there is no difficulty verifying this (using that
preimages respect unions and intersections). However, if $f$ is not surjective, we run into the same
problem as in \ref{RelTopProp} (which is actually just the case where $f$ is injective), since many
different open sets in $X$ can have the same preimage in $Y$, so the AC must apparently be used
to show that ${\mathcal T}_Y$ is a topology.
To show that ${\mathcal T}_Y$ is a topology without using the AC, the union trick works, and
the argument via Kuratowski closure operations
works here too: for $A\subseteq Y$, set $\tilde A=f^{-1}(\overline{f(A)})$.
There is also an alternate argument.
Let $Z=f(Y)\subseteq X$. By \ref{RelTopProp} we have that
${\mathcal T}_Z$ is a topology on $Z$. If ${\mathcal S}$ is a topology on $Y$, then
$f$ is continuous as a function from $(Y,{\mathcal S})$ to $(X,{\mathcal T})$ if and only if it is continuous as a
function from $(Y,{\mathcal S})$ to $(Z,{\mathcal T}_Z)$, and it is easily verified that ${\mathcal T}_Y=({\mathcal T}_Z)_Y$. But $f:Y\to Z$ is surjective,
so the AC is not needed to prove that $({\mathcal T}_Z)_Y$ is a topology.
\section{Should We Worry About the AC?}
Most modern mathematicians have no serious qualms about using the AC, and largely share
the opinion of Ralph Boas \cite[p.\ xi]{BoasPrimer}:
\begin{quote}
``[A]fter G\"{o}del's results, the assumption of the axiom of choice can do no mathematical harm
that has not already been done.''
\end{quote}
\bigskip
\noindent
As an analyst, I have no qualms about it myself. But I do believe:
\begin{enumerate}
\item[1.] When the AC is used, it should be mentioned.
\item[2.] The AC should not be used if it is not needed.
\end{enumerate}
There is a gray area with 2: the AC can drastically simplify proofs of some results which can be
proved without it. But the relative topology case is one where use of the AC is of
highly doubtful benefit.
| {
"timestamp": "2018-02-14T02:10:53",
"yymm": "1802",
"arxiv_id": "1802.04689",
"language": "en",
"url": "https://arxiv.org/abs/1802.04689",
"abstract": "We observe a subtle and apparently generally unnoticed difficulty with the definition of the relative topology on a subset of a topological space, and with the weak topology defined by a function.",
"subjects": "General Topology (math.GN)",
"title": "A bump in the road in elementary topology",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9669140235181256,
"lm_q2_score": 0.8175744761936437,
"lm_q1q2_score": 0.7905242263021199
} |
https://arxiv.org/abs/1610.06286 | A criterion for a degree-one holomorphic map to be a biholomorphism | Let $X$ and $Y$ be compact connected complex manifolds of the same dimension with $b_2(X)= b_2(Y)$. We prove that any surjective holomorphic map of degree one from $X$ to $Y$ is a biholomorphism. A version of this was established by the first two authors, but under an extra assumption that $\dim H^1(X {\mathcal O}_X)\,=\,\dim H^1(Y {\mathcal O}_Y)$. We show that this condition is actually automatically satisfied. | \section{Introduction}
Let $X$ and $Y$ be compact connected complex manifolds of dimension $n$. Let
$$
f \,:\, X\,\longrightarrow\, Y
$$
be a surjective holomorphic map such that the degree of $f$ is one, meaning that
the pullback homomorphism
$$
{\mathbb Z}\,\simeq\, H^{2n}(Y,\, {\mathbb Z})\, \stackrel{f^*}{\longrightarrow}\,
H^{2n}(X,\, {\mathbb Z})\,\simeq\, {\mathbb Z}
$$
is the identity map of $\mathbb Z$. It is very natural to ask, ``Under what conditions would $f$ be a
biholomorphism?'' An answer
to this was given by \cite[Theorem~1.1]{b-b}, namely:
\begin{result}[{\cite[Theorem~1.1]{b-b}}]\label{res:b-b}
Let $X$ and $Y$ be compact connected complex manifolds of dimension $n$, and let
$f \,:\, X\,\longrightarrow\, Y$
be a surjective holomorphic map such that the degree of $f$ is one.
Assume that
\vspace{-1mm}
\begin{enumerate}
\item[$(i)$] the $\mathcal{C}^\infty$ manifolds underlying $X$ and $Y$ are diffeomorphic, and
\item[$(ii)$] $\dim H^1(X,\, {\mathcal O}_X) \,=\, \dim H^1(Y,\, {\mathcal O}_Y)$.
\end{enumerate}
\vspace{-1mm}
Then, the map $f$ is a biholomorphism.
\end{result}
In the proof of Result~\ref{res:b-b}, the condition~$(i)$ is used {\em only} in concluding
that $\dim H^2(X,\, {\mathbb R}) = \dim H^2(Y,\, {\mathbb R})$. In other words, the
proof of \cite[Theorem~1.1]{b-b} establishes that if
$$
\dim H^2(X,\, {\mathbb R})\,=\,\dim H^2(Y,\, {\mathbb R})\ \ \text{ and }\ \
\dim H^1(X,\, {\mathcal O}_X)\,=\,\dim H^1(Y,\, {\mathcal O}_Y)\, ,
$$
then\,---\,with $X$, $Y$, and $f$ as above\,---\, $f$ is a biholomorphism.
There is some cause to believe that the condition $(ii)$ in Result~\ref{res:b-b} might be
superfluous (which we shall discuss presently). It is the basis for our main theorem, which
gives a simple, purely topological, criterion for a degree-one map to be a biholomorphism:
\begin{theorem}\label{th:deg1}
Let $X$ and $Y$ be compact connected complex manifolds of dimension $n$, and let
$f \,:\, X\,\longrightarrow\, Y$
be a surjective holomorphic map of degree one. Then, $f$ is a biholomorphism if and only if
the second Betti numbers of $X$ and $Y$ coincide.
\end{theorem}
If $X$ and $Y$ were assumed to be K{\"a}hler, then Theorem~\ref{th:deg1} would follow
from Result~\ref{res:b-b}. This is because, by the Hodge decomposition,
$\dim H^1(M,\, {\mathcal O}_M) = \frac{1}{2}\dim H^2(M,\, {\mathbb C})$ for any
compact K{\"a}hler manifold $M$. We shall show that this observation\,---\,i.e., that
condition $(ii)$ in Result~\ref{res:b-b} is automatically satisfied under the hypotheses
therein\,---\,holds
true in the general, {\em analytic} setting. In more precise terms, we have:
\begin{proposition}\label{p:main}
Let the manifolds $X$ and $Y$ and $f \,:\, X\,\longrightarrow\, Y$ be as in
Result~\ref{res:b-b}. Then, $f$ induces an isomorphism between $H^1(X,\, {\mathcal O}_X)$ and
$H^1(Y,\, {\mathcal O}_Y)$. In particular, $\dim H^1(X,\, {\mathcal O}_X) = \dim H^1(Y,\, {\mathcal O}_Y)$.
\end{proposition}
The above proposition might be unsurprising to many. It is well known when $X$ and $Y$ are projective.
Since we could not find an explicit statement of Proposition~\ref{p:main}\,---\,and since certain supplementary
details are required in the analytic case\,---\,we provide a proof of it in Section~\ref{sec:proof}.
The {\em non-trivial} step in proving Theorem~\ref{th:deg1} uses Result~\ref{res:b-b}: given
Proposition~\ref{p:main}, our theorem follows from Result~\ref{res:b-b} and the comment above upon its proof.
\section{Proof of Proposition~\ref{p:main}}\label{sec:proof}
We begin with a general fact that we shall use several times below. For any proper holomorphic map
$F\,:\, V \,\longrightarrow\, W$ between complex manifolds, the
Leray spectral sequence gives the following exact sequence:
\begin{equation}\label{eq:leray}
0 \,\longrightarrow\, H^1(W,\, F_*{\mathcal O} _V) \,\stackrel{\theta_F}\longrightarrow\, H^1(V,\, {\mathcal O} _V) \,
\longrightarrow\, H^0(W,\, R^1F_*{\mathcal O} _V)
\, \longrightarrow\, \cdots \, .
\end{equation}
With our assumptions on $X$, $Y$ and $f$, the map $f^{-1}$ (which is defined
outside the image in $Y$ of the set of points at which $f$ fails to be a local
biholomorphism) is holomorphic on its domain. Thus $f$ is bimeromorphic.
We note that any bimeromorphic holomorphic map of connected complex manifolds has
connected fibers, because it is biholomorphic on the complement of a thin analytic subset. In
particular, the fibers of $f$ are connected.
\noindent {\bf Claim~1.}\, {\em Let $F\,:\,V \,\longrightarrow\, W$ be a bimeromorphic
holomorphic map between compact, connected complex manifolds. The natural homomorphism
\begin{equation}\label{eq:nat}
{\mathcal O} _W \,\longrightarrow\, F_*{\mathcal O} _V
\end{equation}
is an isomorphism.}
\vspace{-0.5mm}
\noindent By definition, \eqref{eq:nat} is injective. In our case, it is an
isomorphism outside a closed complex analytic subset of $W$, say $\mathcal{S}$, of codimension
at least 2. So, to show that \eqref{eq:nat} is surjective, it suffices to show that given any
$w\in \mathcal{S}$, for each open connected set $U\ni w$ and each holomorphic function
$\psi$ on $F^{-1}(U)$ there is a function $H_\psi$ holomorphic on $U$ such that
$$
\psi \,=\, H_\psi\circ F \; \; \; \text{on $F^{-1}(U)$}.
$$
Since $F^{-1}$ is holomorphic on $W\!\setminus\!\mathcal{S}$, we set
$$\left.H_{\psi}\right|_{U\setminus\mathcal{S}} \,:= \,
\psi\circ (F^{-1}|_{U\setminus\mathcal{S}})\, .$$
This has a unique holomorphic extension
to $U$ by Hartogs' theorem (or, more accurately, Riemann's second extension theorem),
since $\mathcal{S}$ is of codimension at least 2. As $F$ has compact, connected
fibers, this extension has the desired properties. This shows that the homomorphism in \eqref{eq:nat} is surjective.
Hence the claim.
By Claim~1, \eqref{eq:leray} yields an injective homomorphism
\begin{equation}\label{eq:le1}
\Theta_f\,:\,H^1(Y,\,{\mathcal O} _Y) \,\longrightarrow\, H^1(X,\, {\mathcal O} _X )\, ,
\end{equation}
which is the composition of the homomorphism $\theta_f$, as given by \eqref{eq:leray}, and the isomorphism
induced by \eqref{eq:nat}.
There is a commutative diagram of holomorphic maps
\begin{equation}\label{eq:comm}
\xymatrix{& Z \ar[d]^h \ar[dl]_g \\ X \ar[r]^f & ~Y\, , }
\end{equation}
where $h$ is a composition of successive blow-ups with smooth centers, such that the subset of $Y$
over which $h$ fails to be a local biholomorphism (i.e., the image in $Y$ of the exceptional locus
in $Z$)
coincides with the subset of $Y$ over which $f$ fails to be a local biholomorphism. This fact
(also called ``Hironaka's Chow Lemma'')
can be deduced from Hironaka's Flattening Theorem
\cite[p.~503]{hiro}, \cite[p.~504, Corollary~1]{hiro}. We briefly recall the argument.
The set $\mathcal{A}\subset Y$ of points over which $f$ is not flat coincides with the set
of points over which $f$ fails to be a local biholomorphism. Hironaka's Flattening Theorem states that there exists
a sequence of blow-ups of $Y$ with smooth centers over $\mathcal{A}$, whose composition is a
map $$h\,:\,Z \,\longrightarrow\, Y$$ such
that\,---\,with $\widetilde Z$ denoting the proper transform of $X$ in $X\times_Y Z$
and ${\sf pr}_Z$ denoting the projection $X\times_Y Z \,\longrightarrow\, Z$\,---\,the map
$\widetilde{f} := \left.{\sf pr}_Z\right|_{\widetilde Z}$
is flat. In our case this implies that $\widetilde{f}\,:\,\widetilde Z \,\longrightarrow\, Z$ is a biholomorphism.
The map $g = {\sf pr}_X\!\circ(\widetilde{f}\,)^{-1}$ then has the properties stated above.
The maps $h$ and $g$ above are proper modifications. Thus, all the assumptions in Claim~1 hold
true for $g\,:\,Z \,\longrightarrow\, X$. Hence, we conclude that the homomorphism
${\mathcal O} _X \,\longrightarrow\, g_*{\mathcal O} _Z$ is an isomorphism. By \eqref{eq:leray} applied now to
$(V, W, F) = (Z, X, g)$, the homomorphism
\begin{equation}\label{eq:le2}
\Theta_g\,:\,H^1(X, \,{\mathcal O} _X) \,\longrightarrow\, H^1(Z, \, {\mathcal O} _Z),
\end{equation}
which is analogous to $\Theta_f$ above, is injective.
Similarly, the homomorphism ${\mathcal O} _Y \,\longrightarrow\, h_*{\mathcal O} _Z$ is an isomorphism.
Since \eqref{eq:leray}, an exact sequence, is natural, we
would be done\,---\,in view of \eqref{eq:le1}, \eqref{eq:le2} and the diagram \eqref{eq:comm}\,---\,if we show
that the homomorphism $\Theta_h\,:\,H^1(Y,\,{\mathcal O} _Y) \,\longrightarrow\, H^1(Z,\,{\mathcal O} _Z)$,
(given by applying \eqref{eq:leray} to $(V, W, F) = (Z, Y, h)$) is an isomorphism.
To this end, we will use the following:
\noindent {\bf Claim 2.}\, {\em For a complex manifold $W$ of dimension $n$, if
$$
\sigma\,:\,S \,\longrightarrow\, W
$$
is a blow-up with smooth center, then the direct image $R^1\sigma_* {\mathcal O} _S$ vanishes.}
\noindent This claim is familiar to many. However, since it is not so easy to point to one {\em specific}
work for a proof in the {\em analytic} case, we indicate an argument. We first study the blow-up
$\widetilde \sigma\,:\, \widetilde S \,\longrightarrow\, \widetilde W$ of a point $0\in \widetilde W$ with exceptional divisor
$\widetilde E\,=\,{\widetilde\sigma}^{-1}(0)$.
We use the ``Theorem on formal functions'' \cite[Theorem~11.1]{Ha},
and the ``Grauert comparison theorem'' \cite[Theorem III.3.1]{BSt} for the analytic case.
Let ${\mathfrak m}_0 \,\subset \,{\mathcal O} _{\widetilde W}$ be the maximal ideal sheaf for the
point $0\,\in\, \widetilde W$. Then the completion
$\left((R^1{\widetilde\sigma}_*{\mathcal O} _{\widetilde S})_0\right)^{\!\boldsymbol{\vee}}$ of
$(R^1{\widetilde\sigma}_*{\mathcal O} _{\widetilde S})_0$ in the $\mathfrak m_0$-adic topology is equal to
$$
\lim_{{\substack{\longleftarrow \\ k}}} H^1\left({\widetilde \sigma}^{-1}(0),\, {\mathcal O} _{\widetilde S}/{\widetilde \sigma}^*(\mathfrak{m}^k_0)\right).
$$
We have the exact sequence
$$
0 \,\longrightarrow\,
{\mathcal O} _{\widetilde E}(k) \,\longrightarrow\, {\mathcal O} _{\widetilde S}/{\widetilde \sigma}^*(\mathfrak m^{k+1}_0) \,
\longrightarrow\,{\mathcal O} _{\widetilde S}/{\widetilde \sigma}^*(\mathfrak m^{k}_0) \,\longrightarrow\, 0
$$
of sheaves with support on
$$
{\widetilde \sigma}^{-1}(0)\,=\,\widetilde E\,\simeq\, \mathbb P^{n-1}
$$
so that the cohomology groups $H^q(\widetilde E,\, {\mathcal O} _{\widetilde E}(k))$ vanish for all $k \geq 0$, and $q > 0$. In particular the maps
$$
H^1(\widetilde S,\, {\mathcal O} _{\widetilde S}/{\widetilde \sigma}^*(\mathfrak m^{k+1}_0)) \,\longrightarrow \,
H^1(\widetilde S,\,{\mathcal O} _{\widetilde S}/{\widetilde \sigma}^*(\mathfrak m^{k}_0))
$$
are isomorphisms for $k\geq 1$, and furthermore we have
$$
H^1(\widetilde S,\,{\mathcal O} _{\widetilde S}/{\widetilde \sigma}^*(\mathfrak m_0))\,\simeq\, H^1(\mathbb P^{n-1},\,
{\mathcal O} _{\mathbb P^{n-1}})\,=\,0\, .
$$
This shows
that $R^1{\widetilde \sigma}_* {\mathcal O} _{\widetilde S}$ vanishes. This establishes the claim for blow-up at a point.
Now consider the case where the center of the blow-up $\sigma$ is a smooth submanifold $A$ of
positive dimension. Since the claim is local with respect to the base space $W$, we may assume that $W$ is of the
form $A \times \widetilde W$, where both $A$ and $\widetilde W$ are small open subsets of complex number spaces,
e.g.\ polydisks. Denote by $\pi \,:\, W \,\longrightarrow\, \widetilde W$ the projection. We identify $A$ with
$A\times\{0\} =\pi^{-1}(0) \subset W$ as a submanifold.
Note that the blow-up
$$
\sigma\,:\,S \,\longrightarrow\, W
$$
of $W$ along $A$ is the fiber product
$\widetilde S \times_{\widetilde W} W \,\longrightarrow\, W$. The exceptional divisor $E$ of $\sigma$ can be identified with
$A\times \widetilde E$.
In the above argument we replace the maximal ideal sheaf $\mathfrak m_0$ by the vanishing ideal $\mathcal I_A$ of $A$.
Now $\sigma^*(\mathcal I^k_A)/\sigma^*(\mathcal I^{k+1}_A) \,\simeq\,{\mathcal O} _{E}(k)$, and by
\cite[Theorem III.3.4]{BSt} we have
$$
R^1(\left.\sigma\right|_{E})_* {\mathcal O} _{E} \,\simeq \,
\pi^* R^1(\left.\widetilde\sigma\right|_{\widetilde E})_*{\mathcal O} _{\widetilde E}\,=\,0
$$
so that the
earlier argument can be applied. Hence the claim.
Now, let
$$
Z\,=\,Z_N \,\stackrel{\tau_N}{\longrightarrow} \,Z_{N-1} \,\stackrel{\tau_{N\!-\!1}}{\longrightarrow}\,
\cdots\, \stackrel{\tau_2}{\longrightarrow} \, Z_1\,\stackrel{\tau_1}{\longrightarrow}\,Z_0
\,=\, Y
$$
be the sequence of blow-ups that constitute $h\,:\,Z \,\longrightarrow\, Y$.
We have $\tau_{j*}{\mathcal O} _{Z_j} \simeq {\mathcal O} _{Z_{j-1}}$ and
$R^1\tau_{j\, *}{\mathcal O} _{Z_{j}} = 0$ for $1\,\leq\, j\,\leq\, N$. Combining these
with \eqref{eq:leray} yields a canonical injective homomorphism
$$
H^1(Z_{j-1}, \,{\mathcal O} _{Z_{j-1}})\,\longrightarrow\, H^1(Z_{j},\, {\mathcal O} _{Z_{j}})
$$
that is an isomorphism for all $j\,=\,1,\, \cdots,\, N$. Hence, by naturality, the
homomorphism $\Theta_h\,:\,H^1(Y,\,{\mathcal O} _Y) \,\longrightarrow\, H^1(Z,\,{\mathcal O} _Z)$ is an isomorphism.
By our above remarks, this establishes the result.
\section*{Acknowledgements}
The second-named author thanks the Philipps-Universit\"at, where the work for this paper was carried
out, for its hospitality.
| {
"timestamp": "2016-10-21T02:02:08",
"yymm": "1610",
"arxiv_id": "1610.06286",
"language": "en",
"url": "https://arxiv.org/abs/1610.06286",
"abstract": "Let $X$ and $Y$ be compact connected complex manifolds of the same dimension with $b_2(X)= b_2(Y)$. We prove that any surjective holomorphic map of degree one from $X$ to $Y$ is a biholomorphism. A version of this was established by the first two authors, but under an extra assumption that $\\dim H^1(X {\\mathcal O}_X)\\,=\\,\\dim H^1(Y {\\mathcal O}_Y)$. We show that this condition is actually automatically satisfied.",
"subjects": "Complex Variables (math.CV); Algebraic Geometry (math.AG)",
"title": "A criterion for a degree-one holomorphic map to be a biholomorphism",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9811668679067631,
"lm_q2_score": 0.8056321936479701,
"lm_q1q2_score": 0.7904596161264337
} |
https://arxiv.org/abs/1309.2141 | A uniqueness theorem for higher order anharmonic oscillators | We study for $\alpha\in\R$, $k \in {\mathbb N} \setminus \{0\}$ the family of self-adjoint operators \[ -\frac{d^2}{dt^2}+\Bigl(\frac{t^{k+1}}{k+1}-\alpha\Bigr)^2 \] in $L^2(\R)$ and show that if $k$ is even then $\alpha=0$ gives the unique minimum of the lowest eigenvalue of this family of operators. Combined with earlier results this gives that for any $k \geq 1$, the lowest eigenvalue has a unique minimum as a function of $\alpha$. | \section{Introduction}
\subsection{Definition of $\Q^{(k)}(\alpha)$ and main result}
For any $k \in {\mathbb N}\setminus\{0\}$ and $\alpha\in\mathbb{R}$ we define
the operator
\begin{equation*}
\Q^{(k)}(\alpha) = -\frac{d^2}{dt^2}+\Bigl(\frac{t^{k+1}}{k+1}-\alpha\Bigr)^2,
\end{equation*}
as a self-adjoint operator in $L^2({\mathbb R})$. This family of operators is
connected with the study of Schr\"{o}dinger operators with a magnetic field
vanishing along a curve and with the Ginzburg-Landau theory of superconductivity.
It first appeared in~\cite{mo} (for $k=1$) and was later studied
in~\cite{hemo1,pakw,heko1,helf,hepe,dora,fope,fope2}.
We denote by $\bigl\{ \lambda_{j,\Q^{(k)}(\alpha)}\bigr\}_{j=1}^{\infty}$ the
increasing sequence of eigenvalues of $\Q^{(k)}(\alpha)$. In particular,
$\eigone{\Q^{(k)}(\alpha)}$ is the ground state eigenvalue, and we denote by
$u_{\alpha}$ the associated positive, $L^2$-normalized eigenfunction.
The main result of the present paper is the following theorem.
\begin{thm}\label{thm:main}
Assume that $k\geq 2$ is an even integer. Then $\eigone{\Q^{(k)}(\alpha)}$
attains a unique minimum at $\alpha=0$.
Moreover, this minimum is non-degenerate.
\end{thm}
\begin{remark}
This extends the previous results and discussions
in~\cite{helf,hepe}, where similar results were obtained for odd $k$.
The non-degeneracy was proved in~\cite{hepe}. In that paper it was also
shown that Theorem~\ref{thm:main} is valid for large even $k$. The fact that
the minimum is attained at $\alpha=0$ was suggested by numerical computations
done by V. Bonnaillie--No\"{e}l.
\end{remark}
Combining our
Theorem~\ref{thm:main} with the results of~\cite{helf,hepe}, we get the
following complete answer.
\begin{thm}
For any $k \in {\mathbb N}\setminus\{0\}$, the function
$\alpha \mapsto \eigone{\Q^{(k)}(\alpha)}$ attains a unique minimum. Moreover,
this minimum is non-degenerate.
\end{thm}
The paper is organized as follows. In Section~\ref{sec:aux} we give several
spectral bounds on the first two eigenvalues of $\Q^{(k)}(\alpha)$. These estimates
are used to prove Theorem~\ref{thm:main} for $2\leq k\leq 68$ in
Section~\ref{sec:proofsmallk} and for $k\geq 70$ in
Section~\ref{sec:prooflargek}.
\section{Auxiliary results}
\label{sec:aux}
\subsection{Introduction}
In this section we collect several spectral bounds that will help us in
proving Theorem~\ref{thm:main}. In the following, we assume that $k$ denotes
a positive even integer.
With the scaling $s=\alpha^{-1/(k+1)}t$ it becomes clear that the form domain
of $\Q^{(k)}(\alpha)$ is independent of $\alpha$. Thus, we are allowed to use the
machinery of analytic perturbation theory.
First we note that $\Q^{(k)}(\alpha)$ and $\Q^{(k)}(-\alpha)$ are unitarily equivalent
(map $t\mapsto-t$ along with $\alpha\mapsto-\alpha$). This implies that the
function $\alpha\mapsto\eigone{\Q^{(k)}(\alpha)}$ is even, and hence has a
critical point at $\alpha=0$. It is proved in~\cite{hepe} that this critical
point is a nondegenerate minimum. This also follows from our estimates below.
\begin{lemma}\label{lem:Virial}
If $\alpha_c$ is a critical point of $\eigone{\Q^{(k)}(\alpha)}$, then
\[
\int_{-\infty}^{+\infty}
\Bigl(\frac{t^{k+1}}{k+1}-\alpha_c\Bigr)u_{\alpha_c}(t)^2\,dt = 0
\]
and
\[
\int_{-\infty}^{+\infty}
\Bigl(\frac{t^{k+1}}{k+1}-\alpha_c\Bigr)^2u_{\alpha_c}(t)^2\,dt = \frac{\eigone{\Q^{(k)}(\alpha_c)}}{k+2}.
\]
\end{lemma}
\begin{proof}[Sketch of proof]
The first identity, usually referred to as the Feynman--Hellmann formula,
follows from first order perturbation theory,
\[
\frac{\partial}{\partial\alpha}\eigone{\Q^{(k)}(\alpha)} = -2\int_{-\infty}^{+\infty}
\Bigl(\frac{t^{k+1}}{k+1}-\alpha\Bigr)u_{\alpha}(t)^2\,dt.
\]
The second is a virial type identity and is proved by
scaling. We refer to~\cite{hepe} for the details.
\end{proof}
\subsection{Positive second derivative}
A key element in our approach is the following Lemma~\ref{lem:possecdiff},
which can be used to rule out local maxima under appropriate estimates on the
first eigenvalues.
\begin{lemma}[Lemma~2.3 in~\cite{hepe}]
\label{lem:possecdiff}
If $\alpha_c$ is a critical point of $\eigone{\Q^{(k)}(\alpha)}$ and
\[
\frac{k+2}{k+6}\eigtwo{\Q^{(k)}(\alpha_c)}>\eigone{\Q^{(k)}(\alpha_c)}
\]
then
\[
\frac{\partial^2}{\partial\alpha^2}
\eigone{\Q^{(k)}(\alpha)}\Big|_{\alpha=\alpha_c}>0.
\]
\end{lemma}
We give a sketch of the proof for the sake of completeness.
\begin{proof}[Sketch of proof]
The proof is based on perturbation theory. The second derivative of
$\eigone{\Q^{(k)}(\alpha)}$ is given by
\[
\frac{\partial^2}{\partial\alpha^2}\eigone{\Q^{(k)}(\alpha)}
=
2-4\int_{-\infty}^{+\infty} \Bigl(\frac{t^{k+1}}{k+1}-\alpha\Bigr)u_\alpha
\bigl(\partial_\alpha u_\alpha\bigr)\,dt.
\]
Here
\[
\partial_\alpha u_\alpha = 2(\Q^{(k)}(\alpha)-\eigone{\Q^{(k)}(\alpha)})^{-1}
\Bigl(\frac{t^{k+1}}{k+1}-\alpha\Bigr)u_\alpha,
\]
where the inverse is the regularized resolvent. The rest of the proof uses
Lemma~\ref{lem:Virial}, the bound
\[
\|(\Q^{(k)}(\alpha_c)-\eigone{\Q^{(k)}(\alpha_c)})^{-1}\|
\leq (\eigtwo{\Q^{(k)}(\alpha_c)}-\eigone{\Q^{(k)}(\alpha_c)})^{-1},
\]
and the Cauchy-Schwarz inequality.
\end{proof}
To apply Lemma~\ref{lem:possecdiff} we need good upper bounds on
$\eigone{\Q^{(k)}(\alpha)}$ and lower bounds on $\eigtwo{\Q^{(k)}(\alpha)}$. These will
be presented in the sections below.
\subsection{Upper bounds}
We will at several points need upper bounds on the first eigenvalue of
$\Q^{(k)}(\alpha)$. They are given in this section.
\begin{lemma}
\label{lem:criticalub}
Assume that $\alpha_c$ is a critical point of
$\alpha\mapsto\eigone{\Q^{(k)}(\alpha)}$. Then, for all $\alpha\in\mathbb{R}$ it holds
that
\[
\eigone{\Q^{(k)}(\alpha)}\leq \eigone{\Q^{(k)}(\alpha_c)}+(\alpha-\alpha_c)^2.
\]
\end{lemma}
\begin{proof}
This follows by inserting the eigenfunction $u_{\alpha_c}$ corresponding to
$\eigone{\Q^{(k)}(\alpha_c)}$ of $\Q^{(k)}(\alpha_c)$
into the quadratic form corresponding to $\Q^{(k)}(\alpha)$ and using
Lemma~\ref{lem:Virial}.
\end{proof}
\begin{lemma}
\label{lem:trial}
For all $\alpha\geq 0$ it holds that
\[
\eigone{\Q^{(k)}(\alpha)} \leq \alpha^2 + A_k,
\]
with
\[
A_k=
\begin{cases}
\frac{2^{3/2}}{9}
\bigl(\frac{4\pi^6-210\pi^4+4410\pi^2-26775}{7}\bigr)^{1/4},& k=2,\\
\frac{\pi^2}{4}\frac{k+2}{k+1}
\bigl(\frac{1}{4}(k+1)(2k+3)(2k+4)(2k+5)\bigr)^{-1/(k+2)}, & k\geq 4.\\
\end{cases}
\]
\end{lemma}
\begin{proof}
For $k\geq 4$ we refer to Lemma~3.1 in~\cite{hepe}. For $k=2$ we use the same
idea but with a different trial state. A calculation of the energy of the
function
\[
u(t)=
\begin{cases}
\frac{2}{\sqrt{3\rho}}\cos^2\bigl(\frac{\pi t}{2\rho}\bigr), &|t|<\rho,\\
0, & |t|\geq \rho,\\
\end{cases}
\]
gives ($\|u\|=1$)
\[
\begin{aligned}
\eigone{\mathfrak{Q}^{(2)}(\alpha)}
&\leq
\int_{-\infty}^{+\infty}|u'(t)|^2
+\Bigl(\frac{t^3}{3}-\alpha\Bigr)^2|u(t)|^2\,dt\\
&=
\alpha^2+\frac{\pi^2}{3\rho^2}
+\frac{4\pi^6-210\pi^4+4410\pi^2-26775}{252\pi^6}\rho^6.
\end{aligned}
\]
Minimizing in $\rho$, we get the bound
\begin{equation*}
\label{eq:zerobound}
\eigone{\mathfrak{Q}^{(2)}(\alpha)}\leq \alpha^2
+\frac{2^{3/2}}{9}
\Bigl(\frac{4\pi^6-210\pi^4+4410\pi^2-26775}{7}\Bigr)^{1/4}
\leq \alpha^2+ 0.6642,
\end{equation*}
attained for
\[
\rho=2^{1/4}\pi
\Bigl(\frac{4\pi^6-210\pi^4+4410\pi^2-26775}{7}\Bigr)^{-1/8}
\approx 2.57.
\]
\end{proof}
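The bound for $k=2$ can be compared with a direct numerical computation. Below is a minimal finite-difference sketch of ours (assuming SciPy; the truncation to $[-8,8]$ with Dirichlet conditions and the grid size are ad hoc choices); the printed value should not exceed $0.6642$.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh_tridiagonal

# lowest eigenvalue of -u'' + (t^3/3)^2 u, truncated to [-L, L]
L, N = 8.0, 4000
t = np.linspace(-L, L, N + 2)[1:-1]    # interior grid points
h = t[1] - t[0]
V = (t**3 / 3.0) ** 2
diag = 2.0 / h**2 + V
off = -np.ones(N - 1) / h**2
lam1 = eigh_tridiagonal(diag, off, select='i',
                        select_range=(0, 0))[0][0]
print(lam1)    # compare with the upper bound 0.6642
\end{verbatim}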
The upper bound given in Lemma~\ref{lem:trial} is graphed (for $\alpha=0$ and
$2\leq k\leq 70$) in Figure~\ref{fig:lambda1comp} on
page~\pageref{fig:lambda1comp}.
\begin{lemma}
\label{lem:increasingub}
The function
\[
k\mapsto \frac{\pi^2}{4}\frac{k+2}{k+1}
\Bigl(\frac{1}{4}(k+1)(2k+3)(2k+4)(2k+5)\Bigr)^{-1/(k+2)}
\]
appearing in Lemma~\ref{lem:trial} is increasing for $k\geq 2$. In particular
it is always bounded from above by $\pi^2/4$.
\end{lemma}
\begin{proof}
We will in the proof consider $k$ to be a real variable. Taking the logarithmic
derivative of the expression, we get
\[
\frac{a_3k^3+a_2k^2+a_1k+a_0}{(k+1)(k+2)^2(2k+3)(2k+5)}
\]
with (here we note that each term is increasing with $k$ and thus estimate
from below with $k=2$)
\[
\begin{aligned}
a_3 &= 4 \log (2 (k+1) (k+2) (2 k+3) (2 k+5))-20-8 \log 2\\
&\geq 4\log 378-20\geq 3.73,\\
a_2 &= 20 \log (2 (k+1) (k+2) (2 k+3) (2 k+5))-108-40 \log 2\\
&\geq 20 \log 378-108\geq 10.69,\\
a_1 &= 31 \log (2 (k+1) (k+2) (2 k+3) (2 k+5))-189-62 \log 2\\
&\geq 31\log 378-189 \geq -5.02,\\
a_0 &= 15 \log (2 (k+1) (k+2) (2 k+3) (2 k+5))-107-30 \log 2\\
&\geq 15\log 378-107\geq -17.98.
\end{aligned}
\]
Now, the polynomial
\[
p(k)=3.73k^3+10.69k^2-5.02k-17.98
\]
satisfies
\[
p(2)\approx 44.58\quad\text{and}\quad p'(k)=11.19k^2+21.38k-5.02.
\]
Since $p'(k)>0$ for $k\geq 2$ we find that $p$ is positive for $k\geq 2$. This
implies that the function in the statement is increasing. The final part follows
since the limit as $k\to+\infty$ is $\pi^2/4$.
\end{proof}
\subsection{Lower bounds}
To be able to use Lemma~\ref{lem:possecdiff} we need lower bounds on the
second eigenvalue. The following function will appear in the bounds.
\begin{lemma}
\label{lem:cmaxformula}
It holds that
\begin{equation}
\label{eq:cmaxformula}
\begin{aligned}
h(a)&:=\max_{0<\sigma<1} (1-\sigma^2)^{a/(a+2)}\sigma^{2/(a+2)}(a/2)^{4/(a+2)}\\
&= 2^{-4/(a+2)} a^{(a+4)/(a+2)} (a+1)^{1/(a+2)-1}.
\end{aligned}
\end{equation}
Moreover, $\lim_{a\to+\infty}h(a)=1$.
\end{lemma}
\begin{proof}
Differentiating $(1-\sigma^2)^{a/(a+2)}\sigma^{2/(a+2)}(a/2)^{4/(a+2)}$ with
respect to $\sigma$ gives
\[
\frac{2^{(a-2)/(a+2)}a^{4/(a+2)}\sigma^{-a/(a+2)}
\bigl(1-\sigma^2\bigr)^{-2/(a+2)}
\bigl(1-(a+1)\sigma^2\bigr)}{a+2},
\]
with the unique zero (in $0<\sigma<1$) at $\sigma=1/\sqrt{a+1}$. Since the
function is zero
at the endpoints and positive for $0<\sigma<1$ this must be the maximum.
This proves~\eqref{eq:cmaxformula}.
For the limit, we take the logarithm of the right-hand side
of~\eqref{eq:cmaxformula}:
\[
\log h(a)=-\frac{4}{a+2}\log 2+\frac{a+4}{a+2}\log a-\frac{a+1}{a+2}\log(a+1)
=\log\frac{a}{a+1}+O\Bigl(\frac{\log a}{a}\Bigr),
\]
which tends to $0$ as $a\to+\infty$; hence $h(a)\to 1$. For completeness we also record the derivative,
\[
h'(a)=\frac{2^{-4/(a+2)} a^{2/(a+2)} (a+1)^{1/(a+2)-1}
\bigl[a \bigl(4+4\log 2 - 2 \log a-\log(a+1)\bigr)+8\bigr]}{(a+2)^2}.
\]
\end{proof}
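The closed form in~\eqref{eq:cmaxformula} is easy to check against a brute-force maximization over $\sigma$. The following Python sketch (ours; the sample values of $a$ are arbitrary) does so and also illustrates the limit $h(a)\to 1$.
\begin{verbatim}
import numpy as np

def h_closed(a):
    return 2**(-4/(a+2)) * a**((a+4)/(a+2)) * (a+1)**(1/(a+2) - 1)

def h_brute(a):
    s = np.linspace(1e-6, 1 - 1e-6, 200001)
    f = (1 - s**2)**(a/(a+2)) * s**(2/(a+2)) * (a/2)**(4/(a+2))
    return f.max()

for a in (2.0, 6.0, 20.0, 200.0):
    print(a, h_closed(a), h_brute(a))   # the two columns agree
\end{verbatim}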
\begin{lemma}
\label{lem:commutator}
For all real $\alpha$ and all even $k\geq 2$ it holds that
\[
\Q^{(k)}(\alpha) \geq h(k)
\biggl[-\frac{d^2}{dt^2}+\Bigl(\frac{t^{k/2}}{k/2}\Bigr)^2\biggr],
\]
where $h$ is the function from Lemma~\ref{lem:cmaxformula}.
\end{lemma}
\begin{proof}
Let $\mathfrak{A}=-i\frac{d}{dt}$ and
$\mathfrak{B}=\bigl(\frac{t^{k+1}}{k+1}-\alpha\bigr)$. Then
the commutator $[\mathfrak{A},\mathfrak{B}]$ equals
\[
[\mathfrak{A},\mathfrak{B}]=-it^k.
\]
With the Cauchy--Schwarz inequality and the weighted arithmetic-geometric mean
inequality (equivalently, by expanding $0\leq(\sigma\mathfrak{A}-i\mathfrak{B})^{*}(\sigma\mathfrak{A}-i\mathfrak{B})=\sigma^{2}\mathfrak{A}^{2}+\mathfrak{B}^{2}-i\sigma[\mathfrak{A},\mathfrak{B}]=\sigma^{2}\mathfrak{A}^{2}+\mathfrak{B}^{2}-\sigma t^{k}$), we find that (for all $0<\sigma<1$)
\[
\Q^{(k)}(\alpha)\geq -(1-\sigma^2)\frac{d^2}{dt^2}+\sigma t^k
= -(1-\sigma^2)\frac{d^2}{dt^2}+\sigma(k/2)^2 \Bigl(\frac{t^{k/2}}{k/2}\Bigr)^2.
\]
Scaling the variable and invoking Lemma~\ref{lem:cmaxformula} gives the result.
\end{proof}
\begin{lemma}
\label{lem:lb21}
Let $h$ be the function in Lemma~\ref{lem:cmaxformula}. For all real $\alpha$
and all even $k\geq 2$ it holds that
\[
\eigtwo{\Q^{(k)}(\alpha)}\geq B_k,
\]
with
\[
B_k=h(k)\frac{3^{2k/(k+2)}(k+2)}{2^{(2k-2)/(k+2)}k^{(k+4)/(k+2)}}
=\frac{3^{\frac{2k}{k+2}}(k+2)}{2^{\frac{2k+2}{k+2}}(k+1)^{\frac{k+1}{k+2}}}.
\]
\end{lemma}
\begin{proof}
Let $T>0$. We use the estimate
\[
\Bigl(\frac{t^{k/2}}{k/2}\Bigr)^2
\geq \frac{2}{k}T^{k-2}t^2-\frac{2k-4}{k^2}T^k,
\]
valid for all $t\in\mathbb{R}$.
The second eigenvalue of the harmonic oscillator $-\frac{d^2}{dt^2}+\omega^2t^2$
equals $3\omega$, so, using Lemma~\ref{lem:commutator} with
$\omega^2=\frac{2}{k}T^{k-2}$, we get
\[
\eigtwo{\Q^{(k)}(\alpha)}\geq
h(k)\Bigl(3\sqrt{\tfrac{2}{k}}\,T^{(k-2)/2}-\frac{2k-4}{k^2}T^k\Bigr).
\]
Maximizing the right-hand side over $T>0$ gives the optimal choice
\begin{equation}
\label{eq:optT}
T=\Bigl(\frac{3\sqrt{2k}}{4}\Bigr)^{2/(k+2)},
\end{equation}
and inserting this value of $T$ yields the stated constant $B_k$.
\end{proof}
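The identity between the two expressions for $B_k$ in the statement, and the limit $9/4$ discussed next, can be checked numerically; the following Python sketch is ours.
\begin{verbatim}
def h(a):
    return 2**(-4/(a+2)) * a**((a+4)/(a+2)) * (a+1)**(1/(a+2) - 1)

def B_first(k):
    return (h(k) * 3**(2*k/(k+2)) * (k+2)
            / (2**((2*k-2)/(k+2)) * k**((k+4)/(k+2))))

def B_second(k):
    return (3**(2*k/(k+2)) * (k+2)
            / (2**((2*k+2)/(k+2)) * (k+1)**((k+1)/(k+2))))

for k in (2, 4, 10, 40, 1000):
    print(k, B_first(k), B_second(k))   # equal; tend to 9/4 = 2.25
\end{verbatim}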
The lower bound of $\eigtwo{\Q^{(k)}(\alpha)}$ in Lemma~\ref{lem:lb21} will tend
to $9/4$ as $k\to+\infty$, which compared to the limit $\pi^2/4$ for the
first eigenvalue is not good enough. Our next aim is to improve this lower
bound on $\eigtwo{\Q^{(k)}(\alpha)}$ for large $k$.
\begin{lemma}
\label{lem:betterl2}
Assume that $k\geq 70$ is even and $\alpha\in\mathbb{R}$. Then
\[
\eigtwo{\Q^{(k)}(\alpha)}\geq \widetilde{B}_k,
\]
with
\[
\widetilde{B}_k=
\frac{\sqrt{5}-1}{2}
\Biggl(\frac{\pi-\arctan
\Bigl(\sqrt{\frac{(\pi/1.1)^2}{1.1^{70}-(\pi/1.1)^2}}\Bigr)}{1.1}\Biggr)^2
\geq 4.719.
\]
\end{lemma}
\begin{proof}
We first do the commutator estimate
\[
\Q^{(k)}(\alpha)\geq -(1-\sigma^2)\frac{d^2}{dt^2}+\sigma t^k
= \frac{\sqrt{5}-1}{2}\Bigl(-\frac{d^2}{dt^2}+t^k\Bigr),
\]
where $\sigma$ in the latter step is chosen to be $\frac{\sqrt{5}-1}{2}$. Next
we note that the second eigenvalue of
\[
-\frac{d^2}{dt^2}+t^k
\]
in $L^2(\mathbb{R})$ equals the first eigenvalue of the operator
\[
-\frac{d^2}{dt^2}+t^k
\]
in $L^2(\mathbb{R}^+)$ with Dirichlet condition at $t=0$. Let $T>1$. Then
\[
-\frac{d^2}{dt^2}+t^k \geq \mathfrak{D}^{(k)}:=-\frac{d^2}{dt^2}+T^k\chi_{\{t>T\}},
\]
where we, again, impose a Dirichlet condition at $t=0$.
Here $\chi_D$ denotes the characteristic function of the set $D$.
Let us estimate the first eigenvalue $\eigone{\mathfrak{D}^{(k)}}$ of $\mathfrak{D}^{(k)}$.
Clearly
\[
\eigone{\mathfrak{D}^{(k)}}\leq \Bigl(\frac{\pi}{T}\Bigr)^2,
\]
which is what one gets considering $(0,T)$ and imposing a Dirichlet condition
at $t=T$. The ground state of $\mathfrak{D}^{(k)}$ is given by (in the rest of this proof we
write $\lambda=\eigone{\mathfrak{D}^{(k)}}$)
\[
u(t)=
\begin{cases}
c_1\sin(\sqrt{\lambda}t), & 0\leq t\leq T,\\
c_2e^{-\omega t}, & t\geq T,
\end{cases}
\]
where
\[
-\omega^2+T^k=\lambda
\]
and where we have the gluing conditions at $t=T$:
\[
\begin{aligned}
c_1\sin(\sqrt{\lambda}T)&=c_2 e^{-\omega T}\quad\text{and}\\
c_1\sqrt{\lambda}\cos(\sqrt{\lambda}T)&=-c_2 \omega e^{-\omega T}.
\end{aligned}
\]
This gives the equation (in $\sqrt{\lambda}$)
\[
\tan(\sqrt{\lambda}T)=-\frac{\sqrt{\lambda}}{\omega}\quad\text{i.e.}\quad
\tan(\pi-\sqrt{\lambda}T)=\frac{\sqrt{\lambda}}{\omega},
\]
which has a unique solution in the interval
$\frac{\pi}{2T}<\sqrt{\lambda}<\frac{\pi}{T}$. We think of $T>1$ and $k$
large, so that $\sqrt{\lambda}/\omega$ is small, and get
\[
\frac{\sqrt{\lambda}}{\omega}=\sqrt{\frac{\lambda}{T^k-\lambda}}
\leq \sqrt{\frac{(\pi/T)^2}{T^k-(\pi/T)^2}}.
\]
And so by monotonicity
\[
\pi-\sqrt{\lambda}T\leq
\arctan\biggl(\sqrt{\frac{(\pi/T)^2}{T^k-(\pi/T)^2}}\biggr),
\]
i.e.
\[
\lambda\geq
\Biggl(\frac{\pi-\arctan
\Bigl(\sqrt{\frac{(\pi/T)^2}{T^k-(\pi/T)^2}}\Bigr)}{T}\Biggr)^2.
\]
Now, without optimizing, we find that with $T=1.1$ and $k\geq 70$ it holds
that
\[
\eigtwo{\Q^{(k)}(\alpha)}\geq \frac{\sqrt{5}-1}{2}\lambda
\geq \frac{\sqrt{5}-1}{2}
\Biggl(\frac{\pi-\arctan
\Bigl(\sqrt{\frac{(\pi/1.1)^2}{1.1^{70}-(\pi/1.1)^2}}\Bigr)}{1.1}\Biggr)^2
\geq 4.719.
\]
\end{proof}
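The final numerical value is straightforward to reproduce; a short Python check of ours:
\begin{verbatim}
import math

T, k = 1.1, 70
lam = ((math.pi - math.atan(math.sqrt(
    (math.pi / T)**2 / (T**k - (math.pi / T)**2)))) / T)**2
print((math.sqrt(5) - 1) / 2 * lam)   # about 4.7199, indeed >= 4.719
\end{verbatim}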
We will also need lower bounds on $\eigone{\Q^{(k)}(\alpha)}$ for large $\alpha$.
This is the content of the following two Lemmas.
\begin{lemma}
\label{lem:lb32}
For $\alpha\geq 3/2$ and even $k\geq 2$ it holds that
\begin{equation}
\label{eq:minbound}
\eigone{\Q^{(k)}(\alpha)}\geq C_k,
\end{equation}
with
\[
C_k=
\min\Biggl(\Bigl(\frac{3}{2}-\frac{1}{k+1}\Bigr)^2,
\frac{\frac{3}{2}(k+1)-1}{(k+1)
\bigl(\bigl(\frac{3}{2}(k+1)\bigr)^{1/(k+1)}-1\bigr)}\Theta_0\Biggr).
\]
In particular, if $2\leq k\leq 68$ it holds that
\[
\eigone{\Q^{(k)}(\alpha)}>\eigone{\Q^{(k)}(0)}
\]
for all $\alpha\geq 3/2$.
\end{lemma}
\begin{proof}
First we note that the potential $\bigl(\frac{t^{k+1}}{k+1}-\alpha\bigr)^2$ is
decreasing for all $t<1$ (in fact for all $t<((k+1)\alpha)^{1/(k+1)}$), and thus
it is greater than $(1/(k+1)-3/2)^2$ for all $t<1$ and all
$\alpha\geq 3/2$.
For $t\geq1$ and $\alpha\geq 3/2$, we estimate
\begin{align*}
\Bigl(\frac{t^{k+1}}{k+1}-\alpha\Bigr)^2
& =
\frac{1}{(k+1)^2}
\Biggl(\sum_{j=0}^{k}t^{k-j}\bigl(\alpha(k+1)\bigr)^{j/(k+1)}\Biggr)^2
\Bigl(t- \bigl(\alpha(k+1)\bigr)^{1/(k+1)}\Bigr)^2\\
&\geq \frac{1}{(k+1)^2}
\biggl(\frac{\frac32(k+1)-1}{\bigl(\frac32(k+1)\bigr)^{1/(k+1)}-1}\biggr)^2
\Bigl(t- \bigl(\alpha(k+1)\bigr)^{1/(k+1)}\Bigr)^2.
\end{align*}
Here we used that the expression in the big sum is increasing
both in $t$ and in $\alpha$, and then applied the formula for a geometric sum.
Thus, comparing with the minimum of the potential for $t<1$ and with the
de~Gennes operator for $t\geq 1$ we conclude~\eqref{eq:minbound}.
The last part follows by comparing the upper bound in Lemma~\ref{lem:trial}
with the just obtained lower bound (and using the fact that $\Theta_0>0.59$
which is known from~\cite{bono}). This is done in Figure~\ref{fig:lambda1comp}.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{lambda1-15-greater.pdf}
\caption{The disks are the upper bounds on $\eigone{\Q^{(k)}(0)}$ from
Lemma~\ref{lem:trial} as a function of $k$. The squares
are the lower bounds on $\eigone{\Q^{(k)}(\alpha)}$, $\alpha\geq 3/2$, as a function
of $k$, from Lemma~\ref{lem:lb32}.}
\label{fig:lambda1comp}
\end{figure}
\end{proof}
We need a better bound for large $k$ than the one given in Lemma~\ref{lem:lb32}.
We instead use $\alpha\geq 2.8$ as the lower threshold and find the following.
\begin{lemma}
\label{lem:lb3}
For $\alpha\geq 2.8$ it holds that
\[
\eigone{\Q^{(k)}(\alpha)}\geq
\min\Biggl(\Bigl(2.8-\frac{1}{k+1}\Bigr)^2,
\frac{2.8(k+1)-1}{(k+1)
\bigl(\bigl(2.8(k+1)\bigr)^{1/(k+1)}-1\bigr)}\Theta_0\Biggr).
\]
For $k\geq 70$ the first term is the smallest one, i.e.
\[
\eigone{\Q^{(k)}(\alpha)}\geq \Bigl(2.8-\frac{1}{k+1}\Bigr)^2
\geq \Bigl(2.8-\frac{1}{71}\Bigr)^2
\geq 7.76
.
\]
In particular $\eigone{\Q^{(k)}(\alpha)}$ cannot attain its global minimum for
$\alpha\geq 2.8$.
\end{lemma}
\begin{proof}
The proof is exactly the same as the proof of Lemma~\ref{lem:lb32}.
The second statement follows from
noticing that the second term in the minimum is increasing and that its value
at $k=70$ is
\[
\frac{2.8(70+1)-1}{(70+1)
\bigl(\bigl(2.8(70+1)\bigr)^{1/(70+1)}-1\bigr)}\Theta_0
\geq 21.2,
\]
while the first term in the minimum is less than $2.8^2=7.84$.
The last statement follows by using Lemma~\ref{lem:increasingub} to conclude
that the upper bound on $\eigone{\Q^{(k)}(0)}$ in Lemma~\ref{lem:trial} is less
than $\pi^2/4$ for all $k$. Since $\pi^2/4$ is less than $7.76$ we are done.
\end{proof}
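The numerical claims in this proof are elementary to verify; a short Python check of ours, with $\Theta_0$ replaced by the lower bound $0.59$:
\begin{verbatim}
import math

Theta0 = 0.59   # lower bound for the de Gennes constant
k = 70
second = ((2.8*(k+1) - 1)
          / ((k+1) * ((2.8*(k+1))**(1/(k+1)) - 1)) * Theta0)
print(second)                  # about 21.2
print((2.8 - 1/(k+1))**2)      # about 7.76
print(math.pi**2 / 4)          # about 2.47, indeed less than 7.76
\end{verbatim}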
\section{Proof of Theorem~\ref{thm:main} for $2\leq k\leq 68$}
\label{sec:proofsmallk}
\begin{lemma}
\label{lem:zerotoa}
For each even $k$, $2\leq k\leq 68$, let
\[
\alpha^*=\sqrt{\frac{k+2}{k+6}B_k-A_k},
\]
where $A_k$ is the upper bound on $\eigone{\Q^{(k)}(0)}$
from Lemma~\ref{lem:trial} and $B_k$ is the lower bound on
$\eigtwo{\Q^{(k)}(\alpha)}$ from Lemma~\ref{lem:lb21}.
Then, $\alpha\mapsto\eigone{\Q^{(k)}(\alpha)}$ has
no critical point in the interval $0<\alpha<\alpha^*$.
\end{lemma}
\begin{proof}
Assume, to get a contradiction, that $0<\alpha_c<\alpha^*$ is a critical point.
Then, invoking Lemma~\ref{lem:trial} and the definition
of $\alpha^*$ above, we find that
\[
\eigone{\Q^{(k)}(\alpha_c)}\leq A_k+\alpha_c^2 <A_k+(\alpha^*)^2=\frac{k+2}{k+6}B_k \leq
\frac{k+2}{k+6}\eigtwo{\Q^{(k)}(\alpha_c)},
\]
which by Lemma~\ref{lem:possecdiff} implies that $\alpha_c$ is a non-degenerate
local minimum. Hence all critical
points in $0<\alpha<\alpha^*$ must be non-degenerate local minima. Now we
know that zero is a non-degenerate local minimum. Since two local minima
cannot occur in a row without another critical point between them, we get a contradiction.
\end{proof}
\begin{lemma}
\label{lem:zeroto2a}
With $k$ and $\alpha^*$ as in the previous Lemma it holds that
$\eigone{\Q^{(k)}(\alpha)}$ cannot attain its global minimal value in the
interval $[\alpha^*,2\alpha^*)$.
\end{lemma}
\begin{proof}
Assume, to get a contradiction, that there is an $\alpha_c$ in this interval
at which the global minimum is attained. Then, in particular,
$\eigone{\Q^{(k)}(\alpha_c)}\leq\eigone{\Q^{(k)}(0)}$. Thus, combining again
Lemmas~\ref{lem:possecdiff} and~\ref{lem:criticalub} we find that any critical
point in $[\alpha^*,\alpha_c)$ must be a non-degenerate minimum. However, by
the previous Lemma we know that there are no critical points in $(0,\alpha^*)$,
and so again we would have two non-degenerate minima in a row. Since that is
not possible we get a contradiction.
\end{proof}
\begin{lemma}
\label{lem:alb}
Assume that $2\leq k\leq 68$ is even. Set
\[
\alpha^{**}=\frac{3}{2}-\sqrt{C_k-A_k},
\]
where, again, $A_k$ is the upper bound on $\eigone{\Q^{(k)}(0)}$
from Lemma~\ref{lem:trial} and $C_k$ is the lower bound on
$\eigone{\Q^{(k)}(\alpha)}$ from
Lemma~\ref{lem:lb32}.
If $\alpha>\alpha^{**}$
then $\eigone{\Q^{(k)}(\alpha)}$ cannot attain its global minimum.
\end{lemma}
\begin{proof}
First we note that if $\alpha\geq 3/2$ then
$\eigone{\Q^{(k)}(\alpha)}\geq \eigone{\Q^{(k)}(0)}$ by Lemma~\ref{lem:lb32}.
Assume, to get a contradiction, that $\eigone{\Q^{(k)}(\alpha)}$ attains its global
minimum at some $\alpha_c$ with $\alpha^{**}<\alpha_c<3/2$. Then, by
Lemma~\ref{lem:criticalub} it holds that
\[
\begin{aligned}
\eigone{\Q^{(k)}(3/2)}&\leq \eigone{\Q^{(k)}(\alpha_c)}+(\alpha_c-3/2)^2\\
&\leq \eigone{\Q^{(k)}(0)}+(\alpha^{**}-3/2)^2\\
&< A_k+C_k-A_k=C_k.
\end{aligned}
\]
But this contradicts Lemma~\ref{lem:lb32}.
\end{proof}
The proof of Theorem~\ref{thm:main} is completed for $2\leq k\leq 68$ by
plotting $2\alpha^*$ and
$\alpha^{**}$ and noting that $2\alpha^{*}>\alpha^{**}$ for these $k$. This
is done in Figure~\ref{fig:completeproof}.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{proofgraph}
\caption{The disks are $\alpha^{**}$ from Lemma~\ref{lem:alb}. The squares
are $2\alpha^*$, where $\alpha^*$ is defined in Lemma~\ref{lem:zerotoa}.}
\label{fig:completeproof}
\end{figure}
\section{Proof of Theorem~\ref{thm:main} for $k\geq 70$}
\label{sec:prooflargek}
\begin{lemma}
\label{lem:alpha1largek}
Assume that $k\geq 70$. Then $\eigone{\Q^{(k)}(\alpha)}$ cannot have its global
minimum for $0<\alpha<2.83$.
\end{lemma}
\begin{proof}
This follows along the same lines as the proofs of Lemmas~\ref{lem:zerotoa}
and~\ref{lem:zeroto2a}. We let
\[
\alpha^* = \sqrt{\frac{k+2}{k+6}\widetilde{B}_k-A_k},
\]
where $A_k$ is the upper bound on
$\eigone{\Q^{(k)}(0)}$ from Lemma~\ref{lem:trial} (which is increasing in $k$ by
Lemma~\ref{lem:increasingub}) and $\widetilde{B}_k$ is the lower bound on
$\eigtwo{\Q^{(k)}(\alpha)}$ from Lemma~\ref{lem:betterl2}.
For $k\geq 70$ we note that
\[
2\alpha^*\geq 2\sqrt{\frac{72}{76}\times 4.719-\frac{\pi^2}{4}}\geq
2.83.
\]
\end{proof}
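Again, the arithmetic is easy to verify; a one-line Python check of ours:
\begin{verbatim}
import math

B70 = 4.719   # lower bound from Lemma betterl2
print(2 * math.sqrt(72/76 * B70 - math.pi**2 / 4))   # about 2.83
\end{verbatim}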
Combining this result with Lemma~\ref{lem:lb3} we find that
$\eigone{\Q^{(k)}(\alpha)}$ cannot have its minimum attained for $\alpha>0$. This
proves Theorem~\ref{thm:main}.
\section*{Acknowledgements}
SF was partially supported by the Lundbeck
Foundation, the Danish Natural Science Research Council and the European
Research Council under the
European Community's Seventh Framework Program (FP7/2007--2013)/ERC grant
agreement 202859.
\bibliographystyle{abbrv}
| {
"timestamp": "2013-09-11T02:03:18",
"yymm": "1309",
"arxiv_id": "1309.2141",
"language": "en",
"url": "https://arxiv.org/abs/1309.2141",
"abstract": "We study for $\\alpha\\in\\R$, $k \\in {\\mathbb N} \\setminus \\{0\\}$ the family of self-adjoint operators \\[ -\\frac{d^2}{dt^2}+\\Bigl(\\frac{t^{k+1}}{k+1}-\\alpha\\Bigr)^2 \\] in $L^2(\\R)$ and show that if $k$ is even then $\\alpha=0$ gives the unique minimum of the lowest eigenvalue of this family of operators. Combined with earlier results this gives that for any $k \\geq 1$, the lowest eigenvalue has a unique minimum as a function of $\\alpha$.",
"subjects": "Spectral Theory (math.SP)",
"title": "A uniqueness theorem for higher order anharmonic oscillators",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9811668698342151,
"lm_q2_score": 0.8056321796478255,
"lm_q1q2_score": 0.790459603942773
} |
https://arxiv.org/abs/math/9912116 | Symmetry breaking and other phenomena in the optimization of eigenvalues for composite membranes | We consider the following eigenvalue optimization problem: Given a bounded domain $\Omega\subset\R^n$ and numbers $\alpha\geq 0$, $A\in [0,|\Omega|]$, find a subset $D\subset\Omega$ of area $A$ for which the first Dirichlet eigenvalue of the operator $-\Delta + \alpha \chi_D$ is as small as possible.We prove existence of solutions and investigate their qualitative properties. For example, we show that for some symmetric domains (thin annuli and dumbbells with narrow handle) optimal solutions must possess fewer symmetries than $\Omega$; on the other hand, for convex $\Omega$ reflection symmetries are preserved.Also, we present numerical results and formulate some conjectures suggested by them. |
\section{Problem and Main Results} \label{secintro}
We study qualitative properties of solutions of
a certain eigenvalue optimization problem. In physical terms,
the problem can be stated as follows:
\begin{quote} {\bf Problem (P) }
Build a body of prescribed shape out of given materials
(of varying densities)
in such a way that the body has a prescribed
mass and
so that the basic frequency of the resulting
membrane (with fixed boundary) is as small as possible.
\end{quote}
In fact, we will consider a more general problem, which we now
state in mathematical terms:
Suppose we are given a domain $\Omega\subset\R^n$ (bounded, connected, with Lipschitz boundary)
and numbers $\alpha>0$, $A\in [0,|\Omega|]$ (with $|\cdot|$ denoting
volume).
For any measurable subset $D\subset\Omega$ let $\chi_D$ be its characteristic
function and $\lambda_\Omega(\alpha,D)$ the lowest eigenvalue $\lambda$ of the
problem
\begin{align} \label{eqeveq}
\begin{split}
-\Delta u + \alpha \chi_D u &= \lambda u \quad \text{ on }\Omega \\
u & = 0 \quad\text{ on } \partial \Omega.
\end{split}
\end{align}
Define
\begin{equation} \label{eqevmin}
\Lambda_\Omega(\alpha,A) = \inf_{\substack{ D\subset\Omega \\
|D| = A}}
\lambda_\Omega(\alpha,D).
\end{equation}
Any minimizer $D$ in \eqref{eqevmin} will be called an
{\em optimal configuration} for the data $(\Omega,\alpha,A)$.
If $D$ is an optimal configuration
and $u$ satisfies \eqref{eqeveq} then $(u,D)$
will be called an {\em optimal pair} (or {\em solution}).
Our problem now reads:
\begin{quote} {\bf Problem (M)}
Study existence, uniqueness and qualitative properties of optimal pairs.
\end{quote}
As is well-known,
$u$ is uniquely determined, up to a scalar multiple, by $D$,
and may be chosen to be positive on $\Omega$. In addition, we will
always assume $$\int_\Omega u^2 = 1.$$
(Integrals over $\Omega$ are always taken with respect to the standard measure.)
Clearly, changing $D$ by a set of measure zero does not affect
$\lambda_\Omega(\alpha,D)$ or $u$. Therefore, we will consider sets $D$
that differ by a null-set as equal.
At first sight, it is not obvious that problem (M) generalizes
problem (P).
In fact, we will see (Theorem \ref{thprobPM})
that there is a number $\abar_\Omega(A)>0$
such that solutions of problem (P) are in one to one correspondence
with solutions of problem (M) with parameters in the range
$\alpha\leq \abar_\Omega(A)$.
The number $\abar_\Omega(A)$ is characterized as the unique value of $\alpha$
satisfying
\begin{equation} \label{eqabardef0}
\Lambda_\Omega(\abar_\Omega(A),A) = \abar_\Omega(A),
\end{equation}
see Proposition \ref{propparam}.
Our investigations are theoretical and numerical: Numerical results
(obtained by M.I.\ and I.O.) suggest properties of optimal configurations;
this leads to the formulation of conjectures, and some of these
are proved rigorously (by S.C., D.G.\ and K.K.).
A central tool in our
investigations is the variational characterization
of the eigenvalue:
$$ \lambda_\Omega(\alpha,D) = \inf_{u\in H_0^1(\Omega)} R_\Omega(u,\alpha,D),
\qquad R_\Omega(u,\alpha,D) :=
\frac{\int_\Omega |\grad u|^2 + \alpha\int_\Omega \chi_D u^2}{\int_\Omega u^2},$$
and the eigenfunction $u$ is a minimizer.
So $\Lambda_\Omega(\alpha,A)$
is characterized by
$$ \Lambda_\Omega(\alpha,A) = \inf_{\substack{
u\in H_0^1(\Omega) \\
D\subset\Omega,\; |D|=A}}
R_\Omega(u,\alpha,D). $$
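For readers who wish to experiment, the following short Python sketch (ours, for
illustration only; it is not the method used for the figures, cf.\ Section
\ref{secnum}) evaluates $\lambda_\Omega(\alpha,D)$ for a fixed $D$ by a
five-point finite-difference discretization on the unit square and confirms that
the computed eigenvalue coincides with the Rayleigh quotient
$R_\Omega(u,\alpha,D)$ of the computed eigenfunction. The square domain, the
grid size, and the particular set $D$ are assumptions made for the example.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Assumed setup: unit square domain, Dirichlet conditions, illustrative D.
n = 60                                  # interior grid points per direction
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
X, Y = np.meshgrid(x, x, indexing="ij")
alpha = 50.0
chi_D = (((X - 0.5)**2 + (Y - 0.5)**2) > 0.3**2).ravel().astype(float)

# 2D Dirichlet Laplacian as a Kronecker sum of 1D second differences
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2
Hop = (sp.kron(T, sp.identity(n)) + sp.kron(sp.identity(n), T)
       + alpha * sp.diags(chi_D)).tocsc()

lam, u = eigsh(Hop, k=1, sigma=0, which="LM")  # lowest eigenpair
u = u[:, 0]
rayleigh = (u @ (Hop @ u)) / (u @ u)           # discrete R_Omega(u, alpha, D)
print(lam[0], rayleigh)                        # the two values agree
\end{verbatim}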
We first prove the following theorem on existence and basic properties
of solutions. It is fundamental for all further considerations.
\begin{theorem} \label{thexist}
For any $\alpha>0$ and $A\in [0,|\Omega|]$ there exists an optimal pair.
Moreover, any optimal pair $(u,D)$ has the following properties:
\begin{enumerate}
\item[(a)] $u\in C^{1,\delta}(\Omega) \cap H^2(\Omega)
\cap C^\gamma(\overline{\Omega})$
for some $\gamma>0$ and every $\delta < 1$.
\item[(b)] $D$ is a sublevel set of $u$, i.e.\
there is a number $t\geq 0$ such that
$$D = \{u\leq t\}.$$
\item[(c)] Every level set $\{u=s\}$, $s\geq0$, has measure zero, except
possibly in the case $\alpha=\abar_\Omega(A)$, $s=t$.
\end{enumerate}
\end{theorem}
Here we use the short notation $\{u=t\} = \{x: u(x) = t\}$.
Since $\chi_D$ is discontinuous, solutions $u$ may not be twice
differentiable, so equation \eqref{eqeveq} is understood in the weak sense.
Note that Theorem \ref{thexist}(b)
shows in particular that our problem is equivalent
to finding the smallest eigenvalue and associated eigenfunctions
of the nonlinear problem (with free variables $u$ and $t$)
\begin{align} \label{eqnonlinearev}
\begin{split}
-\Delta u + \alpha \chi_{\{u\leq t\}} u &= \lambda u \quad \text{ on }\Omega \\
u & = 0 \quad\text{ on } \partial \Omega \\
|\{u\leq t\}| &= A.
\end{split}
\end{align}
The question of {\em uniqueness} is much more subtle: For some domains
$\Omega$ there will be a unique optimal pair for all $\alpha,A$,
while for others there will be many, for certain ranges of $\alpha,A$.
This follows from our
results on symmetry preservation and symmetry breaking below.
We now list a few questions that naturally come to mind:
\begin{enumerate}
\item[(SY)] If $\Omega$ has symmetries, does $D$ have the same symmetries?
(Note that if $\Omega$ and $D$ have a symmetry in common then $u$ will
also have this symmetry since it is uniquely determined by $\Omega$ and $D$.)
\item[(CX)] Assume $\Omega$ is convex. Is $D^c:=\Omega\setminus D$ convex?
Is $D$ unique?
\item[(CN)] Is $D$ or $D^c$ connected?
\item[(FB)] What is the regularity of the {\em free boundary} $\partial D$?
\end{enumerate}
We give partial answers to all of these questions. Some proofs, mainly
relating to (FB), and additional results can be found in the companion paper
[CGK].
Many open problems remain, see Section \ref{secopen}.
At this point, the reader is invited to look at Figures 1-3 for a first
impression.
\vspace{\baselineskip}
We now state our qualitative results.
As a general convention, constants only depend on the quantities
indicated as subscripts or in parentheses, unless otherwise specified.
Often we suppress the subscript $\Omega$.
First, as an easy consequence of Theorem \ref{thexist} one has:
\begin{theorem} \label{thtub}
Fix $\alpha>0$, $A>0$, and let $D$ be an optimal configuration.
\begin{enumerate}
\item[(a)] $D$ contains a tubular neighborhood of the boundary $\partial
\Omega$.
\item[(b)] If $\alpha<\abar_\Omega(A)$ then every connected component $D_0$ of
the interior of $D$
hits the boundary, i.e. $\overline{D_0}\cap\partial\Omega \not=\emptyset$.
\end{enumerate}
In particular, if $\Omega$ is simply connected and $\alpha<\abar_\Omega(A)$
then $D$ is connected.
\end{theorem}
The number $\abar_\Omega(A)$ was defined above, see
\eqref{eqabardef0}.
The significance of the condition $\alpha<\abar_\Omega(A)$ is that it is
equivalent to $\Delta u < 0$ on $\Omega$.
One always has
$$\abar_\Omega(A) \geq \mu_\Omega.$$
Here and throughout the paper, $\mu_\Omega$ denotes the first
eigenvalue of the Dirichlet Laplacian on $\Omega$, and $\psi_\Omega$
the positive, $L^2$-normalized
eigenfunction:
$$ -\Delta \psi_\Omega = \mu_\Omega \psi_\Omega \quad\text{on }\Omega,
\quad\psi_{\Omega} = 0\quad\text{on }\partial \Omega.$$
Next, we consider the dependence of $\Lambda_\Omega$ and solutions $(u,D)$
on $\alpha$ and $A$.
Here it is convenient to formulate our problem also for $\alpha=0$,
as follows: If $\alpha=0$ then a solution
(unique in this case) is a pair $(\psi_\Omega,D)$
where $D$ is the sublevel set of $\psi_\Omega$ of area $A$.
(Since $\psi_\Omega$ is real analytic and non-constant, such $D$ exists
for every $A$ and is unique.)
We will prove strict monotonicity and
Lipschitz continuity of $\Lambda_\Omega$ in both parameters (Proposition
\ref{propparam}). Continuous dependence of optimal pairs $(u,D)$
on the parameters may be expected only at parameter values
where they are unique.
This is the case, in particular, if $\alpha=0$
or $A=0$ or $A=|\Omega|$;
in these cases $u=\psi_\Omega$, and the continuity is proved in [CGK].
Here we only state the results. They are used only in the proof
of Theorem \ref{thasmallcxfb}.
For example, we have the following:
\begin{theorem} \label{thDclose}
For $s\geq 0 $ let $[\Omega]^s = \{\psi_\Omega\leq s\}$, where
$\psi_\Omega$ is the positive $L^2$-normalized first eigenfunction
of $-\Delta$ on $\Omega$.
Fix $A\in [0,|\Omega|]$ and choose $t_\Omega$ such that $|[\Omega]^{t_\Omega}|=A$.
Then for any $\delta>0$ there is $\alpha_0=\alpha_0(\delta,\Omega)$ such that
whenever $\alpha<\alpha_0$ and $D$ is an optimal configuration for $(\alpha,A)$
then
$ |t-t_\Omega| < \delta $
and
$$ [\Omega]^{t_\Omega-\delta} \subset D \subset [\Omega]^{t_\Omega+\delta}.$$
\end{theorem}
We now address questions of {\em symmetry}.
First, we prove {\em symmetry preservation} in the presence of convexity:
\begin{theorem} \label{thsymmpres}
Assume that the domain $\Omega$ is symmetric and convex with respect
to the hyperplane $\{x_1=0\}$. In other words, for each $x'=(x_2,\ldots,x_n)$
the set
\begin{equation} \label{eqsy0}
\{x_1:\, (x_1,x') \in \Omega\}
\end{equation}
is either empty or an interval of the form $(-c,c)$.
Then for any solution $(u,D)$ both $u$ and $D$ are symmetric
with respect to $\{x_1=0\}$, $D^c$ is convex with respect
to $\{x_1=0\}$, and $u$ is decreasing in $x_1$ for $x_1\geq 0$.
\end{theorem}
For example, any solution in an elliptic region has a double reflection
symmetry, see Figure 1.
The principal tool here is Steiner symmetrization. See [K2] for an
overview on such methods. Theorem \ref{thsymmpres} easily
implies the following uniqueness result (the only case where we
can prove uniqueness!):
\begin{cor} \label{corball}
Let $\Omega = \{|x|<1\}$ be the ball.
Then there is a unique optimal configuration $D$
for any $\alpha,A$, and $D$ is a shell region
$$ D= \{x:\, r(A) < |x| < 1\}. $$
\end{cor}
One of the most interesting phenomena studied in this paper is
{\em symmetry breaking} for certain plane domains $\Omega$. That is,
an optimal configuration $D$ may have less symmetry than $\Omega$. We will
prove it for two types of domains: Thin annuli and dumbbells with narrow handle.
An annulus has rotational symmetry, a dumbbell has a reflection symmetry.
\begin{theorem} \label{thsymmann}
Fix $\alpha>0$ and $\delta\in (0,1)$. For $a>0$ let
$\Omega_a = \{x\in\R^2:\, a<|x|<a+1\}$. There exists $a_0=a_0(\alpha,\delta)$
such that whenever $a>a_0$ and $D$ is an optimal configuration for
$\Omega_a$ with parameters $\alpha$ and $A=\delta|\Omega_a|$ then
$D$ is not rotationally symmetric.
\end{theorem}
See Figure 2.
For dumbbells we prove a little more than symmetry breaking:
\begin{theorem} \label{thdumb} For $h\in (0,1)$ define the dumbbell
with handle width $2h$
$$ \Omega_h = B_1(-2,0) \cup ((-2,2)\times (-h,h)) \cup B_1 (2,0) $$
where $B_r(p) = \{x\in\R^2:\, |x-p|<r\}$.
Fix $\alpha>0$ and $A\in (0,2\pi)$. Then there is $h_0=h_0(\alpha,A)>0$
such that we have for $h<h_0$:
\begin{enumerate}
\item[(a)] Any optimal pair $(u,D)$ is not symmetric with respect
to the $x_2$-axis.
\item[(b)] If $A>\pi$ then for any optimal pair $(u,D)$ the complement
$D^c$ is contained in one of the lobes (i.e. one of the balls $B_1(\pm2,0)$).
\end{enumerate}
\end{theorem}
See Figure 3.
In fact, similar results hold for more general dumbbells.
As we remarked before, symmetry breaking implies non-uniqueness: For
example for a dumbbell the pair $(u',D')$ obtained from a solution
$(u,D)$ by reflection in the $x_2$-axis will be a solution, and
different from $(u,D)$ by the theorem.
\vspace{\baselineskip}
The following result on the regularity of the free boundary is proved
in [CGK]:
\begin{theorem} \label{thfb}
If $(u,D)$ is an optimal pair, $x\in\partial D$ and
$\grad u(x) \not=0$ then $\partial D$ is a real analytic hypersurface near
$x$.
\end{theorem}
The difficulty is that $\chi_D$ is discontinuous at $x\in\partial D$, so $u$ is
not even $C^2$ there. That the level set $\{u=t\}$ has $C^\omega$ regularity
nevertheless is proved by introduction of suitable local coordinates (with
$u$ as one coordinate) and analysis of the resulting nonlinear elliptic
equation.
Similar arguments and continuity considerations for $\alpha$ near zero
allow us to give partial answers to problems (CX) and (FB):
\begin{theorem} \label{thasmallcxfb}
Suppose $\Omega$ is convex and has a $C^2$ boundary. Then there
is $\alpha_0(A,\Omega)>0$ such that for any $\alpha<\alpha_0$ and
any optimal configuration $D$, one has:
\begin{enumerate}
\item[(a)] $\partial D\cap\Omega$ is real analytic.
\item[(b)] $D^c$ is convex.
\end{enumerate}
\end{theorem}
\vspace{\baselineskip}
Problem (P) and generalizations of it (to higher eigenvalues
and to a maximization problem), but with fewer qualitative results,
were studied before in
[Kr], [CM], and [C] (where Theorem \ref{thsymmpres} is stated, but
the proof is incomplete since the case of equality in the
rearrangement inequalities is not addressed).
Problems similar to problem (M) (e.g. with $L^p$ potentials) were considered in
[AH], [Eg], [AHS], [CL], and [HKK].
\vspace{\baselineskip}
The paper is organized as follows:
In Section \ref{secbasics}, we prove Theorems \ref{thexist} and
\ref{thtub} and discuss the parameter dependence of $\Lambda_\Omega$.
Also, in Subsection \ref{subsecPM} we discuss the relation of
problems (P) and (M).
In Section \ref{secsymm} we prove Theorems \ref{thsymmpres},
\ref{thsymmann}, and \ref{thdumb} on symmetry questions, and
Corollary \ref{corball}.
In Section \ref{secfbcx} we prove Theorem \ref{thasmallcxfb}.
In Section \ref{secnum} we describe the numerical algorithm used.
In Section \ref{secopen} we state some open problems and conjectures.
Finally, we collect some standard facts about elliptic
PDEs in the Appendix.
\vspace{\baselineskip}
\begin{center} ACKNOWLEDGMENT \end{center}
\nopagebreak
We started this work while K.\ Kurata was visiting the
Erwin Schr\"{o}dinger International Institute for Mathematical Physics (ESI) in
Vienna. K.\ Kurata is partially supported by Tokyo Metropolitan University
maintenance costs and by ESI and wishes to thank Professor
T.\ Hoffmann-Ostenhof for his invitation and the members of
ESI for their hospitality.
D.\ Grieser was also at the ESI, and thanks L.\ Friedlander for inviting him.
M.\ Imai deeply appreciates the helpful comments and heartfelt encouragement
of Professor Teruo Ushijima at the University of Electro-Communications
(Tokyo).
We thank Professor
M.\ Loss for his interest and some valuable comments.
We thank Professor E.\ Harrell for his interest in our work and for
informing us of the related works [AH], [AHS], and valuable discussions.
S. Chanillo was supported by NSF grant DMS-9970359.
\section{Basic results} \label{secbasics}
\subsection{Existence and regularity. Proof
of Theorem \ref{thexist}}
We first prove {\em existence and regularity:}
The regularity statements in (a) hold for solutions of
equations
$$ -\Delta u + \rho u = 0$$
with $\rho$ bounded by standard elliptic theory, see for example
[GT, Theorem 8.29 and Corollary 8.36].
To prove existence, fix $\alpha$ and $A$, and write $\Lambda=\Lambda_\Omega(\alpha,A)$,
$\lambda(D) = \lambda_\Omega(\alpha,D)$ for simplicity.
Let $D_j$ be a minimizing sequence, i.e. $\lambda(D_j)\to \Lambda$
as $j\to\infty$.
Let $u_j\in H_0^1$ (all function spaces are defined on $\Omega$)
be the positive $L^2$-normalized first
eigenfunction of $-\Delta+\alpha\chi_{D_j}$.
Since $\lambda(D_j)$ is bounded, the sequence $\{u_j\}$ is bounded
in $H^1_0$. Also, $\{\chi_{D_j}\}$ is bounded in $L^2$.
Therefore, we may choose a subsequence (again denoted $u_j,D_j$)
and $u\in H^1_0$, $\eta\in L^2$ such that
$u_j \weakto u$ in $H^1_0$ (weak convergence) and
$\chi_{D_j}\weakto\eta$ in $L^2$. This implies
$u_j\to u$ (strongly) in $L^2$, $\chi_{D_j} u_j \weakto \eta u$
in $L^2$, and $\int_\Omega \eta = A$.
Now taking limits in the weak form of the eigenvalue equation
$$ \int_\Omega \grad u_j\cdot \grad\psi + \alpha \int_\Omega
\chi_{D_j} u_j\psi = \lambda(D_j) \int_\Omega u_j\psi
\qquad \forall \psi\in H^1_0$$
we get
\begin{equation} \label{equeta}
-\Delta u + \alpha \eta u = \Lambda u \quad\text{ (weakly).}
\end{equation}
We have
$$ 0\leq\eta\leq 1 \quad\text{ a.e.}$$
since $0\leq\chi_{D_j} \leq 1$ for all $j$ and weak convergence preserves
pointwise inequalities a.e. (exercise!). Therefore, $u$ has the regularity
stated in (a).
It remains to prove that $\eta$ may be replaced by a characteristic function.
Since $\int_\Omega u^2=1$, \eqref{equeta} shows that
\begin{equation} \label{eqetaineq}
\int_\Omega |\grad u|^2 + \alpha \int_\Omega \eta u^2 = \Lambda.
\end{equation}
Now the minimization problem
$$ \inf_{\substack{\eta:\int\eta=A \\ 0\leq\eta\leq 1}}
\int_\Omega \eta u^2 $$
has a solution $\eta=\chi_D$ where $D$ is any set with $|D| = A$ and
\begin{equation} \label{eqDineq}
\{u<t\} \subset D \subset \{u\leq t\},
\quad t:=\sup\{s:|\{u<s\}|<A\}
\end{equation}
(compare the 'bathtub principle', Theorem 1.18 in [LL]).
Therefore, we get from \eqref{eqetaineq}
$$\int_\Omega |\grad u|^2 + \alpha \int_\Omega \chi_D u^2 \leq \Lambda.$$
By definition of $\Lambda$ as a minimum, this must actually be an
equality, and $(u,D)$ is a solution.
(b) Let $(u,D)$ be any solution. Then it is obvious that
\eqref{eqDineq} must hold (always up to a set of measure zero;
if \eqref{eqDineq} didn't hold then one could reduce $\int_D u^2$ by
shifting a part of $D$ from $\{u>t\}$ to $\{u\leq t\}$).
Set $\calN_s = \{u=s\}$ for any $s>0$.
Using Lemma 7.7 from [GT] twice, we see that $\Delta u = 0$ a.e.
on $\calN_s$ (since $u\equiv$ const on $\calN_s$; recall that
$u$ is in $H^2$). Therefore,
\begin{equation} \label{equaeonN}
(\Lambda-\alpha\chi_D)u = 0 \quad\text{ a.e. on }\calN_s.
\end{equation}
Since $u>0$ and $\Lambda>0$, this shows that $D^c\cap\calN_s$ has
measure zero. Taking $s=t$ we get (b).
(c) If $s>t$ then $\calN_s\subset D^c$, so $|\calN_s|=0$ by
\eqref{equaeonN}.
The same argument works if $s=t$ and $\alpha\not=\Lambda$.
Finally, $u$ satisfies $-\Delta u = (\Lambda-\alpha) u$
on the open set $\{u<t\}$, hence $u$ is real analytic there,
and therefore the level sets $\calN_s$ have measure zero for $s<t$.
\qed
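The 'bathtub' step in the preceding proof is easy to check numerically. The
following small Python sketch (ours, purely illustrative) discretizes the
admissible densities as indicators of sets of grid cells and confirms that
placing the mass where $u^2$ is smallest, i.e.\ on a sublevel set of $u$
(recall $u\geq 0$), minimizes $\int_\Omega \eta u^2$; the cell values and the
prescribed area are illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
ncells, A_cells = 1000, 300          # grid cells, prescribed 'area'
u2 = rng.random(ncells)              # stands in for the values of u^2 on cells

# Bathtub choice: eta = 1 exactly on the A_cells cells with smallest u^2,
# i.e. on a sublevel set of u (since u >= 0).
eta_star = np.zeros(ncells)
eta_star[np.argsort(u2)[:A_cells]] = 1.0
best = eta_star @ u2

for _ in range(500):                 # random competitors of the same measure
    eta = np.zeros(ncells)
    eta[rng.choice(ncells, size=A_cells, replace=False)] = 1.0
    assert eta @ u2 >= best
print("minimal value attained by the sublevel set:", best)
\end{verbatim}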
\pf[Proof of Theorem \ref{thtub}]
Part (a) is clear from Theorem \ref{thexist}(b).
To prove (b), assume this was false.
Then there is an open subset $D_0\subset \{u\leq t\}$ with
$\partial D_0 \subset \overline{D^c} = \{u\geq t\}$ and therefore
$u=t$ on $\partial D_0$.
Then $u$ assumes a minimum at some $x_0\in D_0$.
But this is a contradiction since $\alpha<\abar_\Omega(A)$ implies
$\Lambda(\alpha,A) > \alpha$ (see Proposition \ref{propparam} below)
and therefore
$\Delta u = (\alpha-\Lambda(\alpha,A))u < 0$ on $D_0$.
\qed
\subsection{Parameter dependence of $\Lambda$}
\begin{prop} \label{propparam}
\begin{enumerate}
\item[(a)] The function $(\alpha,A)\mapsto \Lambda(\alpha,A)$
is Lipschitz continuous, uniformly on bounded sets. More precisely,
we have, for any $\alpha,\alpha'\geq 0$, $A,A'\in [0,|\Omega|]$,
\begin{multline} \label{eqlipest}
|\Lambda(\alpha,A) - \Lambda(\alpha',A')| \leq \\
|\alpha-\alpha'|\,\frac{\max\{A,A'\}}{|\Omega|}
+ |A-A'|\, \min\{\alpha,\alpha'\} C_{\Omega,\max\{\alpha,\alpha'\}}
\end{multline}
with $C_{\Omega,\alpha}$ bounded for $\alpha$ bounded.
\item[(b)] $\Lambda(\alpha,A)$ is strictly increasing in $A$ for
fixed $\alpha>0$, strictly increasing in $\alpha$ for fixed $A>0$,
and $\Lambda(\alpha,A) - \alpha$ is strictly decreasing in $\alpha$
for fixed $A<|\Omega|$.
\item[(c)]
If $A<|\Omega|$ then there is a unique value $\alpha=\abar_\Omega(A)$ with
\begin{equation} \label{eqabardef}
\Lambda(\abar_\Omega(A),A) = \abar_\Omega(A).
\end{equation}
The function $\abar_\Omega$ is continuous and strictly increasing,
$\abar_\Omega(0) = \mu_\Omega$ and $\abar_\Omega(A) \to\infty$
as $A\to |\Omega|$.
\end{enumerate}
\end{prop}
\pf
(a)
Write $\Lambda=\Lambda(\alpha,A)$ and $\Lambda'=\Lambda(\alpha',A')$,
and let $(u,D)$, $(u',D')$ be minimizers for $\Lambda$, $\Lambda'$
respectively. We may assume $\int_\Omega u^2 = \int_\Omega (u')^2 = 1$, so that
\begin{alignat*}{2}
\Lambda &= \int_\Omega |\grad u|^2 + \alpha \int_D u^2, & \quad |D| &= A,
\end{alignat*}
and similarly for $\Lambda'$ etc.
By symmetry of \eqref{eqlipest} we may assume
that $A'\geq A$. Choose $D_1\subset D'$ with $|D_1|=A$
and $D'_1\supset D$ with $|D'_1| = A'$. Here we may assume that $D'_1$
is of the form $\{u\leq s\}$ for a suitable number $s$.
Using the optimality of $(u,D)$ for $\Lambda$ we get
\begin{equation} \label{eqlamineq1}
\Lambda \leq \int_\Omega |\grad u'|^2 + \alpha \int_{D_1} (u')^2
= \Lambda' + (\alpha-\alpha')\int_{D'} (u')^2 - \alpha
\int_{D'\setminus D_1} (u')^2.
\end{equation}
Similarly, using the optimality of $(u',D')$ for $\Lambda'$ we get
\begin{equation} \label{eqlamineq2}
\Lambda' \leq \int_\Omega |\grad u|^2 + \alpha' \int_{D'_1} u^2
= \Lambda + (\alpha'-\alpha)\int_{D'_1} u^2 + \alpha
\int_{D'_1\setminus D} u^2.
\end{equation}
Alternatively, we may rewrite this as
\begin{equation} \tag{\ref{eqlamineq2}'}
\Lambda' \leq \Lambda + (\alpha'-\alpha)\int_D u^2 + \alpha'\int_{D'_1\setminus D}
u^2.
\end{equation}
In order to estimate the integrals in \eqref{eqlamineq1}, \eqref{eqlamineq2}
and (\ref{eqlamineq2}') which are multiplied by $\pm(\alpha-\alpha')$,
observe that for any $s>0$ and any function $u$ we have
$$ \frac {\int_{\{u\leq s\}} u^2}{\int_\Omega u^2} \leq \frac{|\{u\leq s\}|}{|\Omega|}.$$
The other integrals are estimated using the uniform estimate \eqref{eqpdeunif}:
$u$ solves the equation $-\Delta u + \alpha \chi_D u = \Lambda u.$
$\Lambda$ is bounded in terms of $\Omega$ and $\alpha$ since one
may apply \eqref{eqlamineq1} with $\alpha'=0, A=A'$, to obtain
$ \Lambda \leq \mu_\Omega + \alpha.$
Therefore, the uniform bound \eqref{eqpdeunif}, applied to $G=\Omega$, yields
$$ \int_{D'_1\setminus D} u^2 \leq (A'-A) \sup_\Omega u^2 \leq
(A'-A) C_{\Omega,\alpha}. $$
Finally, we obtain \eqref{eqlipest} by applying these estimates to
\eqref{eqlamineq1} and \eqref{eqlamineq2} in the case $\alpha\leq\alpha'$,
and to \eqref{eqlamineq1} and (\ref{eqlamineq2}') if $\alpha\geq\alpha'$.
(b) This follows immediately from \eqref{eqlamineq1} and the unique
continuation theorem.
(c) This follows easily from (a) and (b) since $\Lambda(\alpha,A)-\alpha$
equals $\mu_\Omega>0$ for $\alpha=0$ and tends to $-\infty$ as $\alpha\to
\infty$ by (a).
\qed
We now consider continuous dependence of optimal pairs $(u,D)$ on
the data. First, near $\alpha=0$:
\begin{prop} \label{propparam2}
Fix $D\subset\Omega$. Let $u_{\alpha,D}$ be the (positive,
$L^2$-normalized) first eigenfunction of
$-\Delta + \alpha\chi_D$, and $\psi_\Omega=u_{0,D}$ the first
eigenfunction of $-\Delta$.
Then there is a constant $C=C_\Omega$ such that, for $0\leq\alpha\leq 1$,
\begin{align*}
\|u_{\alpha,D}-\psi_{\Omega}\| &\le C \alpha,
\end{align*}
in the $H^2(\Omega)$ and $L^\infty(\Omega)$ norms, and in the $C^{1,\delta}(\overline{\Omega})$ norm
if $\partial\Omega$ is of class $C^{1,\delta}$.
\end{prop}
\pf
See [CGK]. \qed
\pf[Proof of Theorem \ref{thDclose}]
This is almost immediate from Proposition \ref{propparam2}, see [CGK]. \qed
Similarly, one has continuity in $A$ at $A=0$ and at $A=|\Omega|$. Here we only
consider the latter case:
\begin{prop} \label{propAlarge}
Let $\Omega$ be a smooth bounded domain and fix $\alpha >0$. Let
$$ M = \max_\Omega \psi_\Omega.$$
Then, for any $\delta >0$ there is $A_0=A_0(\delta,\alpha,\Omega)<|\Omega|$
such that whenever $A > A_0$ and $D$ is an
optimal configuration for $(\alpha,A)$ then
$$
D^c
\subset \{\psi_\Omega > M -\delta\}.
$$
\end{prop}
\pf
See [CGK]. \qed
\subsection{Relation of problems (P) and (M)} \label{subsecPM}
We want to show that problem (P)
(see Section \ref{secintro}) is a special case of problem (M).
The mathematical formulation of problem (P) is:
Given $0\leq h < H$ (lower and upper bounds for the densities of the materials
that
are available) and the prescribed total mass $M\in [h|\Omega|,H|\Omega|], M>0$,
consider measurable 'density functions'
$\rho$ satisfying
$$ h\leq \rho \leq H,\quad \int_\Omega \rho = M.$$
Then the objective is to find $\rho$ and $u$ which realize the minimum in
\begin{equation} \label{eqPvar}
\Theta (h,H,M) := \inf_\rho \inf_{u\in H_0^1(\Omega)}
\frac {\int_\Omega |\grad u|^2} {\int_\Omega \rho u^2}.
\end{equation}
The corresponding eigenvalue problem is
\begin{equation} \label{eqPpde}
-\Delta u = \Theta\rho u,\qquad u_{|\partial\Omega} = 0.
\end{equation}
(We assume the modulus of elasticity to be the same for all materials.)
Problem (P) and problem (M) are related in the following way:
\begin{theorem} \label{thprobPM}
\begin{enumerate}
\item[(a)] If $(u,\rho)$ is a minimizer for problem (P) then
$\rho$ is of the form
$$ \rho_D = h\chi_D + H\chi_{D^c}$$
for a set $D$ of the form $D=\{u\leq t\}$. That is, only two types of
materials occur.
\item[(b)] The pair $(u,\rho_D)$ is a minimizer for problem (P),
with parameter values $(h,H,M)$, if and only if $(u,D)$ is a
minimizer (optimal pair) for problem (M), with parameter values
$(\alpha,A)$ given by
\begin{align}
\alpha &= (H-h) \Theta(h,H,M), \label{eqalphaP} \\
A & = \frac{H|\Omega| - M} {H-h}. \label{eqAP}
\end{align}
The minimal eigenvalues are related by
\begin{equation} \label{eqLP}
\Lambda(\alpha,A) = H\Theta(h,H,M).
\end{equation}
\item[(c)] The values of $(\alpha,A)$ that occur when $h,H,M$ vary are
precisely those satisfying
\begin{alignat*}{2}
A &\in [0,|\Omega|), &\quad 0&<\alpha\leq \abar_\Omega(A) \quad \text{or} \\
A& = |\Omega|, &\quad 0&<\alpha<\infty,
\end{alignat*}
where $\abar_\Omega(A)$ is defined in \eqref{eqabardef}.
In particular, $\alpha=\abar_\Omega(A)$ corresponds to $h=0$.
\end{enumerate}
\end{theorem}
Note that problem (P) really depends on two parameters only since
for $\kappa>0$ one has $$\Theta(\kappa h,\kappa H,\kappa M) =
\kappa^{-1} \Theta(h,H,M),$$
with the same minimizers (up to a factor $\kappa$ for $\rho$).
This is obvious from \eqref{eqPvar}.
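To make the bookkeeping in Theorem \ref{thprobPM} concrete, here is a small
Python sketch (ours) of the conversion \eqref{eqalphaP}, \eqref{eqAP},
\eqref{eqLP}; the value $\Theta(h,H,M)$ must be supplied from elsewhere (e.g.\ a
numerical solution of problem (P)) and is a mere placeholder below. The second
call checks the scaling invariance just noted.
\begin{verbatim}
# Theta(h, H, M) must be computed elsewhere; below it is a placeholder value.
def P_to_M(h, H, M, Theta, vol):
    """Map problem-(P) data to the problem-(M) data (alpha, A) and Lambda."""
    assert 0 <= h < H and h * vol <= M <= H * vol
    alpha = (H - h) * Theta           # equation (eqalphaP)
    A = (H * vol - M) / (H - h)       # equation (eqAP)
    Lam = H * Theta                   # equation (eqLP)
    return alpha, A, Lam

h, H, M, Theta, vol = 1.0, 3.0, 5.0, 2.37, 4.0
k = 10.0
print(P_to_M(h, H, M, Theta, vol))
# Theta(k h, k H, k M) = Theta(h, H, M) / k, so the output is unchanged:
print(P_to_M(k * h, k * H, k * M, Theta / k, vol))
\end{verbatim}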
\pf
(a) This is almost obvious from \eqref{eqPvar}, and
proved just like part (b) of Theorem \ref{thexist}.
(b)
First, if $\rho=\rho_D$ and $|D|=A$ then
$ M = \int_\Omega \rho = Ah + (|\Omega|-A)H,$
which gives \eqref{eqAP}.
Simple manipulation shows that
\begin{equation}
-\Delta u = \Theta\rho_D u = \Theta(h\chi_D + H\chi_{D^c}) u \label{eqPM1}
\end{equation}
is equivalent to
\begin{equation}
-\Delta u + (H-h) \Theta\chi_D u = H\Theta u \label{eqPM2}.
\end{equation}
Now if $(u,\rho_D)$ is a minimizer for problem (P) then it satisfies
\eqref{eqPM1} with $\Theta = \Theta(h,H,M)$,
and then \eqref{eqPM2} shows that $\Lambda(\alpha,A)
\leq H\Theta(h,H,M)$ with $\alpha$ satisfying \eqref{eqalphaP}.
Conversely, if $(u,D)$ is a minimizer for problem (M) with parameter
values $(\alpha,A)$ given by \eqref{eqalphaP}, \eqref{eqAP}
then \eqref{eqPM2} holds with $H\Theta$ replaced by
$\Lambda=\Lambda(\alpha,A)$, so instead of \eqref{eqPM1} we get
$-\Delta u = \Theta \rho_D u + (\Lambda-H\Theta)u$
where $\Theta=\Theta(h,H,M)$. Multiplying by $u$ and integrating gives
$$ \int_\Omega |\grad u|^2 = \Theta \int_\Omega \rho_D u^2 + (\Lambda-H\Theta)
\int_\Omega u^2.$$
Now the definition of $\Theta$ implies that
$ \int_\Omega |\grad u|^2 \geq \Theta \int_\Omega \rho_D u^2,$
so we get $\Lambda\geq H\Theta$.
This proves $\Lambda(\alpha,A) = H\Theta(h,H,M)$ and part (b).
(c)
If $A=|\Omega|$ then $D=\Omega$, $\rho\equiv h$ and therefore
$h\Theta(h,H,M) = \mu_\Omega$ from \eqref{eqPpde},
so $\alpha = \frac{H-h}{h}\mu_\Omega$ can take any positive value
by suitable choice of $h$ and $H$.
Now let $A<|\Omega|$. By Proposition \ref{propparam}(b) and (c), $\alpha$ varies
in the indicated range precisely when $\Lambda(\alpha,A) - \alpha$
varies in $[0,\mu_\Omega)$. From \eqref{eqalphaP} and \eqref{eqLP} one has
$$\Lambda (\alpha,A) - \alpha = h\Theta:= h\Theta(h,H,M),$$
so we only need to show that $h\Theta$ has range
$[0,\mu_\Omega)$ (with $A$ fixed).
First, $h\Theta\geq 0$ by definition, and $h\Theta = \Lambda-\alpha < \mu_\Omega$
by Proposition \ref{propparam}, since $\alpha = (H-h)\Theta > 0$, so the
range of $h\Theta$ is contained in $[0,\mu_\Omega)$.
Next, $h\Theta=0$ for $h=0$ (and then $M$ can be adjusted to $A$),
and in the limit $H=h$ one has $\rho\equiv h$ and $h\Theta = \mu_\Omega$,
so when $H\to h$ then $h\Theta\to\mu_\Omega$, and clearly $M$ can be adjusted
to $A$.
Using continuity of $h\Theta$
(which is proved as for $\Lambda$ in Proposition \ref{propparam})
we get the claim.
\qed
\section{Symmetry preservation and symmetry breaking} \label{secsymm}
\subsection{Symmetry preservation in the presence of convexity}
Here we prove Theorem \ref{thsymmpres}.
\pf[Proof of Theorem \ref{thsymmpres}]
We use Steiner symmetrization (symmetrically decreasing rearrangement)
$u\mapsto u^\#$
with respect to the hyperplane $\{x_1=0\}$.
This is defined as follows. Assume $u\in H^1_0(\Omega)\cap C^0(\Omega)$:
For each $x'$, $u^\#(\cdot,x')$ is the unique
function of $x_1$ which is symmetric in $x_1$ and decreasing for $x_1\geq 0$
such that $|\{x_1: u^\#(x_1,x') > t\}| = |\{x_1: u(x_1,x') > t\}|$
for all $t\in \R$.
It is well-known (see, e.g., [LL], [AB]) that, for all $x'$ and
$i=1,\ldots,n$, with integrals taken over the set \eqref{eqsy0},
\begin{eqnarray}
\int |\partial_{x_i} u^\#|^2\,dx_1 &\le & \int|\partial_{x_i} u|^2\,dx_1,
\label{eqsy1}\\
\int(u^\#)^2\,dx_1 &=& \int u^2\,dx_1, \label{eqsy1'} \\
\int(\alpha\chi_D)_\#(u^\#)^2\,dx _1
&\le& \int\alpha\chi_D u^2\,dx_1. \label{eqsy2}
\end{eqnarray}
Here, $f_\#$ is the increasing symmetric rearrangement of a function $f$,
which is defined by $f_\#=-(-f)^\#$.
Note that \eqref{eqsy1} for $i=1$ is just the standard rearrangement
inequality in one dimension, while for $i>1$ it is proved as follows:
Replace the partial derivatives by difference quotients
$(v_\eps(x_1) - v_0(x_1))/\eps$
with $v_\eps(x_1) = u(x_1,\ldots,x_i+\eps,\ldots)$.
After multiplication by $\eps^2$ the claimed inequality
becomes simply $ \int |v_\eps^\#-v_0^\#|^2 dx_1 \leq \int |v_\eps-v_0|^2 dx_1$
which is well-known.
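The one-dimensional rearrangement is also easy to experiment with numerically.
The following Python sketch (ours) implements the symmetric decreasing
rearrangement of equally spaced samples and illustrates \eqref{eqsy1'}
(equimeasurability) together with a discrete analogue of \eqref{eqsy1}; the
sample function is arbitrary.
\begin{verbatim}
import numpy as np

def sym_decreasing(v):
    """Symmetric decreasing rearrangement of equally spaced samples v."""
    n = len(v)
    # grid positions ordered by their distance from the center
    order = np.argsort(np.abs(np.arange(n) - (n - 1) / 2), kind="stable")
    out = np.empty(n)
    out[order] = np.sort(v)[::-1]  # largest value at the center, then outward
    return out

u = np.sin(np.linspace(0.0, np.pi, 201))**2 * np.linspace(0.0, 1.0, 201)
us = sym_decreasing(u)
print(np.allclose(np.sort(u), np.sort(us)))             # equimeasurability
print(np.sum(np.diff(u)**2) >= np.sum(np.diff(us)**2))  # discrete (eqsy1)
\end{verbatim}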
Fix $\alpha$ and $A$ and assume $(u,D)$ is an optimal pair.
Define the set $D^\#$ by $\chi_{D^\#} = (\chi_D)_\#$.
Integrating \eqref{eqsy1}, \eqref{eqsy1'} and \eqref{eqsy2}
over $x'$ and summing \eqref{eqsy1} over $i$ we get
\begin{eqnarray}
\lambda(\alpha,D^\#) &\le&\frac{\int_{\Omega}|\nabla u^\#|^2\,dx +
\int_{\Omega} (\alpha \chi_D)_\#(u^\#)^2\,dx}
{\int_{\Omega}(u^\#)^2\,dx}\nonumber\\
&\le&
\frac{\int_{\Omega}|\nabla u|^2\,dx
+ \int_{\Omega} \alpha \chi_D u^2\,dx
}{\int_{\Omega}u^2\,dx}
= \lambda(\alpha,D).
\end{eqnarray}
Since we have $|D^\#| = |D|=A$ (by \eqref{eqsy1'} applied to $\chi_D$),
optimality of $(u,D)$ implies that $(u^\#,D^\#)$ is also a minimizer
and that equality holds in \eqref{eqsy1} and \eqref{eqsy2}, for all $i$
and almost all $x'$.
We need to show that this implies $u=u^\#$. The statements about $D$
then follow from the characterization $D=\{u\leq t\}$.
First note that since $(u^\#,D^\#)$ is a minimizer, the function $u^\#$ solves
the equation
$-\Delta u^\# +\alpha\chi_{D^\#} u^\#=\lambda(\alpha,D^\#)u^\#$.
Therefore, $u$ and $u^\#$ are continuously differentiable by Theorem
\ref{thexist}, so equality in \eqref{eqsy1} holds for all $x'$.
By a result of Brothers and Ziemer (see [BZ])
this equality implies $u^\#(x_1,x')= u(x_1,x')$
for all $x_1$ provided the set
$ \{x_1: \partial_{x_1} u^\#(x_1,x') = 0\}$ has measure zero.
Therefore, we will be done once we have shown that
the set
\begin{equation}
\{v = 0\}\quad\text{ has measure zero, where }
v=\partial_{x_1} u^\#. \tag{*}
\end{equation}
We will give two proofs of this: The first proof works whenever
$\alpha\not=\abar_\Omega(A)$ and the second proof works whenever
$\alpha\leq\abar_\Omega(A)$, so together they cover all cases.
First proof of (*), assuming $\alpha\not=\abar_\Omega(A)$:
Assume this was not so. Define $t^\#$ by $D^\#=\{u^\#\leq t^\#\}$.
$v$ satisfies $-\Delta v + \alpha\chi_{D^\#} v= \lambda(\alpha,D^\#) v$
on $\{u^\#\not=t^\#\}$.
Since $\{u^\#=t^\#\}$ has measure zero by Theorem
\ref{thexist} and the assumption $\alpha\not=\abar_\Omega(A)$,
$v$ vanishes on a set of positive
measure in the open set $\{u^\#\not= t^\#\}$,
so the unique continuation theorem
(for sets of positive measure, see [FG])
applied to $v$ implies that $v\equiv 0$ on some connected component $K$
of $\{u^\#\not= t^\#\}$. Therefore, $u^\#$ is constant in
the $x_1$-direction on $K$.
Since $u^\#=0$ or $t^\#$ on $\partial K$
we conclude that then $u^\#$ must actually
be constant on $K$. This is a contradiction to Theorem \ref{thexist}(c).
Second proof of (*), assuming $\alpha\leq\abar_\Omega(A)$
(this proof is taken from Cox [C]): We show that actually $v<0$
for $x_1>0$, so that $\{v=0\}$ is contained in the hyperplane
$\{x_1=0\}$.
We have $-\Delta u^\# = \Lambda(\alpha,A) u^\# - \alpha \chi_{D^\#} u^\#$, and
the right hand side is decreasing in $x_1$ (for $x_1> 0$) by definition
of the rearrangement and since $\alpha\leq\Lambda(\alpha,A)$ by
Proposition \ref{propparam}. Taking the $x_1$-derivative (in the sense
of distributions), we get $\Delta v \geq 0$ as distribution. Also,
$v$ is continuous, so by the classical theory of subharmonic functions
it satisfies the maximum principle (alternatively, it is in $H^1$ and
then the maximum principle as in [GT], Ch. 8, applies). Since
$v\leq 0$, we conclude that $v<0$ unless $v$ vanishes identically
in $x_1>0$, which is clearly impossible. This proves (*).
This concludes the proof that $u=u^\#$ and hence the proof of the theorem.
Note that in the case $\alpha\leq\abar_\Omega(A)$ the second proof of (*)
above actually shows that $u_{x_1}<0$ for $x_1>0$.
\qed
\pf[Proof of Corollary \ref{corball}]
The only set $D\subset \{|x|<1\}$ which has the symmetry and convexity
properties stated in Theorem \ref{thsymmpres} in all directions
is a shell region as stated. Clearly, $r(A)$ is uniquely determined
by $A$. Therefore, $D$ is unique.
\qed
\subsection{Symmetry breaking on annuli}
We now give the proof of Theorem \ref{thsymmann} about
symmetry breaking on an annulus,
$$
\Omega=\Omega_a=\{x\in \R^2:\, a < |x| < a+1\},\quad a> 0.
$$
Let $D$ be any radial set in $\Omega$,
$$D=\{ (r,\theta ) ; r\in D_1, 0\le \theta < 2\pi\},\quad D_1\subset (a,a+1),$$
and let $u$ be the first eigenfunction for $D$, with eigenvalue $\sigma$:
\begin{equation} \label{eqannpde}
-\Delta u +\alpha \chi_D u=\sigma u \quad \text{ on } \Omega,
\quad u|_{\partial \Omega}=0.
\end{equation}
For $a$ sufficiently large (depending on $\alpha$ and $\delta=|D|/|\Omega|$)
we
will construct a comparison domain $\Dtilde$ and a function $\utilde$
which satisfy
\begin{equation} \label{eqanngoal}
\frac{\int_{\Omega_a} |\grad \utilde|^2 + \int_{\Omega_a} \chi_{\Dtilde} \utilde^2}
{\int_{\Omega_a} \utilde^2} \overset{!}{<} \sigma.
\end{equation}
This shows that $D$ is not an optimal configuration and hence
implies the theorem.
In order to construct $\Dtilde$ and $\utilde$, first pick $N=N(\delta)$
with
$$\delta < 1-\frac1{2N}$$
and consider the sector
$$
E_+= \Omega_a\cap \{ (r,\theta); 0\le \theta \le \pi/N\}.
$$
Then let $\utilde$ be the first Dirichlet eigenfunction of the Laplacian
on $E_+$ and $\lambda_1(E_+)$ be the first eigenvalue,
\begin{equation} \label{eqannpde2}
-\Delta \utilde = \lambda_1(E_+) \utilde \quad\text{ on } E_+,
\quad \utilde|_{\partial E_+} = 0,
\end{equation}
extended by zero on $\Omega\setminus E_+$; the set $\Dtilde$ can
be taken to be any subset of $\Omega\setminus E_+$ with $|\Dtilde|=|D|$.
This is possible since
$|D|/|\Omega| = \delta < 1-\frac1{2N} = |\Omega\setminus E_+|/|\Omega|$.
Note that since $\supp \utilde \cap \Dtilde=\emptyset$, we have
$(\int_{\Omega_a} |\grad \utilde|^2 + \int_{\Omega_a} \chi_{\Dtilde} \utilde^2)/
\int_{\Omega_a} \utilde^2 = \int_{E_+} |\nabla \utilde|^2/
\int_{E_+} \utilde^2 = \lambda_1(E_+)$, so \eqref{eqanngoal} is equivalent to
\begin{equation} \label{eqanngoal1}
\lambda_1(E_+) \overset{!}{<} \sigma.
\end{equation}
In order to prove this,
we need to introduce a third eigenvalue problem,
which is intermediate
between \eqref{eqannpde} and \eqref{eqannpde2}.
Define $v$ to be the lowest eigenfunction for the problem \eqref{eqannpde}
among functions of the form
$$ v(r,\theta) = h(r)\sin N\theta,$$
and let $\tau$ be the associated eigenvalue.
Note that problem \eqref{eqannpde} for such functions is equivalent to the
problem
\begin{gather} \label{eqhode}
-h''(r)-\frac{1}{r}h'(r)+\frac{N^2}{r^2}h(r)
+\alpha\chi_{D_1}(r)h(r)=\tau h(r) \quad \text{ on } [a,a+1],\\
h(a)= h(a+1)=0
\end{gather}
for $h$. Thus, $h$ is the first eigenfunction of this Sturm-Liouville problem,
and the eigenvalue $\tau$ is characterized by
\begin{equation} \label{eqhvar}
\tau =\inf_{g\in {\calS}}\frac{\int_a^{a+1}
((g')^2 + (\alpha\chi_{D_1} + \frac{N^2}{r^2})g^2 )r\,dr }
{\int_a^{a+1}g^2r\,dr},
\end{equation}
where ${\calS}=\{ g\in C^1[a, a+1]; g(a)=g(a+1)=0\}$. From this
the (well-known) fact that $h$ does not change sign on $[a,a+1]$
is evident; so we may assume $$h\geq 0.$$
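The Sturm-Liouville problem \eqref{eqhode} is straightforward to solve
numerically. The following Python sketch (ours; the values of $a$, $N$,
$\alpha$ and the layer $D_1$ are illustrative) discretizes the self-adjoint
form $-(rh')'+(N^2/r+\alpha\chi_{D_1}r)h=\tau rh$ by finite differences; taking
$N=0$ gives the radial eigenvalue $\sigma$ as well, so the bound
$\tau-\sigma\leq N^2/a^2$ of Lemma \ref{propsymma} below can be observed
directly.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

# Illustrative parameters: annulus a < r < a+1, angular index N, layer D_1.
a, alpha, N, m = 10.0, 50.0, 3, 400
r = np.linspace(a, a + 1.0, m + 2)[1:-1]             # interior nodes
dr = 1.0 / (m + 1)
chi = ((r > a + 0.3) & (r < a + 0.7)).astype(float)  # radial layer D_1

def lowest_eig(N):
    """Lowest eigenvalue of -(r h')' + (N^2/r + alpha*chi*r) h = tau r h."""
    rp, rm = r + dr / 2, r - dr / 2
    main = (rp + rm) / dr**2 + N**2 / r + alpha * chi * r
    off = -rp[:-1] / dr**2
    L = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return eigh(L, np.diag(r), eigvals_only=True, subset_by_index=[0, 0])[0]

tau, sigma = lowest_eig(N), lowest_eig(0)
print(tau - sigma, N**2 / a**2)   # observe tau - sigma <= N^2 / a^2
\end{verbatim}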
We will compare $u$ with $v$ and $v$ with $\utilde$. The following
two lemmas provide the needed estimates.
\begin{lem} \label{propsymma}
Let $\sigma$ be the lowest eigenvalue for the problem \eqref{eqannpde}
(with $D$ radial)
on $\Omega_{a,b} = \{x\in \R^2: a<|x|<b\}$, and let $\tau$ be the lowest
eigenvalue for eigenfunctions of the form
$v(r,\theta)=h(r)\sin N\theta$ on $\Omega_{a,b}$. Then we have
$$
\tau-\sigma \le N^2/a^2.
$$
\end{lem}
\pf
Since $\chi_D$ is assumed radial, the first eigenfunction of \eqref{eqannpde}
is a radial function $u=f(r)$. Now consider the trial function
$w(r,\theta)=f(r)\sin N\theta$.
We have
$$
\tau \le \frac{\int_{\Omega_{a,b}}(|\nabla w|^2 + \alpha\chi_D w^2)\,dx}
{\int_{\Omega_{a,b}} w^2\,dx}.
$$
Thus,
$$
\tau\le
\frac{\int_a^b( (f'(r))^2 + \frac{N^2}{r^2}f(r)^2
+ \alpha \chi_{D_1} f(r)^2)r\,dr }
{\int_a^bf(r)^2r\,dr}.
$$
By definition of $f(r)$ we get
$$
\tau \le \sigma +
\frac{\int_a^b(\frac{N^2}{r^2}f(r)^2)r\,dr}
{\int_a^bf(r)^2r\,dr}\le \sigma + N^2/ a^2.
$$
The claim follows.
\qed
\begin{lem} \label{lemsymma}
Define $v$ as above. Assume $D$ is radial and
$|D|/|\Omega|=\delta$. There exists a positive constant
$c_{\alpha,\delta}$, independent of $a$,
such that for all $a \ge 1$ we have
$$
\frac{\int_Dv^2\,dx}{\int_{\Omega}v^2\,dx}\ge c_{\alpha,\delta}.
$$
\end{lem}
\pf
We see from $v(r,\theta)=h(r)\sin N\theta$ that
\begin{equation}
\frac{\int_Dv^2\,dx}{\int_{\Omega}v^2\,dx}
=
\frac{\int_a^{a+1}\chi_{D_1}(r)h(r)^2r\,dr}
{\int_a^{a+1}h(r)^2r\,dr}.
\label{eq:9.2}
\end{equation}
$h$ satisfies equation \eqref{eqhode}. For $\tau$ one
has a uniform bound $\tau\leq C_{\alpha,\delta}$ with
$C_{\alpha,\delta}$ independent of $a\geq 1$, because
from \eqref{eqhvar} one gets
$$ \tau \leq \inf_{g\in {\calS}} \frac {\int_a^{a+1} (g')^2 r\,dr}
{\int_a^{a+1} g^2 r\, dr}
+ \alpha + N^2,$$
and by using for $g$ the translate of any fixed test function on $[0,1]$
one sees that the first term on the right is bounded by some absolute
constant.
Therefore, the coefficients of equation \eqref{eqhode} are
uniformly bounded for $a\geq 1$. Also, we have $h\geq 0$.
Lemma \ref{lhar} in Section \ref{secpdefacts} then implies that
one has
\begin{equation} \label{eqh1}
\inf_{[a+\delta/4,a+1-\delta/4]} h \geq c_{\alpha,\delta} \|h\|_{L^2(a,a+1)}.
\end{equation}
Since $|D_1|=\delta$, we
have $|[a+\delta/4,a+1-\delta/4] \cap D_1| \geq \delta/2$. Therefore,
\begin{equation} \label{eqh2}
\int_a^{a+1} \chi_{D_1}(r)h(r)^2r\, dr \geq
\frac\delta2 a \inf_{[a+\delta/4,a+1-\delta/4]} h^2
\end{equation}
and
\begin{equation} \label{eqh3}
\int_a^{a+1} h(r)^2r\, dr \leq (a+1) \int_a^{a+1} h^2 \leq 2a\int_a^{a+1}h^2.
\end{equation}
Combining \eqref{eqh1}, \eqref{eqh2} and \eqref{eqh3} with
\eqref{eq:9.2} we get the Lemma.
\qed
\pf[End of proof of Theorem \ref{thsymmann}]
We have
\begin{equation} \label{eqanntau}
\tau=
\frac{\int_{\Omega}|\nabla v|^2\,dx}{\int_{\Omega}v^2\,dx}
+\frac{\alpha\int_{\Omega}\chi_Dv^2\,dx}{\int_{\Omega}v^2\,dx}.
\end{equation}
Since $v(r,\theta)=h(r)\sin N\theta$, $v$ vanishes on the rays
$\theta=0$ and $\theta=\pi/N$. Since $|v|$ and $|\grad v|$ are periodic
in $\theta$ of period $\pi/N$, we can replace $\Omega$ by $E_+$
in the first quotient.
Therefore, we can use $v$ as test function in the Rayleigh quotient for
the Dirichlet Laplacian on $E_+$ and obtain
$$ \frac{\int_\Omega |\grad v|^2 \, dx} {\int_\Omega v^2 \, dx}
= \frac{\int_{E_+} |\grad v|^2 \, dx} {\int_{E_+} v^2 \, dx}
\geq \lambda_1(E_+).$$
Combining this with \eqref{eqanntau} and Lemma \ref{lemsymma} we therefore get
\begin{equation} \label{eqsya1}
\tau\ge \lambda_1(E_+) + \alpha c_{\alpha,\delta}.
\end{equation}
From Lemma \ref{propsymma} we then get
$$ \sigma \geq \tau - N^2/a^2 \geq \lambda_1(E_+) +
\alpha c_{\alpha,\delta}-N^2/a^2.$$
If $a$ is chosen so large that $N^2/a^2 < \alpha c_{\alpha,\delta}$
then this gives \eqref{eqanngoal1} and hence the theorem.
\qed
\subsection{Symmetry breaking on dumbbells}
\pf[Proof of Theorem \ref{thdumb}]
Since $\alpha$ is fixed throughout, we will write $\lambda_\Omega(D)=
\lambda_\Omega(\alpha,D)$, $\Lambda_\Omega(A) = \Lambda_\Omega(\alpha,A)
= \inf_{|D|=A} \lambda_\Omega(D)$. Here we keep the index $\Omega$
since we will also consider these quantities with $\Omega$ replaced
by one of the \lq lobes' $B_\pm = B_1(\pm 2,0)$. All (implied) constants will
only depend on $\alpha$ and $A$. Write $\Lambda_B = \Lambda_{B_\pm}$,
and given $D$, let
$$ D_\pm = D\cap B_\pm,\quad A_\pm = |D_\pm|.$$
\newcommand {\Amin} {A_{\rm min}}
Further, we introduce
$$ \Amin = \min\{\min(|D_-|,|D_+|):\, D\subset \Omega,
|D|=A\}.$$
Thus, if $D$ is distributed over $\Omega$ with the greatest possible
imbalance between $D_+$ and $D_-$
then the smaller of $D_\pm$ will have area $\Amin$.
It is easily checked that $$\Amin = \max (0,A-|B_-^c|).$$
We first sketch the idea of the proof:
1. For $h=0$, i.e. two disconnected balls, one clearly has
\begin{equation} \label{eqdumb0}
\Lambda_\Omega (A)= \min (\Lambda_B(A_-), \Lambda_B(A_+)).
\end{equation}
Since $\Lambda_B$ is strictly increasing,
it is optimal to put
as much of $D$ as possible in one ball, say $B_+$, and the \lq small' remainder
in the other. Thus $$\Lambda_\Omega(A) = \Lambda_B(\Amin),$$
and the eigenfunction is zero in $B_+$.
2. For small positive $h$, this situation should be approximately
the same: Equation \eqref{eqdumb0} will hold with an error that is
a power of $h$ (compare equation (\ref{eqdumb2A}) below), so the
same argument as in 1. implies symmetry breaking. Also, the eigenfunction
must be small on one lobe, and since $D=\{u\leq t\}$, one gets
(b) from an estimate of $t$.
\vspace{\baselineskip}
We now carry out the details.
Let $(u,D)$ be an optimal pair. Assume $$\int_\Omega u^2=1.$$
First we need an estimate ensuring that
the perturbation introduced by the handle is small.
This is provided by the following estimate near the boundary
(see [GT, Theorem 8.27 with $R_0=1$ and $R=3h$]), which is applicable
since $\Omega$ satisfies a uniform exterior cone condition (uniformly in $h$):
There is $\beta\in (0,1]$ such that
\begin{equation} \label{eqbdest}
\max_{x:\dist(x,\partial\Omega)\leq 3h} u(x) \leq Ch^\beta \|u\|_{L^2(\Omega)}.
\end{equation}
From this it follows that there is
a cut-off function $\sigma = \sigma_h$ on $\Omega$ having the following
properties:
\begin{enumerate}
\item $0\leq\sigma\leq 1$ on $\Omega$.
\item $\supp \sigma \subset B_-\cup B_+$.
\item $|u| = O(h^\beta)$ on $\supp (1-\sigma)$.
\item $\int_\Omega |\grad\sigma|^2 < C$, uniformly as $h\to 0$.
\end{enumerate}
To construct $\sigma$, choose $\chi\in C_0^\infty([0,2))$, $0\leq\chi\leq1$,
that equals one on $[0,3/2]$ and set $\sigma(x) = 1-\chi(|x-(\pm1,0)|/h)$ on
$B_\pm$ and $\sigma\equiv 0$ on the handle.
Properties 1,2 and 4 are easily checked directly, and property 3 follows
from \eqref{eqbdest}.
For brevity, denote, for $\Omega'\subset\Omega$,
$$Q_{\Omega'}(u) = \|\grad u\|_{L^2(\Omega')}^2 +
\alpha \|u\|_{L^2(D\cap\Omega')}^2$$
so that $ \Lambda_\Omega(A) = Q_\Omega(u).$
Without loss of generality we may assume
$$\Lambda_B(A_-) \leq \Lambda_B(A_+).$$
First, we show
\begin{equation} \label{eqdumb1}
\Lambda_B(\Amin) \geq \Lambda_\Omega(A).
\end{equation}
This is easy: Take an optimal pair $(\utilde,\Dtilde)$
for $\Lambda_B(\Amin)$,
extend $\utilde$ by zero outside $B_-$ and
define a domain $\Dbar = \Dtilde\cup D'\subset\Omega$, $|\Dbar|=A$, by
choosing any $D'\subset B_-^c$ with $|D'| = A-\Amin$.
Since $\utilde\equiv0$ on $D'$, one gets \eqref{eqdumb1} by
using $(\utilde,\Dbar)$ as
a test pair for $\Lambda_\Omega(A)$.
Next, we show a reverse inequality.
Using the properties of $\sigma$ and
$\supp \grad\sigma \subset \supp (1-\sigma)$
we obtain, with $\|\cdot\|$ denoting the $L^2$-norm on $B_\pm$,
$\|\grad(\sigma u)\|^2 = \|\sigma\grad u + (\grad\sigma) u\|^2
\leq (\|\grad u\| +\|\grad\sigma\| \max_{\supp (1-\sigma)} u)^2 \leq
\|\grad u\|^2 + O(h^\beta)$ and therefore
$Q_{B_\pm} (\sigma u) \leq Q_{B_\pm} (u) + O(h^\beta).$
Now we can use $\sigma u$ as test function for the lowest eigenvalue
of $-\Delta + \alpha \chi_{D\cap B_\pm}$ on $B_{\pm}$, and this gives the
third inequality in
\begin{eqnarray}
\Lambda_\Omega(A)=Q_\Omega(u) &\geq& \sum_\pm Q_{B_\pm}(u) \notag
\geq \sum_\pm Q_{B_\pm}(\sigma u) - O(h^\beta)\notag\\
&\geq& \sum_\pm \lambda_{B_\pm}(D_\pm) \int_{B_\pm} (\sigma u)^2
- O(h^\beta)\notag\\
&\geq& \sum_\pm \lambda_{B_\pm}(D_\pm) \int_{B_\pm} u^2
- O(h^\beta) \notag \\
&\geq& \Lambda_B(A_-) + (\Lambda_B(A_+)-\Lambda_B(A_-))
\int_{B_+}u^2 - O(h^\beta). \label{eqdumb1a}
\end{eqnarray}
In the last two inequalities we have used
property 3. of $\sigma$, the optimality of $\Lambda_B(A_\pm)$,
and $\int_\Omega u^2 = 1$.
Since we assume $\Lambda_B(A_+)\geq\Lambda_B(A_-)$,
this and inequality (\ref{eqdumb1}) imply
\begin{equation} \label{eqdumb2A}
\Lambda_B(\Amin) \geq \Lambda_\Omega(A) \geq \Lambda_B(A_-) - O(h^\beta).
\end{equation}
By strict monotonicity of $\Lambda_B$ one easily gets from this
$A_- \leq \Amin + o(1) \quad (h\to0).$
Next, from $D\subset D_+\cup D_- \cup H$ and $|H|<4h$ we have
$A< A_+ + A_- + 4h$, so
$A_+ - A_- > A-2A_- - 4h \geq A-2\Amin - o(1)$, and then
$\Amin = \max(0,A-|B_-^c|) \leq \max(0,A-\pi)$ gives
\begin{equation}\label{eqdumbb}
A_+ - A_- \geq \min (A,2\pi-A) - o(1).
\end{equation}
This shows $A_+\not= A_-$ for $h<h_0(A,\alpha)$ and therefore
proves part (a) of the theorem.
Now we prove part (b). From \eqref{eqdumbb} we have $A_+ - A_- > c_0$
for some constant $c_0>0$, whenever $h<h_0(A,\alpha)$, so strict monotonicity and
continuity of $\Lambda_B$ imply
\begin{equation} \label{eqdumb2c}
\Lambda_B(A_+) - \Lambda_B(A_-) > c
\end{equation}
with $c>0$ independent of $h$.
Now from \eqref{eqdumb1} and \eqref{eqdumb1a}, and using
$\Lambda_B(A_-)\geq\Lambda_B(\Amin)$ (since $A_-\geq \Amin$)
and monotonicity, we conclude
$(\Lambda_B(A_+) - \Lambda_B(A_-)) \int_{B_+} u^2 = O(h^\beta).$
This and \eqref{eqdumb2c} give
$\int_{B_+} u^2 = O(h^\beta)$.
Since, by \eqref{eqbdest},
$u_{|\partial B_+} = O(h^\beta)$, this $L^2$ bound
implies a pointwise bound for $u$ on $B_+$ by \eqref{eqpdeunif}.
Combined with \eqref{eqbdest}, applied on the handle, this gives
\begin{equation}\label{eqdumb4}
\sup_{x\not\in B_-} u(x) = O(h^{\beta/2}).
\end{equation}
Finally, we want to deduce from \eqref{eqdumb4} that $D^c\subset B_-$
if $A>\pi$ and $h$ is sufficiently small:
Since $(u,D)$ is an optimal pair, we have $D=\{u \leq t\}$ for
some $t>0$. Equation (\ref{eqdumb4}) shows that we are done if we can show
that $t>c$ for
a constant $c>0$ independent of $h$.
For $r\in (0,1)$ let $B_-(r)$ be the closed ball
of radius $r$ concentric with $B_-$.
Applying Lemma \ref{lhar} to $G=B_-$ we see, since $\|u\|_{L^2(B_-)}
\geq 1 - O(h^\beta)$ by \eqref{eqdumb4}, that
\begin{equation}\label{eq6} \inf_{B_-(r)} u \geq c_r
\end{equation}
for any $r\in (0,1)$, with $c_r>0$ only depending on $r$, $A$ and $\alpha$,
and this implies
$ |\{u \geq c_r\}| \geq |B_-(r)|.$
Therefore, we can conclude $t>c_r$ as soon as
$|B_-(r)| > |\Omega| - A$. Since $|\Omega| \leq 2\pi + 4h$
and $A>\pi$, one can find such an $r$
if $h<h_0$, both $r$ and $h_0$ only
depending on $A$ (and $\alpha$). This completes the proof of the theorem.
\qed
\section{Free boundary and convex domains} \label{secfbcx}
\pf [Proof of Theorem \ref{thasmallcxfb}, Part (a)]
First recall, as a consequence of results by Brascamp-Lieb [BL] and
Caffarelli-Spruck [CS], that the first eigenfunction $\psi$
on a convex domain possesses only
one point where $\nabla \psi=0$. This point is necessarily the point where
$\psi$ attains its maximum.
Now given $A$, we select $t_\Omega$ as in Theorem \ref{thDclose}, and we select
$\delta_0<t_\Omega$
such that $t_\Omega+\delta_0< M$ where $M=\max_\Omega \psi$. With
this choice of $\delta_0$ we use Theorem \ref{thDclose}
to determine a value $\alpha_1$
for which $[\Omega]^{t_\Omega-\delta_0} \subset
D\subset [\Omega]^{t_\Omega+\delta_0}$ for all $\alpha<\alpha_1$. Then
the free boundary $\{u=t\}$ is contained in the closed annular region
$\mathcal{A} = \{t_\Omega - \delta_0 \leq \psi \leq t_\Omega + \delta_0\}$.
We have $\grad\psi\not=0$ on $\mathcal{A}$, so $C:=\min_{\mathcal{A}} |\grad\psi|$ is positive.
Thus, decreasing $\alpha_1$
to a smaller value $\alpha_0>0$, we can use Proposition \ref{propparam2}
to conclude
that for all $\alpha<\alpha_0$ we have $|\nabla u| > C/2$
on $\mathcal{A}$ and hence on the free boundary
$\{u=t\}$.
Applying Theorem \ref{thfb} we now get the first part of Theorem
\ref{thasmallcxfb}.
\qed
\pf [Proof of Theorem \ref{thasmallcxfb}, part (b)]
We only sketch the proof.
Fix $x_0$ with $\grad\psi(x_0)\not=0$.
Choose coordinates in which $\grad\psi(x_0)=(0,\ldots,0,a), a>0$, and
for $x'$ near $x_0'$ (where $x'=(x_1,\ldots,x_{n-1})$) and $t$ near $t_0=\psi(x_0)$
denote
the locally unique solution $x_n$ of the equation $\psi(x',x_n)=t$
by $F_0(x',t)$. For $\alpha$ near zero and $x$ near $x_0$ one has
$\partial u_\alpha/\partial x_n\not=0$ by Proposition \ref{propparam2},
so we may define $F_\alpha$ similarly for $u_\alpha$ instead of $\psi$.
By a result of
Korevaar and Lewis [KL] the level set of $\psi$ through $x_0$ is
strictly convex, in the sense that the matrix
$(\frac{\partial^2 F_0}{\partial x_i\partial x_j})_{i,j=1,\ldots,n-1}$
is positive definite at $(x_0',t_0)$. Therefore, the result
follows if one can show continuity of
$\frac{\partial^2 F_\alpha}{\partial x_i\partial x_j}$ in $\alpha$
and $(x',t)$. Now the equation for $u$ gives for $F_\alpha$ a uniformly elliptic,
quasi-linear equation (writing $y=(x',t)$)
$$\sum_{i,j=1}^n b_{ij}(\grad F_\alpha)
\frac{\partial^2 F_\alpha(y)} {\partial y_i \partial y_j} =
\alpha\chi_{G_\alpha}(y_n)y_n-\Lambda(\alpha,A)y_n$$
with $b_{ij}$ real analytic
and $G_\alpha=(-\infty,t_\alpha]$, where $t_\alpha$ is such that
$|\{u_\alpha\leq t_\alpha\}|=A$.
From this it is easy to derive the desired regularity, cf. the proof
of Lemma 3 in [CGK].
\qed
\section{Numerical results} \label{secnum}
In this section we
make a few remarks on our method for the numerical
solution of our eigenvalue problem.
We use the finite element method for the discretization of
our eigenvalue problem, with conforming P-1 elements.
To create the mesh we have used the
automatic spatial meshing program
written by Y. Tsukuda (see [TK]).
In order to calculate the approximate first eigenvalue and the
corresponding eigenfunction,
we employ the power method.
Our method to obtain an optimal configuration
is based on an algorithm that was introduced
in [Pi]. However, we do not insist that $D$ (the sought-for
optimal configuration) be a union of elements. This flexibility
allows us to find a good approximation even without remeshing.
We now describe the main procedure.
The given data are $A$ and $\alpha$.
We first take any initial domain $D_0$ satisfying $|D_0| = A$.
Next, if we have obtained $D_{n-1}$ ($n=1,2,3, \cdots$) then
we calculate the first eigenvalue
$\lambda_{n-1}$ and the corresponding eigenfunction $u_{n-1}$
for the finite element approximation problem for the operator
$-\Delta + \alpha\chi_{D_{n-1}}$.
Then we obtain $D_n$ from $u_{n-1}$ by finding a number $t_0$ such
that $|\{u_{n-1}\leq t_0\}| = A$ and setting
$$D_n = \{u_{n-1} \leq t_0\}.$$
The number $t_0$ is determined by a bisection method, i.e. by
setting $\mathit{down}_0=0$, $\mathit{up}_0=\max_\Omega u_{n-1}$, $j=0$ and
then iterating Steps 1 and 2 (with
$L(t) := |\{u_{n-1} \leq t \} |$):
\begin{enumerate}
\item[Step 1:] Let $\mathit{interm}_j := (\mathit{up}_j+\mathit{down}_j)/2$ and calculate $L(\mathit{interm}_j)$.
\item[Step 2:] If $L(\mathit{interm}_j) < A$, then $\mathit{up}_{j+1} := \mathit{up}_j$ and
$\mathit{down}_{j+1} := \mathit{interm}_j$; else if $L(\mathit{interm}_j) > A$,
then $\mathit{up}_{j+1} := \mathit{interm}_j$ and $\mathit{down}_{j+1} := \mathit{down}_j$.
Increase $j$ by one.
\end{enumerate}
The iteration is stopped when $L(\mathit{interm}_k)$ nearly equals
$A$ and $\mathit{up}_k$ and $\mathit{down}_k$ nearly equal $\mathit{interm}_k$,
according to the adopted precision of approximation, and
then we set $t_0=\mathit{interm}_k$.
Having obtained $D_n$ we repeat the procedure above to find $u_n, D_{n+1}$ etc.
It is easily seen that $\lambda_n \leq \lambda_{n-1}.$
We iterate
until $| \lambda_n - \lambda_{n-1} | < \eps$, where $\eps$ is given.
In the numerical experiments that we have done, we have taken
$\eps$ between $10^{-7} $ and $10^{-10}$.
By the monotonicity of $\{\lambda_n\}$, the limit
$\lambda_\infty := \lim_{n\to\infty} \lambda_n$ exists.
However, it is not clear a priori whether $\lambda_\infty=
\Lambda_{\Omega}(\alpha,A)$ or not. To guard against convergence to a
non-optimal limit, we have repeated the same procedure with several
different initial shapes $D_0$.
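For concreteness, here is a self-contained Python sketch (ours, not the code
used for the figures) of the iteration just described, with two stated
substitutions: a five-point finite-difference Laplacian on the unit square
replaces the conforming P-1 elements, and a sparse shift-invert eigensolver
replaces the power method. The quantile call selects $t_0$ and plays the role
of the bisection in Steps 1 and 2; on a uniform grid both yield
$|\{u_{n-1}\leq t_0\}|\approx A$. All parameters are illustrative.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Substitutions vs. the text: finite differences for P-1 elements,
# shift-invert eigsh for the power method, a quantile for the bisection.
n, alpha, frac, eps = 60, 50.0, 0.4, 1e-9      # frac = A / |Omega|
hg = 1.0 / (n + 1)
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / hg**2
Lap = sp.kron(T, sp.identity(n)) + sp.kron(sp.identity(n), T)

rng = np.random.default_rng(1)
chi = (rng.random(n * n) < frac).astype(float)  # initial configuration D_0

lam_prev = np.inf
for it in range(200):
    Hop = (Lap + alpha * sp.diags(chi)).tocsc()
    lam, u = eigsh(Hop, k=1, sigma=0, which="LM")
    u = np.abs(u[:, 0])                # first eigenfunction, chosen positive
    t0 = np.quantile(u, frac)          # threshold with |{u <= t0}| close to A
    chi = (u <= t0).astype(float)      # D_n = {u_{n-1} <= t0}
    if abs(lam_prev - lam[0]) < eps:
        break
    lam_prev = lam[0]
print("lambda_infinity approx", lam[0], "after", it + 1, "iterations")
\end{verbatim}
Rerunning the loop from several random seeds corresponds to the different
initial shapes $D_0$ mentioned above.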
The results of some of the computations that we have done are shown
in Figures 1-3.
They illustrate well Theorems
\ref{thtub}, \ref{thsymmpres}, \ref{thsymmann}, \ref{thdumb},
and \ref{thasmallcxfb}.
\section{Some open problems and conjectures} \label{secopen}
In this section $D=D_{\alpha,A}$ will always denote an optimal configuration.
\begin{conj}(Uniqueness and convexity)
If $\Omega$ is convex then $D$ is unique,
and $D^c$ is convex (at least for $\alpha\leq\abar_\Omega(A)$).
\end{conj}
Concerning the restriction on $\alpha$ compare the remark
after Theorem \ref{thtub}. We have proved convexity for small
$\alpha$ in Theorem \ref{thasmallcxfb}.
\begin{prob} (Regularity of the free boundary)
When is the boundary of an optimal configuration smooth everywhere?
In general, how can we control the size of
singular sets of the free boundary?
\end{prob}
In the convex case we have proved smoothness for small $\alpha$ in Theorem
\ref{thasmallcxfb}. A similar method should easily yield smoothness
of the free boundary for small $A$ and smooth $\partial\Omega$.
\begin{conj}
In dimension two the free boundary $\partial D$ is smooth outside
a finite set.
\end{conj}
We prove some restrictions on the singular set of $\partial D$
in [CGK].
\begin{prob}(Topology of $D$ and $D^c$)
If $\Omega$ is simply connected, is
$D$ also connected
even in the case $\alpha > \abar_\Omega(A)$ (cf. Theorem \ref{thtub})?
If $A$ or $|\Omega|-A$ is small enough (with $\alpha$ fixed),
is $D^c$ always connected?
\end{prob}
Compare Proposition \ref{propAlarge} for the case of $A$ close to $|\Omega|$.
In a dumbbell $\psi_\Omega$ has two maxima. But numerical evidence
suggests the following conjecture:
\begin{conj} (One component of $D^c$ for dumbbell)
Let $\Omega_h$ be a dumbbell. Then for every $\alpha>0$ there is $\rho_0(\alpha,h)>0$
such that $D^c$ consists of one component (near one of the maxima of $\psi_\Omega$)
whenever $|\Omega|-A < \rho_0(\alpha,h)$.
\end{conj}
Clearly, one would expect $\rho_0(\alpha,h)\to0$ as $\alpha\to 0$.
We now turn to questions of symmetry. A very general problem is the following:
\begin{prob} (Symmetry and symmetry breaking)
Determine (at least qualitatively) the region in the space of parameters
where symmetry breaking occurs.
For annuli the parameters are $\alpha$, $\delta=A/|\Omega|$ and the ratio
$\tau$ of outer and inner radius (\lq thickness').
For dumbbells the parameters are $\alpha$, $A$
and the thickness of the handle.
\end{prob}
First results on this general problem are given by
Theorems \ref{thsymmann} and \ref{thdumb}.
The next three
conjectures address other aspects of this problem, i.e. they concern
other regions in parameter space. They are motivated by numerical
experiments.
\begin{conj} (Symmetry on dumbbells)
Let $\Omega_h$ be a dumbbell. Then for every $\alpha>0$ there is
$\rho_1(\alpha,h)>0$ such that symmetry breaking occurs if and only if
$|\Omega|-A < \rho_1(\alpha,h)$.
\end{conj}
\begin{conj} (Symmetry on annuli)
For each $\alpha,\delta>0$ there is $\tau_0(\alpha,\delta)$ such that
symmetry breaking occurs for the annulus of thickness $\tau$ if and only
if $\tau<\tau_0(\alpha,\delta)$.
\end{conj}
Theorem \ref{thsymmann} gives one half of this. The other half means that
the optimal configuration is rotationally symmetric for \lq thick' annuli.
Some aspects of this conjecture are discussed in [CGK].
More generally, it would be interesting to prove symmetry preservation in
{\em any} situation not covered by Theorem \ref{thsymmpres} (i.e. in a
non-convex situation). In particular, a natural conjecture is:
\begin{conj}(Symmetry preservation for small $\alpha$)
For any domain $\Omega$ and any $A$ there is $\alpha_0(A,\Omega)$ such that for
$\alpha\leq\alpha_0(A,\Omega)$ any optimal configuration
$D$ has the same symmetries as $\Omega$.
\end{conj}
Also, the analysis of the transition between the symmetric and asymmetric
situations would be interesting, as well as the shape of asymmetric
solutions for the annulus.
\begin{prob} (Relation between $D$ and the curvature of $\partial \Omega$)
Prove that $D$ is fat near points where $\partial \Omega$
has large positive curvature.
\end{prob}
For example see Figure 1.
For $\alpha=0$ and $A$ near zero this should
be not too hard. See [K1] for the case $\alpha=0$ under additional
geometric assumptions.
From this one should obtain the result at least for small $\alpha$ and $A$
by perturbation.
In [CGK], Thm. 9, we prove in a model case that $D$ is thin near a portion
of the boundary which has large negative curvature.
\begin{prob}(Limit $\alpha\to\infty$)
Consider the restricted minimization problem, allowing only such
sets $D$ for which $D^c$ is a ball.
How does this relate to the limit $\alpha\to\infty$ in our problem?
Where does the center of an optimal ball lie?
\end{prob}
This is motivated as follows:
Formally, for $\alpha=\infty$ the eigenvalue
$\lambda_\Omega(\alpha,D)$ equals the first Dirichlet eigenvalue
of $D^c$. (The convergence to this value as $\alpha\to\infty$ is
proved in [HH] and [DKM], for example.)
Now by the Faber-Krahn inequality (see [Ch], for example),
the first Dirichlet eigenvalue
of a domain of prescribed area is minimal if the domain is a ball.
So the optimal configuration for $\alpha$ large should be close to
a ball, at least when $A$ is close enough to $|\Omega|$ (so that
a ball of volume $|\Omega|-A$ fits into $\Omega$).
\begin{prob} (Other Elliptic Operators)
Consider the same optimization problem for a magnetic Schr\"{o}dinger operator
$(i\nabla -\alpha\chi_D A(x))^2$
with constant magnetic field
or
a uniformly elliptic operator of divergence type
$-\nabla\cdot\{ (1+\alpha\chi_D(x))\nabla \}$.
\end{prob}
We have no results for these operators, even if $\Omega$ is a ball.
\section*{Appendix: Basic PDE facts} \label{secpdefacts}
Here we collect some well-known facts about uniform estimates for solutions
of elliptic equations. We will state these for an equation
$$Pu=0,\quad P = \Delta + \sum_{j=1}^n b_j(x) \frac\partial{\partial x_j}
+ c(x),\quad x\in G, $$
where $P$ has measurable, uniformly bounded coefficients,
$u\in C^1(G)\cap C^0(\overline{G})$, and $G\subset\R^n$ is a
bounded open set.
In the following estimates, saying that the constants depend on $P$
will mean that they depend on $\sup_G (b_1,\ldots,b_n,c)$ and stay
bounded when this quantity stays bounded.
First, we have the uniform bound (see [GT, Thm.\ 8.15 and (8.38)])
\begin{equation} \label{eqpdeunif}
\sup_G |u| \leq C_{G,P} (\|u\|_{L^2(G)} + \sup_{\partial G} |u|).
\end{equation}
Second, we have Harnack's inequality: If $u\geq 0$ on $G$ and
$G'$ is a compact subset of $G$ then
\begin{equation} \label{eqpdehar}
\sup_{G'} u \leq C_{G,G',P} \inf_{G'} u.
\end{equation}
Combining these two we get the following slightly less standard estimate.
For $\eps>0$ let $G_\eps = \{x\in G:\, \dist(x,\partial G) \geq \eps\}.$
\begin{lem} \label{lhar}
For any $\eps>0$ there is a positive constant $c_{G,P,\eps}$ such that
for any $u\in C^1(G)\cap C^0(\overline{G})$ that solves $ Pu = 0$
and satisfies $u\geq 0$ one has
$$\inf_{G_{\eps}} u
\geq c_{G,P,\eps} (\|u\|_{L^2(G)} - \sup_{\partial G} u). $$
\end{lem}
Here we set ${\displaystyle\inf_\emptyset u :=\infty}$.
\proof
We have
\begin{eqnarray*}
\int_G u^2 &=& \int_{G_\eps} u^2 + \int_{G\setminus G_\eps} u^2
\leq |G_\eps|\, \sup_{G_\eps} u^2 + |G\setminus G_\eps|\, \sup_G u^2 \\
& \leq & C_{G,P,\eps} \inf_{G_\eps} u^2 + |G\setminus G_\eps|\,
C'_{G,P} (\int_G u^2 +
\sup_{\partial G} u^2)
\end{eqnarray*}
where we used Harnack's inequality and the uniform estimate \eqref{eqpdeunif}.
If $\eps$ is so small that $|G\setminus G_{\eps}|\, C'_{G,P} < 1/2$
then the term $|G\setminus G_\eps|\, C'_{G,P}\int_G u^2$ can be absorbed into the left-hand side,
yielding $\frac12\int_G u^2 \leq C_{G,P,\eps}\inf_{G_\eps} u^2 + \frac12\sup_{\partial G} u^2$,
and the claim follows easily.
The claim for larger $\eps$ then follows from the fact that
$\inf_{G_{\eps'}} u \geq \inf_{G_\eps} u$ if $\eps'\geq\eps$.
\qed
\section*{REFERENCES}
{\footnotesize
[AB] Ashbaugh, M.S., Benguria, R.D.,
A sharp bound for the ratio of the first two
eigenvalues of Dirichlet Laplacians and extensions, Ann. of Math.,
135(1992), 601--628.
[AH] Ashbaugh, M.S., Harrell, E.,
Maximal and minimal eigenvalues and their associated nonlinear equations, J. Math. Phys., 28(1987), 1770--1786.
[AHS] Ashbaugh, M.S., Harrell, E., Svirsky, R.,
On minimal and maximal eigenvalue gaps
and their causes,
Pacific J. Math., 147(1991), 1--24.
[BL] Brascamp, H.J., Lieb, E.,
On extensions of the Brunn-Minkowski and Prekopa-Leindler
theorems, including inequalities for log concave functions, and with an application to the diffusion equation, J. Funct. Anal., 22(1976), 366--389.
[BZ] Brothers, J.E., Ziemer, W.P.,
Minimal rearrangements of Sobolev functions,
J. reine angew. Math., 384(1988), 153--179.
[CS] Caffarelli, L.A., Spruck, J.,
Convexity properties of solutions to some
classical variational problems,
Comm. in Partial Diff. Equ., 7(1982), 1337--1379.
[CGK] Chanillo, S., Grieser, D., and Kurata, K.,
The free boundary problem in the optimization of composite membranes.
Preprint.
[Ch] Chavel, I., {\it Eigenvalues in Riemannian Geometry}, Academic Press,
New York, 1984.
[C] Cox, S.J., The two phase drum with the deepest bass note,
Japan J. Indust. Appl. Math., 8(1991), 345--355.
[CL] Cox, S.J., Lipton, R., Extremal eigenvalue problems for two-phase
conductors, Arch. Rat. Mech. Anal., 136(1996), 101--117.
[CM] Cox, S.J., McLaughlin, J.R., Extremal eigenvalue problems for composite
membranes, I and II, Appl. Math. Optim., 22(1990), 153--167, 169--187.
[DKM] Demuth, M., Kirsch, W., McGillivray, I.,
Schr\"{o}dinger operators-Geometric estimates in terms of the
occupation time,
Comm. in Partial Diff. Equ., 20(1995), 37--57.
[Eg] Egnell, H., Extremal properties of the first eigenvalue
of a class of elliptic eigenvalue problems,
Ann. Scuola Norm. Sup. Pisa, (1987), 1--48.
[FG] de Figueiredo, D.G., Gossez, J-P.,
Strict monotonicity of eigenvalues and unique continuation, Comm. P.D.E.,
17(1\& 2), (1992), 339--346.
[F] Friedman, A., On the regularity of the solutions of non-linear
elliptic and parabolic systems of partial differential equations,
J. Math. Mech. (now Indiana Math. J.), 7(1958), 43--60.
[GT] Gilbarg, D., Trudinger, N.T.,
{\it Elliptic Partial Differential Equations of Second Order}, Springer-Verlag, New York/Berlin, 1983.
[G] Giraud, G., Ann. Scient. de l'Ec. Norm., 43(1926), 1--128.
[HKK] Harrell, E., Kr\"{o}ger, P., Kurata, K.,
On the placement of an obstacle or a well so as to optimize the fundamental eigenvalue, preprint.
[HH] Hempel, R., Herbst, I.,
Strong magnetic fields, Dirichlet boundaries, and spectral gaps, Comm. Math. Phys., 169(1995), 237--260.
[H] Hopf, E., \"Uber den funktionalen, insbesondere den analytischen
Charakter der L\"osungen elliptischer Differentialgleichungen zweiter
Ordnung, Math. Z., 34(1931), 194--233.
[K1] Kawohl, B., On the location of maxima of the gradient for
solutions to quasi-linear elliptic problems and a problem
raised by Saint Venant, J. of Elasticity, 17(1987), 195--206.
[K2] Kawohl, B., Symmetrization -- or how to prove symmetry of
solutions to a PDE, in Partial Differential Equations, Theory and
Numerical Solution (W. J\"ager, J. Ne\v{c}as et al., Eds.), Chapman \& Hall Res.
Notes in Math., 406(1999), 214--229.
[KL] Korevaar, N.J., Lewis, J.L., Convex solutions of certain elliptic
equations have constant rank Hessians, Arch. Rat. Mech. Anal., 97(1987),
19--32.
[Kr] Krein, M.G., On certain problems on the maximum and minimum
of characteristic values and on the Lyapunov zones of stability, AMS Translations
Ser. 2, 1(1955), 163--187.
[LL] Lieb, E., Loss, M., {\it Analysis}, Amer. Math. Soc., 1997.
[M] Morrey, C.B., On the analyticity of solutions of analytic
non-linear elliptic systems of PDE I, Amer. J. Math., 80(1958), 198--218.
[Pi] Pironneau, O.,
{\it Optimal shape design for elliptic systems}, Springer-Verlag, New York Inc.,
1984.
[TK] Tsukuda, Y., Kaizu, S.,
Proceedings of the tenth symposium of numerical fluid mechanics, (1996),
220--221 (in Japanese with English abstract).
}
| {
"timestamp": "2000-04-12T14:44:23",
"yymm": "9912",
"arxiv_id": "math/9912116",
"language": "en",
"url": "https://arxiv.org/abs/math/9912116",
"abstract": "We consider the following eigenvalue optimization problem: Given a bounded domain $\\Omega\\subset\\R^n$ and numbers $\\alpha\\geq 0$, $A\\in [0,|\\Omega|]$, find a subset $D\\subset\\Omega$ of area $A$ for which the first Dirichlet eigenvalue of the operator $-\\Delta + \\alpha \\chi_D$ is as small as possible.We prove existence of solutions and investigate their qualitative properties. For example, we show that for some symmetric domains (thin annuli and dumbbells with narrow handle) optimal solutions must possess fewer symmetries than $\\Omega$; on the other hand, for convex $\\Omega$ reflection symmetries are preserved.Also, we present numerical results and formulate some conjectures suggested by them.",
"subjects": "Analysis of PDEs (math.AP); Optimization and Control (math.OC)",
"title": "Symmetry breaking and other phenomena in the optimization of eigenvalues for composite membranes",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9871787834701878,
"lm_q2_score": 0.8006920092299293,
"lm_q1q2_score": 0.790426163605902
} |
https://arxiv.org/abs/1705.04844 | On disjoint $(v,k,k-1)$ difference families | A disjoint $(v,k,k-1)$ difference family in an additive group $G$ is a partition of $G\setminus\{0\}$ into sets of size $k$ whose lists of differences cover, altogether, every non-zero element of $G$ exactly $k-1$ times. The main purpose of this paper is to get the literature on this topic in order, since some authors seem to be unaware of each other's work. We show, for instance, that a couple of heavy constructions recently presented as new, had been given in several equivalent forms over the last forty years. We also show that they can be quickly derived from a general nearring theory result which probably passed unnoticed by design theorists and that we restate and reprove in terms of differences. We exploit this result to get an infinite class of disjoint $(v,k,k-1)$ difference families coming from the Fibonacci sequence. Finally, we will prove that if all prime factors of $v$ are congruent to 1 modulo $k$, then there exists a disjoint $(v,k,k-1)$ difference family in every group, even non-abelian, of order $v$. | \section{Introduction}
Throughout this paper all groups will be understood finite and written in additive notation but not necessarily abelian.
Given a subset $B$ of a group $G$, the {\it list of differences of $B$}
is the multiset $\Delta B$ of all possible differences between two distinct elements of $B$.
A collection $\cal F$ of subsets of $G$ is a {\it difference family} (DF) of index $\lambda$ if the multiset sum
$\Delta{\cal F}:=\displaystyle\biguplus_{B\in{\cal F}}\Delta B$ covers every non-zero element of $G$
exactly $\lambda$ times. In particular, one says that ${\cal F}$ is a $(v,k,\lambda)$-DF if $G$ has
order $v$ and its members ({\it base blocks}) have all size $k$.
A difference family is said to be {\it disjoint} (DDF) if its blocks are pairwise disjoint. It is a
{\it partitioned difference family} (PDF) if its blocks partition $G$.
Partitioned difference families, introduced by Ding and Yin \cite{DY} for the construction
of {\it optimal constant composition codes}, are also important from the
design theory perspective; for instance, in \cite{BYW} a close connection is shown between PDFs having
all blocks of the same size $k$ but one of size $k-1$ and certain {\it resolvable $2$-designs} (RBIBDs) with block size $k$. It seems that this paper
passed almost completely unnoticed in spite of the fact that it contains many new RBIBDs, a couple of which are particularly remarkable
since their parameters are new: a $(45,5,2)$-RBIBD and a $(175,7,2)$-RBIBD.
The importance of the former was pointed out in the paper itself, here we underline
the importance of the latter considering that, according to Table 7.40 in \cite{AGY}, even the
existence of a $(175,7,6)$-RBIBD was previously in doubt while it is now obvious that it can be obtained
by simply tripling the obtained $(175,7,2)$-RBIBD.
After the paper discussed above, the notion of a PDF apparently disappeared from the literature for a long time but, as a matter of fact, it has been considered
under the name of a {\it zero difference balanced function} (ZDBF). A function $f$ from a group $G$ to a group $H$ is
defined to be a $(v,\lambda)$-ZDBF if $ord(G)=v$ and the equation $f(g+x)=f(x)$ in the unknown $x$ has exactly $\lambda$ solutions for every $g\in G\setminus\{0\}$.
It is an easy exercise to prove that this is completely equivalent to saying that the non-empty {\it fibers} of $f$ form a PDF in $G$.
It seems to be usual to assume that both $G$ and $H$ are abelian (see, e.g., \cite{WZ} and \cite{ZTWY}) but, in our opinion, there is no good reason to make this restriction.
Very recently PDFs have returned with their original name in a paper \cite{LWG} where the authors develop the
composition constructions of \cite{BYW} making use of {\it difference matrices}.
It is clear that every DDF can be ``completed" to a PDF by adding suitable blocks of size 1.
We note that a $(v,k,\lambda)$-DDF necessarily has $1\leq \lambda\leq k-1$ apart from the very trivial
case of a $(k,k,k)$ difference set.
The existence of $(v,k,1)$-DDFs is in general a quite hard problem. Among the few results on this problem
we recall that Dinitz and Rodney \cite{DR} found a $(v,3,1)$-DDF for any admissible $v$ and that
any {\it radical} $(v,k,1)$-DF (see \cite{Bpairwise}) is disjoint when $k$ is odd.
On the contrary, the literature on $(v,k,k-1)$-DDFs is quite rich and our main purpose is to
get this literature in order.
First of all it is worth mentioning that the $G$-orbit of any $(v,k,k-1)$-DDF in $G$ is the {\it near resolution}
of a $(v, k, k-1)$ {\it near resolvable design} (NRB for short). We refer to \cite{AGY} for general background on NRBs.
In the core of this paper we restate and reprove in terms of differences an old nearring theory result, which probably passed almost unnoticed by design theorists
and which, starting from {\it Ferrero pairs}, implicitly gives a wide class of $(v,k,k-1)$-DDFs in the kernel of a {\it Frobenius group}.
We will show that several constructions for $(v,k,k-1)$-DDFs obtained over the years could be quickly
obtained as a corollary of that result. In order to further appreciate its effectiveness we apply it to get
some new DDFs, in particular the Pisano $(p^4,k,k-1)$-DDFs in $\mathbb{Z}_{p^2}\times\mathbb{Z}_{p^2}$ which
arise from the Fibonacci sequence. In the last section we give an example of an infinite class of non-abelian DDFs obtainable
via the Ferrero construction and, more importantly, we will prove that if all prime factors of $v$ are congruent to 1 modulo $k$,
then there exists a $(v,k,k-1)$-DDF in any group of order $v$.
\section{Some known results}
%
A $(v,k,k-1)$-DDF in $G$ is known in each of the following cases.
\begin{itemize}
\item[(i)] $v$ is odd, $k=2$, and $G$ is any group of order $v$.
\item[(ii)] $v\equiv1$ (mod 4), $k=4$, $G=\mathbb{Z}_{v}$ and there exists a {\it $\mathbb{Z}$-cyclic whist tournament} on $v$ players $($briefly $Wh(v))$.
\item[(iii)] $v\equiv1$ (mod $k$) is a prime power and $G$ is the additive group of $\mathbb{F}_v$ (the field of order $v$).
\item[(iv)] The maximal prime power divisors of $v$ are all congruent to 1 (mod $k$)
and $G$ is a direct product of elementary abelian groups.
\item[(v)]
All prime factors of $v$ are congruent to 1 (mod $k$) and $G=\mathbb{Z}_v$.
\end{itemize}
A DDF as in (i) is nothing but a {\it starter} of $G$ (see, e.g., \cite{D}).
The reason for (ii) is that the {\it initial round} of a $\mathbb{Z}$-cyclic $Wh(v)$
is a $(v,4,3)$-DDF (but the converse is not generally true). For general background on $\mathbb{Z}$-cyclic whist tournaments we refer to \cite{AF}.
For a DDF as in (iii) - which is a special case of the DDFs in (iv) - one can simply take the set of all cosets of the $k$-th roots of unity in the multiplicative group of $\mathbb{F}_v$.
This DDF is usually attributed to Wilson \cite{W} but we note that it was given earlier in an equivalent form by Ferrero \cite{F}.
We also note that this DDF can be presented as the $(v,k-1)$-ZDBF mapping any $x\in \mathbb{F}_{v}$ into $x^{(v-1)/k}\in \mathbb{F}_{v}$.
A large class of DDFs as in (iii) satisfying the additional condition that $k\equiv2$ (mod 4) has very recently been given by Li \cite{L};
each block of these DDFs is a suitable union of two cosets of the ${k\over2}$-th roots of unity in $\mathbb{F}_v$.
Results (iv) and (v) have been recently described using the language of ZDBFs and obtained with quite involved proofs in
\cite{CZHTY} and \cite{DWX}, respectively.
We note, however, that a simple proof of (iv) was given by Furino in 1991 (see end of section 3 in \cite{Fu}) and that
the same proof was implicitly given by Boykett in 2001 (see the proof of Proposition 7 in \cite{B}).
There is another approach for getting (iv) very quickly from (ii) with the use of {\it difference matrices}; this has been
very recently noted by Li, Wei and Ge \cite{LWG} but traces of the same approach
can be found in a very old paper by Jungnickel (see Corollary 4.5 in \cite{J}).
We note that difference matrices would also allow one to obtain (v) very quickly from the existence of a cyclic $(p^n,k,k-1)$-DDF for every
prime $p\equiv1$ (mod $k$). Such a difference family was obtained by Furino (see Lemma 4.3 in \cite{Fu}) and it is also deducible
from an even earlier result by Phelps \cite{P} (see Theorem 4.6 in the paper \cite{B1} by the present author).
In the next section we will give a clean construction for $(v,k,k-1)$-DDFs in the kernel of a Frobenius group
which can be deduced from an old nearring result.
We will show how (iv) and (v) can be almost immediately obtained as special cases of this construction.
In the last section we will prove that (v) can be generalized to any group $G$ of order $v$.
\section{Ferrero $(v,k,k-1)$ difference families}
A Frobenius group $F$ is a semidirect product $G\rtimes A$ with $A$ a non-trivial group of automorphisms of $G$
acting {\it semiregularly} on $G\setminus\{0\}$ (one also says that $A$ is {\it fixed point free}).
This means that for $g\in G$ and $\alpha\in A$ we have $\alpha(g)=g$ if and only if either $\alpha=id_G$ or $g=0$.
The groups $G$ and $A$ are said to be the {\it kernel} and the {\it complement} of $F$, respectively.
Any such pair $(G,A)$ is called a {\it Ferrero pair} by {\it nearring} theorists \cite{B, C}.
Note that if $(G,A)$ is a Ferrero pair, then $(H,B)$ is a Ferrero pair as well for every non-trivial subgroup $H$ of $G$ and any
non-trivial subgroup $B$ of $A$.
For general background on Frobenius groups we refer to \cite{I}.
The following theorem is the highlight of this section but we point out that the first part of the theorem is not really new since it is essentially the same as Theorem 5.5 in \cite{C}.
The crucial difference is that our presentation and proof are given in terms of differences. In the original statement it is said that
any Ferrero pair $(G,A)$ with $ord(G)=v$ and $ord(A)=k$ generates a 2-$(v,k,k-1)$ design and then in the proof it is shown that the blocks
of this design are all the translates of the $A$-orbits of the non-zero elements of $G$ under the natural action of $G$.
For the main subject of the present paper, the crucial fact is that the set of $A$-orbits on $G\setminus\{0\}$ is a $(v,k,k-1)$-DDF.
\begin{theorem}\label{Frob}
If $(G,A)$ is a Ferrero pair with $ord(G)=v$ and $ord(A)=k$, then the set of $A$-orbits on $G\setminus\{0\}$
is a $(v,k,k-1)$-DDF in $G$. In the hypothesis that $G$ is abelian and that $vk$ is odd, this DDF is splittable into two $(v,k,{k-1\over2})$-DDFs.
\end{theorem}
\begin{proof}By definition, $A$ acts semiregularly on $G\setminus\{0\}$. This easily implies that
each $A$-orbit on $G\setminus\{0\}$ has size $k$ and that the map $g\in G \longrightarrow \alpha(g)-g\in G$
is a bijection for every $\alpha\in A\setminus\{id_G\}$. Thus we can write:
\begin{equation}\label{ortom}
\{\alpha(g)-g \ | \ g\in G\setminus\{0\}\}=G\setminus\{0\} \quad \forall \alpha\in A\setminus\{id_G\}
\end{equation}
Let $X$ be a complete set of representatives for the $A$-orbits on $G\setminus\{0\}$ and for each $x\in X$ let
$Orb(x)$ be the $A$-orbit of $x$.
We have to prove that ${\cal O}:=\{Orb(x) \ | \ x\in X\}$ is a $(v,k,k-1)$-DDF. It is evident that the ordered pairs of distinct elements of $Orb(x)$
with a fixed second coordinate $y$ are exactly those of the form $(\alpha(y),y)$ with $\alpha\in A\setminus\{id_G\}$. Thus we have
$\Delta Orb(x)=\biguplus_{y\in Orb(x)}\{\alpha(y)-y \ | \ \alpha\in A\setminus\{id_G\}\}$, hence
$$\Delta{\cal O}=\biguplus_{x\in X}\biguplus_{y\in Orb(x)}\{\alpha(y)-y \ | \ \alpha\in A\setminus\{id_G\}\}=
\biguplus_{g\in G\setminus\{0\}}\{\alpha(g)-g \ | \ \alpha\in A\setminus\{id_G\}\}.$$
So we can write $\displaystyle\Delta{\cal O}=\biguplus_{\alpha\in A\setminus\{id_G\}}\{\alpha(g)-g \ | \ g\in G\setminus\{0\}\}$ which,
by (\ref{ortom}), is the union of $k-1$ copies of $G\setminus\{0\}$.
The first part of the assertion follows.
From now on we assume that $kv$ is odd and that $G$ is abelian.
Suppose that two opposite elements $g$ and $-g$ are in the same $A$-orbit.
In this case there is an $\alpha\in A$ such that $\alpha(g)=-g$.
Then, by induction, we would have $\alpha^i(g)=g$ or $-g$ according to whether $i$ is even or odd, respectively.
Thus, in particular, we would have $\alpha^k(g)=-g$. On the other hand $k$ is the order of $A$ so that $\alpha^k=id_G$.
It follows that $g=-g$. So, considering that $G$ does not have involutions since $v$ is odd, we necessarily
have $g=0$. We conclude that the set $X$ considered in the first part of our proof can be chosen of the form
$X=Y \cup \ -Y$ with $Y$ a suitable ${v-1\over2k}$-subset of $G$. In this way we have that $\cal O$ is splittable into
the two parts ${\cal O}_1=\{Orb(y) \ | \ y\in Y\}$ and ${\cal O}_2=\{Orb(-y) \ | \ y\in Y\}$. Now note that $Orb(-y)=-Orb(y)$.
Hence we obviously have $\Delta Orb(-y)=\Delta Orb(y)$ for each $y\in Y$ since $G$ is abelian. We conclude that
$\Delta{\cal O}_1=\Delta{\cal O}_2$ and then, recalling that $\cal O$ is a $(v,k,k-1)$-DDF, we deduce that both
${\cal O}_1$ and ${\cal O}_2$ are $(v,k,{k-1\over2})$-DDFs.
\end{proof}
The $(v,k,k-1)$-DDFs produced by the above theorem will be said {\it Ferrero difference families}.
Note that the {\it patterned starter} of a group $G$ of odd order (namely the set of all possible pairs
$\{g,-g\}$ of opposite elements of $G\setminus\{0\}$) can be seen as the Ferrero DF
determined by the Ferrero pair $(G,\{id,-id\})$ when $G$ is abelian.
As a first immediate consequence of Theorem \ref{Frob} we have the following result which
was also stated in a weaker form by Furino (\cite{Fu}, Lemma 4.2).
\begin{lemma}\label{ring}
Let $R$ be a ring of order $v$ with unity, and let $U(R)$ be the group of units of $R$. If $U$ is a subgroup of order $k$ of $U(R)$
with $u-1\in U(R)$ for each $u\in U\setminus\{1\}$, then there exists a Ferrero $(v,k,k-1)$-DDF in the additive group of $R$.
\end{lemma}
\begin{proof} Any subgroup $U$ of $U(R)$ can be seen as an automorphism group of the additive group $G$ of $R$.
Indeed any $u\in U(R)$ can be identified with the automorphism of $G$ mapping $x$ into $ux$.
It is also clear that $u-1\in U(R)$ for each $u\in U\setminus\{1\}$ implies that $U$ acts semiregularly on $G\setminus\{0\}$.
The assertion then follows from Theorem \ref{Frob}.\end{proof}
Now we show how the previous lemma allows one to obtain result (iv) very quickly. We essentially give the same old easy proof given by Furino \cite{Fu},
not comparable to the recent tortuous proof in \cite{DWX}.
\begin{corollary}\label{EA}
Let $v$ be a product of prime powers $q_1,\dots, q_n$ all congruent to $1$ $($mod $k)$.
Then there exists a Ferrero $(v,k,k-1)$-DDF in the additive group of $\mathbb{F}_{q_1}\times\dots\times \mathbb{F}_{q_n}$.
\end{corollary}
\begin{proof}For $1\leq i\le n$, take a primitive $k$-th root of unity $u_i$ in $\mathbb{F}_{q_i}$.
It is immediate that $u:=(u_1,\dots,u_n)$ is a unit of order $k$ of the ring
$R=\mathbb{F}_{q_1}\times\dots\times\mathbb{F}_{q_n}$ and that $u^i-1$ is a unit of $R$ for $1\leq i\leq k-1$.
The assertion then follows from Lemma \ref{ring}.\end{proof}
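As an illustrative aside (a computational sketch added here, not part of the original argument; the choice $u=(2,5)$ is just one convenient pair of primitive $4$-th roots of unity), the case $v=5\cdot13$, $k=4$ of the corollary can be verified by machine:
\begin{verbatim}
# Illustrative sketch (not from the paper): the orbits of multiplication
# by u = (2,5), primitive 4-th roots of unity mod 5 and mod 13, form a
# Ferrero (65,4,3)-DDF in Z_5 x Z_13.
from itertools import product
from collections import Counter

q1, q2, k = 5, 13, 4
u = (2, 5)
mul = lambda p: ((u[0] * p[0]) % q1, (u[1] * p[1]) % q2)

points = [p for p in product(range(q1), range(q2)) if p != (0, 0)]
orbits, seen = [], set()
for p in points:
    if p not in seen:
        orb = [p]
        while (nxt := mul(orb[-1])) != p:
            orb.append(nxt)
        orbits.append(orb)
        seen.update(orb)

diffs = Counter(((a[0] - b[0]) % q1, (a[1] - b[1]) % q2)
                for orb in orbits for a in orb for b in orb if a != b)
assert all(len(orb) == k for orb in orbits)     # semiregular action
assert all(diffs[p] == k - 1 for p in points)   # every element covered k-1 times
\end{verbatim}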
Let us say that a Ferrero pair $(G,A)$ has parameters $(v,k)$ if $v$ and $k$ are the orders of $G$ and $A$,
respectively. A trivial necessary condition for $(v,k)$ to be the parameters of a suitable Ferrero pair is that
$k$ divides $v-1$. From the above corollary one deduces that a sufficient condition is that $q\equiv1$ (mod $k$)
for every maximal prime power factor $q$ of $v$. This condition has been proved to be also necessary
by Boykett (\cite{B}, Corollary 6) as a consequence of other results using, in particular, Thompson's
theorem on Frobenius groups. We reprove this below in a simpler and more direct way.
\begin{proposition}
There exists a suitable Ferrero pair of parameters $(v,k)$ - or equivalently a Ferrero $(v,k,k-1)$-DDF in a suitable group -
if and only if
$q\equiv1$ $($mod $k)$ for every maximal prime power factor $q$ of $v$.
\end{proposition}
\begin{proof}
$(\Longrightarrow)$ Let $(G,A)$ be a Ferrero pair of parameters $(v,k)$ and let $p^e$, $p$ prime, be a
maximal prime power factor of $v$.
The group $A$ acts as a permutation group on the set $Syl_p(G)$ of all Sylow $p$-subgroups of $G$.
If $A$ does not fix any member of $Syl_p(G)$, then each $A$-orbit on
$Syl_p(G)$ would have size $|A|=k$ so that $k$ divides $|Syl_p(G)|$. In its turn $|Syl_p(G)|$ divides ${v\over p^e}$
by the third Sylow theorem. We conclude that $k$ divides both $v-1$ and ${v\over p^e}$, hence $k=1$
which is absurd. So $A$ fixes a suitable $S\in Syl_p(G)$. It follows that $(S,A)$ is a Ferrero pair so that
$ord(A)=k$ divides $ord(S)-1=p^e-1$ which is the assertion.
$(\Longleftarrow)$ See Corollary \ref{EA}.
\end{proof}
Now we show how result (v) can be generalized to any abelian group, and that
this too can be quickly obtained as a corollary of Lemma \ref{ring}.
\begin{corollary}\label{cyclic}
If all prime factors of $v$ are congruent to $1$ $($mod $k)$,
then there exists a Ferrero $(v,k,k-1)$-DDF in any abelian group $G$ of order $v$.
\end{corollary}
\begin{proof}First recall that if $q=p^e$ with $p$ an odd prime, then
$U(\mathbb{Z}_{q})$ is cyclic of order $\phi(q)=p^{e-1}(p-1)$ (see, e.g., \cite{A}).
Also, the subgroup $S_q$ of $U(\mathbb{Z}_{q})$ of order $p^{e-1}$ consists of all elements $s\in \mathbb{Z}_q$ with $s\equiv1$ (mod $p$).
Let $G$ be an abelian group of order $v$. By the Fundamental Theorem of Finite Abelian Groups, there are suitable prime powers $q_1,\dots, q_n$ dividing $v$ such that,
up to isomorphism, $G$ is the additive group of the ring $R=\mathbb{Z}_{q_1}\times\dots\times\mathbb{Z}_{q_n}$. Of course we have
$U(R)=U(\mathbb{Z}_{q_1})\times\dots \times U(\mathbb{Z}_{q_n})$.
For $1\leq i\leq n$, set $q_i=p_i^{e_i}$ with $p_i$ prime. By assumption, $k$ divides $p_i-1$, hence $U(\mathbb{Z}_{q_i})$ has an element $u_i$ of order $k$.
Obviously $\langle u_i\rangle$ has trivial intersection with $S_{q_i}$, the subgroup of $U(\mathbb{Z}_{q_i})$ of order $p_i^{e_i-1}$.
Hence, for $1\leq i\leq n$ and $1\leq j\leq k-1$, we have $u_i^j\notin S_{q_i}$, i.e., $u_i^j\not\equiv1$ (mod $p_i$).
We conclude that $u:=(u_1,\dots,u_n)$ is a unit of $R$ of order $k$ and that $u^j-1$ is also a unit of $R$ for $1\leq j\leq k-1$.
The assertion then follows from Lemma \ref{ring}.\end{proof}
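As a quick machine check of the proof's key step (an illustration added here, not from the paper), one can search $\mathbb{Z}_{91}$, $91=7\cdot13$, $k=3$, for a unit of order $3$ all of whose non-trivial powers differ from $1$ by a unit:
\begin{verbatim}
# Quick numeric illustration (not from the paper) for v = 91 = 7*13, k = 3:
# find a unit u of order 3 in Z_91 such that u^j - 1 is a unit for j = 1, 2.
from math import gcd

v, k = 91, 3
order3 = [u for u in range(2, v) if gcd(u, v) == 1 and pow(u, k, v) == 1]
u = next(u for u in order3
         if all(gcd(pow(u, j, v) - 1, v) == 1 for j in range(1, k)))
print(u)  # u = 9: indeed 9^3 = 729 = 8*91 + 1, gcd(8, 91) = gcd(80, 91) = 1
\end{verbatim}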
There are infinite classes of Ferrero DDFs which neither Corollary \ref{EA} nor Corollary \ref{cyclic}
is able to capture. One of these classes will be given in the next section.
Here we only give an easy example of a Ferrero $(q^4,3,2)$-DDF in $\mathbb{Z}_{q^2}\times \mathbb{Z}_{q^2}$
for any prime power $q$ not divisible by 3. Such a DDF cannot be obtained from Corollary \ref{cyclic} when $q$ is a
power of 2 or the power of a prime $p\equiv5$ (mod 6).
Consider the automorphism $\alpha$ of $\mathbb{Z}_{q^2}\times \mathbb{Z}_{q^2}$
defined by $\alpha(x,y)=(y-x,-x)$. One can see that the group $A$ generated by $\alpha$ has order 3 and acts
semiregularly on $G\setminus\{(0,0)\}$. So we obtain the required $(q^4,3,2)$-DDF by applying
Theorem \ref{Frob}.
For instance, for $q=2$, we get the following Ferrero $(16,3,2)$-DDF in $\mathbb{Z}_4\times \mathbb{Z}_4$
(in order to save space, any
pair $(x,y)$ is denoted by $xy$):
$$\bigl{\{}\{01,10,33\}, \ \{02,20,22\}, \ \{03,30,11\}, \ \{12,13,23\}, \ \{21,32,31\}\bigl{\}}.$$
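This small example can be confirmed exhaustively; the following sketch (an illustration added here, not part of the original text) recomputes the orbits of $\alpha$ and the difference counts:
\begin{verbatim}
# Illustrative check (not from the paper) of the Ferrero (16,3,2)-DDF:
# orbits of alpha(x,y) = (y-x,-x) on (Z_4 x Z_4) \ {(0,0)}.
from itertools import product
from collections import Counter

m = 4
alpha = lambda p: ((p[1] - p[0]) % m, (-p[0]) % m)

points = [p for p in product(range(m), repeat=2) if p != (0, 0)]
orbits, seen = [], set()
for p in points:
    if p not in seen:
        orb = [p]
        while (nxt := alpha(orb[-1])) != p:
            orb.append(nxt)
        orbits.append(orb)
        seen.update(orb)

diffs = Counter(((a[0] - b[0]) % m, (a[1] - b[1]) % m)
                for orb in orbits for a in orb for b in orb if a != b)
assert all(len(orb) == 3 for orb in orbits)   # A acts semiregularly
assert len(seen) == m * m - 1                 # orbits partition G \ {0}
assert all(diffs[p] == 2 for p in points)     # lambda = k - 1 = 2
\end{verbatim}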
\section{Ferrero difference families from the Fibonacci sequence}
Here we present an infinite class of Ferrero DDFs obtainable via the Fibonacci sequence which, in many cases,
cannot be deduced from Corollary \ref{EA} or Corollary \ref{cyclic}. For this, we need to recall some number theoretic arguments.
The {\it Pisano period modulo $n$}, denoted $\pi(n)$, is the period of the Fibonacci sequence modulo $n$ which is also equal to
the period of the {\it Fibonacci matrix} $\bf F=\begin{pmatrix}1&1\cr1&0\end{pmatrix}$ in the group $GL_2(\mathbb{Z}_n)$ of all $2\times2$ invertible
matrices over the ring $\mathbb{Z}_{n}$. There is no known formula for $\pi(p)$ with $p$ a prime, but the following properties have been established so far
(see, e.g., \cite{R}):
\begin{itemize}
\item[$(P_1)$] $\pi(p^2)$ is equal either to $p\pi(p)$ or $\pi(p)$;
\item[$(P_2)$] $\pi(p)$ is equal to the least common multiple of the periods of the two eigenvalues of $\bf F$ in the multiplicative group of $\mathbb{F}_{p^2}$;
\item[$(P_3)$] $\pi(p)=\begin{cases}3 \hfill\mbox{ if $p=2$}
\smallskip\cr20 \hfill\mbox{ if $p=5$}
\smallskip\cr
\mbox{an even divisor of $p-1$}\hfill \mbox{if $p\equiv\pm1$ (mod 10)}
\smallskip\cr
\mbox{${2(p+1)\over d}$ with $d$ an odd divisor of $p+1$}\quad\hfill \mbox{if $p\equiv\pm3$ (mod 10)} \end{cases}$
\end{itemize}
Regarding property $(P_1)$, it should be noted that there is no known prime $p$ for which $\pi(p^2)=\pi(p)$ holds.
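Although no formula for $\pi(p)$ is known, the periods are immediate to compute from the recurrence itself; the following sketch (an illustration added here, not part of the original text) does so and lets one observe properties $(P_1)$ and $(P_3)$ on small primes:
\begin{verbatim}
# Direct computation (illustration, not from the paper) of Pisano periods.
def pisano(n):
    a, b, period = 0, 1, 0
    while True:
        a, b = b, (a + b) % n
        period += 1
        if (a, b) == (0, 1):
            return period

assert pisano(2) == 3 and pisano(5) == 20   # the first two cases of (P_3)
for p in (3, 7, 11, 13, 29):
    print(p, pisano(p), pisano(p * p))      # pi(p^2) = p*pi(p) in all known cases
\end{verbatim}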
\begin{proposition}\label{pisano}
There exists a Ferrero $(p^4,k,k-1)$-DDF in $\mathbb{Z}_{p^2}\times\mathbb{Z}_{p^2}$ for any prime
$p\neq5$ and any divisor $k$ of the Pisano period $\pi(p)$.
\end{proposition}
\begin{proof} An example of a $(16,3,2)$-DDF in $\mathbb{Z}_{4}\times\mathbb{Z}_{4}$ has been given at the end of the previous section,
therefore the assertion is true for $p=2$. For $p\equiv\pm1$ (mod 10) the assertion is an immediate consequence of Corollary \ref{cyclic}
and property $(P_3)$. So, in the following, we will assume that $p\equiv\pm3$ $($mod $10)$. This implies
that $5$ is not a square in $\mathbb{F}_p$, hence the two eigenvalues $\lambda_1$, $\lambda_2$ of $\bf F$ are ``conjugates" in
$\mathbb{F}_{p^2}$. Indeed we have $\{\lambda_1,\lambda_2\}=\{{1+\sqrt{5}\over2},{1-\sqrt{5}\over2}\}$. So the $i$-th powers of $\lambda_1$ and $\lambda_2$
are conjugates as well. It follows that $\lambda_1^i=1$ if and only if $\lambda_2^i=1$ which clearly implies that $\lambda_1$ and $\lambda_2$
have the same period in the multiplicative group of $\mathbb{F}_{p^2}$.
Let us identify any matrix $\begin{pmatrix}a&b\cr c&d\end{pmatrix}\in GL_2(\mathbb{Z}_{p^2})$
with the automorphism of $\mathbb{Z}_{p^2}\times\mathbb{Z}_{p^2}$ mapping $(x,y)$ into $(ax+by,cx+dy)$.
The period of the Fibonacci matrix $\bf F$
in $GL_2(\mathbb{Z}_{p^2})$ is equal to $\pi(p^2)$ which, by property $(P_1)$, is equal either to $p\pi(p)$ or $\pi(p)$.
Let $A$ be the subgroup of order $\pi(p)$ of the group generated by
$\bf F$. Thus $A=\langle \Phi\rangle$ with $\Phi=\bf F$ or $\Phi= {\bf F}^p$ according to whether $\pi(p^2)=\pi(p)$ or $\pi(p^2)=p\pi(p)$, respectively.
Assume that 1 is an eigenvalue of $\Phi^i$.
Then, considering that for any matrix $M$ and any positive integer $j$ we have $Spec(M^j)=\{\lambda^j \ | \ \lambda\in Spec(M)\}$, we have
\begin{center}
$\begin{cases}
\lambda_1^i=\lambda_2^i=1\quad\,\,\,\,\,\, \mbox{if $\pi(p^2)=\pi(p)$}\cr \lambda_1^{pi}=\lambda_2^{pi}=1\quad \mbox{ if $\pi(p^2)=p\pi(p)$}
\end{cases}$
\end{center}
Thus, by property $(P_2)$, we have that $\pi(p)$ divides $i$ in the former case while $\pi(p)$ divides $pi$ in the latter.
Anyway $\pi(p)$ and $p$ are coprime by property $(P_3)$ so that, in both cases, $i$ must be divisible by $\pi(p)$.
Recalling that $\pi(p)$ is the order of $\Phi$, we conclude that $\Phi^i$ is the identity matrix.
Now note that a matrix $M\in GL_2(\mathbb{Z}_{p^2})$ is fixed point free if and only if 1 does not belong to $Spec(M)$.
So we have proved that the group $A$ acts semiregularly on $\mathbb{Z}_{p^2}\times \mathbb{Z}_{p^2}\setminus\{(0,0)\}$,
i.e., $(\mathbb{Z}_{p^2}\times \mathbb{Z}_{p^2},A)$ is a Ferrero pair. Of course $(\mathbb{Z}_{p^2}\times \mathbb{Z}_{p^2},\langle \Phi^{\pi(p)/k}\rangle)$
is a Ferrero pair as well for each divisor $k$ of $\pi(p)$ and then the assertion follows from Theorem \ref{Frob}.
\end{proof}
The {\it Pisano DDFs}, namely the $(p^4,k,k-1)$-DDFs which are built as in the proof of the above proposition,
allow one to enlarge considerably the set of known values of $k$ for which there exists a $(p^4,k,k-1)$-DDF in $\mathbb{Z}_{p^2}\times \mathbb{Z}_{p^2}$
in the case of $p\equiv\pm3$ (mod 10).
In particular, for a Mersenne prime $p=2^{4n+3}-1$ we have $p\equiv-3$ (mod 10) and then we necessarily have $\pi(p)=2^{4n+4}$ by property $(P_3)$.
Thus there exists a Pisano $(p^4,2^i,2^i-1)$-DDF in $\mathbb{Z}_{p^2}\times \mathbb{Z}_{p^2}$ for $1\leq i\leq 4n+4$.
\medskip
As an example, let us apply Proposition \ref{pisano} with $p=3$. We have $\pi(3)=8$ and $\pi(3^2)=3\cdot\pi(3)=24$.
Then the matrix $\Phi$ considered in the proof of the above proposition is ${\bf F}^3=\begin{pmatrix}3&2\cr2&1\end{pmatrix}$
and the group $A$ generated by $\Phi$ is the following:
$$\biggl{\{}
\begin{pmatrix}3&2\cr2&1\end{pmatrix},
\begin{pmatrix}4&8\cr8&5\end{pmatrix},
\begin{pmatrix}1&7\cr7&3\end{pmatrix},
\begin{pmatrix}8&0\cr0&8\end{pmatrix},
\begin{pmatrix}6&7\cr7&8\end{pmatrix},
\begin{pmatrix}8&2\cr2&6\end{pmatrix},
\begin{pmatrix}5&1\cr1&4\end{pmatrix},
\begin{pmatrix}1&0\cr0&1\end{pmatrix}
\biggl{\}}$$
The ten orbits of $A$ on $\mathbb{Z}_9\times \mathbb{Z}_9\setminus\{(0,0)\}$ are listed below where, again,
any pair $(x,y)$ will be simply denoted by $xy$:
$$B_0=\{21, 85, 73, 08, 78, 14, 26, 01\};\quad B_1=\{42,71,56,07,57,28,43,02\};$$
$$B_2=\{63,66,30,06,36,33,60,03\};\quad B_3=\{84,52,13,05,15,47,86,04\};$$
$$B_4=\{32,48,17,80,67,51,82,10\};\quad B_5=\{53,34,81,88,46,65,18,11\};$$
$$B_6=\{74,20,64,87,25,70,35,12\};\quad B_7=\{68,72,77,83,31,27,22,16\};$$
$$B_8=\{37,54,55,76,62,45,44,23\};\quad B_9=\{58,40,38,75,41,50,61,24\}.$$
Thus the $B_i$s are the blocks of a Pisano $(81,8,7)$-DDF in $\mathbb{Z}_9\times \mathbb{Z}_9$.
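The ten blocks above are easily double-checked by machine; the following sketch (an illustration added here, not part of the original text) recomputes the orbits of $\Phi={\bf F}^3$ modulo $9$ and the difference counts:
\begin{verbatim}
# Illustrative verification (not from the paper) of the Pisano (81,8,7)-DDF.
from itertools import product
from collections import Counter

m = 9
phi = lambda p: ((3 * p[0] + 2 * p[1]) % m, (2 * p[0] + p[1]) % m)  # F^3 mod 9

points = [p for p in product(range(m), repeat=2) if p != (0, 0)]
orbits, seen = [], set()
for p in points:
    if p not in seen:
        orb = [p]
        while (nxt := phi(orb[-1])) != p:
            orb.append(nxt)
        orbits.append(orb)
        seen.update(orb)

diffs = Counter(((a[0] - b[0]) % m, (a[1] - b[1]) % m)
                for orb in orbits for a in orb for b in orb if a != b)
assert len(orbits) == 10 and all(len(orb) == 8 for orb in orbits)
assert all(diffs[p] == 7 for p in points)   # lambda = k - 1 = 7
\end{verbatim}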
\section{Non-abelian disjoint $(v,k,k-1)$ difference families}
Here we give some constructions for DDFs in non-abelian groups.
Let us start by giving a class of non-abelian Ferrero DDFs.
\begin{proposition}\label{notabelian}
If $R=(V,+,\cdot)$ is a ring with unity admitting a group $U$ of units
such that $u^2-1\in U(R)$ for each $u\in U\setminus\{1\}$, then there exists a non-abelian Ferrero $(v^3,k,k-1)$-DDF with $v=|V|$ and $k=|U|$.
\end{proposition}
\begin{proof}
Let us equip the set $V^3$ with the operation $\oplus$ defined by the rule
$$(x_1,y_1,z_1)\oplus (x_2,y_2,z_2)=(x_1+x_2, \ y_1+y_2, \ z_1+z_2+x_1\cdot y_2).$$
It is an easy exercise to check that $G=(V^3,\oplus)$ is a group. Also note that $G$ is non-abelian
since we have, for instance, $(0,1,0)\oplus(1,0,0)=(1,1,0)$ while $(1,0,0)\oplus(0,1,0)=(1,1,1)$.
Now note that for each $u\in U$, the map $\alpha_u: (x,y,z)\in V^3 \longrightarrow (u\cdot x,u\cdot y,u^2\cdot z)\in V^3$
is an automorphism of $G$ and that $(G,A:=\{\alpha_u \ | \ u\in U\})$ is a Ferrero pair. The assertion then follows
from Theorem \ref{Frob}.
\end{proof}
We remark that a group $U$ as in the statement of the above proposition is necessarily of odd order.
Indeed, otherwise $U$ would have at least one involution, say $u$, and then $u^2-1=0\notin U(R)$,
contrary to the assumption.
Applying Proposition \ref{notabelian} with $R=\mathbb{F}_q$ we obtain the following.
\begin{corollary}
There exists a non-abelian Ferrero $(q^3,k,k-1)$-DDF
for any pair $(q,k)$ with $q$ a prime power and $k$ any odd divisor of $q-1$.
\end{corollary}
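For the smallest case $q=7$, $k=3$ (so $U=\{1,2,4\}$), the resulting $(343,3,2)$-DDF can be checked by machine. In the sketch below (an illustration added here, not part of the original argument) differences in the non-abelian group are computed with the convention $x\oplus(\ominus y)$, where $\ominus y$ denotes the inverse of $y$ with respect to $\oplus$:
\begin{verbatim}
# Illustrative check (not from the paper) for q = 7, k = 3, U = {1,2,4}:
# orbits of alpha_u(x,y,z) = (ux, uy, u^2 z) in G = (F_7^3, oplus).
from itertools import product
from collections import Counter

q, U = 7, (1, 2, 4)

def oplus(p, r):   # the group operation of G
    return ((p[0] + r[0]) % q, (p[1] + r[1]) % q,
            (p[2] + r[2] + p[0] * r[1]) % q)

def neg(p):        # the inverse of p with respect to oplus
    return ((-p[0]) % q, (-p[1]) % q, (p[0] * p[1] - p[2]) % q)

points = [p for p in product(range(q), repeat=3) if p != (0, 0, 0)]
orbits, seen = [], set()
for p in points:
    if p not in seen:
        orb = {((u * p[0]) % q, (u * p[1]) % q, (u * u * p[2]) % q) for u in U}
        orbits.append(orb)
        seen.update(orb)

diffs = Counter(oplus(x, neg(y))
                for orb in orbits for x in orb for y in orb if x != y)
assert oplus((0, 1, 0), (1, 0, 0)) != oplus((1, 0, 0), (0, 1, 0))  # non-abelian
assert all(len(orb) == 3 for orb in orbits)
assert all(diffs[p] == 2 for p in points)
\end{verbatim}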
Let $\cal F$ be the $(v^3,k,k-1)$-DDF in $(V^3,\oplus)$ obtainable by Proposition \ref{notabelian}. We remark that $\cal F$
actually coincides with the $(v^3,k,k-1)$-DDF in the abelian group of the ring $R\times R\times R$ obtainable using Lemma \ref{ring}.
On the other hand, considering $\cal F$ as a DDF in $(V^3,\oplus)$ is not the same as considering $\cal F$ as a DDF in $(V^3,+)$.
Indeed the $(v^3,k,k-1)$-NRB whose near resolution is the orbit of $\cal F$ under $(V^3,\oplus)$ does not coincide with the $(v^3,k,k-1)$-NRB
whose near resolution is the orbit of $\cal F$ under $(V^3,+)$.
\medskip
By Corollary \ref{cyclic} there exists a $(v,k,k-1)$-DDF in any abelian group $G$ of order $v$
provided that all prime factors of $v$ are congruent to 1 (mod $k$). We are going to see that this result
remains true if one removes the hypothesis of commutativity of $G$. The present author proved that the existence
of a $(p,k,\lambda)$-DF for every prime factor $p$ of $v$ implies the existence of a
$(v,k,\lambda)$-DF in any group $G$ of order $v$ (see Corollary 5.5 in \cite{B1}). Now we reprove
this theorem showing that if all component $(p,k,\lambda)$-DFs are disjoint, then the resultant
$(v,k,\lambda)$-DF in $G$ is disjoint as well.
\begin{theorem}\label{composingDDFs}
If $G$ is a group of order $v$ and there exists a $(p,k,\lambda)$-DF (resp. DDF) for every prime factor $p$ of $v$,
then there exists a $(v,k,\lambda)$-DF (resp. DDF) in $G$.
\end{theorem}
\begin{proof}
The cases $k=1$ and $k=2$ are trivial. So, in the following, we assume $k>2$.
We prove the theorem by induction on $v$. The assertion is trivially true for $v=1$; in this case the required DF is the empty family.
Let $G$ be a group of order $v>1$ as in the statement and assume that the assertion is true for all groups of order less than $v$.
First observe that $v$ is necessarily odd since the existence of a $(p,k,\lambda)$-DF for any prime factor $p$ of $v$
implies that $p\geq k>2$ for any such prime $p$.
It follows, by the Feit-Thompson theorem, that $G$ is solvable. Thus, in particular, $G$ has a normal subgroup $N$ of prime index, say $p$.
By hypothesis there exists a $(p,k,\lambda)$-DF (resp. DDF), say ${\cal F}_1$, in $G/N$. By induction,
there also exists a $({v\over p},k,\lambda)$-DF (resp. DDF), say ${\cal F}_2$, in $N$.
For each block $B=\{g_1+N,\dots,g_k+N\}\in{\cal F}_1$
and any $n\in N$ consider the $k$-subset $B(n)$ of $G$ defined by
$B(n)=\{g_i+in \ | \ 1\leq i\leq k\}$. We claim that $${\cal F}:=\{B(n) \ | \ B\in{\cal F}_1, n\in N\} \ \cup \ {\cal F}_2$$
is a $(v,k,\lambda)$-DF (resp. DDF) in $G$.
Given $g\in G\setminus N$, let $g+N=(g_i+N)-(g_j+N)$ be a representation of $g+N$ as a difference from
a block $B=\{g_1+N,\dots,g_k+N\}$ of ${\cal F}_1$.
Consider the element $n:=-g_i+g+g_j$, necessarily belonging to $N$, and let $ord(n)$ be its order.
We have $1\leq |i-j|<k$, hence $i-j$ is coprime with $ord(n)$ since, by assumption,
every divisor of $ord(G)$ distinct from 1 is clearly greater than $k$. Thus there exists the inverse, say $x$, of $i-j$ modulo $ord(n)$.
Now check that $g$ is the difference between the $i$-th element and the $j$-th element of the block $B(xn)$ of ${\cal F}$:
$$(g_i+ixn)-(g_j+jxn)=g_i+(i-j)xn-g_j=g_i+n-g_j=g_i-g_i+g+g_j-g_j=g.$$
In this way we have proved that each of the $\lambda$ representations of $g+N$ as a difference from ${\cal F}_1$
leads to a representation of $g$ as a difference from ${\cal F}$. Thus every element of $G\setminus N$ is covered at
least $\lambda$ times by $\Delta{\cal F}$. The same is true for all elements of $N\setminus\{0\}$ since
$\Delta{\cal F}_2$ is $\lambda$ times $N\setminus\{0\}$.
Now note that the number of blocks of ${\cal F}$ is given by
\begin{center}
$|{\cal F}|=|{\cal F}_1|\cdot|N|+|{\cal F}_2|={\lambda(p-1)\over k(k-1)}\cdot{v\over p}+{\lambda\over k(k-1)}({v\over p}-1)={\lambda(v-1)\over k(k-1)}$
\end{center}
and then $|\Delta{\cal F}|=k(k-1)|{\cal F}|=\lambda(v-1)$. It follows, by the pigeonhole principle, that
every non-zero element of $G$ is covered by $\Delta{\cal F}$ exactly $\lambda$ times, i.e., ${\cal F}$ is a
$(v,k,\lambda)$-DF.
It remains to prove that ${\cal F}$ is disjoint in the hypothesis that both ${\cal F}_1$ and ${\cal F}_2$ are disjoint.
Every block of the form $B(n)$ with $B=\{g_1+N,\dots,g_k+N\}\in{\cal F}_1$ and $n\in N$ has no element in $N$,
since otherwise we would have $g_i+in\in N$ for some $i$, hence $g_i+N=N$
contradicting the fact that the blocks of ${\cal F}_1$ partition $G/N\setminus\{N\}$.
Thus $B(n)$ is disjoint from every block of ${\cal F}_2$.
Now assume that $B(n)$ and $B'(n')$ have an element in common for some blocks $B=\{g_1+N,\dots,g_k+N\}$ and
$B'=\{g'_1+N,\dots,g'_k+N\}$ of ${\cal F}_1$ and some elements $n$, $n'$ of $N$.
Thus we have $g_i+in=g'_j+jn'$ for suitable $i, j\in\{1,\dots,k\}$. This implies that $g_i+N=g'_j+N$,
hence $B=B'$ and $i=j$ since ${\cal F}_1$ is disjoint. It follows that $i(n-n')=0$, hence $ord(n-n')$ is a divisor of $i$
which implies $ord(n-n')=1$ since every divisor of $ord(G)$ distinct from 1 is greater than $k$. We conclude that
$B=B'$ and $n=n'$, i.e., $B(n)=B'(n')$.
Finally any two distinct blocks of ${\cal F}_2$ are disjoint by assumption. The assertion follows.
\end{proof}
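As a concrete illustration of the construction in the proof (a sketch added here; the starting $(7,3,2)$-DDF $\{1,2,4\},\{3,5,6\}$ in $\mathbb{Z}_7$ is one convenient choice), one can assemble a $(49,3,2)$-DDF in $G=\mathbb{Z}_{49}$ with $N=7\mathbb{Z}_{49}$:
\begin{verbatim}
# Illustrative instance (not from the paper) of the composition in the proof:
# G = Z_49, N = 7*Z_49, p = 7, using the (7,3,2)-DDF {1,2,4},{3,5,6}.
from collections import Counter

v, k, lam = 49, 3, 2
F1 = [(1, 2, 4), (3, 5, 6)]             # a (7,3,2)-DDF in G/N = Z_7
N = [7 * t for t in range(7)]
F2 = [[7 * g for g in B] for B in F1]   # the same DDF transported to N

F = [[(B[i] + (i + 1) * n) % v for i in range(k)]
     for B in F1 for n in N] + F2       # the blocks B(n), plus F2

diffs = Counter((x - y) % v for B in F for x in B for y in B if x != y)
assert sorted(g for B in F for g in B) == list(range(1, v))  # disjointness
assert all(diffs[g] == lam for g in range(1, v))             # lambda = 2
\end{verbatim}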
We are now finally able to prove the main result of this section.
\begin{corollary}
If all prime divisors of $v$ are congruent to $1$ $($mod $k)$, then there exists a $(v,k,k-1)$-DDF and a $(v,k,{k-1\over2})$-DDF in $G$
for any group $G$ of order $v$.
\end{corollary}
\begin{proof}
We know that there exists a $(p,k,k-1)$-DDF and a $(p,k,{k-1\over2})$-DDF for any prime $p\equiv1$ (mod $k$). Then the assertion immediately follows from Theorem \ref{composingDDFs}.
\end{proof}
\normalsize
\section*{Acknowledgement}
This work has been performed under the auspices of the G.N.S.A.G.A. of the
C.N.R. (National Research Council) of Italy.
| {
"timestamp": "2017-05-16T02:03:59",
"yymm": "1705",
"arxiv_id": "1705.04844",
"language": "en",
"url": "https://arxiv.org/abs/1705.04844",
"abstract": "A disjoint $(v,k,k-1)$ difference family in an additive group $G$ is a partition of $G\\setminus\\{0\\}$ into sets of size $k$ whose lists of differences cover, altogether, every non-zero element of $G$ exactly $k-1$ times. The main purpose of this paper is to get the literature on this topic in order, since some authors seem to be unaware of each other's work. We show, for instance, that a couple of heavy constructions recently presented as new, had been given in several equivalent forms over the last forty years. We also show that they can be quickly derived from a general nearring theory result which probably passed unnoticed by design theorists and that we restate and reprove in terms of differences. We exploit this result to get an infinite class of disjoint $(v,k,k-1)$ difference families coming from the Fibonacci sequence. Finally, we will prove that if all prime factors of $v$ are congruent to 1 modulo $k$, then there exists a disjoint $(v,k,k-1)$ difference family in every group, even non-abelian, of order $v$.",
"subjects": "Combinatorics (math.CO)",
"title": "On disjoint $(v,k,k-1)$ difference families",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9871787830929848,
"lm_q2_score": 0.800691997339971,
"lm_q1q2_score": 0.790426151566364
} |
https://arxiv.org/abs/2112.09269 | Seaweed Algebras and the Index Statistic for Partitions | In 2018 Coll, Mayers, and Mayers conjectured that the $q$-series $( q, -q^3; q^4 )_\infty^{-1}$ is the generating function for a certain parity statistic related to the index of seaweed algebras. We prove this conjecture. Thanks to earlier work by Seo and Yee, the conjecture would follow from the non-negativity of the coefficients of this infinite product. Using a variant of the circle method along with Euler-Maclaurin summation, we establish this non-negativity, thereby confirming the Coll-Mayers-Mayers Conjecture. | \section{Introduction and Statement of Results}
Recall that a partition $\lambda$ of the non-negative integer $n$ is a non-increasing sequence of positive integers $\lambda = \left( \lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_k \right)$ which sum to $n$, and we say $\emptyset$ is the only partition of zero. Define the $q$-series $G(q)$ by
\begin{align*}
G(q) := \sum_{n \geq 0} a(n) q^n := \dfrac{1}{\left( q, -q^3; q^4 \right)_\infty},
\end{align*}
where $\left( a; q \right)_\infty := \prod_{n=0}^\infty \left( 1 - a q^n \right)$ and $\left( a, b; q \right)_\infty := \left( a; q \right)_\infty \left( b; q \right)_\infty$. The coefficients $a(n)$ can be found in the Online Encyclopedia of Integer Sequences (OEIS) as entry A300574. Interestingly, the values $a(n)$ appear to always be non-negative. Coll, Mayers, and Mayers in \cite{CollMayersMayers} have conjectured a combinatorial interpretation for the sequence $a(n)$, from which non-negativity would follow.
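The coefficients $a(n)$ are straightforward to generate; the following sketch (an illustration added here, not part of the original text) expands the defining product and checks non-negativity of the first $200$ coefficients:
\begin{verbatim}
# Illustrative computation (not from the paper) of the coefficients a(n) of
# G(q) = prod_{j>=0} 1 / ((1 - q^{4j+1}) (1 + q^{4j+3})), truncated at N.
N = 200
g = [0] * (N + 1)
g[0] = 1

def mul_geometric(series, m, sign):
    # multiply in place by 1/(1 - sign*q^m) = sum_{t>=0} sign^t q^{m t}
    for n in range(m, N + 1):
        series[n] += sign * series[n - m]

for j in range(N // 4 + 1):
    mul_geometric(g, 4 * j + 1, +1)
    if 4 * j + 3 <= N:
        mul_geometric(g, 4 * j + 3, -1)

assert all(c >= 0 for c in g)   # non-negativity up to N, cf. OEIS A300574
print(g[:16])
\end{verbatim}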
The conjectured combinatorial interpretation of $a(n)$ in \cite{CollMayersMayers} is expressed in terms of the {\it index} of a partition $\lambda$, which is defined in terms of certain Lie algebras called {\it seaweed algebras}. Given two partitions $\{ a_j \}_{1 \leq j \leq m}$ and $\{ b_j \}_{1 \leq j \leq \ell}$ of $n$ and letting $\{ e_j \}_{1 \leq j \leq n}$ be the standard basis of $k^n$ for some field $k$, Dergachev and Kirillov \cite{DergachevKirillov} defined seaweed algebras as Lie subalgebras of $\text{Mat}(n)$ which preserve the vector spaces $\text{span}\left( e_1, e_2, \dots, e_{a_1 + \dots + a_j} \right)$ for $1 \leq j \leq m$ and $\text{span}\left( e_{b_1 + \dots + b_j + 1}, \dots, e_n \right)$ for $1 \leq j \leq \ell$.
In \cite[Theorem 5.1]{DergachevKirillov}, Dergachev and Kirillov obtain an exact formula for the index of seaweed algebras. Motivated by the parameterization of seaweed algebras by partitions, Coll, Mayers, and Mayers define the index of a pair of partitions $\left( \lambda, \mu \right)$ of $n$ as the index of the associated seaweed algebra. There are a number of interesting specializations of the index. For example, certain indexes are related to the 2-colored partition function \cite{CollMayersMayers}.
The specialization which we call the {\it index} of the partition $\lambda \vdash n$ is the index of the pair $\left( \lambda, \{ n \} \right)$. Studying this index statistic, Coll, Mayers, and Mayers defined $e_n$ as the number of partitions of $n$ into odd parts whose index is even, and likewise $o_n$ as the number of partitions of $n$ into odd parts whose index is odd. With this notation, the conjecture of Coll, Mayers, and Mayers may be stated as follows.
\begin{namedconjecture}[Coll--Mayers--Mayers] \label{MAIN CONJECTURE}
The following are true:
\noindent \textnormal{(1)} All the coefficients of $G(q)$ are non-negative.
\noindent \textnormal{(2)} We have $G(q) = \sum\limits_{n \geq 0} \left| e_n - o_n \right| q^n$.
\end{namedconjecture}
In recent years, several papers have made progress towards proving this conjecture. Seo and Yee \cite{SeoYee} proved that (1) implies the full Coll--Mayers--Mayers Conjecture\footnote{Seo and Yee also conjectured non-negativity for the coefficients of $\left( q, -q^{m-1}; q^m \right)_\infty^{-1}$ for all $m \geq 4$, and if non-negativity is assumed then Corollary 3 of \cite{SeoYee} would yield combinatorial interpretations similar to the Coll--Mayers--Mayers Conjecture for $m = 4d$.}. After this work, Chern \cite{Chern} demonstrated using the circle method that $a(n) \geq 0$ for all $n > 2.4 \times 10^{14}$ and verified the non-negativity by computer for $0 \leq n \leq 10000$. In this paper, we use a modified approach, involving a different version of the circle method and Euler-Maclaurin summation, which reduces the last possible counterexample to $n \leq 4800$ and subsequently proves the Coll--Mayers--Mayers Conjecture.
\begin{theorem} \label{MAIN}
The Coll--Mayers--Mayers Conjecture is true.
\end{theorem}
The remainder of the paper is structured as follows. In Section \ref{Preliminaries} we lay out preliminary facts needed in the proof of Theorem \ref{MAIN}. In Section \ref{Estimates of Key Terms}, we collect various effective estimates related to $G(q)$. Finally, in Section \ref{Proof of MAIN} we prove Theorem \ref{MAIN} by an effective implementation of a variation of Wright's circle method.
\section*{Acknowledgements}
The author thanks Ken Ono, his Ph.D. advisor, for helpful discussions related to the results in this paper, and thanks Kathrin Bringmann, Shane Chern, Josh Males, and Ken Ono for helpful comments on earlier versions of this paper. The author also acknowledges the support of Ken Ono's grants, namely the Thomas Jefferson Fund and the NSF (DMS-1601306 and DMS-2055118).
\section{Preliminaries} \label{Preliminaries}
\subsection{Bernoulli Polynomials} \label{Bernoulli Section}
We first recall important facts about the Bernoulli polynomials $B_n(x)$. These polynomials are defined classically by their generating function
\begin{align*}
\sum_{n \geq 0} \dfrac{B_n(x)}{n!} z^n = \dfrac{z e^{(x-1)z}}{1 - e^{-z}}.
\end{align*}
The Bernoulli numbers are the constant terms of these polynomials, i.e. $B_n = B_n(0)$. The Bernoulli polynomials appear prominently throughout number theory and satisfy many interesting and important properties. However, in our application, we will need only one value. The Bernoulli polynomials appear as coefficients in certain infinite series closely related to $G(q)$, and in this setting the only properties of Bernoulli polynomials which interest us are their size in the interval $[0,1]$. Here, Lehmer \cite{Lehmer} proved the bound
\begin{align} \label{Lehmer's Bound}
\left| B_n(x) \right| \leq \dfrac{2 \zeta(n) n!}{(2\pi)^{n}}
\end{align}
for $x \in [0,1]$ and $n \geq 2$; for $n = 2$ this is sharp, since $\max_{0 \leq x \leq 1} \left| B_2(x) \right| = B_2(0) = \frac{1}{6} = \frac{2 \zeta(2) \cdot 2!}{(2\pi)^2}$.
We will also make use of the infinite series
\begin{align*}
B_{r,t}(z) := \dfrac{e^{-\frac rt z}}{z\left( 1 - e^{-z} \right)} = \sum_{n \geq -2} \dfrac{B_{n+2}\left( 1 - \frac rt \right)}{(n+2)!} z^n,
\end{align*}
where $0 < r \leq t$ are integers. Due to Lehmer's bound, this Laurent expansion is absolutely convergent in the punctured disk $0 < |z| < 2\pi$. This absolute convergence is important for producing effective estimates of certain infinite sums related to $G(q)$, which will be seen in Lemma \ref{B_rt Bound}.
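As a numeric sanity check of this expansion (an illustration added here, using SymPy's Bernoulli polynomials; the truncation point $n=10$ and the sample values $r=1$, $t=4$, $z=0.3$ are arbitrary choices):
\begin{verbatim}
# Numeric sanity check (not from the paper) of the Laurent expansion of
# B_{r,t}(z), truncated at n = 10, for r = 1, t = 4, z = 0.3.
from math import exp, factorial
from sympy import bernoulli, Rational

r, t, z = 1, 4, 0.3
lhs = exp(-(r / t) * z) / (z * (1 - exp(-z)))
rhs = sum(float(bernoulli(n + 2, Rational(t - r, t))) * z ** n / factorial(n + 2)
          for n in range(-2, 11))
assert abs(lhs - rhs) < 1e-12
\end{verbatim}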
\subsection{Euler--Maclaurin Approximation}
For integers $M, N > 0$ and any analytic function $f(z)$, the classical Euler-Maclaurin summation formula says
\begin{align*}
\int_0^M f(z) dz = \dfrac{f(0) + f(M)}{2} + \sum_{m = 1}^{M-1} f(m) + \sum_{n = 1}^{N-1} \dfrac{(-1)^{n+1} B_{n+1}}{(n+1)!} &\left( f^{(n)}(M) - f^{(n)}(0) \right) \\ &+ (-1)^N \int_0^M f^{(N)}(x) \dfrac{\widehat{B}_N(x)}{N!} dx,
\end{align*}
where $\widehat{B}_n(x) := B_n\left( x - \lfloor x \rfloor \right)$. Zagier observed \cite[Proposition 3]{Zagier} that for functions $f(z)$ with suitable growth conditions at infinity, the Euler-Maclaurin formula provides a very precise tool for approximating certain infinite sums involving $f(z)$. A modest generalization of Zagier's observation is that if $f(z) \sim \sum_{n = 0}^\infty c_n z^n$ as $z \to 0$ in a conical region $D_\delta := \{ z \neq 0 : \left| \mathrm{Arg}(z) \right| < \frac{\pi}{2} - \delta \}$, and if $f(z)$ and its derivatives decay faster than any negative power of $z$ as $z \to \infty$, then
\begin{align*}
\sum_{m \geq 0} f\left( (m+a)z \right) \sim \dfrac{I_f}{z} - \sum_{n = 0}^\infty c_n \dfrac{B_{n+1}(a)}{n+1} z^n
\end{align*}
as $z \to 0$ in $D_\delta$ for any real number $0 < a \leq 1$, where $I_f := \int_0^\infty f(x) dx$. The growth condition imposed on $f(z)$ near infinity is commonly referred to as $f(z)$ having {\it rapid decay} at infinity in the literature. The symbol $\sim$ for asymptotic expansions is used in a strong sense, namely we say $f(z) \sim \sum_{n=0}^\infty c_n z^n$ if $f(z) - \sum_{n=0}^{N-1} c_n z^n = O\left( z^N \right)$ as $z \to 0$. Because the proof of this expansion is based on an exact formula, it may be readily refined in a variety of ways. One of the most desirable generalizations in practice is to allow $f(z)$ to decay more slowly towards infinity, requiring only that $f(z) = O\left( z^{- 1 - \epsilon} \right)$ as $z \to \infty$, in which case we say $f(z)$ has {\it sufficient decay} at infinity. Such functions may have asymptotic expansions of the form $f(z) \sim \sum_{n = n_0}^\infty c_n z^n$ as $z \to 0$ in $D_\delta$ for any integer $n_0$. In this case, one may essentially treat the principal parts and non-principal parts separately, and this treatment gives the following more general lemma.
\begin{lemma}[Lemma 2.2 of \cite{BCMO}] \label{Euler-Maclaurin Sufficient Decay}
Let $0 < a \leq 1$ and $A, \delta \in \mathbb{R}^+$, and assume that $f(z)$ has the asymptotic expansion $f(z) \sim \sum_{n=n_0}^{\infty} c_n z^n$ as $z \rightarrow 0$ in $D_\delta$ for some integer $n_0$, possibly negative. Furthermore, assume that $f$ and all of its derivatives are of sufficient decay in $D_\delta$ as $z \to \infty$. Then
\begin{align*}
\sum_{n=0}^\infty f((n+a)z)\sim \sum_{n=n_0}^{-2} c_{n} \zeta(-n,a)z^{n}+ \frac{I_{f,A}^*}{z}-\frac{c_{-1}}{z} \left( \Log \left(Az \right) +\psi(a)+\gamma \right)-\sum_{n=0}^\infty c_n \frac{B_{n+1}(a)}{n+1} z^n
\end{align*}
uniformly as $z \rightarrow 0$ in $D_\delta$, where
\begin{align*}
I_{f,A}^*:= \int_{0}^{\infty} \left(f(u)-\sum_{n=n_0}^{-2}c_{n}u^n-\frac{c_{-1}e^{-Au}}{u}\right)du.
\end{align*}
\end{lemma}
Since the classical Euler--Maclaurin formula is an exact formula, the error terms in these asymptotic expansions can be made quite explicit. When $f(z) = \sum_{n = 0}^\infty c_n z^n$ in some disk $|z| < R$ has rapid decay at infinity, these bounds may be explicitly computed in the following lemma.
\begin{lemma}[Proposition 2.6 of \cite{Craig}] \label{Euler-Maclaurin Rapid Decay Effective}
Let $f(z)$ be $C^\infty$ in $D_\delta$ with power series expansion $f(z) = \sum_{n = 0}^\infty c_n z^n$ that converges absolutely in the region $|z| < R$ for some positive constant $R$. Suppose $f(z)$ and all its derivatives have rapid decay as $z \to \infty$ in $D_\delta$. Then for any real number $0 < a \leq 1$ and any integer $N > 0$, we have
\begin{align*}
\left| \sum_{m \geq 0} f\left( (m+a)z \right) - \dfrac{I_f}{z} + \sum_{n = 0}^{N-1} c_n \dfrac{B_{n+1}(a)}{n+1} z^n \right| \leq \dfrac{M_{N+1} J_{f,N+1}}{(N+1)!} |z|^N + \dfrac{3}{5} \sum_{n = N}^\infty \dfrac{n!}{(n-N)!} |c_n| |az|^n,
\end{align*}
provided $z \in D_\delta$ and $0 < |z| < R$, where $M_N := \max\limits_{0 \leq x \leq 1} \left| B_N(x) \right|$ and $J_{f,N} := \int_0^\infty \left| f^{(N)}\left( w \right) \right| dw$.
\end{lemma}
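Before combining the two lemmas, it may be instructive to see the approximation in action on a toy example (an illustration added here, not part of the original text): for $f(z)=e^{-z}$ one has $c_n=(-1)^n/n!$, $I_f=1$, and the shifted sum has the closed form $e^{-az}/(1-e^{-z})$:
\begin{verbatim}
# Toy illustration (not from the paper): Euler-Maclaurin approximation of
# sum_{m>=0} f((m+a)z) for f(z) = e^{-z}, with c_n = (-1)^n/n!, I_f = 1.
from math import exp, factorial
from sympy import bernoulli, Rational

a, z, N = Rational(1, 3), 0.1, 8
exact = exp(-float(a) * z) / (1 - exp(-z))
approx = 1 / z - sum((-1) ** n / factorial(n)
                     * float(bernoulli(n + 1, a)) / (n + 1) * z ** n
                     for n in range(N))
assert abs(exact - approx) < 1e-9
\end{verbatim}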
Results of this type appear in a variety of sources in the literature (see for example \cite{BCMO, BJM, Craig, Zagier}). For the proofs of Lemmas \ref{Euler-Maclaurin Sufficient Decay} and \ref{Euler-Maclaurin Rapid Decay Effective}, we refer the interested reader to the cited works. For our purposes, it is convenient to put them together into a single lemma which produces effective bounds in the setting of Lemma \ref{Euler-Maclaurin Sufficient Decay} by reducing to the case of Lemma \ref{Euler-Maclaurin Rapid Decay Effective}.
\begin{lemma} \label{Euler-Maclaurin General Effective}
Let $f(z)$ be $C^\infty$ in $D_\delta$ with Laurent series $f(z) = \sum_{n = n_0}^\infty c_n z^n$ that converges absolutely in the region $0 < |z| < R$ for some positive constant $R$. Suppose $f(z)$ and all its derivatives have sufficient decay as $z \to \infty$ in $D_\delta$. Then for any real number $0 < a \leq 1$ and any integer $N > 0$, we have
\begin{align*}
\bigg| \sum_{m \geq 0} f\left( (m+a)z \right) - \sum_{n = n_0}^{-2} c_n \zeta(-n,a) z^n - \dfrac{I_{f,A}^*}{z} &+ \dfrac{c_{-1}}{z} \left( \log\left( Az \right) + \gamma + \psi\left( a \right) \right) - \sum_{n = 0}^\infty c_n^* \dfrac{B_{n+1}(a)}{n+1} z^n \bigg| \\ &\leq \dfrac{M_{N+1} J_{g,N+1}}{(N+1)!} |z|^N + \dfrac{3}{5} \sum_{n = N}^\infty \dfrac{n!}{(n-N)!} |b_n| |az|^n,
\end{align*}
provided $z \in D_\delta$ and $|z| < R$, where $M_N$, $J_{g,N}$ are defined as in Lemma \ref{Euler-Maclaurin Rapid Decay Effective}, $b_n := c_n - \frac{(-A)^{n+1} c_{-1}}{(n+1)!}$, and
\begin{align*}
c_n^* := \begin{cases} c_n & \text{if } n \leq N-1, \\ \dfrac{(-A)^{n+1} c_{-1}}{(n+1)!} & \text{if } n \geq N. \end{cases}
\end{align*}
\end{lemma}
\begin{proof}
If we let
$$f(z) = g(z) + \dfrac{c_{-1} e^{-Az}}{z} + \sum_{n = n_0}^{-2} c_n z^n,$$
then $g(z)$ has rapid decay at infinity. Noting that $g(z) = \sum_{n = 0}^\infty b_n z^n$ and $I_g = I^*_{f,A}$ by definition, Lemma \ref{Euler-Maclaurin Rapid Decay Effective} implies for $N > 0$ that
\begin{align*}
\left| \sum_{m \geq 0} g\left( (m+a)z \right) - \dfrac{I_{f,A}^*}{z} + \sum_{n = 0}^{N-1} b_n \dfrac{B_{n+1}(a)}{n+1} z^n \right| \leq \dfrac{M_{N+1} J_{g,N+1}}{(N+1)!} |z|^N + \dfrac{3}{5} \sum_{n = N}^\infty \dfrac{n!}{(n-N)!} |b_n| |az|^n
\end{align*}
for $z \in D_\delta$ with $0 < |z| < R$. Since we have
\begin{align*}
g(z) = f(z) - \dfrac{c_{-1} e^{-Az}}{z} - \sum_{n = n_0}^{-2} c_n z^n,
\end{align*}
it follows, for $0 < |z| < R$, that
\begin{align*}
\Bigg| \sum_{m \geq 0} \left[ f\left( (m+a)z \right) - \dfrac{c_{-1} e^{-A(m+a)z}}{(m+a)z} \right] - \sum_{n = n_0}^{-2} &c_n \zeta(-n,a) z^n - \dfrac{I_{f,A}^*}{z} + \sum_{n = 0}^{N-1} b_n \dfrac{B_{n+1}(a)}{n+1} z^n \Bigg| \\ &\leq \dfrac{M_{N+1} J_{g,N+1}}{(N+1)!} |z|^N + \dfrac{3}{5} \sum_{n = N}^\infty \dfrac{n!}{(n-N)!} |b_n| |az|^n.
\end{align*}
By the definition of $b_n$ we have
\begin{align*}
\sum_{n = 0}^{N-1} b_n \dfrac{B_{n+1}(a)}{n+1} z^n = \sum_{n = 0}^{N-1} c_n \dfrac{B_{n+1}(a)}{n+1} z^n - \sum_{n = 0}^{N-1} \dfrac{(-A)^{n+1} c_{-1}}{(n+1)!} \dfrac{B_{n+1}(a)}{n+1} z^n,
\end{align*}
and if we adopt the notation
\begin{align*}
\dfrac{c_{-1}}{z} H_{a,N}(z) := \dfrac{c_{-1}}{z} \left( \sum_{m=0}^\infty \dfrac{e^{-(m+a)z}}{m+a} + \sum_{n=0}^{N-1} \dfrac{B_{n+1}(a)}{(n+1) (n+1)!} (-z)^n \right),
\end{align*}
it follows that
\begin{align*}
\bigg| \sum_{m \geq 0} f\left( (m+a)z \right) - \sum_{n = n_0}^{-2} c_n \zeta(-n,a) z^n -& \dfrac{c_{-1}}{z} H_{a,N}(Az) - \dfrac{I_{f,A}^*}{z} + \sum_{n = 0}^{N-1} c_n \dfrac{B_{n+1}(a)}{n+1} z^n \bigg| \\ &\leq \dfrac{M_{N+1} J_{g,N+1}}{(N+1)!} |z|^N + \dfrac{3}{5} \sum_{n = N}^\infty \dfrac{n!}{(n-N)!} |b_n| |az|^n.
\end{align*}
By \cite[Equation 5.10]{BJM}, it is known that
\begin{align*}
H_a(z) := \sum_{m = 0}^\infty \dfrac{e^{-(m+a)z}}{m+a} + \sum_{n=0}^\infty \dfrac{B_{n+1}(a)}{(n+1) (n+1)!} (-z)^n
\end{align*}
satisfies $H_a(Az) = - \log(Az) - \gamma - \psi(a)$ for any $A > 0$. Since $$H_{a,N}(Az) = H_a(Az) - \sum_{n=N}^\infty \frac{B_{n+1}(a)}{(n+1) (n+1)!} (-Az)^n,$$
this completes the proof.
\end{proof}
Because $B_{r,t}(z)$ is of sufficient decay, Lemma \ref{Euler-Maclaurin General Effective} can be applied for $0 < |z| < 2\pi$. To state this result, we first introduce convenient notation. Let
\begin{align*}
\beta_{r,t} := \log\left(\Gamma\left(\frac rt\right) \right) - \frac 12\log(2\pi), \hspace{0.5in} g_{r,t}(z) := B_{r,t}(z) - \frac{1}{z^2} - \frac{\left( \frac 12 - \frac rt \right) e^{-\frac rt z}}{z},
\end{align*}
and introduce the functions $F_a^{r,t}(z)$, $E_a^{r,t}(z)$ defined by
\begin{align} \label{B_rt Approx Def}
F_a^{r,t}(z) := \dfrac{\zeta(2,a)}{z^2} + \dfrac{\beta_{r,t}}{z} - \dfrac{1}{z} \left( \frac 12 - \frac rt \right) \left( \log(z) + \gamma + \psi(a) \right) + \sum_{n = 0}^\infty c_n^* \dfrac{B_{n+1}(a)}{n+1} z^n
\end{align}
and
\begin{align} \label{B_rt Error Def}
E_a^{r,t}(z) := \dfrac{J_{g_{r,t},2}}{12} |z| + \dfrac{3}{5} \sum_{n = 1}^\infty n \left| \dfrac{B_{n+2}\left( 1 - \frac rt \right)}{(n+2)!} - \dfrac{(-r)^{n+1} B_1\left( 1 - \frac rt \right)}{t^{n+1} (n+1)!} \right| |az|^n,
\end{align}
where we define the coefficients $c_n^*$ as in Lemma \ref{Euler-Maclaurin General Effective} by
\begin{align*}
c_n^* = \begin{cases} \dfrac{B_2\left( 1 - \frac rt \right)}{2} & \text{ if } n = 0, \\ \dfrac{(-r)^{n+1} \left( \frac 12 - \frac rt \right)}{t^{n+1} (n+1)!} & \text{ otherwise}. \end{cases}
\end{align*}
We now state our application of Lemma \ref{Euler-Maclaurin General Effective} to $B_{r,t}(z)$.
\begin{lemma} \label{B_rt Bound}
Let $0 < r \leq t$ be integers and $\delta > 0$ a constant. Then for any real number $0 < a \leq 1$ and $0 < |z| < 2\pi$, we have
\begin{align*}
\bigg| \sum_{m \geq 0} B_{r,t}\left( (m+a)z \right) - F_a^{r,t}(z) \bigg| \leq E_a^{r,t}(z).
\end{align*}
\end{lemma}
\begin{proof}
By the observations of Section \ref{Bernoulli Section}, $B_{r,t}(z)$ satisfies the criteria of Lemma \ref{Euler-Maclaurin General Effective}, and therefore, for any $A > 0$ and with $N = 1$, we have
\begin{align*}
\bigg| \sum_{m \geq 0} B_{r,t}\left( (m+a)z \right) - \dfrac{\zeta(2,a)}{z^2} &- \dfrac{I_{B_{r,t},A}^*}{z} + \dfrac{c_{-1}}{z}\left( \log(Az) + \gamma + \psi(a) \right) - \sum_{n = 0}^\infty c_n^* \dfrac{B_{n+1}(a)}{n+1} z^n \bigg| \\ &\leq \dfrac{M_2 J_{g_{r,t},2}}{2} |z| + \dfrac{3}{5} \sum_{n = 1}^\infty n \left| \dfrac{B_{n+2}\left( 1 - \frac rt \right)}{(n+2)!} - \dfrac{(-A)^{n+1} c_{-1}}{(n+1)!} \right| |az|^n.
\end{align*}
To simplify the integral
\begin{align*}
I_{B_{r,t},A}^* = \int_{0}^\infty \left(\dfrac{e^{-\frac rt z}}{z\left( 1 - e^{-z} \right)} - \dfrac{1}{z^2} + \left( \dfrac{r}{t} - \dfrac{1}{2} \right) \frac{e^{-Az}}{z} \right) dz,
\end{align*}
we use the substitutions $z \mapsto \frac{t}{r} z$ and $A = \frac{r}{t}$, which give
\begin{align*}
I_{B_{r,t},\frac{r}{t}}^* = \int_{0}^\infty \left(\dfrac{e^{-z}}{z \left( 1 - e^{- \frac{t}{r} z} \right)} - \dfrac{1}{\frac{t}{r} z^2} + \left( \dfrac{r}{t} - \dfrac{1}{2} \right) \frac{e^{-z}}{z} \right) dz.
\end{align*}
Lemma 2.3 of \cite{BCMO} states that for any real number $N > 0$,
\begin{multline*}
\int_0^\infty\left(\frac{e^{-x}}{x\left(1-e^{Nx}\right)}-\frac{1}{Nx^2}+\left(\frac 1N-\frac 12\right)\frac{e^{-x}}{x} \right)dx
=\log\left(\Gamma\left(\frac 1N\right) \right) +\left(\frac 12-\frac 1N\right) \log\left(\frac 1N\right)-\frac 12\log(2\pi),
\end{multline*}
and so the case $N = \frac tr$ implies
\begin{align*}
I_{B_{r,t},\frac rt}^* = \log\left(\Gamma\left(\frac rt\right) \right) +\left(\frac 12-\frac rt\right) \log\left(\frac rt\right)-\frac 12\log(2\pi) = \beta_{r,t} + \left(\frac 12-\frac rt\right) \log\left(\frac rt\right).
\end{align*}
A short calculation therefore shows
\begin{align*}
\bigg| \sum_{m \geq 0} B_{r,t}\left( (m+a)z \right) - \dfrac{\zeta(2,a)}{z^2} &- \dfrac{I_{B_{r,t},\frac rt}^*}{z} + \dfrac{B_1\left( 1 - \frac rt \right)}{z} \left( \log\left(\frac{r}{t} z\right) + \gamma + \psi(a) \right) - \sum_{n = 0}^\infty c_n^* \dfrac{B_{n+1}(a)}{n+1} z^n \bigg| \\ &\leq \dfrac{J_{g_{r,t},2}}{12} |z| + \dfrac{3}{5} \sum_{n = 1}^\infty n \left| \dfrac{B_{n+2}\left( 1 - \frac rt \right)}{(n+2)!} - \dfrac{(-r)^{n+1} B_1\left( 1 - \frac rt \right)}{t^{n+1} (n+1)!} \right| |az|^n.
\end{align*}
By the definitions \eqref{B_rt Approx Def} and \eqref{B_rt Error Def}, and since the term $\left(\frac 12-\frac rt\right) \log\left(\frac rt\right)$ in $I_{B_{r,t},\frac rt}^*$ cancels with the $\log\left(\frac rt\right)$ contribution from $\frac{B_1\left( 1 - \frac rt \right)}{z} \log\left(\frac rt z\right)$, this completes the proof.
\end{proof}
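For readers who wish to reproduce the integral evaluation used in the proof, the following Python snippet (our illustration using the \texttt{mpmath} library, not part of the original argument) checks the case $(r,t) = (1,4)$ numerically.
\begin{verbatim}
from mpmath import mp, mpf, quad, exp, log, gamma, pi, inf
mp.dps = 30
r, t = mpf(1), mpf(4)
# integrand of I^*_{B_{r,t}, r/t} after the substitution z -> (t/r) z
f = lambda z: exp(-z)/(z*(1 - exp(-(t/r)*z))) - r/(t*z**2) \
              + (r/t - mpf(1)/2)*exp(-z)/z
lhs = quad(f, [0, inf])
rhs = log(gamma(r/t)) + (mpf(1)/2 - r/t)*log(r/t) - log(2*pi)/2
print(lhs, rhs)   # the two printed values agree
\end{verbatim}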
\section{Estimates of Key Terms} \label{Estimates of Key Terms}
The proof of Theorem \ref{MAIN} uses a variation of Wright's circle method. As with any variation of the circle method, there are various stages where estimates must be made. For ease of reference, this section collects the most important estimates.
This section is subdivided into three parts. The first two are dedicated to proving bounds on $G(q)$ on the {\it major arc} and {\it minor arc}, which play central roles in Wright's circle method and are defined in the first part. The last part considers elementary bounds on the functions $F_a^{r,t}(z)$ and $E_a^{r,t}(z)$ which make later computations more straightforward.
\subsection{Effective Major Arc Bounds}
Before we proceed, we define the terms {\it major arc} and {\it minor arc}. When using Wright's circle method, one must define the {\it major arc}: the region of a circle $C$ of fixed radius $|q|$, where $q = e^{-z}$, on which $q$ lies near the dominant pole of the generating function. In most examples this means that $q$ lies near 1, which is true in our application. In terms of $z$, our major arc will consist of those $z = x + iy$ satisfying $0 < |y| < 30x$. The remainder of the circle $C$, called the {\it minor arc}, consists of those $z$ satisfying $30x \leq |y| < \pi$.
\begin{prop} \label{Major Arc Bound}
Let $q = e^{-z}$ for $\mathrm{Re}(z) > 0$ and $z$ on the major arc $0 < |y| < 30x$.
\noindent \textnormal{(1)} We have for $0 < x < \frac{2}{5t}$ that
\begin{align*}
\left| \Log\left(\lp q^r; q^t \right)_\infty^{-1}\right) - tz F_1^{r,t}(tz) \right| \leq |tz| E_1^{r,t}(tz).
\end{align*}
\noindent \textnormal{(2)} We have for $0 < x < \frac{1}{5t}$ that
\begin{align*}
\left| \Log\left(\lp -q^r; q^t \right)_\infty^{-1}\right) - tz F_1^{r,t}(2tz) + tz F_{1/2}^{r,t}(2tz) \right| \leq |tz| \left( E_1^{r,t}(2tz) + E_{1/2}^{r,t}(2tz) \right).
\end{align*}
\end{prop}
\begin{proof}
By expanding logarithms into Taylor series, we obtain
\begin{align*}
\Log\left(\lp \varepsilon q^r; q^t \right)_\infty^{-1}\right) = - \sum_{n \geq 0} \Log\left( 1 - \varepsilon q^{tn+r} \right) = \sum_{n \geq 0} \sum_{m \geq 1} \dfrac{\varepsilon^m q^{m(tn+r)}}{m} = \sum_{m \geq 1} \dfrac{\varepsilon^m q^{rm}}{m\left( 1 - q^{tm} \right)}.
\end{align*}
Setting $q = e^{-z}$ and multiplying the above expression by $\frac{tz}{tz}$, we obtain
\begin{align*}
\Log\left(\lp \varepsilon q^r; q^t \right)_\infty^{-1}\right) = tz \sum_{m \geq 1} \varepsilon^m \dfrac{e^{-rmz}}{tmz\left( 1 - e^{-tmz} \right)} = tz \sum_{m \geq 1} \varepsilon^m B_{r,t}(tmz),
\end{align*}
where $B_{r,t}(z)$ is defined as in Section \ref{Bernoulli Section}. If $0 < x < \frac{2}{5}$, then we have for all $z$ on the major arc that $|z| = \sqrt{x^2 + y^2} < \frac{2\sqrt{145}}{5} < 2\pi$. Therefore, the Laurent expansion for $B_{r,t}(tz)$ is convergent when $0 < x < \frac{2}{5t}$ and $z$ lies on the major arc, and likewise for $B_{r,t}(2tz)$ if $0 < x < \frac{1}{5t}$. If we set $\varepsilon = 1$, (1) follows directly from Lemma \ref{B_rt Bound}. If $\varepsilon = -1$, by applying Lemma \ref{B_rt Bound} to each summand of
\begin{align*}
\Log\left(\lp -q^r; q^t \right)_\infty^{-1}\right) = tz \sum_{m \geq 0} B_{r,t}\left( (m+1) 2tz \right) - tz \sum_{m \geq 0} B_{r,t} \left( \left( m + \frac 12 \right) 2tz \right),
\end{align*}
(2) follows as well.
\end{proof}
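The series manipulation above is easy to test numerically. The sketch below (ours, not the authors'; it assumes the formula $B_{r,t}(w) = e^{-rw/t}/(w(1-e^{-w}))$ from Section \ref{Bernoulli Section}) compares the two sides of the identity for $\varepsilon = -1$, $(r,t) = (3,4)$ and a small real $z$.
\begin{verbatim}
import mpmath as mp
mp.mp.dps = 25
r, t, z, eps = 3, 4, mp.mpf('0.05'), -1
q = mp.exp(-z)
lhs = -mp.nsum(lambda n: mp.log(1 - eps*q**(t*n + r)), [0, mp.inf])
B = lambda w: mp.exp(-mp.mpf(r)/t*w)/(w*(1 - mp.exp(-w)))
rhs = t*z*mp.nsum(lambda m: eps**m * B(t*m*z), [1, mp.inf])
print(lhs, rhs)   # the two printed values agree
\end{verbatim}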
\subsection{Effective Minor Arc Bounds}
We now estimate $G(q)$ on the minor arc $30x \leq |y| < \pi$ when $x$ is small.
\begin{prop} \label{Minor Arc Bounds}
Suppose $z = x + iy$ satisfies $0 < x < \frac{\pi}{480}$ and $30x \leq |y| < \pi$. Then we have
\begin{align*}
\left| G\left( q \right) \right| < \exp\left( \dfrac{1}{5 x} \right).
\end{align*}
\end{prop}
\begin{proof}
As in the proof of Proposition \ref{Major Arc Bound}, we may use Taylor expansions to show
\begin{align*}
\Log\left( G(q) \right) = \Log\left(\lp q; q^4 \right)_\infty^{-1}\right) + \Log\left(\lp -q^3; q^4 \right)_\infty^{-1}\right) = \sum_{m \geq 1} \dfrac{q^m}{m\left( 1 + \left( -1 \right)^{m+1} q^{2m} \right)}.
\end{align*}
Note that the desired result follows by exponentiation if $\mathrm{Re}\left( \Log \left( G(q) \right) \right) < \frac{1}{5 x}$ for all $z$ satisfying $0 < x < \frac{\pi}{480}$ and $30x \leq |y| < \pi$. By the calculation
\begin{align*}
\mathrm{Re}\left( \dfrac{q^m}{m\left( 1 + \left( -1 \right)^{m+1} q^{2m} \right)} \right) = \dfrac{\cos\left( my \right) \left( |q|^m + \left( -1 \right)^{m+1} |q|^{3m} \right)}{m \left| 1 + \left( -1 \right)^{m+1} q^{2m} \right|^2},
\end{align*}
we have
\begin{align} \label{EQN No Split}
\mathrm{Re}\left( \Log\left( G(q) \right) \right) = \sum_{m \geq 1} \dfrac{\cos\left( my \right) \left( |q|^m + \left( -1 \right)^{m+1} |q|^{3m} \right)}{m \left| 1 + \left( -1 \right)^{m+1} q^{2m} \right|^2} = \sum_{m \geq 1} \dfrac{\cos\left( my \right) e^{-mx}}{m \left( 1 + \left( -1 \right)^m e^{-2mx} \right)}.
\end{align}
For $\frac{\pi}{2} \leq |y| < \pi$ and $0 < x < \frac{\pi}{480}$, the above expression gives negative values for $\mathrm{Re}\left( \Log \left( G(q) \right) \right)$, so clearly the required condition is satisfied. We may therefore assume without loss of generality that $30x \leq |y| < \frac{\pi}{2}$.
Having proven the claim for $\frac{\pi}{2} \leq |y| < \pi$, we now demonstrate a general method for lowering the value $\frac{\pi}{2}$. Using the inequality
\begin{align*}
\dfrac{\cos\left( my \right) \left( |q|^m + \left( -1 \right)^{m+1} |q|^{3m} \right)}{m \left| 1 + \left( -1 \right)^{m+1} q^{2m} \right|^2} \leq \dfrac{\cos\left( my \right) |q|^m}{m\left( 1 - |q|^{2m} \right)},
\end{align*}
for any positive integer $k > 0$ we may split off the first $k$ terms of \eqref{EQN No Split} to deduce that whenever $30x \leq |y| < \frac{\pi}{2k}$,
\begin{align*}
\mathrm{Re}\left( \log G(q) \right) &\leq \sum_{m = 1}^k \dfrac{\cos\left( my \right) |q|^m}{m\left| 1 + \left( -1 \right)^{m+1} q^{2m} \right|} + \sum_{m > k} \dfrac{\cos\left( my \right) \left( |q|^m + \left( -1 \right)^{m+1} |q|^{3m} \right)}{m \left| 1 + \left( -1 \right)^{m+1} q^{2m} \right|^2}
\\ &< \sum_{m=1}^k \cos\left( my \right) \left( \dfrac{|q|^m}{m \left| 1 + \left( -1 \right)^{m+1} q^{2m} \right|} - \dfrac{|q|^m}{m\left( 1 - |q|^{2m} \right)} \right) + \sum_{m \geq 1} \dfrac{\cos\left( my \right) |q|^m}{m\left( 1 - |q|^{2m} \right)}.
\end{align*}
Suppose the required inequality has been proven outside the region $30x \leq |y| < \frac{\pi}{2m}$ for some fixed $m$. On this region $\left( -1 \right)^{m+1} \cos\left( 2my \right) \geq - \cos\left( 60mx \right)$, and therefore we get
\begin{align*}
\left| 1 + \left( -1 \right)^{m+1} q^{2m} \right|^2 = 1 + 2\left( -1 \right)^{m+1} \cos\left( 2my \right) e^{-2mx} + e^{-4mx} \geq 1 - 2 \cos\left( 60mx \right) e^{-2mx} + e^{-4mx}.
\end{align*}
Using the Taylor expansion of $1 - 2 \cos\left( 60mx \right) e^{-2mx} + e^{-4mx}$, we can find a constant $\alpha_m > 0$ such that $\left| 1 + \left( -1 \right)^{m+1} q^{2m} \right| > \alpha_m x$ for all $0 < x < \frac{\pi}{480}$. Since $1 - |q|^{2m} = 1 - e^{-2mx} > 2mx$ and $|q|^m > e^{- \frac{m\pi}{480}}$ for $0 < x < \frac{\pi}{480}$, we arrive at the bound
\begin{align*}
\dfrac{|q|^m}{m \left| 1 + \left( -1 \right)^{m+1} q^{2m} \right|} - \dfrac{|q|^m}{m\left( 1 - |q|^{2m} \right)} < \dfrac{e^{- \frac{m\pi}{480}}}{2m^2 x} \left( \dfrac{2m}{\alpha_m} - 1 \right).
\end{align*}
If we suppose the claim is proven for all $\frac{\pi}{2k} \leq |y| < \pi$, then by splitting off terms up to $k$ we have for some constants $\alpha_1, \dots, \alpha_k$ that
\begin{align} \label{EQN k-term split}
\mathrm{Re}\left( \Log \left( G(q) \right) \right) < \sum_{m \geq 1} \dfrac{\cos\left( my \right) |q|^m}{m\left( 1 - |q|^{2m} \right)} + \sum_{m=1}^k \dfrac{\cos\left( my \right) e^{- \frac{m\pi}{480}}}{2m^2 x} \left( \dfrac{2m}{\alpha_m} - 1 \right).
\end{align}
The remainder of the proof consists of applying this procedure recursively until we have shown that $\mathrm{Re}\left( \Log \left( G(q) \right) \right) < \frac{1}{5x}$ on the entire minor arc. Since we have already reduced to the case $30x \leq |y| < \frac{\pi}{2}$, we may apply the case $k = 1$ of \eqref{EQN k-term split} with suitable $\alpha_1$. By using terms up to $x^8$ in the Taylor expansion of $1 - 2 \cos\left( 60mx \right) e^{-2mx} + e^{-4mx}$, we can choose $\alpha_1 = \sqrt{3508}$ and therefore
\begin{align*}
\mathrm{Re}\left( \Log \left( G(q) \right) \right) < \sum_{m \geq 1} \dfrac{\cos\left( my \right) |q|^m}{m\left( 1 - |q|^{2m} \right)} + \dfrac{\cos\left( y \right) e^{- \frac{\pi}{480}}}{2x}\left( \dfrac{2}{\sqrt{3508}} - 1 \right).
\end{align*}
Numerical calculations show that the resulting expression proves the required inequality for $\frac{\pi}{4} \leq |y| < \frac{\pi}{2}$, and we therefore may assume without loss of generality that $30x \leq |y| < \frac{\pi}{4}$. We may then implement our algorithm for $k = 2$. A quick calculation shows that $\alpha_2 = \sqrt{13200}$ is valid for $0 < x < \frac{\pi}{480}$, which gives
\begin{align*}
\mathrm{Re}\left( \Log \left( G(q) \right) \right) < \sum_{m \geq 1} \dfrac{\cos\left( my \right) |q|^m}{m\left( 1 - |q|^{2m} \right)} + \dfrac{\cos\left( y \right) e^{- \frac{\pi}{480}}}{2x}\left( \dfrac{2}{\sqrt{3508}} - 1 \right) + \dfrac{\cos\left( 2y \right) e^{-\frac{\pi}{240}}}{8x} \left( \dfrac{4}{\sqrt{13200}} - 1 \right).
\end{align*}
We check once again that we may reduce to the case $30x \leq |y| < \frac{\pi}{6}$, and we then apply the case $k = 3$ of the algorithm and obtain using $\alpha_3 = \sqrt{27000}$ that
\begin{align*}
\mathrm{Re}\left( \Log \left( G(q) \right) \right) < \sum_{m \geq 1} \dfrac{\cos\left( my \right) |q|^m}{m\left( 1 - |q|^{2m} \right)} + \sum_{m = 1}^3 \dfrac{\cos\left( my \right) e^{- \frac{m\pi}{480}}}{2m^2 x} \left( \dfrac{2m}{\alpha_m} - 1 \right).
\end{align*}
Checking values at $y = \frac{\pi}{8}$, this inequality proves that $\mathrm{Re}\left( \Log \left( G(q) \right) \right) < \frac{1}{5x}$ for all $\frac{\pi}{8} \leq |y| < \pi$ and $0 < x < \frac{\pi}{480}$. We can show quickly that the choice $\alpha_4 = 200$ reduces the region to $30x \leq |y| < \frac{\pi}{10}$, and so we may apply the case $k = 5$ using $\alpha_5 = \sqrt{55000}$ and
\begin{align*}
\mathrm{Re}\left( \Log \left( G(q) \right) \right) < \sum_{m \geq 1} \dfrac{\cos\left( my \right) |q|^m}{m\left( 1 - |q|^{2m} \right)} + \sum_{m = 1}^5 \dfrac{\cos\left( my \right) e^{-\frac{m\pi}{480}}}{2m^2 x} \left( \dfrac{2m}{\alpha_m} - 1 \right),
\end{align*}
which reduces the problem to the region $30x \leq |y| < \frac{\pi}{12}$. In this region, we have $\cos\left( my \right) > \cos\left( \frac{m\pi}{12} \right)$ for $1 \leq m \leq 5$, and we may then show with a direct computation that
\begin{align*}
\dfrac{\pi^2}{12x} + \sum_{m = 1}^5 \dfrac{\cos\left( my \right) e^{-m\pi/480}}{2m^2x} \left( \dfrac{2m}{\alpha_m} - 1 \right) < \dfrac{\pi^2}{12x} + \sum_{m = 1}^5 \dfrac{\cos\left( \frac{m\pi}{12} \right) e^{-m\pi/480}}{2m^2x} \left( \dfrac{2m}{\alpha_m} - 1 \right) \approx \dfrac{0.199}{x} < \dfrac{1}{5x}.
\end{align*}
Therefore, the proof is complete once we show that for all $30x \leq |y| < \pi$ and $0 < x < \frac{\pi}{480}$,
\begin{align*}
\mathrm{Re}\left( \Log \left( G(q) \right) \right) < \dfrac{\pi^2}{12x}.
\end{align*}
By \eqref{EQN No Split} we have
\begin{align*}
\mathrm{Re}\left( \Log \left( G(q) \right) \right) \leq \sum_{m \geq 1} \dfrac{|q|^m}{m\left( 1 - |q|^{2m} \right)} = \Log\left(\lp |q|; |q|^2 \right)_\infty^{-1}\right).
\end{align*}
This can be bounded using classical bounds for $P(q) := \left( q; q\right)_\infty^{-1}$. Recall that for $q = e^{2\pi i \tau}$, Dedekind's eta function $\eta(\tau) = q^{\frac{1}{24}} \left( q; q \right)_\infty$ satisfies the transformation law
\begin{align*}
\eta(\tau) = \sqrt{\dfrac{i}{\tau}} \eta\left( - \dfrac{1}{\tau} \right).
\end{align*}
Since $P(q) = q^{\frac{1}{24}} \eta(\tau)^{-1}$, using $\tau = \frac{iz}{2\pi}$ it follows that
\begin{align*}
P(q) = \sqrt{\dfrac{z}{2\pi}} e^{\frac{\pi^2}{6z} - \frac{z}{24}} P\left( e^{\frac{-4\pi^2}{z}} \right).
\end{align*}
Letting $q \mapsto |q|$ and taking logarithms, we get
\begin{align*}
\Log\left( P\left( |q| \right) \right) = \dfrac{\pi^2}{6x} + \dfrac{1}{2} \log(x) - \dfrac{1}{2} \log(2\pi) + \dfrac{x}{24} + \Log\left( P\left( e^{-\frac{4\pi^2}{x}} \right) \right).
\end{align*}
We obtain a similar formula for $\log P\left( |q|^2 \right)$ by taking $x \mapsto 2x$, and therefore we have
\begin{align*}
\Log\left( \left( |q|; |q|^2 \right)^{-1}_\infty\right) = \Log\left( \dfrac{P\left( |q| \right)}{P\left( |q|^2 \right)} \right) = \dfrac{\pi^2}{12x} - \dfrac{1}{2} \log(2) - \dfrac{x}{24} + \Log\left( P\left( e^{-\frac{4\pi^2}{x}} \right) \right) - \Log\left( P\left( e^{-\frac{2\pi^2}{x}} \right) \right).
\end{align*}
It is then straightforward to check for $0 < x < \frac{\pi}{480}$ that
\begin{align*}
\mathrm{Re}\left( \log G(q) \right) \leq \Log\left(\lp |q|; |q|^2 \right)_\infty^{-1}\right) < \dfrac{\pi^2}{12x},
\end{align*}
which completes the proof.
\end{proof}
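The constants $\alpha_m$ used in the proof can be confirmed by direct computation. For instance, the following sketch (ours, not the paper's code) estimates the minimum of $\left(1 - 2\cos(60x)e^{-2x} + e^{-4x}\right)/x^2$ on $\left(0, \frac{\pi}{480}\right)$, which is the kind of computation behind the choice $\alpha_1 = \sqrt{3508}$.
\begin{verbatim}
import mpmath as mp
mp.mp.dps = 20
h = lambda x: (1 - 2*mp.cos(60*x)*mp.exp(-2*x) + mp.exp(-4*x))/x**2
xs = [mp.pi/480*k/2000 for k in range(1, 2001)]
print(min(h(x) for x in xs))   # about 3510.9 > 3508,
# so |1 + q^2|^2 > 3508 x^2 on this range, as used for m = 1
\end{verbatim}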
\subsection{Bounds on $F_a^{r,t}(z)$ and $E_a^{r,t}(z)$}
We will need the following effective estimates of the functions $F_a^{r,t}(z)$ and $E_a^{r,t}(z)$ which appear in Lemma \ref{B_rt Bound}.
\begin{lemma} \label{E-Bounds}
Let $0 < a \leq 1$ be a real number and $z$ any complex number satisfying $0 < |az| < \frac{\pi}{6}$. Then we have
\begin{align*}
E_a^{1,4}(z) < \left( \dfrac{649}{480000} + \dfrac{99 a}{5000} \right) |z|
\end{align*}
and
\begin{align*}
E_a^{3,4}(z) < \left( \dfrac{5}{768} + \dfrac{186a}{5000} \right) |z|.
\end{align*}
\end{lemma}
\begin{proof}
We first consider the two integrals $J_{g_{1,4},2}$ and $J_{g_{3,4},2}$. The function $g^{\prime\prime}_{1,4}(x)$ has a unique positive real zero $\alpha \approx 15.4523$ and is positive if and only if $0 < x < \alpha$. The estimate $g^\prime_{1,4}(\alpha) < 0.0003$ then justifies the bound
\begin{align*}
J_{g_{1,4},2} := \int_0^\infty \left| g^{\prime \prime}_{1,4}(x) \right| dx < \dfrac{649}{40000}.
\end{align*}
We may similarly see that $g^{\prime\prime}_{3,4}(x) < 0$ for all $0 < x < \infty$ and
\begin{align*}
J_{g_{3,4},2} = \dfrac{5}{64}.
\end{align*}
We also have
\begin{align*}
\left| \dfrac{B_{n+2}\left( \frac 34 \right)}{(n+2)!} - \dfrac{(-1)^{n+1}}{4^{n+2} (n+1)!} \right| \leq \dfrac{M_{n+2}}{(n+2)!} + \dfrac{1}{4^{n+2} (n+1)!} \leq \dfrac{2 \zeta(n+2)}{(2\pi)^{n+2}} + \dfrac{1}{4^{n+2} (n+1)!},
\end{align*}
and therefore by applying elementary bounds, in particular that $\zeta(n+2) < \frac{\pi^2}{6}$ for $n > 0$, we have
\begin{align*}
\sum_{n = 1}^\infty n \left| \dfrac{B_{n+2}\left( \frac 34 \right)}{(n+2)!} - \dfrac{(-1)^{n+1}}{4^{n+2} (n+1)!} \right| |az|^n < \dfrac{|az|}{24\pi} \sum_{n=1}^\infty n \left( \dfrac{|az|}{2\pi} \right)^{n-1} + \dfrac{1}{16} \sum_{n=1}^\infty \left( \dfrac{|az|}{4} \right)^n \dfrac{1}{(n+1)!}.
\end{align*}
Now if we suppose that $0 < |az| < \frac{\pi}{6}$, then we get
\begin{align*}
\dfrac{|az|}{24\pi} \sum_{n = 1}^\infty n \left( \dfrac{|az|}{2\pi} \right)^{n-1} < \dfrac{|az|}{24\pi} \sum_{n = 1}^\infty \dfrac{n}{12^{n-1}} = \dfrac{|az|}{242\pi}
\end{align*}
and
\begin{align*}
\dfrac{1}{16} \sum_{n=1}^\infty \left( \dfrac{|az|}{4} \right)^n \dfrac{1}{(n+1)!} < \dfrac{|az|}{64} \sum_{n=1}^\infty \left( \dfrac{\pi}{24} \right)^{n-1} \dfrac{1}{(n+1)!} < \dfrac{53 |az|}{6400}.
\end{align*}
Combining this information, we conclude that
\begin{align*}
\sum_{n = 1}^\infty n \left| \dfrac{B_{n+2}\left( \frac 34 \right)}{(n+2)!} - \dfrac{(-1)^{n+1}}{4^{n+2} (n+1)!} \right| |az|^n < \left( \dfrac{1}{242 \pi} + \dfrac{53}{6400} \right) |az| < \dfrac{1}{100} |az|.
\end{align*}
The analogous calculation for $E_a^{3,4}(z)$ and $0 < |az| < \frac{\pi}{6}$ yields
\begin{align*}
\sum_{n = 1}^\infty n \left| \dfrac{B_{n+2}\left( \frac 14 \right)}{(n+2)!} + \dfrac{(-3)^{n+1}}{4^{n+2} (n+1)!} \right| |az|^n \leq \dfrac{1}{242\pi} |az| + \dfrac{3}{16} \sum_{n=1}^\infty \left( \dfrac{3|az|}{4} \right)^n \dfrac{1}{(n+1)!} < \dfrac{3}{100} |az|.
\end{align*}
The required inequalities for $E_a^{1,4}(z)$ and $E_a^{3,4}(z)$ then follow from definitions.
\end{proof}
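The integral bound on $J_{g_{1,4},2}$ can also be reproduced numerically. The sketch below is ours, not the paper's code; it uses the observation that $g''_{1,4}$ changes sign only at $\alpha$, so that $J_{g_{1,4},2} = 2g'_{1,4}(\alpha) - g'_{1,4}(0)$, together with $g'_{1,4}(0) = b_1 = -\frac{1}{64}$ (from the coefficient formula for $b_n$).
\begin{verbatim}
import mpmath as mp
mp.mp.dps = 30
g = lambda x: mp.exp(-x/4)/(x*(1 - mp.exp(-x))) - 1/x**2 \
              - mp.exp(-x/4)/(4*x)
alpha = mp.findroot(lambda x: mp.diff(g, x, 2), 15.45)
J = 2*mp.diff(g, alpha) + mp.mpf(1)/64   # J = 2 g'(alpha) - g'(0)
print(alpha, J, J < mp.mpf(649)/40000)   # alpha ~ 15.4523, True
\end{verbatim}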
We now estimate a certain combination of the functions $F_a^{r,t}(z)$ in a similar manner.
\begin{lemma} \label{F-Bounds}
For all $0 < |z| < \frac{\pi}{6}$, we have
\begin{align*}
\left| 4z F_1^{1,4}(4z) + 4z F_1^{3,4}(8z) - 4z F_{1/2}^{3,4}(8z) - \dfrac{\pi^2}{48z} - \dfrac{1}{4} \log(z) - \beta_{1,4} + \dfrac{\log(2) + \gamma}{4} \right| \leq \dfrac{|z|}{2}.
\end{align*}
\end{lemma}
\begin{proof}
We define the function
\begin{align*}
F(z) := 4z \left( F_1^{1,4}(4z) + F_1^{3,4}(8z) - F_{1/2}^{3,4}(8z) \right).
\end{align*}
Then $F(z)$ has a series expansion of the form
\begin{align*}
F(z) = \dfrac{\alpha_{-1}}{z} + \alpha_{\text{log}} \log(z) + \sum_{n = 0}^\infty \alpha_n z^n
\end{align*}
near $z = 0$ which comes from the analogous series expansions for $F_a^{r,t}(z)$. In particular, the first few coefficients are $\alpha_{-1} = \frac{\pi^2}{48}$, $\alpha_{\text{log}} = \frac 14$, $\alpha_0 = \beta_{1,4} + \frac{\gamma + \psi(1)}{4} + \frac{\psi\left( \frac{1}{2} \right) + \psi(1)}{8}$, and $\alpha_1 = - \frac{1}{96}$, and for $n \geq 2$ the coefficients are
\begin{align*}
\alpha_n = \dfrac{(-1)^n}{4n \cdot n!} \left( B_n(1) + 6^{n-1}\left( B_n\left( \frac 12 \right) - B_n(1) \right) \right).
\end{align*}
Since $\psi(1) = -\gamma$ and $\psi\left( \frac{1}{2} \right) = -2 \log(2) - \gamma$, this simplifies to $\alpha_0 = \beta_{1,4} - \frac{\log(2) + \gamma}{4} \approx 0.051493$. By \eqref{Lehmer's Bound} we have for $n > 1$ that
\begin{align*}
\left| \alpha_n \right| \leq \dfrac{1}{4n \cdot n!} \left( \left| B_n(1) \right| + 6^{n-1} \left| B_n\left( \frac 12 \right) \right| + 6^{n-1} \left| B_n(1) \right| \right) &\leq \dfrac{1}{4n \cdot n!} \left( \dfrac{2 \zeta(n) n!}{(2\pi)^n} + 6^{n-1} \cdot \dfrac{4 \zeta(n) n!}{(2\pi)^n} \right) \\ &< \dfrac{\pi^2}{12n} \left( \dfrac{1}{(2\pi)^n} + \dfrac{1}{3} \left( \dfrac{3}{\pi} \right)^n \right).
\end{align*}
The same inequality holds for $n=1$ by a direct calculation, and so
\begin{align*}
\left| F(z) - \dfrac{\alpha_{-1}}{z} - \alpha_{\text{log}} \log(z) - \alpha_0 \right| &< \dfrac{\pi^2}{12} \sum_{n=1}^\infty \dfrac{1}{n} \left( \dfrac{|z|}{2\pi} \right)^n + \dfrac{\pi^2}{36} \sum_{n=1}^\infty \dfrac{1}{n} \left( \dfrac{3|z|}{\pi} \right)^n \\ &= \dfrac{\pi |z|}{24} \sum_{n=1}^\infty \dfrac{1}{n} \left( \dfrac{|z|}{2\pi} \right)^{n-1} + \dfrac{\pi |z|}{12} \sum_{n=1}^\infty \dfrac{1}{n} \left( \dfrac{3|z|}{\pi} \right)^{n-1}
\end{align*}
for $0 < |z| < \frac{\pi}{6}$, and applying the upper limit on $|z|$ we conclude that
\begin{align*}
\left| F(z) - \dfrac{\alpha_{-1}}{z} - \alpha_{\text{log}} \log(z) - \alpha_0 \right| < \dfrac{\pi \log\left( \frac{12}{11} \right) |z|}{2} + \dfrac{\pi \log(2) |z|}{6} < \dfrac{|z|}{2}.
\end{align*}
The result then follows from the known values of $\alpha_n$ and $\alpha_{\text{log}}$.
\end{proof}
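The numerical value of $\alpha_0$ quoted in the proof is easy to confirm; here is a short check (ours) in Python's \texttt{mpmath}:
\begin{verbatim}
import mpmath as mp
mp.mp.dps = 20
alpha0 = mp.log(mp.gamma(mp.mpf(1)/4)) - mp.log(2*mp.pi)/2 \
         - (mp.log(2) + mp.euler)/4
print(alpha0)   # 0.0514934..., as quoted in the proof
\end{verbatim}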
\section{Proof of Theorem \ref{MAIN}} \label{Proof of MAIN}
We now proceed to the proof of Theorem \ref{MAIN}, which relies on a variation of Wright's circle method. We set $q = e^{-z}$ with $\mathrm{Re}\left( z \right) = \eta$. Since $G(q)$ has no poles inside the unit disk, we have by Cauchy's theorem that
\begin{align*}
a(n) = \dfrac{1}{2\pi i} \int_C \dfrac{G(q)}{q^{n+1}} dq,
\end{align*}
where $C$ is any circle oriented counterclockwise centered at 0 with radius $|q| = e^{-\eta}$. We choose $C$ so that $\eta = \frac{\pi}{\sqrt{48n}}$. As in Section \ref{Estimates of Key Terms}, we define the {\it major arc} $\widetilde C$ as that portion of $C$ satisfying $0 < \left| \mathrm{Im}(z) \right| < 30 \eta$ and the {\it minor arc} $C \backslash \widetilde C$ as that portion of $C$ satisfying $30\eta \leq \left| \mathrm{Im}(z) \right| < \pi$. We now decompose the integral for $a(n)$ along the major and minor arcs, i.e.,
\begin{align*}
a(n) = \dfrac{1}{2\pi i} \int_{\widetilde C} \dfrac{G(q)}{q^{n+1}} dq + \dfrac{1}{2\pi i} \int_{C \backslash \widetilde C} \dfrac{G(q)}{q^{n+1}} dq.
\end{align*}
Defining the function
\begin{align*}
G^*(q) := \exp\left( \dfrac{\pi^2}{48z} - \dfrac{1}{4} \log(z) - \beta_{1,4} + \dfrac{\log(2) + \gamma}{4} \right),
\end{align*}
we may further decompose the integral on the major arc so that
\begin{align*}
a(n) = \dfrac{1}{2\pi i} \int_{\widetilde C} \dfrac{G^*(q)}{q^{n+1}} dq + \dfrac{1}{2\pi i} \int_{\widetilde C} \dfrac{G(q) - G^*(q)}{q^{n+1}} dq + \dfrac{1}{2\pi i} \int_{C \backslash \widetilde C} \dfrac{G(q)}{q^{n+1}} dq =: J_1(n) + J_2(n) + J_3(n).
\end{align*}
We begin by estimating $J_2(n)$ and $J_3(n)$. Assuming that $\eta < \frac{\pi}{480}$, or equivalently that $n > 4800$, by Proposition \ref{Minor Arc Bounds} we know that $\left| G(q) \right| < \exp\left( \dfrac{1}{5 \eta} \right)$ on the minor arc. Since the length of $C \backslash \widetilde C$ is less than $2\pi$ and $\left| \int_{C \backslash \widetilde{C}} q^{-1} dq \right| < 2\pi \left| \log |q| \right| = 2 \pi \eta$, it follows that
\begin{align} \label{J3 Bound}
\left| J_3(n) \right| \leq \dfrac{1}{2\pi} \left| \int_{C \backslash \widetilde C} \dfrac{G(q)}{q^{n+1}} dq \right| < 2\pi \eta \exp\left( n \eta + \dfrac{1}{5 \eta} \right) = \dfrac{\pi^2}{2 \sqrt{3 n}} \exp\left( \left( \dfrac{\pi}{4\sqrt{3}} + \dfrac{4 \sqrt{3}}{5 \pi} \right) \sqrt{n} \right).
\end{align}
For convenience, define as in the proof of Lemma \ref{F-Bounds} the function
\begin{align*}
F(z) := 4z F_1^{1,4}(4z) + 4z F_1^{3,4}(8z) - 4z F_{1/2}^{3,4}(8z).
\end{align*}
Applying Proposition \ref{Major Arc Bound} to $G(q)$, we obtain
\begin{align*}
\left| \Log \left( G(q) \right) - F(z) \right| &\leq \left| \Log\left(\lp q; q^4 \right)_\infty^{-1}\right) - 4z F_1^{1,4}(4z) \right| + \left| \Log\left(\lp -q^3; q^4 \right)_\infty^{-1}\right) - 4z F_1^{3,4}(8z) + 4z F_{1/2}^{3,4}(8z) \right| \\ &\leq 4 |z| \left( E_1^{1,4}(4z) + E_1^{3,4}(8z) + E^{3,4}_{1/2}(8z) \right)
\end{align*}
on the major arc. Since the major arc is defined by $0 < \left| \mathrm{Im}(z) \right| < 30\eta$, on it we have $\eta \leq |z| \leq \sqrt{901} \eta < \frac{\pi \sqrt{901}}{480} < \frac{\pi}{6}$, so the hypothesis $0 < |z| < \frac{\pi}{6}$ is always satisfied, and therefore by Lemma \ref{E-Bounds} we have
\begin{align*}
\left| \Log \left( G(q) \right) - F(z) \right| < \dfrac{7}{5} |z|^2.
\end{align*}
By Lemma \ref{F-Bounds}, we know that
\begin{align*}
\left| F(z) - \Log \left( G^*(q) \right) \right| = \left| F(z) - \dfrac{\pi^2}{48z} - \dfrac{1}{4} \Log(z) - \beta_{1,4} + \dfrac{\log(2) + \gamma}{4} \right| \leq \dfrac{|z|}{2}
\end{align*}
for all $0 < |z| < \frac{\pi}{6}$. Therefore, we get
\begin{align} \label{Log G Bound}
\left| \Log\left( G(q) \right) - \Log\left( G^*(q) \right) \right| \leq \dfrac{1}{2} |z| + \dfrac{7}{5} |z|^2
\end{align}
for all $0 < |z| < \frac{\pi}{6}$. Using the series expansion of $\exp(z)$ along with \eqref{Log G Bound}, we have
\begin{align*}
\left| e^{\Log\left( G(q) \right) - \Log\left( G^*(q) \right)} - 1 \right| \leq \sum_{m=1}^\infty \dfrac{1}{m!} \left( \dfrac{1}{2} |z| + \dfrac{7}{5} |z|^2 \right)^m < \exp\left( \dfrac{\sqrt{901}}{2} \eta + \dfrac{217}{5} \eta^2 \right)
\end{align*}
and
\begin{align*}
\left| G^*(q) \right| < \dfrac{\Gamma\left( \frac 14 \right) 31^{\frac{1}{8}}}{e^{\frac{\gamma}{4}} 2^{\frac{5}{4}} 3^{\frac{1}{8}} \pi^{\frac{1}{2}}} \eta^{\frac{1}{4}} \exp\left( \dfrac{\pi^2}{48\eta} \right) < \dfrac{23}{10} \eta^{\frac{1}{4}} \exp\left( \dfrac{\pi^2}{48\eta} \right)
\end{align*}
on the major arc. Combining these bounds with simple estimates and the identity
\begin{align*}
\left| G(q) - G^*(q) \right| &\leq \left| G^*(q) \right| \cdot \left| e^{\Log\left( G(q) \right) - \Log\left( G^*(q) \right)} - 1 \right|,
\end{align*}
we have
\begin{align*}
\left| G(q) - G^*(q) \right| < \dfrac{23}{10} \eta^{\frac{1}{4}} \exp\left( \dfrac{\pi^2}{48\eta} + \dfrac{\sqrt{901}}{2} \eta + \dfrac{217}{5} \eta^2 \right)
\end{align*}
on the major arc. Since the length of $\widetilde C$ is less than $2\pi$ and $\left| \int_{\widetilde{C}} |q|^{-1} dq \right| < 2\pi \eta$ as before, we have
\begin{align} \label{J2 Bound}
\left| J_2(n) \right| \notag &\leq \dfrac{1}{2\pi} \int_{\widetilde C} \dfrac{\left| G(q) - G^*(q) \right|}{|q|^{n+1}} dq < \dfrac{23 \pi}{5} \eta^{\frac{5}{4}} \exp\left( n\eta + \dfrac{\pi^2}{48\eta} + \dfrac{\sqrt{901}}{2} \eta + \dfrac{217}{5} \eta^2 \right) \\ &< \dfrac{27}{5 n^{\frac{5}{8}}} \exp\left( \dfrac{\pi}{2\sqrt{3}} \sqrt{n} + \dfrac{\pi \sqrt{901}}{8 \sqrt{3n}} + \dfrac{217 \pi^2}{240 n} \right).
\end{align}
On the basis of \eqref{J3 Bound} and \eqref{J2 Bound}, $a(n)$ is estimated by
\begin{align*}
\left| a(n) - J_1(n) \right| < \dfrac{\pi^2}{2 \sqrt{3 n}} \exp\left( \left( \dfrac{\pi}{4\sqrt{3}} + \dfrac{4 \sqrt{3}}{5 \pi} \right) \sqrt{n} \right) + \dfrac{27}{5 n^{\frac{5}{8}}} \exp\left( \dfrac{\pi}{2\sqrt{3}} \sqrt{n} + \dfrac{\pi \sqrt{901}}{8 \sqrt{3n}} + \dfrac{217 \pi^2}{240 n} \right).
\end{align*}
Therefore, to show that $a(n) \geq 0$ it suffices to show that
\begin{align*}
J_1(n) > \dfrac{\pi^3}{24 n} \exp\left( \left( \dfrac{\pi}{4\sqrt{3}} + \dfrac{4 \sqrt{3}}{5 \pi} \right) \sqrt{n} \right) + \dfrac{27}{5 n^{\frac{5}{8}}} \exp\left( \dfrac{\pi}{2\sqrt{3}} \sqrt{n} + \dfrac{\pi \sqrt{901}}{8 \sqrt{3n}} + \dfrac{217 \pi^2}{240 n} \right).
\end{align*}
By the definitions of $J_1(n)$ and $G^*(q)$ we may write
\begin{align*}
J_1(n) = \dfrac{1}{2\pi i} \int_{\widetilde C} \dfrac{G^*(q)}{q^{n+1}} dq = \dfrac{2^{\frac{3}{4}} \pi^{\frac{1}{2}} e^{\frac{\gamma}{4}}}{\Gamma\left( \frac{1}{4} \right)} J_1^*(n),
\end{align*}
where
\begin{align*}
J_1^*(n) := \dfrac{1}{2\pi i} \int_{\widetilde C} z^{-\frac{1}{4}} \dfrac{\exp\left( \dfrac{\pi^2}{48z} \right)}{q^{n+1}} dq = \dfrac{1}{2\pi i} \int_{\widetilde D} z^{-\frac{1}{4}} \exp\left( \dfrac{\pi^2}{48z} + nz \right) dz,
\end{align*}
where $\widetilde D$ is the line segment in the $z$-plane defined by $\mathrm{Re}(z) = \eta$ and $\left| \mathrm{Im}(z) \right| \leq 2\eta$. Since $\frac{\Gamma\left( \frac{1}{4} \right)}{2^{\frac{3}{4}} \pi^{\frac 12} e^{\frac{\gamma}{4}}} > \frac{21}{20}$, for $a(n) \geq 0$ to hold it suffices to show that $n$ satisfies
\begin{align*}
J_1^*(n) > E(n) := \dfrac{21 \pi^2}{40 \sqrt{3 n}} \exp\left( \left( \dfrac{\pi}{4\sqrt{3}} + \dfrac{4 \sqrt{3}}{5 \pi} \right) \sqrt{n} \right) + \dfrac{567}{200 n^{\frac{5}{8}}} \exp\left( \dfrac{\pi}{2\sqrt{3}} \sqrt{n} + \dfrac{\pi \sqrt{901}}{8 \sqrt{3n}} + \dfrac{217 \pi^2}{240 n} \right).
\end{align*}
Let $D_-, D_+$ be the line segments with $\mathrm{Im}(z) = - 2\eta, 2\eta$ and $\mathrm{Re}(z) \leq \eta$ respectively and $D = D_- \cup \widetilde D \cup D_+$ oriented counterclockwise. With this notation, the modified Bessel function $I_{-3/4}(z)$ is given for any $z \in \mathbb{C}$ by
\begin{align*}
\left( \dfrac{z}{2} \right)^{-\frac 34} I_{-\frac 34}(z) = \dfrac{1}{2\pi i} \int_D t^{-\frac{1}{4}} \exp\left( \dfrac{z^2}{4t} + t \right) dt.
\end{align*}
We now bound $J_1^*(n)$ against this type of Bessel function. In particular, from the definition of $I_{-\frac 34}(z)$ one may check that
\begin{align*}
\left( \dfrac{\pi \sqrt{n}}{2 \sqrt{12}} \right)^{-\frac 34} I_{-\frac 34} \left( \pi \sqrt{\dfrac{n}{12}} \right) - J_1^*(n) = \dfrac{1}{2\pi i} \int_{D \backslash \widetilde D} t^{-\frac{1}{4}} \exp\left( \dfrac{\pi^2}{48t} + nt \right) dt.
\end{align*}
It therefore suffices to bound the integrand on $D_-$ and $D_+$. For $t \in D_-$, let $t = \left( \eta - u \right) - 2\eta i$ for $u \geq 0$. Since $u \geq 0$, we see that $\mathrm{Re}\left( \frac{\pi^2}{48t} \right) \leq \frac{\pi^2}{48 \eta} = \frac{\pi}{4\sqrt{3}} \sqrt{n}$, and so we have on $D_-$ that
\begin{align*}
\left| \exp\left( \dfrac{\pi^2}{48t} + nt \right) \right| \leq \exp\left( \dfrac{\pi}{4\sqrt{3}} \sqrt{n} + n\left( \eta - u \right) \right) = \exp\left( \dfrac{\pi}{2 \sqrt{3}} \sqrt{n} - nu \right).
\end{align*}
The same bound holds for $t \in D_+$, and since on each we also have $|t|^{-\frac 14} \leq \frac{2^{\frac 14} 3^{\frac 18} n^{\frac 18}}{\pi^{\frac 18}}$, we have
\begin{align*}
\left| J_1^*(n) - \left( \dfrac{\pi}{4} \sqrt{\dfrac{n}{3}} \right)^{-\frac 34} I_{-\frac 34}\left( \pi \sqrt{\dfrac{n}{12}} \right) \right| &\leq \dfrac{3^{\frac 18} n^{\frac 18}}{2^{\frac 14} \pi^{\frac 54}} \exp\left( \dfrac{\pi}{2} \sqrt{\dfrac{n}{3}} \right) \int_0^\infty \exp\left( - nu \right) du \\ &< \dfrac{1}{5 n^{\frac 78}} \exp\left( \dfrac{\pi}{2} \sqrt{\dfrac{n}{3}} \right).
\end{align*}
We therefore conclude that $a(n) \geq 0$ whenever
\begin{align} \label{Final Reduction}
I_{-3/4}\left( \dfrac{\pi}{2} \sqrt{\dfrac{n}{3}} \right) > E(n) + \dfrac{1}{5 n^{\frac 78}} \exp\left( \dfrac{\pi}{2} \sqrt{\dfrac{n}{3}} \right).
\end{align}
A straightforward computer check shows that \eqref{Final Reduction} holds for $n > 2322$. The minor arc conditions $30\eta \leq \left| \mathrm{Im}(z) \right| < \pi$ and $0 < \eta < \frac{\pi}{480}$ require the stronger inequality $n > 4800$, and therefore we have shown $a(n) \geq 0$ for $n > 4800$. The result for $n \leq 4800$ is covered by Chern's previous calculation (which verifies all $n \leq 10000$), and so Theorem \ref{MAIN} is proven.
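For completeness, here is a sketch (ours, not the authors' code) of how the computer check of \eqref{Final Reduction} can be carried out with Python's \texttt{mpmath}; the function $E(n)$ is transcribed from the display above.
\begin{verbatim}
import mpmath as mp
mp.mp.dps = 30
pi, sq3 = mp.pi, mp.sqrt(3)
def E(n):
    n = mp.mpf(n)
    t1 = 21*pi**2/(40*mp.sqrt(3*n)) \
         * mp.exp((pi/(4*sq3) + 4*sq3/(5*pi))*mp.sqrt(n))
    t2 = mp.mpf(567)/(200*n**mp.mpf('0.625')) \
         * mp.exp(pi/(2*sq3)*mp.sqrt(n) + pi*mp.sqrt(901)/(8*mp.sqrt(3*n))
                  + 217*pi**2/(240*n))
    return t1 + t2
def holds(n):
    n = mp.mpf(n)
    lhs = mp.besseli(-mp.mpf(3)/4, pi/2*mp.sqrt(n/3))
    return lhs > E(n) + mp.exp(pi/2*mp.sqrt(n/3))/(5*n**mp.mpf('0.875'))
print(holds(2500), holds(4800))   # True, True
\end{verbatim}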
| {
"timestamp": "2021-12-23T02:22:53",
"yymm": "2112",
"arxiv_id": "2112.09269",
"language": "en",
"url": "https://arxiv.org/abs/2112.09269",
"abstract": "In 2018 Coll, Mayers, and Mayers conjectured that the $q$-series $( q, -q^3; q^4 )_\\infty^{-1}$ is the generating function for a certain parity statistic related to the index of seaweed algebras. We prove this conjecture. Thanks to earlier work by Seo and Yee, the conjecture would follow from the non-negativity of the coefficients of this infinite product. Using a variant of the circle method along with Euler-Maclaurin summation, we establish this non-negativity, thereby confirming the Coll-Mayers-Mayers Conjecture.",
"subjects": "Combinatorics (math.CO); Number Theory (math.NT)",
"title": "Seaweed Algebras and the Index Statistic for Partitions",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9871787849789999,
"lm_q2_score": 0.8006919949619793,
"lm_q1q2_score": 0.7904261507289783
} |
https://arxiv.org/abs/1503.06509 | Locally Maximal Product-free Sets of Size 3 | Let $G$ be a group, and $S$ a non-empty subset of $G$. Then $S$ is \emph{product-free} if $ab\notin S$ for all $a, b \in S$. We say $S$ is \emph{locally maximal product-free} if $S$ is product-free and not properly contained in any other product-free set. A natural question is what is the smallest possible size of a locally maximal product-free set in $G$. The groups containing locally maximal product-free sets of sizes $1$ and $2$ were classified by Giudici and Hart in 2009. In this paper, we prove a conjecture of Giudici and Hart by showing that if $S$ is a locally maximal product-free set of size $3$ in a group $G$, then $|G| \leq 24$. This allows us to complete the classification of locally maximal product free sets of size 3. | \section{Introduction}
\noindent Let $G$ be a group, and $S$ a non-empty subset of $G$. Then $S$ is
\emph{product-free} if $ab\notin S$ for all $a, b \in S$. For example,
if $H$ is a subgroup of $G$ then $Hg$ is a product-free set for any
$g\notin H$. Traditionally these sets have been studied in abelian groups, and have therefore been called sum-free sets. Since we are working with arbitrary groups it makes more sense to say `product-free' in this context. We say $S$ is \emph{locally maximal product-free} if $S$ is
product-free and not properly contained in any other product-free set.
We use the term {\em locally maximal} rather than maximal because
the majority of the literature in this area uses {\em maximal} to mean maximal by cardinality (for example \cite{streetwhite,SW1974A}).\\
There are some obvious questions from the definition: given a group $G$, what is the maximum cardinality of a product-free set in $G$, and what are the maximal (by cardinality) product-free sets? How many product-free sets are there in $G$? Given that each product-free set is contained in a locally maximal product-free set, what are the locally maximal product-free sets? What are the possible sizes of locally maximal product-free sets? The question of maximal (by cardinality) product-free sets has been fully solved for abelian groups by Green and Ruzsa \cite{greenruzsa}. For the nonabelian case Kedlaya \cite{kedlaya97} showed that there exists a constant $c$ such that the largest product-free set in a group of order $n$ has size at least $cn^{11/14}$. Gowers \cite{gowers} proved that if the smallest nontrivial representation of $G$ is of dimension $k$ then the largest product-free set in $G$ has size at most $k^{-1/3}n$ (Theorem 3.3 and commentary at the start of Section 5). Much less is known about the minimum sizes of locally maximal product-free sets. This question was first asked in \cite{babaisos}, where the authors ask for the minimum size of a locally maximal product-free set in a group of order $n$; a good bound is still not known. Small locally maximal product-free sets when $G$ is an elementary abelian 2-group are of interest in finite geometry, because they correspond to complete caps in PG($n-1,2$). In \cite{gh}, the groups containing locally maximal product-free sets of sizes $1$ and $2$ were classified. Some general results were also obtained. Furthermore, there was a classification (Theorem $5.6$) of groups containing locally maximal product-free sets $S$ of size $3$ for which not every subset of size $2$ in $S$ generates $\ensuremath{\langle} S \ensuremath{\rangle}$. Each of these groups has order at most $24$. Conjecture $5.7$ of \cite{gh} was that if $G$ is a group of order greater than $24$, then $G$ does not contain a locally maximal product-free set of size $3$. Table $5$ listed all the locally maximal product-free sets in groups of orders up to $24$. So the conjecture asserts that this list is the complete list of all such sets. We have reproduced Table $5$ as Table \ref{tab1} in this paper because we need to use it in some of the arguments here. The main result of this paper is the following theorem and its immediate corollary.
\begin{thm}
\label{msf3}
Suppose $S$ is a locally maximal product-free set of size 3 in a group $G$, such that every two element subset of $S$ generates $\langle S\rangle$. Then $|G| \leq 24$.
\end{thm}
\begin{cor}
If a group $G$ contains a locally maximal product-free set $S$ of size 3, then $|G| \leq 24$ and the only possibilities for $G$ and $S$ are listed in Table~\ref{tab1}.
\end{cor}
\begin{proof}
If not every two-element subset of $S$ generates $\langle S\rangle$, then by Theorem $5.6$ of \cite{gh}, $|G| \leq 24$. We may therefore assume that every two-element subset of $S$ generates $\langle S\rangle$. Then $|G| \leq 24$ by Theorem \ref{msf3}. Now Table \ref{tab1} is just Table $5$ of \cite{gh}; it is a list of all locally maximal product-free sets of size $3$ occurring in groups of order up to $24$ (in fact, up to $37$ in the original paper). Since we have shown that all locally maximal product-free sets of size 3 occur in groups of order up to $24$, this table now constitutes a complete list of possibilities.
\end{proof}
\noindent We finish this section by establishing the notation to be used in the rest of the paper, and giving some basic results from \cite{gh}. For subsets $A, B$ of a group $G$, we use the standard notation $AB$ for the product of $A$ and $B$. That is,
$$AB = \{ab : a \in A, b \in B\}.$$
\noindent By definition, a nonempty set $S \subseteq G$ is product-free if and
only if $S \cap SS = \varnothing$. In order to investigate locally maximal
product-free sets, we introduce some further notations. For a set $S \subseteq G$, we define the following sets:
\vspace*{-1mm}
\begin{eqnarray*}
S^2 &=& \{a^2: a \in S\};\\
S^{-1} &=& \{a^{-1}: a \in S\};\\
\sqrt S &=& \{x \in G: x^2 \in S\};\\
T(S) &=& S \cup SS \cup SS^{-1} \cup S^{-1}S;\\
\hat S &=& \{s \in S : \sqrt{\{s\}}\not\subset \ensuremath{\langle} S
\ensuremath{\rangle}\}.
\end{eqnarray*}
\noindent For a singleton set $\{a\}$, we usually write $\sqrt a$ instead of $\sqrt{\{a\}}$.\\
\noindent For a positive integer $n$, we will denote by $\mathrm{Alt}(n)$ the alternating group of degree $n$, by $C_n$ the cyclic group of order $n$, by $D_{2n}$ the dihedral group of order $2n$, and by $Q_{4n}$ the dicyclic group of order $4n$ given by $Q_{4n}:= \langle x,y: x^{2n} = 1, x^n = y^2, yx = x^{-1}y\rangle$.\\
We finish this section with a few results from \cite{gh}.
\begin{lemma}\label{3.1}
\cite[Lemma 3.1]{gh} Suppose $S$ is a product-free set in the group $G$. Then $S$ is locally maximal product-free if and only if $G = T(S) \cup \sqrt S$.
\end{lemma}
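To illustrate the criterion, consider $C_6$ written additively as $\mathbb{Z}_6$, with $S = \{1,3,5\}$ (the set $\{g, g^3, g^5\}$ of Table \ref{tab1}). The short Python sketch below (our illustration; the authors' own GAP programs appear in the Data and Programs section) verifies that $S$ is product-free and that $G = T(S) \cup \sqrt S$.
\begin{verbatim}
n = 6
S = {1, 3, 5}
SS  = {(a + b) % n for a in S for b in S}
SS1 = {(a - b) % n for a in S for b in S}   # = SS^{-1} = S^{-1}S (abelian)
T = S | SS | SS1
sqrtS = {x for x in range(n) if (2*x) % n in S}
print(S & SS == set())             # True: S is product-free
print(T | sqrtS == set(range(n)))  # True: S is locally maximal by Lemma 3.1
\end{verbatim}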
The next result lists, in order, Proposition 3.2, Theorem 3.4, Propositions 3.6, 3.7, 3.8 and Corollary 3.10 of \cite{gh}.
\begin{thm}\label{key} Let $S$ be a locally maximal product-free set in a group $G$. Then
\begin{enumerate}
\item[(i)] $\ensuremath{\langle} S \ensuremath{\rangle}$ is a normal
subgroup of $G$ and $G/\ensuremath{\langle} S \ensuremath{\rangle}$ is either trivial or
an elementary abelian 2-group;
\item[(ii)] $|G| \leq 2|T(S)|\cdot|\ensuremath{\langle} S \ensuremath{\rangle}|$;
\item[(iii)] if $\ensuremath{\langle} S \ensuremath{\rangle}$ is not an elementary abelian 2-group and $|\hat S| =
1$, then $|G| = 2|\ensuremath{\langle} S \ensuremath{\rangle}|$;
\item[(iv)] every
element $s$ of $\hat S$ has even order, and all odd powers of
$s$ lie in $S$;
\item[(v)] if there exists $s \in S$ and integers $m_1, \ldots, m_t$ such
that $\hat S = \{s, s^{m_1}, \ldots, s^{m_t}\},$ then $|G|$
divides $4|\ensuremath{\langle} S\ensuremath{\rangle}|$;
\item[(vi)] if $S\cap S^{-1} = \varnothing$, then $|G| \leq 4|S|^2+1$.
\end{enumerate}
\end{thm}
We require one final result.
\begin{thm}\cite[Theorem 5.1]{gh} \label{5.1}
Up to isomorphism, the only instances of
locally maximal product-free sets $S$ of size 3 of a group $G$ where
$|G| \leq 37$ are given in Table \ref{tab1}.
\end{thm}
\section{Proof of Theorem \ref{msf3}}
\begin{prop}\label{pro1}
Suppose $S$ is locally maximal product-free of size 3 in $G$. If $\langle S\rangle$ is cyclic, then $|G| \leq 24$.
\label{cyclic}
\end{prop}
\begin{proof} Write $S = \{a,b,c\}$.
First note that since $\langle S\rangle$ is abelian, $SS^{-1} = S^{-1}S$; moreover $aa^{-1} = bb^{-1} = cc^{-1} = 1$; so $|SS^{-1}| \leq 7$. Also $SS \subseteq \{a^2, b^2, c^2, ab, ac, bc\}$. Thus $$|T(S)| = |S \cup SS \cup SS^{-1}| \leq 3 + 6 + 7 = 16.$$ By Lemma \ref{3.1}, $G = T(S) \cup \sqrt S$; so $\langle S \rangle = T(S) \cup (\langle S\rangle \cap \sqrt S)$. Elements of cyclic groups have at most two square roots. Therefore $|\langle S\rangle | \leq 16 + 6 = 22$. By Table \ref{tab1}, $\langle S\rangle$ must now be one of $C_6$, $C_8$, $C_9$, $C_{10}$, $C_{11}$, $C_{12}$, $C_{13}$ or $C_{15}$. Theorem \ref{key}(iv) tells us that every element $s$ of $\hat S$ has even order and all odd powers of $s$ lie in $S$. This means that for $C_{9}$, $C_{11}$, $C_{13}$ or $C_{15}$, we have $\hat S = \varnothing$ and so $G = \langle S \rangle$. In particular, $|G|\leq 24$.\\
It remains to consider $C_6$, $C_8$, $C_{10}$ and $C_{12}$. For $C_6 = \langle g: g^6 = 1\rangle$, the unique locally maximal product-free set of size $3$ is $S = \{g, g^3, g^5\}$. Now if $g$ or $g^5$ is contained in $\hat S$, then $\hat S$ consists of powers of a single element; so by Theorem~\ref{key}(v), $|G|$ divides $24$. If neither $g$ nor $g^5$ is in $\hat S$, then $|\hat S| \leq 1$, and so by Theorem~\ref{key}(iii), $|G|$ divides $12$. In $C_8$ there is a unique (up to group automorphisms) locally maximal product-free set of size $3$, and it is $\{g, g^{-1}, g^4\}$, where $g$ is any element of order $8$. If $\hat S$ contains $g$ or $g^{-1}$, then $S$ contains all odd powers of that element by Theorem~\ref{key}(iv), and hence $S$ contains $\{g, g^3, g^5, g^7\}$, a contradiction. Therefore $|\hat S| \leq 1$ and so $|G|$ divides $16$. Next, we consider $\langle S \rangle = C_{10}$. Recall that elements of $\hat S$ must have even order. If $\hat S$ contains any element of order 10, then $S$ contains all five odd powers of this element, which is impossible by Theorem~\ref{key}(iv). This leaves only the involution of $C_{10}$ as a possible element of $\hat S$. Hence again $|\hat S|\leq 1$ and $|G|$ divides $20$. Finally we look at $C_{12}$. If $\hat S$ contains any element of order $12$, then $|S| \geq 6$, a contradiction. If $\hat S$ contains an element $x$ of order 6 then $S$ contains all three of its odd powers, so $S = \{x, x^3, x^5\}$. But then $\langle S \rangle \cong C_6$, contradicting the assumption that $\langle S \rangle = C_{12}$. Therefore, $\hat S$ can only contain elements of order $2$ or $4$. Up to group automorphism, we see from Table \ref{tab1} that every locally maximal product-free set $S$ of size $3$ in $C_{12}$ with $\langle S \rangle = C_{12}$ is one of $\{g,g^6,g^{10}\}$ or $\{g,g^3,g^8\}$ for some generator $g$ of $C_{12}$. Each of these sets contains exactly one element of order $2$ or $4$. Therefore in every case, $|\hat S| \leq 1$ and so $|G|$ divides $24$. This completes the proof.
\end{proof}
Note that the bound on $|G|$ in Proposition \ref{pro1} is attainable. For example in $Q_{24}$ there is a locally maximal product-free set $S$ of size $3$, with $\langle S \rangle \cong C_{12}$.
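The counts used above for $C_{12}$ can be confirmed by brute force. The following sketch (ours, written for Python 3.9+ for the variadic \texttt{gcd}; the authors' GAP code in the Data and Programs section does the general computation) finds all locally maximal product-free sets of size $3$ in $\mathbb{Z}_{12}$ and counts those generating the whole group.
\begin{verbatim}
from itertools import combinations
from math import gcd
n = 12
pf = lambda S: all((a + b) % n not in S for a in S for b in S)
lm = lambda S: pf(S) and all(not pf(S | {g}) for g in set(range(n)) - S)
sets = [set(c) for c in combinations(range(n), 3) if lm(set(c))]
full = [S for S in sets if gcd(n, *S) == 1]   # those with <S> = C_12
print(len(sets), len(full))   # expect 9 and 8, as in Table 1
\end{verbatim}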
\begin{prop}\label{1inv}
Suppose $S$ is locally maximal product-free of size $3$ in $G$ such that every $2$-element subset of $S$ generates $\langle S \rangle$. Then either $|G| \leq 24$ or $S$ contains exactly one involution.
\end{prop}
\begin{proof}
First suppose $S$ contains no involutions. If $S \cap S^{-1} = \varnothing$, then Theorem \ref{key}(vi) tells us that $G$ has order at most 37, and then by Theorem \ref{5.1}, $(G,S)$ is one of the possibilities listed in Table \ref{tab1}. In particular $|G| \leq 24$. If $S \cap S^{-1} \neq \varnothing$, then $S = \{a, a^{-1}, b\}$ for some $a, b$. But then $\langle S \rangle = \langle a, a^{-1}\rangle = \langle a \rangle$, so $\langle S \rangle$ is cyclic. Now by Proposition \ref{cyclic} we get $|G| \leq 24$. Next, suppose that $S$ contains at least two involutions, $a$ and $b$, with the third element being $c$. Then, since every 2-element subset of $S$ generates $\langle S \rangle$, we have that $H = \langle S \rangle = \langle a, b \rangle$ is dihedral and $S$ is locally maximal product-free in $H$. Let $o(ab) = m$, so $H \cong D_{2m}$. The non-trivial coset of the subgroup $\langle ab \rangle$ is product-free of size $m$. So if $c$ lies in this coset, then we have $m = 3$ and $H \cong D_6$. If $c$ does not lie in this coset then $c = (ab)^i$ for some $i$, and from the relations in a dihedral group $ac^{-1} = ca$, $c^{-1}a = ac$, $bc^{-1} = cb$ and $c^{-1}b = bc$. The coset $\langle ab \rangle a$ consists of $m$ involutions, which cannot lie in $\sqrt S$. Thus $\langle ab \rangle a \subseteq T(S)$ by Lemma \ref{3.1}.
A straightforward calculation shows that
\begin{align*}
\langle ab \rangle a = T(S) \cap \langle ab \rangle a &= \{a,b,ac,ca,bc,cb,ac^{-1}, c^{-1}a, bc^{-1}, c^{-1}b\}\\
&= \{a,b,ac,ca,bc,cb\}
\end{align*}
This means $m \leq 6$, and $S$ consists of two generating involutions $a, b$ plus a power of their product $ab$, with the property that any two-element subset of $S$ generates $\langle a, b\rangle$. A glance at Table \ref{tab1} shows there are no locally maximal product-free sets of this form in $D_{2m}$ for $m \leq 6$. Therefore the only possibility is that $\langle S \rangle \cong D_6$, with $S$ consisting of the three reflections in $\langle S \rangle$. By Theorem \ref{key}(i), the index of $\langle S \rangle$ in $G$ is a power of $2$. By Theorem \ref{key}(ii), $|G| \leq 2|T(S)|\cdot |\langle S\rangle|$. Thus $|G| \in \{6,12,24,48\}$. Suppose for contradiction that $|G| = 48$. Now $G = T(S) \cup \sqrt S$, and since $S$ consists of involutions, the elements of $\sqrt S$ have order 4. So $G$ contains two elements of order $3$, three elements of order 2 and the remaining non-identity elements have order $4$. Then the $46$ elements of $G$ whose order is a power of 2 must lie in three Sylow $2$-subgroups of order $16$, with trivial pairwise intersection. Each of these groups therefore has a unique involution and $14$ elements of order $4$, all of which square to the given involution. But no group of order $16$ has fourteen elements of order $4$. Hence $|G| \neq 48$, and so $|G| \leq 24$. Therefore either $|G| \leq 24$ or $S$ contains exactly one involution.
\end{proof}
Before we establish the next result, we first make a useful observation. Suppose $S =\{a, b, c\}$ where $a, b, c \in G$ and $c$ is an involution. Then a straightforward calculation shows that
\begin{equation}
\label{ts}
T(S) \subseteq \left\{\begin{array}{cc}1,a,b,c,a^2,b^2,ab,ba,ac,ca,bc,cb,\\
ab^{-1}, ba^{-1}, ca^{-1}, cb^{-1}, a^{-1}b, a^{-1}c, b^{-1}a, b^{-1}c
\end{array}
\right\}.
\end{equation}
\begin{lemma}\label{order3} Suppose $S$ is a locally maximal product-free set of size $3$ in $G$, every $2$-element subset of $S$ generates $\langle S\rangle$, and $S$ contains exactly one involution. Then either $|G| \leq 24$ or $S = \{a,b,c\}$, where $a$ and $b$ have order $3$ and $c$ is an involution.
\end{lemma}
\begin{proof} Suppose $S = \{a,b,c\}$ where $c$ is an involution and $a, b$ are not. Consider $a^{-1}$. Recall that $G = T(S) \cup \sqrt S$. If $a^{-1} \in \sqrt S$ then $a^{-2} \in \{a,b,c\}$, which implies that either $a$ has order $3$ or $\langle S\rangle$ is cyclic (because for example if $a^{-2} = b$ then $\langle S\rangle = \langle a, b \rangle = \langle a\rangle$). Thus $a^{-1} \in \sqrt S$ implies that either $a$ has order 3 or (by Proposition \ref{cyclic}) $|G| \leq 24$. Suppose then that $a^{-1} \in T(S)$. The elements of $T(S)$ are given in Equation \ref{ts}.
If $a^{-1} \in \{b,b^2,ab,ba,ab^{-1}, ba^{-1}, a^{-1}b, b^{-1}a\}$ then by remembering that $\langle S \rangle = \langle a, b\rangle$, we deduce that $\langle S \rangle$ is cyclic, generated by either $a$ or $b$. For example, $a^{-1} = ba$ implies $b \in \langle a\rangle$. Similarly, if $a^{-1} \in \{c,ac, ca, a^{-1}c, c^{-1}a\}$, then $\langle S \rangle$ is cyclic. Since $a$ has order at least 3, we cannot have $a^{-1} \in \{1,a\}$. If $a^{-1} \in \{bc,cb,b^{-1}c, c^{-1}b\}$, then $S$ would not be product-free. For instance $a^{-1} = b^{-1}c$ implies that $b^{-1}ca = 1$, and hence $ca = b$. The only remaining possibility is $a^{-1} = a^2$, meaning that $a$ has order 3. The same argument with $b^{-1}$ shows that $b$ also has order $3$.
\end{proof}
We can now prove Theorem \ref{msf3}, which states that if $S$ is a locally maximal product-free set of size 3 in a group $G$, such that every two element subset of $S$ generates $\langle S\rangle$, then $|G| \leq 24$.
\paragraph{Proof of Theorem \ref{msf3}}
Suppose $S$ is a locally maximal product-free set of size 3 in $G$ such that every two element subset of $S$ generates $\langle S\rangle$. Then by Lemma \ref{order3}, either $|G| \leq 24$ or $S = \{a,b,c\}$ where $a$ and $b$ have order $3$ and $c$ is an involution. In the latter case, we observe that $aca^{-1}$ is an involution, so must be contained in $T(S)$. Using Equation \ref{ts} we work through the possibilities. Obviously it is impossible for $aca^{-1}$ to be equal to any of $1, a, b, a^2$ or $b^2$ because these elements are not of order $2$. If any of $ac, ca, a^{-1}c, c^{-1}a, bc, cb, b^{-1}c$ or $cb^{-1}$ were involutions, then it would imply that $\langle S\rangle$ was generated by two involutions whose product has order 3. For example if $ac$ were an involution then $\langle c, ac\rangle = \langle a,c\rangle = \langle S \rangle$. That is, $\langle S\rangle$ would be dihedral of order $6$. But there is no product-free set in $D_6$ containing two elements of order 3, because if $x, y$ are the elements of order 3 in $D_6$ then $x^2 = y$ and $y^2 = x$. So the remaining possibilities for $aca^{-1}$ are $c, ab, ba, ab^{-1}, ba^{-1}, a^{-1}b$ and $b^{-1}a$. Now $aca^{-1} = ab$ implies $c = ba$, whereas $aca^{-1} = ab^{-1}$ implies $bc = a$ and $aca^{-1} = ba^{-1}$ implies $b = ac$, each of which contradicts the fact that $S$ is product-free. We are now left with the cases $aca^{-1} = c$, $aca^{-1} = ba$ and $aca^{-1} = a^{-1}b$ (which, if it is an involution, equals $b^{-1}a$). If $aca^{-1} = c$, then $\langle S \rangle = \langle a, c\rangle = C_6$, but the only product-free set of size 3 in $C_6$ contains no elements of order 3, so this is impossible. Therefore $aca^{-1} \in \{ba, a^{-1}b\}$. If $aca^{-1} = ba$, then $a^{-1}ba = ca^{-1}$, so $ac = a^{-1}b^{-1}a$, which has order 3. If $aca^{-1} = a^{-1}b$, then $ac = a^{-1}ba$, again of order 3. So we see that
$$\langle S \rangle = \langle a, c : a^3 = 1, c^2 = 1, (ac)^3 = 1\rangle.$$
This is a well-known presentation of the alternating group $\mathrm{Alt}(4)$. As $c$ is the only element of $S$ whose order is even, we see that $|\hat S| \leq 1$, and hence $|G| \leq 2|\mathrm{Alt}(4)| = 24$. Therefore in all cases $|G| \leq 24$.\qed
\section{Data and Programs}
Though Table \ref{tab1} is essentially just Table 5 from \cite{gh}, we have taken the opportunity here to correct a typographical error in the entry for the (un-named) group of order $16$. We provide below the GAP programs used to obtain the table.
\begin{prog}
A program that tests if a set T is product-free.
\begin{verbatim}
## It returns "0" if T is product-free, and "1" if otherwise.
prodtest:= function(T)
local x, y, prod;
prod:=0;
for x in T do
for y in T do
if x*y in T then
prod:=1;
fi;
od;
od;
return prod;
end;
\end{verbatim}
\end{prog}
\begin{prog}
A program for finding all locally maximal product-free sets of size $3$ in $G$.
\begin{verbatim}
##It prints the list of all locally maximal product-free sets of size 3 in G.
LMPFS3:=function(G)
local L, lmpf, combs, pf, H, y, z, i, q;
L:=AsSortedList(G); lmpf:=[]; combs:=Combinations(L,3);
for i in [1..Binomial(Size(L),3)] do
pf:=combs[i];
if prodtest(pf)=0 then
H:=Difference(L,pf);
for y in [1..3] do
for z in [1..3] do
H:=Difference(H, [pf[y]*pf[z], pf[y]*(pf[z])^-1, ((pf[y])^-1)*pf[z]]);
od;
od;
for q in L do
if q^2 in pf then
H:=Difference(H, [q]);
fi;
od;
if Size(H) = 0 then
lmpf:=Union(lmpf, [pf]);
fi;
fi;
od;
if Size(lmpf) > 0 then
Print(G,"\n",L,"\n","Structure Description of G is ",StructureDescription(G),
"\n", "Gap Id of G is ", IdGroup(G), "\n", "\n", lmpf, "\n", "\n");
fi;
end;
\end{verbatim}
\end{prog}
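For example, one convenient way to reproduce Table \ref{tab1} is to loop over GAP's small groups library, calling \texttt{LMPFS3(SmallGroup(n,k))} for each order $n \leq 24$ and each $1 \leq k \leq \texttt{NrSmallGroups(n)}$.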
\begin{landscape}
\begin{table}
\begin{tabular}{ll|c|l|c}
$G$ && $S$ & $\ensuremath{\langle} S \ensuremath{\rangle}$ & \begin{tabular}{c}\# Locally maximal \\product-free sets \\of size $3$ in $G$\end{tabular}\\
\hline
$\langle g: g^6 = 1\rangle$ & $\cong C_6$ & %
$\{g, g^3, g^5 \}$ & $\cong C_6$ &1 \\
$\langle g, h: g^3 = h^2 = 1, hgh = g^{-1} \rangle$ & $\cong D_6$
& $\{h, gh, g^2h\}$ & $\cong D_6$ & 1\\
$\langle g: g^8 = 1\rangle$ & $\cong C_8$ & %
$\{g, g^{-1}, g^4\}$ & $\cong C_8$ & 2\\
$\langle g, h: g^4 = h^2 = 1, hgh^{-1} = g^{-1}\rangle$ & $\cong D_8$ & %
$\{h, gh, g^2 \}$ & $\cong D_8$ & 4\\
$\langle g: g^9 = 1\rangle$ & $\cong C_9$ & %
$\{g, g^3, g^8 \}, \{g, g^4, g^7\}$ & $\cong C_9$ & 8\\
$\langle g, h: g^3=h^3=1, gh=hg\rangle$ & $\cong C_3 \times C_3$ & %
$\{g, h, g^2h^2 \}$ & $\cong C_3 \times C_3$ & 8\\
$\langle g: g^{10} = 1\rangle$ & $\cong C_{10}$ & %
$\{g^2, g^5, g^8\}, \{g, g^5, g^8\}$ & $\cong C_{10}$ & 6\\
$\langle g: g^{11} = 1\rangle$ & $\cong C_{11}$ & %
$\{ g, g^3, g^5 \}$ & $\cong C_{11}$ & 10\\
$\langle g: g^{12} = 1\rangle$ & $\cong C_{12}$ & %
$\{g^2, g^6, g^{10} \}$ & $\cong C_6$ & 1\\
& & %
$\{g, g^6, g^{10}\}, \{g, g^3, g^8\}$ & $\cong C_{12}$ & 8\\
$\langle g, h: g^6 = 1, g^3 = h^2, hgh^{-1} = g^{-1}\rangle$ & $\cong Q_{12}$ & %
$\{g, g^3, g^5 \}$ & $\cong C_6$ & 1\\
Alternating group of degree 4 & $=$ Alt(4)& %
$\{x, y, z: x^2 = y^2 = z^3 = 1\}$ & $\cong$ Alt(4) & 48\\
&& $\{x, z, xzx: x^2 = z^3=1\}$ &&\\
&&$\{x, z, zxz: x^2 = z^3 = 1\}$&&\\
$\langle g: g^{13} = 1\rangle$ & $\cong C_{13}$ & %
$\{g, g^3, g^9\}, \{g, g^6, g^{10}\}$ & $\cong C_{13}$ & 16\\
$\langle g: g^{15} = 1\rangle$ & $\cong C_{15}$ & %
$\{g, g^3, g^{11} \}$ & $\cong C_{15}$ & 4\\
$\langle g, h: g^4 = h^4 = 1, gh = hg\rangle$ & $\cong C_4\times C_4$ & %
$\{g, h, g^{-1}h^{-1} \}$ & $\cong C_4\times C_4$ & 16\\
$\langle g, h: g^8=1, g^4=h^2, hgh^{-1} = g^{-1}\rangle$ & $\cong Q_{16}$ & %
$\{g, g^4, g^{-1} \}$ & $\cong C_8$ & 2\\
$\langle g, h: g^8 = h^2 = 1, hgh^{-1} = g^5\rangle$ & (order 16) & %
$\{g, g^6, g^3h\}$ & $\cong G$ & 8\\
$\langle g, h: g^{10} = 1, g^5=h^2, hgh^{-1} = g^{-1}\rangle$ & $\cong Q_{20}$ & %
$\{g, g^5, g^8\}, \{g^2, g^5, g^8\}$ & $\cong C_{10}$ & 6\\
$\langle g, h: g^3 = h^7 = 1, ghg^{-1} = h^2\rangle$ & $\cong C_7\rtimes C_3$ & %
$\{gh, gh^{-1}, g^{-1}\}$ & $\cong C_7\rtimes C_3$ & 42\\
$\langle x : x^3 = 1\rangle \times \ensuremath{\langle} g, h: g^4 = 1, g^2 = h^2, hgh^{-1} = g^{-1}\rangle$ & $\cong C_3\times Q_8$ & %
$\{g^2, xg^2, x^2g^2\}$ & $\cong C_6$ & 1\\
$\langle g, h: g^{12} = 1, g^6 = h^2, hgh^{-1} = g^{-1}\rangle$ & $\cong Q_{24}$ & %
$\{g^2, g^6, g^{10}\}$ & $\cong C_6$ & 1\\
& & %
$\{g, g^6, g^{10} \}$ & $\cong C_{12}$ & 4\\
\end{tabular}
\caption{Locally maximal product-free sets of size $3$ in groups of order up to $24$}
\label{tab1}
\end{table}
\end{landscape}
| {
"timestamp": "2015-06-23T02:13:03",
"yymm": "1503",
"arxiv_id": "1503.06509",
"language": "en",
"url": "https://arxiv.org/abs/1503.06509",
"abstract": "Let $G$ be a group, and $S$ a non-empty subset of $G$. Then $S$ is \\emph{product-free} if $ab\\notin S$ for all $a, b \\in S$. We say $S$ is \\emph{locally maximal product-free} if $S$ is product-free and not properly contained in any other product-free set. A natural question is what is the smallest possible size of a locally maximal product-free set in $G$. The groups containing locally maximal product-free sets of sizes $1$ and $2$ were classified by Giudici and Hart in 2009. In this paper, we prove a conjecture of Giudici and Hart by showing that if $S$ is a locally maximal product-free set of size $3$ in a group $G$, then $|G| \\leq 24$. This allows us to complete the classification of locally maximal product free sets of size 3.",
"subjects": "Group Theory (math.GR)",
"title": "Locally Maximal Product-free Sets of Size 3",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9840936115537342,
"lm_q2_score": 0.8031737892899221,
"lm_q1q2_score": 0.7903981950076174
} |
https://arxiv.org/abs/2211.08572 | Bayesian Fixed-Budget Best-Arm Identification | Fixed-budget best-arm identification (BAI) is a bandit problem where the agent maximizes the probability of identifying the optimal arm within a fixed budget of observations. In this work, we study this problem in the Bayesian setting. We propose a Bayesian elimination algorithm and derive an upper bound on its probability of misidentifying the optimal arm. The bound reflects the quality of the prior and is the first distribution-dependent bound in this setting. We prove it using a frequentist-like argument, where we carry the prior through, and then integrate out the bandit instance at the end. We also provide a lower bound on the probability of misidentification in a $2$-armed Bayesian bandit and show that our upper bound (almost) matches it for any budget. Our experiments show that Bayesian elimination is superior to frequentist methods and competitive with the state-of-the-art Bayesian algorithms that have no guarantees in our setting. | \section{ALGORITHM}
We propose $\ensuremath{\tt BayesElim}\xspace$, a Bayesian successive elimination algorithm, similar to the one proposed in \cite{Karnin2013AlmostOE} for the frequentist version of the BAI problem, but which incorporates prior information. In contrast to the elimination algorithm of \cite{Karnin2013AlmostOE}, \ensuremath{\tt BayesElim}\xspace eliminates based on the maximum a posteriori estimates of the arm rewards.
\ensuremath{\tt BayesElim}\xspace splits the exploration budget $n$ into $R=\lceil\log_2(K)\rceil$ elimination rounds of equal budget per round, $\lfloor\frac{n}{R}\rfloor$. At each elimination round $r$, the algorithm maintains a set of active arms, denoted by $S_r$.
At the end of each round, after collecting samples from the arms, the algorithm eliminates the half of the active arms with the lowest posterior means.
The arms that survive elimination at round $r$ become the active set of arms in the next round.
In each round $r$, \ensuremath{\tt BayesElim}\xspace splits the per-round budget among the arms in $S_r$ according to their reward variances. In particular, each arm $i\in S_r$ is sampled $\lfloor n_{r,i} \rfloor$ times, where
\begin{align}
n_{r,i}=\frac{n}{R}\frac{\sigma_i^2}{\Sigma_r}
\label{eq:split}
\end{align}
and $\Sigma_r = \sum_{j\in S_r}{\sigma_j^2}$.
The algorithm is presented in \cref{alg:bayesian_successive_elimination}.
\begin{algorithm}[ht]
\caption{\ensuremath{\tt BayesElim}\xspace: Bayesian elimination.}\label{alg:bayesian_successive_elimination}
\begin{algorithmic}[1]
\State Let $R \leftarrow \lceil\log_2K\rceil$.
\State Initialize $S_0 \leftarrow [K]$.
\For{$r=0,..., R-1$}
\For{$i\in S_r$}
\State Get $\lfloor n_{r,i}\rfloor$ samples of arm $i$, where $n_{r,i}$ is given by \cref{eq:split}.
\State Compute posterior mean $\bar\mu_{i,n_{r,i}}$ using \cref{eq:mubarim}.
\EndFor
\State Set $S_{r+1}$ to be the set of $\lceil|S_r|/2\rceil$ arms in $S_r$ with largest posterior means $\{\bar\mu_{i,n_{r,i}}\}_{i\in S_r}$.
\EndFor
\end{algorithmic}
\end{algorithm}
Notice that \cref{eq:split} allocates a relatively larger number of samples to arms with larger reward noise. In particular, splitting the per-round budget according to \cref{eq:split} makes the posterior variances of all arms in a round equal, i.e.
\begin{align*}
\bar\sigma_{i,n_{r,i}}^2=\left(\frac{1}{\sigma_{0}^2}+\frac{n_{r,i}}{\sigma_i^2}\right)^{-1} = \left(\frac{1}{\sigma_{0}^2}+\frac{n}{R\cdot\Sigma_r}\right)^{-1}.
\end{align*}
That allows us to eliminate based on posterior means that have uniform noise.
Finally, we note that in the special case where all sample variances are equal, i.e. $\sigma_i=\sigma$ for all $i\in[K]$, the budget is distributed uniformly among the active arms in each round. This is the same as in the original algorithm of \citet{Karnin2013AlmostOE}.
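For concreteness, the following is a minimal Python sketch of \ensuremath{\tt BayesElim}\xspace under the Gaussian model above, with known reward variances $\sigma_i^2$ and independent $\mathcal{N}(\nu_i,\sigma_0^2)$ priors on the means; the \texttt{pull} callback and all names are our illustrative assumptions, not part of the paper, and the posterior update is the standard conjugate Gaussian formula.
\begin{verbatim}
import math
import numpy as np

def bayes_elim(pull, nu, sigma0, sigma, n):
    # pull(i, m) returns m i.i.d. samples of arm i; nu, sigma have length K.
    K = len(nu)
    R = math.ceil(math.log2(K))
    active = list(range(K))
    post_mean = np.array(nu, dtype=float)  # posterior means start at prior means
    for r in range(R):
        Sigma_r = sum(sigma[i] ** 2 for i in active)
        for i in active:
            m = int((n / R) * sigma[i] ** 2 / Sigma_r)  # variance-proportional split
            if m == 0:
                continue
            x = pull(i, m)
            # Conjugate Gaussian update: precision-weighted prior plus data.
            post_var = 1.0 / (1.0 / sigma0 ** 2 + m / sigma[i] ** 2)
            post_mean[i] = post_var * (nu[i] / sigma0 ** 2
                                       + np.sum(x) / sigma[i] ** 2)
        # Keep the ceil(|S_r|/2) arms with the largest posterior means.
        active.sort(key=lambda i: post_mean[i], reverse=True)
        active = active[:math.ceil(len(active) / 2)]
    return active[0]

# Illustrative usage with simulated Gaussian arms:
rng = np.random.default_rng(0)
mu_true, sig = [0.1, 0.5, 0.3, 0.0], [1.0, 1.0, 1.0, 1.0]
pull = lambda i, m: rng.normal(mu_true[i], sig[i], m)
print(bayes_elim(pull, nu=[0.0, 0.4, 0.2, 0.1], sigma0=1.0, sigma=sig, n=400))
\end{verbatim}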
\section{ANALYSIS}\label{sec:ub_analysis}
In this section, we provide an upper bound for the expected probability of misidentification of \ensuremath{\tt BayesElim}\xspace. In our analysis, we deviate from the usual Bayesian bandit proofs in the literature: we consider the parameter vector $\bm{\mu}$ fixed and use frequentist-like concentration arguments for the randomness due to sampling. In our arguments, prior quantities appear as extra bias terms in the probability bounds. After performing the analysis, we integrate out the randomness due to prior information. This style of analysis is novel and enables a direct comparison of the upper bound with our lower bounds, which are derived in a similarly frequentist-like manner, as well as with the guarantees achieved by applying frequentist techniques to the Bayesian setting.
We show the following upper bound on the expected probability of misidentification (as defined in \cref{eq:objective}) of our algorithm:
\begin{theorem}\label{thm:bound_with_integral}
The expected probability of misidentification for \ensuremath{\tt BayesElim}\xspace can be bounded as follows
\begin{align*}
&\Ex{\bm{\mu}}{\prob{J\neq i_*|\bm{\mu}}} \leq
3\sum_{r\in[R]} \frac{1}{\sqrt{n\frac{\sigma_0^2}{R\cdot \Sigma_r}+1}} \\
&\times \sum_{i\in[K]}\sum_{j>i} \exp\left(-\frac{n\sigma_0^2+R\cdot\Sigma_r}{n \sigma_0^2}\frac{(\nu_i-\nu_j)^2}{4\sigma_0^2}\right).
\end{align*}
\end{theorem}
We remark that the above guarantee exhibits an $O(1/\sqrt{n})$ dependence on the budget, which is in contrast to the $e^{-O(n)}$ guarantees achieved for the frequentist BAI problem \citep{Karnin2013AlmostOE}.
This fact might at first seem counterintuitive, due to the additional access to prior information available to the learner. However, this dependence is inherent to the objective of expected probability of misidentification defined in \cref{eq:objective}, which integrates the probability of misidentification over the possible instances. This is in contrast to the frequentist setting, where the objective is the worst-case probability of misidentification, on a single instance.
In addition, notice that integrating an optimal frequentist bound would result in an $O(1/\sqrt{n})$ dependence. This can be illustrated by the following simple example (with $\sigma_0=1$):
\begin{align*}
&\Ex{\bm{\mu}}{e^{-n(\mu_i-\mu_j)^2}}=\\
&=\frac{1}{2\pi}\int_{\mu_i, \mu_j} e^{-n(\mu_i-\mu_j)^2} e^{-\frac{(\nu_i-\mu_i)^2}{2}} e^{-\frac{(\nu_j-\mu_j)^2}{2}}\, d\mu_j \, d\mu_i\\
&=
\frac{1}{\sqrt{4n+1}}
\exp\left(-\frac{n(\nu_i-\nu_j)^2}{4n+1}\right),
\end{align*}
where the last expression results from completing the square in the exponent and computing the Gaussian integral.
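This closed form is straightforward to sanity-check numerically; the following Monte Carlo estimate (our illustration, not from the paper) agrees with it:
\begin{verbatim}
# Monte Carlo check of E[exp(-n (mu_i - mu_j)^2)] for mu_i ~ N(nu_i, 1),
# mu_j ~ N(nu_j, 1), against exp(-n (nu_i - nu_j)^2 / (4n+1)) / sqrt(4n+1).
import numpy as np

rng = np.random.default_rng(0)
n, nu_i, nu_j = 50, 0.3, 0.0
mu_i = rng.normal(nu_i, 1.0, 1_000_000)
mu_j = rng.normal(nu_j, 1.0, 1_000_000)
mc = np.mean(np.exp(-n * (mu_i - mu_j) ** 2))
closed = np.exp(-n * (nu_i - nu_j) ** 2 / (4 * n + 1)) / np.sqrt(4 * n + 1)
print(mc, closed)  # the two values agree to about three decimals
\end{verbatim}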
Moreover, as we show in \cref{sec:lb} the dependence of \ensuremath{\tt BayesElim}\xspace on the budget, $n$, is in fact optimal.
In addition, the upper bound in \cref{thm:bound_with_integral} decreases exponentially with the squared gaps between the prior means normalized by the prior variance, $\frac{(\nu_i-\nu_j)^2}{\sigma_0^2}$. This agrees with the intuition that as the prior mean gaps grow (or the prior variances become smaller), the uncertainty in the parameters decreases and the problem of identifying the best arm becomes easier. In the extreme case $\sigma_0\rightarrow 0$, the algorithm has exact knowledge of the means without any sampling, and the above bound indicates that the probability of misidentification of \ensuremath{\tt BayesElim}\xspace becomes $0$.
\subsection{Comparison to Frequentist Algorithm}
We compare our guarantees for \ensuremath{\tt BayesElim}\xspace with the result obtained by applying the frequentist elimination algorithm of \cite{Karnin2013AlmostOE} in the Bayesian BAI setting.
This is an elimination algorithm similar to \ensuremath{\tt BayesElim}\xspace with the difference that it eliminates based on sample averages at the end of each round, i.e. ignoring prior information.
We remark that this algorithm gives optimal guarantees in terms of worst-case {probability of misidentification} in the frequentist version of BAI, that is, in the absence of priors. Alternatively, this algorithm can be viewed as a version of \ensuremath{\tt BayesElim}\xspace where we take $\sigma_0\rightarrow \infty$.
We first derive an upper bound for the frequentist algorithm following a similar analysis to that of \ensuremath{\tt BayesElim}\xspace and then compare the two bounds in the general case, as well as when $\sigma_0\rightarrow 0$. Since the algorithm in \citet{Karnin2013AlmostOE} is designed for equal reward variances, the discussion and results below focus on the case where $\sigma_i=\sigma$ for all $i\in[K]$.
\begin{restatable}{theorem}{thmFreqUBound}\label{thm:freq_bound}
The elimination algorithm of \citet{Karnin2013AlmostOE} satisfies
\begin{align}\label{eq:freq_bound1}
&\Ex{\bm{\mu}}{\prob{J\neq i_*|\bm{\mu}}}
\leq 3\sum_{r\in [R]} \frac{1}{\sqrt{n\frac{\sigma_0^2}{\sigma^2}\frac{2^r}{K\log_{2}(K)}+1}} \nonumber\\
&\times \sum_{i\in[K]}\sum_{j>i}\exp\left(-\frac{n\sigma_0^2}{n\sigma_0^2+\frac{K\log_2(K)}{2^r}\sigma^2}\frac{(\nu_i-\nu_j)^2}{4\sigma_0^2}\right).
\end{align}
\end{restatable}
In addition, in the special case where the reward variances are equal for all arms, the upper bound on the expected probability of misidentification for \ensuremath{\tt BayesElim}\xspace becomes as follows:
\begin{corollary}
When $\sigma_i=\sigma$ for all $i\in[K]$, \ensuremath{\tt BayesElim}\xspace satisfies
\begin{align}\label{eq:eq_var_bound}
&\Ex{\bm{\mu}}{\prob{J\neq i_*|\bm{\mu}}}
\leq 3\sum_{r\in [R]} \frac{1}{\sqrt{n\frac{\sigma_0^2}{\sigma^2}\frac{2^r}{K\log_{2}(K)}+1}}\nonumber \\ &\times\sum_{i\in[K]}\sum_{j>i}\exp\left(-\frac{n\sigma_0^2+\frac{K\log_2(K)}{2^r}\sigma^2}{n\sigma_0^2}\frac{(\nu_i-\nu_j)^2}{4\sigma_0^2}\right).
\end{align}
\end{corollary}
The bound in \cref{eq:freq_bound1} has a similar dependence on $n$, $K$ and the prior mean gaps as the one in \cref{eq:eq_var_bound}. The only difference between the two bounds is in the multiplicative terms in the exponent, i.e. $$\frac{n\sigma_0^2}{n\sigma_0^2+\frac{K\log_2(K)}{2^r}\sigma^2}\leq 1 \leq \frac{n\sigma_0^2+\frac{K\log_2(K)}{2^r}\sigma^2}{n\sigma_0^2}.$$ Since a smaller multiplier in the exponent yields a larger bound, the bound in \cref{eq:freq_bound1} is always at least as large as the bound in \cref{eq:eq_var_bound}. In addition, we note that by taking $\sigma_0\rightarrow 0$ in \cref{eq:eq_var_bound}, the probability of misidentification for \ensuremath{\tt BayesElim}\xspace tends to $0$. By contrast, for $\sigma_0\rightarrow 0$ the bound for the frequentist algorithm becomes constant: $$3\sum_{r\in[R]}\sum_{i\in[K]}\sum_{j>i}\exp\left(-\frac{1}{4}\frac{n{2^r}}{{K\log_2(K)}\sigma^2}(\nu_i-\nu_j)^2\right).$$
In the case of the frequentist algorithm, the preceding discussion of taking $\sigma_0\rightarrow 0$ applies only to an upper bound on its expected probability of misidentification. In \cref{sec:lb}, we present a formal lower bound on the expected probability of misidentification of any frequentist policy applied to the Bayesian setting, which clearly shows the loss incurred by ignoring prior information.
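As a concrete illustration of this comparison (ours, not from the paper, with arbitrary parameter choices and round index $r=1,\dots,R$), the following snippet evaluates both upper bounds and confirms that the \ensuremath{\tt BayesElim}\xspace bound never exceeds the frequentist one:
\begin{verbatim}
# Evaluate the two upper bounds of the preceding theorem and corollary.
import numpy as np

def bounds(n, K, sigma, sigma0, nu):
    R = int(np.ceil(np.log2(K)))
    bayes = freq = 0.0
    for r in range(1, R + 1):
        c = (K * np.log2(K) / 2 ** r) * sigma ** 2  # per-round variance term
        pref = 3.0 / np.sqrt(n * sigma0 ** 2 / sigma ** 2
                             * 2 ** r / (K * np.log2(K)) + 1)
        for i in range(K):
            for j in range(i + 1, K):
                g = (nu[i] - nu[j]) ** 2 / (4 * sigma0 ** 2)
                bayes += pref * np.exp(-(n * sigma0 ** 2 + c)
                                       / (n * sigma0 ** 2) * g)
                freq += pref * np.exp(-n * sigma0 ** 2
                                      / (n * sigma0 ** 2 + c) * g)
    return bayes, freq

print(bounds(n=200, K=8, sigma=1.0, sigma0=0.5, nu=np.linspace(0, 1, 8)))
# The first value (BayesElim) is never larger than the second (frequentist).
\end{verbatim}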
In the rest of this section we outline the proof of our main result, \cref{thm:bound_with_integral}.
\subsection{Proof Sketch of \cref{thm:bound_with_integral}}
To prove \cref{thm:bound_with_integral}, we first consider the parameter vector $\bm{\mu}$ fixed. For any fixed round $r\in [R]$, in \cref{lem:posterior_mean_concentration} we bound the probability that the posterior mean of some suboptimal arm $i$ is larger than the posterior mean of the optimal arm of $\bm{\mu}$. Then, for any fixed $r\in[R]$ and $\bm{\mu}$, in \cref{lem:r_elimination} we bound the probability that the optimal arm is eliminated at round $r$. Subsequently, we put things together and bound the probability of error in instance $\bm{\mu}$. Up to this point, our analysis is frequentist-like and treats the parameter vector as fixed. Finally, in \cref{lem:final_ub}, we integrate out the randomness of $\bm{\mu}$.
To simplify the analysis, we ignore errors due to rounding. Note that the set of active arms in any round $r\in [R]$, i.e. $S_r$, is a random variable that depends on the reward realizations as well as the randomness in the parameters $\bm{\mu}$ of the reward distributions of the arms. For any fixed set of parameters $\bm{\mu}$ and round $r\in[R]$, the following lemma bounds the probability that the posterior mean $\bar\mu_{i_*,n_{r,{i_*}}}$ of the best arm $i_*$ of $\bm{\mu}$ is smaller than the posterior mean $\bar\mu_{i,n_{r,i}}$ of some arm $i\in S_r$:
\begin{restatable}{lemma}{lemPostManConc}\label{lem:posterior_mean_concentration}
Fix instance $\bm{\mu}$ and round $r \in [R]$. Suppose that $i_* \in S_r$. Then, for any $i \in S_r$ we have
\begin{align*}
&\prob{\bar\mu_{i,n_{r,i}}>\bar\mu_{i_*,n_{r,i_*}}|\bm{\mu}} \\
&\leq\exp\left(-\frac{n}{4R\cdot \Sigma_r }\left(R\cdot\Sigma_r\frac{\nu_{i_*}-\nu_i}{n\sigma_0^2} + \mu_{i_*}- \mu_i\right)^2\right).
\end{align*}
\end{restatable}
Observe that as $n$ grows, the bias due to prior information, $\frac{(\nu_{i_*}-\nu_i)}{\sigma_0^2}$, contributes less to the above bound. This is in line with the intuition that as the number of samples grows, prior information should become less important.
For a fixed parameter instance $\bm{\mu}$, we next bound the probability that the optimal arm is eliminated at some round $r\in[R]$. This bound follows from \cref{lem:posterior_mean_concentration} and the fact that if arm $i_*$ is eliminated at round $r$, then at least $|S_r|/2$ arms have larger posterior means than $i_*$ at the end of the round. The result follows:
\begin{restatable}{lemma}{lemRElimination}\label{lem:r_elimination}
Fix instance $\bm{\mu}$ and round $r\in [R]$. Then, there exists some $j_r\in S_r$ such that the probability that $i_*$ is eliminated at round $r$ satisfies
\begin{align*}
&\prob{i_* \not \in S_{r+1}| \{i_* \in S_r\}, \bm{\mu}}\\
&\leq 3\exp\left(-\frac{n}{4 R\cdot \Sigma_r}\left(\frac{R\cdot\Sigma_r}{n\sigma_0^2}(\nu_{i_*}-\nu_{j_r}) + \Delta_{j_r}\right)^2\right).
\end{align*}
\end{restatable}
Putting together the above lemmas, the expected probability of misidentification can be bounded as follows
\begin{align}\label{eq:regret_bound_1}
&\Ex{\bm{\mu}}{\prob{J\neq i_*|\bm{\mu}}} \nonumber\\
&= \int_{\bm{\mu}} \prob{J\neq i_*|\bm{\mu}} \mathbb{P}(\bm{\mu}) \,d\bm{\mu} \nonumber\\
&\leq \int_{\bm{\mu}} \sum_{r\in [R]}\prob{i_*(\bm{\mu}) \not \in S_{r+1}| \bm{\mu}, \{i_*(\bm{\mu}) \in S_r\}} \mathbb{P}(\bm{\mu}) \,d\bm{\mu} \nonumber\\
&\leq \sum_{r\in [R]} \int_{\bm{\mu}}
\exp\left(-\frac{n}{4 R\cdot \Sigma_r}\left(\frac{R\cdot\Sigma_r}{n\sigma_0^2}(\nu_{i_*}-\nu_{j_r}) + \Delta_{j_r}\right)^2\right)\nonumber\\
&\quad \cdot 3 \mathbb{P}(\bm{\mu}) \,d\bm{\mu} ,
\end{align}
where the last inequality is due to \cref{lem:r_elimination}.
Now, as noted before, $i_*, j_r$ are random quantities that depend on the instance $\bm{\mu}$. For any fixed $r\in[R]$, we can rewrite the above integral by grouping the instances according to the values of $i_*$ and $j_r$ in $[K]$ and then upper bound the integral to show the following:
\begin{restatable}{lemma}{lemFinalUBound}\label{lem:final_ub}
We have that
\begin{align*}
&\int_{\bm{\mu}}
e^{-\frac{n}{4 R\cdot \Sigma_r}\left(\frac{R\cdot\Sigma_r}{n\sigma_0^2}(\nu_{i_*}-\nu_{j_r}) + \Delta_{j_r}\right)^2}
\mathbb{P}(\bm{\mu}) \,d\bm{\mu} \\
&\leq \sqrt{\frac{1}{n\frac{\sigma_0^2}{R\cdot \Sigma_r}+1}} \sum_{i\in[K]}\sum_{j>i} e^{-\frac{n\sigma_0^2+R\cdot\Sigma_r}{n \sigma_0^2}\frac{(\nu_i-\nu_j)^2}{4\sigma_0^2}}.
\end{align*}
\end{restatable}
This completes the sketch.
\qed
\section{Proofs of \cref{sec:ub_analysis}}\label{app}
\begin{theorem}[Hoeffding's Inequality for Subgaussian Random Variables (from \cite{hoeffding63})]\label{thm:hoeffding}
If $X_1,...,X_m\sim \mathcal{N}(\mu,\sigma^2)$ then for any $i\in[m]$:
\begin{align*}
&\Prob{}{X_i\geq \mu+\epsilon} \leq \exp\left(-\frac{\epsilon^2}{2\sigma^2}\right)
\text{ and } \Prob{}{\frac{1}{m}\sum_{i\in[m]}X_i\geq \mu+\epsilon} \leq \exp\left(-\frac{m\epsilon^2}{2\sigma^2}\right).
\end{align*}
\end{theorem}
\begin{theorem}[Bretagnolle–Huber inequality (from \cite{Bretagnolle1978EstimationDD})]\label{thm:huber}
Let $\mathbb{P}$ and $\mathbb{Q}$ be probability
measures on the same measurable space and $A$ a measurable event. Then,
\begin{align*}
\mathbb{P}(A) + \mathbb{Q}(A^c) \geq \frac{1}{2} \exp(-d_{KL}(\mathbb{P},\mathbb{Q}))
\end{align*}
where $A^c$ is the complement of $A$ and $d_{KL}(\mathbb{P},\mathbb{Q}) = \int_{-\infty}^{\infty} \log \left(\frac{\,d \mathbb{P}(x)}{\,d \mathbb{Q}(x)}\right) \,d \mathbb{P}(x)$.
\end{theorem}
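As a quick illustration (ours, not from the paper), the inequality can be checked numerically for two unit-variance Gaussians $\mathbb{P}=\mathcal{N}(0,1)$ and $\mathbb{Q}=\mathcal{N}(d,1)$, for which $d_{KL}(\mathbb{P},\mathbb{Q})=d^2/2$, with the likelihood-ratio event $A=\{x\geq d/2\}$:
\begin{verbatim}
# Check Bretagnolle-Huber for P = N(0,1), Q = N(d,1), A = {x >= d/2}.
# Here P(A) = Q(A^c) = Phi(-d/2) and d_KL(P,Q) = d^2 / 2.
from math import erf, exp, sqrt

def Phi(z):  # standard normal CDF
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

for d in (0.5, 1.0, 2.0, 4.0):
    lhs = 2.0 * Phi(-d / 2.0)      # P(A) + Q(A^c)
    rhs = 0.5 * exp(-d * d / 2.0)  # (1/2) exp(-d_KL)
    print(d, lhs >= rhs, round(lhs, 4), round(rhs, 4))
\end{verbatim}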
\lemPostManConc*
\begin{proof}
The posterior distribution of any arm $i$ given $n_{r,i}$ samples, $X_{i,1},...,X_{i,n_{r,i}}$, is
$\mathcal{N}\left(\bar\mu_{i,n_{r,i}},\bar\sigma_{i,n_{r,i}}^2\right)$
where
$$\bar\sigma_{i,n_{r,i}}^2 = \left(\frac{1}{\sigma_{0}^2}+\frac{n_{r,i}}{\sigma_i^2}\right)^{-1}
\text{ and }~
\bar\mu_{i,n_{r,i}} = \bar\sigma_{i,n_{r,i}}^2 \left(\frac{\nu_i}{\sigma_{0}^2}+\sum_{s\in [n_{r,i}]} \frac{X_{i,s}}{\sigma_i^2}\right).$$
We consider the probability that the posterior mean of some arm $i\in S_r$ is larger than the posterior mean of arm $i_*$ for a fixed parameter vector $\bm{\mu}$. We have that:
\begin{align}\label{eq:concentration_2}
\prob{\bar\mu_{i,n_{r,i}} > \bar\mu_{i_*,n_{r,i_*}}|\bm{\mu}}
&= \prob{ \bar\sigma_{i,n_{r,i}}^2 \left(\frac{\nu_i}{\sigma_{0}^2}+\sum_{s\in [n_{r,i}]} \frac{X_{i,s}}{\sigma_i^2}\right) > \bar\sigma_{i_*,n_{r,i_*}}^2 \left(\frac{\nu_{i_*}}{\sigma_{0}^2}+\sum_{s\in [n_{r,i_*}]} \frac{X_{i_*,s}}{\sigma_{i_*}^2}\right) ~|\bm{\mu}}.
\end{align}
Notice that for the values of $n_{r,i}$ selected by our algorithm, we have:
\begin{align*}
\frac{\bar\sigma_{i,n_{r,i}}^2}{\sigma_i^2}
&= \frac{1}{\sigma_i^2} \left(\frac{1}{\frac{1}{\sigma_{0}^2}+\frac{n_{r,i}}{\sigma_i^2}}\right)
= \frac{1}{\sigma_i^2} \left(\frac{1}{\frac{1}{\sigma_{0}^2}+\frac{n \sigma_i^2}{R\cdot\sigma_i^2\Sigma_r} }\right)
= \frac{\sigma_{0}^2}{\sigma_i^2} \left(\frac{1}{1+\frac{n\sigma_0^2}{R\cdot\Sigma_r} }\right).
\end{align*}
Also, we have that:
\begin{align}\label{eq:tmp_s3}
n_{r,i}\cdot \frac{\bar\sigma_{i,n_{r,i}}^2}{\sigma_i^2}
&= {\sigma_i^2}\frac{n}{R\cdot\Sigma_r} \cdot \frac{\sigma_{0}^2}{\sigma_i^2} \left(\frac{1}{1+\frac{n\sigma_0^2}{R\cdot\Sigma_r} }\right)
= \left(\frac{\frac{n\sigma_0^2}{R\cdot\Sigma_r}}{1+\frac{n\sigma_0^2}{R\cdot\Sigma_r} }\right) = \frac{\frac{n\sigma_0^2}{R}}{\Sigma_r+\frac{n\sigma_0^2}{R}}.
\end{align}
Then, for the probability in \cref{eq:concentration_2} that the posterior mean of some arm $i\in S_r$ is larger than the posterior mean of arm $i_*$, for a fixed parameter vector $\bm{\mu}$, we have:
\begin{align}\label{eq:tmp_s5}
&\prob{\bar\mu_{i,n_{r,i}} > \bar\mu_{i_*,n_{r,i_*}}|\bm{\mu}}
= \prob{ \bar\sigma_{i,n_{r,i}}^2 \left(\frac{\nu_i}{\sigma_{0}^2}+\sum_{s\in [n_{r,i}]} \frac{X_{i,s}}{\sigma_i^2}\right) > \bar\sigma_{i_*,n_{r,i_*}}^2 \left(\frac{\nu_{i_*}}{\sigma_{0}^2}+\sum_{s\in [n_{r,i_*}]} \frac{X_{i_*,s}}{\sigma_{i_*}^2}\right) ~|\bm{\mu}} \nonumber \\
&= \prob{\frac{\bar\sigma_{i,n_{r,i}}^2}{\sigma_i^2}\sum_{s\in [n_{r,i}]} X_{i,s} - \frac{\bar\sigma_{i_*,n_{r,i_*}}^2}{\sigma_{i_*}^2}\sum_{s\in [n_{r,i_*}]} X_{i_*,s}> \bar\sigma_{i_*,n_{r,i_*}}^2\frac{\nu_{i_*}}{\sigma_{0}^2} - \bar\sigma_{i,n_{r,i}}^2\frac{\nu_i}{\sigma_{0}^2} ~|\bm{\mu}} \nonumber \\
&= \mathbb{P}\Bigg( \frac{\bar\sigma_{i,n_{r,i}}^2}{\sigma_i^2}\sum_{s\in [n_{r,i}]} (X_{i,s}-\mu_i) - \frac{\bar\sigma_{i_*,n_{r,i_*}}^2}{\sigma_{i_*}^2}\sum_{s\in [n_{r,i_*}]} (X_{i_*,s}-\mu_{i_*})> \nonumber\\
& \qquad\qquad \bar\sigma_{i_*,n_{r,i_*}}^2\frac{\nu_{i_*}}{\sigma_{0}^2} - \bar\sigma_{i,n_{r,i}}^2\frac{\nu_i}{\sigma_{0}^2} + \frac{\bar\sigma_{i_*,n_{r,i_*}}^2}{\sigma_{i_*}^2}n_{r,i_*} \mu_{i_*} - \frac{\bar\sigma_{i,n_{r,i}}^2}{\sigma_i^2}n_{r,i}\mu_i ~|\bm{\mu} \Bigg) \nonumber \\
&= \mathbb{P}\Bigg(\frac{\bar\sigma_{i,n_{r,i}}^2}{\sigma_i^2}\sum_{s\in [n_{r,i}]} (X_{i,s}-\mu_i) - \frac{\bar\sigma_{i_*,n_{r,i_*}}^2}{\sigma_{i_*}^2}\sum_{s\in [n_{r,i_*}]} (X_{i_*,s}-\mu_{i_*})> \nonumber\\
&\qquad\qquad \frac{n_{r,i_*}\bar\sigma_{i_*,n_{r,i_*}}^2}{\sigma_{i_*}^2}\left(\frac{\sigma_{i_*}^2 \nu_{i_*}}{\sigma_{0}^2n_{r,i_*}} + \mu_{i_*}\right)- \frac{n_{r,i}\bar\sigma_{i,n_{r,i}}^2}{\sigma_i^2}\left(\frac{\sigma_i^2 \nu_i}{\sigma_{0}^2n_{r,i}} + \mu_i\right) ~|\bm{\mu} \Bigg) \nonumber \\
&= \mathbb{P}\Bigg(\frac{\bar\sigma_{i,n_{r,i}}^2}{\sigma_i^2}\sum_{s\in [n_{r,i}]} (X_{i,s}-\mu_i) - \frac{\bar\sigma_{i_*,n_{r,i_*}}^2}{\sigma_{i_*}^2}\sum_{s\in [n_{r,i_*}]} (X_{i_*,s}-\mu_{i_*})>\nonumber\\
&\qquad\qquad\frac{\frac{n\sigma_0^2}{R}}{\Sigma_r+\frac{n\sigma_0^2}{R}}\left(\left(\frac{\sigma_{i_*}^2 \nu_{i_*}}{\sigma_{0}^2n_{r,i_*}} + \mu_{i_*}\right)- \left(\frac{\sigma_i^2 \nu_i}{\sigma_{0}^2n_{r,i}} + \mu_i\right)\right) ~|\bm{\mu}\Bigg)\nonumber\\
&= \prob{\frac{\bar\sigma_{i,n_{r,i}}^2}{\sigma_i^2}\sum_{s\in [n_{r,i}]} (X_{i,s}-\mu_i) - \frac{\bar\sigma_{i_*,n_{r,i_*}}^2}{\sigma_{i_*}^2}\sum_{s\in [n_{r,i_*}]} (X_{i_*,s}-\mu_{i_*})> \frac{\frac{n\sigma_0^2}{R}}{\Sigma_r+\frac{n\sigma_0^2}{R}}\left(\frac{R\cdot\Sigma_r}{n\sigma_0^2}(\nu_{i_*}-\nu_i) + \mu_{i_*}- \mu_i\right) ~|\bm{\mu}}
\end{align}
where the second-to-last equality holds because, according to \cref{eq:tmp_s3}, we have $\frac{n_{r,i_*}\bar\sigma_{i_*,n_{r,i_*}}^2}{\sigma_{i_*}^2}=\frac{n_{r,i}\bar\sigma_{i,n_{r,i}}^2}{\sigma_i^2}=\frac{\frac{n\sigma_0^2}{R}}{\Sigma_r+\frac{n\sigma_0^2}{R}}$, and the last equality follows by substituting the expressions for $n_{r,i}, n_{r,i_*}$ and reordering terms.
Since $X_{i,s}\sim\mathcal{N}(\mu_i,\sigma_i^2)$, the sum
$\frac{\bar\sigma_{i,n_{r,i}}^2}{\sigma_i^2}\sum_{s\in [n_{r,i}]} (X_{i,s}-\mu_i)$
is a zero mean Gaussian random variable with variance:
\begin{align*}
\frac{\bar\sigma_{i,n_{r,i}}^4}{\sigma_i^4} \cdot n_{r,i} \cdot \sigma_i^2
= \frac{\sigma_{0}^4}{\sigma_i^4} \frac{1}{(1+\frac{n\sigma_0^2}{R\cdot\Sigma_r} )^2} \cdot \frac{n}{R} \frac{\sigma_i^2}{\Sigma_r} \cdot \sigma_i^2
= \sigma_{0}^2 \frac{\frac{n\sigma_0^2}{R\cdot\Sigma_r}}{(1+\frac{n\sigma_0^2}{R\cdot\Sigma_r} )^2} .
\end{align*}
Therefore, the difference on the LHS of the above inequality: $\frac{\bar\sigma_{i,n_{r,i}}^2}{\sigma_i^2}\sum_{s\in [n_{r,i}]} (X_{i,s}-\mu_i) - \frac{\bar\sigma_{i_*,n_{r,i_*}}^2}{\sigma_{i_*}^2}\sum_{s\in [n_{r,i_*}]} (X_{i_*,s}-\mu_{i_*})$, is a zero mean Gaussian with variance:
\begin{align*}
\frac{\bar\sigma_{i,n_{r,i}}^4}{\sigma_i^4} \cdot n_{r,i} \cdot \sigma_i^2 + \frac{\bar\sigma_{i_*,n_{r,i_*}}^4}{\sigma_{i_*}^4} \cdot n_{r,i_*} \cdot \sigma_{i_*}^2
= 2\sigma_{0}^2 \frac{\frac{n\sigma_0^2}{R\cdot\Sigma_r}}{(1+\frac{n\sigma_0^2}{R\cdot\Sigma_r} )^2}.
\end{align*}
Thus, by Hoeffding's inequality (\cref{thm:hoeffding}) we get that:
\begin{align*}
\cref{eq:tmp_s5}
&\leq \exp\left(-\frac{\frac{\left(\frac{n\sigma_0^2}{R}\right)^2}{(\Sigma_r+\frac{n\sigma_0^2}{R})^2}\left(\frac{R\cdot\Sigma_r}{n\sigma_0^2}(\nu_{i_*}-\nu_i) + \mu_{i_*}- \mu_i\right)^2 }{4\sigma_0^2 \frac{\frac{n\sigma_0^2}{R\cdot\Sigma_r}}{(1+\frac{n\sigma_0^2}{R\cdot\Sigma_r} )^2}}\right)
= \exp\left(-\frac{n}{4R\cdot \Sigma_r }\left(\frac{R\cdot\Sigma_r}{n\sigma_0^2}(\nu_{i_*}-\nu_i) + \mu_{i_*}- \mu_i\right)^2\right).
\end{align*}
In the case that $\sigma_i=\sigma$ for all $i\in[K]$, the per-phase budget is distributed equally among all active arms, i.e. $n_{r,i} = \frac{n}{R|S_r|} = n_r$ for all $i\in S_r$, and thus:
\begin{align*}
\cref{eq:concentration_2}
&\leq \prob{\sum_{s\in [\nr{}]} \frac{X_{i,s}-X_{i_*,s}}{\sigma^2} - \nr{} \frac{\mu_i-\mu_{i_*}}{\sigma^2} > \frac{\nu_{i_*}-\nu_{i}}{\sigma_{0}^2} - \nr{} \frac{\mu_i-\mu_{i_*}}{\sigma^2} ~|\bm{\mu}}\\
&= \prob{\sum_{s\in [\nr{}]} ({X_{i,s}-X_{i_*,s}-(\mu_i-\mu_{i_*})}) > \sigma^2\frac{\nu_{i_*}-\nu_{i}}{\sigma_{0}^2} + \nr{} (\mu_{i_*}-\mu_{i}) ~|\bm{\mu}}\\
&\leq \exp\left(-\frac{1}{4}\frac{\left(\sigma^2\frac{\nu_{i_*}-\nu_{i}}{\sigma_{0}^2} + \nr{} (\mu_{i_*}-\mu_{i})\right)^2}{\sigma^2\nr{}}\right) = \exp\left(-\frac{1}{4\sigma^2}\nr{} \left(\sigma^2\frac{\nu_{i_*}-\nu_{i}}{\sigma_{0}^2 \nr{}} + \Delta_i\right)^2\right) ,
\end{align*}
where the last inequality is due to the fact that
${X_{i,s}-X_{i_*,s}-(\mu_i-\mu_{i_*})}\sim \mathcal{N}(0,2\sigma^2)$.
\end{proof}
\lemRElimination*
\begin{proof}
We define $S'_r$ to be the set of $\frac{3|S_r|}{4}$ arms in $S_r$ with smallest posterior means.
Let us consider the following quantity: \begin{align}\label{eq:tmp_s6}
\E{\sum_{i\in S'_r} \mathbb{I}(\bar\mu_{i,n_{r,i}} >\bar\mu_{i_*,n_{r,i_*}}) | \bm{\mu}, \{i_* \in S_r\}}
&= \sum_{i\in S'_r} \prob{ \bar\mu_{i,n_{r,i}} > \bar\mu_{i_*,n_{r,i_*}} | \bm{\mu}, \{i_* \in S_r\}} \nonumber\\
&\leq \sum_{i\in S'_r} \exp\left(-\frac{n}{4R\cdot \Sigma_r }\left(\frac{R\cdot\Sigma_r}{n\sigma_0^2}(\nu_{i_*}-\nu_i) + \mu_{i_*}- \mu_i\right)^2\right),
\end{align}
where the inequality is due to \cref{lem:posterior_mean_concentration}.
Let $j_r\in S_r$ be such that for all $i\in S'_r$ we have $\left(\frac{R\cdot\Sigma_r}{n\sigma_0^2}(\nu_{i_*}-\nu_i) + \mu_{i_*}- \mu_i\right)^2 \geq \left(\frac{R\cdot\Sigma_r}{n\sigma_0^2}(\nu_{i_*}-\nu_{j_r}) + \mu_{i_*}- \mu_{j_r}\right)^2$. We can upper bound \cref{eq:tmp_s6} as follows:
\begin{align}\label{eq:tmp_s7}
\cref{eq:tmp_s6}
&\leq \sum_{i\in S'_r} \exp\left(-\frac{n}{4 R \cdot\Sigma_r}\left(\frac{R\cdot\Sigma_r}{n\sigma_0^2}(\nu_{i_*}-\nu_{j_r}) + \Delta_{j_r}\right)^2\right) \nonumber\\
&=|S'_r| \exp\left(-\frac{n}{4 R\cdot \Sigma_r}\left(\frac{R\cdot\Sigma_r}{n\sigma_0^2}(\nu_{i_*}-\nu_{j_r}) + \Delta_{j_r}\right)^2\right) .
\end{align}
Now, if the best arm is eliminated at the end of round $r$, then at least $\frac{|S'_r|}{3} = \frac{|S_r|}{4}$ arms in $S'_r$ must have larger posterior means than $i_*$. Using this fact we get:
\begin{align*}
&\prob{i_*(\bm{\mu}) \not \in S_{r+1}| \bm{\mu}, \{i_*(\bm{\mu}) \in S_r\}} \\
&\leq \prob{ \sum_{i\in S'_r} \mathbb{I}( \bar\mu_{i,n_{r,i}} >\bar\mu_{i_*,n_{r,i_*}}) > \frac{|S'_r|}{3} ~\Big|~ \bm{\mu}, \{i_*(\bm{\mu}) \in S_r\}} & {} \\
&\leq \frac{3\cdot \E{\sum_{i\in S'_r} \mathbb{I}( \bar\mu_{i,n_{r,i}} >\bar\mu_{i_*,n_{r,i_*}}) | \bm{\mu}, \{i_*(\bm{\mu}) \in S_r\}}}{|S'_r|} &\text{(by Markov's inequality)}\\
&\leq 3\exp\left(-\frac{n}{4 R\cdot \Sigma_r}\left(\frac{R\cdot\Sigma_r}{n\sigma_0^2}(\nu_{i_*}-\nu_{j_r}) + \Delta_{j_r}\right)^2\right). &\text{(by \cref{eq:tmp_s7})}
\end{align*}
For equal variances, we instead let $j_r\in S_r$ be such that for all $i\in S'_r$ we have $\left(\sigma^2\frac{\nu_{i_*}-\nu_{i}}{\sigma_{0}^2 \nr{}} + \Delta_{i}\right)\geq \left(\sigma^2\frac{\nu_{i_*}-\nu_{j_r}}{\sigma_{0}^2 \nr{}} + \Delta_{j_r}\right)$. Then we have that:
\begin{align}\label{eq:Sr_total_bound_1}
\E{\sum_{i\in S'_r} \mathbb{I}(\bar\mu_{i,n_{r,i}} >\bar\mu_{i_*,n_{r,i_*}}) | \bm{\mu}, \{i_* \in S_r\}} &= \sum_{i\in S'_r} \prob{ \bar\mu_{i,n_{r,i}} > \bar\mu_{i_*,n_{r,i_*}} | \bm{\mu}, \{i_* \in S_r\}} \nonumber\\
&\leq \sum_{i\in S'_r} \exp\left(-\frac{1}{4\sigma^2}\nr{} \left(\sigma^2\frac{\nu_{i_*}-\nu_{i}}{\sigma_{0}^2 \nr{}} + \Delta_i\right)^2\right) \nonumber\\
&\leq |S'_r| \exp\left(-\frac{1}{4\sigma^2}\nr{} \left(\sigma^2\frac{\nu_{i_*}-\nu_{j_r}}{\sigma_{0}^2 \nr{}} + \Delta_{j_r} \right)^2\right)
\end{align}
where the first inequality is due to \cref{lem:posterior_mean_concentration} and the second uses the definition of $j_r$.
Similarly, we get:
\begin{align*}
\prob{i_*(\bm{\mu}) \not \in S_{r+1}| \bm{\mu}, \{i_*(\bm{\mu}) \in S_r\}}
&\leq \frac{3\cdot \E{\sum_{i\in S'_r} \mathbb{I}( \bar\mu_{i,n_{r,i}} >\bar\mu_{i_*,n_{r,i_*}}) | \bm{\mu}, \{i_*(\bm{\mu}) \in S_r\}}}{|S'_r|} &\text{(by Markov's inequality)}\\
&\leq 3\exp\left(-\frac{1}{4\sigma^2}\nr{} \left(\sigma^2\frac{\nu_{i_*}-\nu_{j_r}}{\sigma_{0}^2 \nr{}} + \Delta_{j_r} \right)^2\right) &\text{(by \cref{eq:Sr_total_bound_1})}
\end{align*}
\end{proof}
\lemFinalUBound*
\begin{proof}
We have that:
\begin{align*}
&\int_{\bm{\mu}}
\exp\left(-\frac{n}{4 R\cdot \Sigma_r}\left(\frac{R\cdot\Sigma_r}{n\sigma_0^2}(\nu_{i_*}-\nu_{j_r}) + \Delta_{j_r}\right)^2\right)
\mathbb{P}_{\bm{Q}}(\bm{\mu}) \,d\bm{\mu} =\\
&= \sum_{i\in[K]}\sum_{j>i}
{\Bigg[}\int_{\bm{\mu}~:~ i_*=i,j_r=j}
\exp\left(-\frac{n}{4 R\cdot \Sigma_r}\left(\frac{R\cdot\Sigma_r}{n\sigma_0^2}(\nu_{i}-\nu_{j}) + \mu_i-\mu_j\right)^2\right)
\mathbb{P}_{\bm{Q}}(\bm{\mu}) \,d\bm{\mu} \\
&\qquad\qquad\qquad +
\int_{\bm{\mu}~:~ i_*=j,j_r=i}
\exp\left(-\frac{n}{4 R\cdot \Sigma_r}\left(\frac{R\cdot\Sigma_r}{n\sigma_0^2}(\nu_{j}-\nu_{i}) + \mu_j-\mu_i\right)^2\right)
\mathbb{P}_{\bm{Q}}(\bm{\mu}) \,d\bm{\mu} {\Bigg]}\\
&\leq \sum_{i\in[K]}\sum_{j>i}
{\Bigg[}\int_{\bm{\mu}~:~ \mu_i\geq \mu_j}
\exp\left(-\frac{n}{4 R\cdot \Sigma_r}\left(\frac{R\cdot\Sigma_r}{n\sigma_0^2}(\nu_{i}-\nu_{j}) + \mu_i-\mu_j\right)^2\right)
\mathbb{P}_{\bm{Q}}(\bm{\mu}) \,d\bm{\mu} \\
&\qquad\qquad\qquad +
\int_{\bm{\mu}~:~ \mu_i<\mu_j}
\exp\left(-\frac{n}{4 R\cdot \Sigma_r}\left(\frac{R\cdot\Sigma_r}{n\sigma_0^2}(\nu_{j}-\nu_{i}) + \mu_j-\mu_i\right)^2\right)
\mathbb{P}_{\bm{Q}}(\bm{\mu}) \,d\bm{\mu} {\Bigg]}\\
&= \sum_{i\in[K]}\sum_{j>i}
\int_{\bm{\mu}}
\exp\left(-\frac{n}{4 R\cdot \Sigma_r}\left(\frac{R\cdot\Sigma_r}{n\sigma_0^2}(\nu_{i}-\nu_{j}) + \mu_i-\mu_j\right)^2\right)
\mathbb{P}_{\bm{Q}}(\bm{\mu}) \,d\bm{\mu} &\text{ (by symmetry)}\\
&= \sqrt{\frac{1}{n\frac{\sigma_0^2}{R\cdot \Sigma_r}+1}} \sum_{i\in[K]}\sum_{j>i} \exp\left(-\frac{1}{4\sigma_0^2}\frac{n\sigma_0^2+R\cdot\Sigma_r}{n \sigma_0^2}(\nu_i-\nu_j)^2\right). &\text{ (by \cref{lem:gaussuan_integration_1} in the Appendix)}
\end{align*}
\end{proof}
\thmFreqUBound*
\begin{proof}
This proof follows the lines of \cref{thm:bound_with_integral}. To simplify the analysis, we ignore errors due to rounding. Note that the set of active arms in any round $r\in [R]$, i.e. $S_r$, is a random variable that depends on the reward realizations as well as the randomness in the parameters $\bm{\mu}$ of the reward distributions of the arms. We first consider the parameter vector $\bm{\mu}$ fixed. For any fixed round $r\in [R]$, \cref{lem:posterior_mean_concentration_freq} bounds the probability that the empirical mean of some suboptimal arm $i$ is larger than the empirical mean of the optimal arm of $\bm{\mu}$. Recall that when $\sigma_i=\sigma$ for all $i\in[K]$ for some $\sigma>0$, the per-phase budget is distributed equally among active arms, i.e. $\nr{i}=\nr{}=\frac{n}{R|S_r|},\forall i\in S_r$. Let $\hat\mu_{i,n_r}$ be the empirical mean from $n_r$ samples of arm $i$. The following is an adaptation of Lemma $4.2$ of \citep{Karnin2013AlmostOE} for Gaussian reward distributions:
\begin{lemma}[Adapted Lemma $4.2$ of \citep{Karnin2013AlmostOE}]\label{lem:posterior_mean_concentration_freq}
Fix instance $\bm{\mu}$ and round $r \in [R]$. Suppose that $i_* \in S_r$. Then, when $\sigma_i=\sigma$ for all $i\in[K]$ for some $\sigma>0$, for any $i \in S_r$:
\begin{align*}
\prob{\hat\mu_{i,n_r}>\hat\mu_{i_*,n_r}|\bm{\mu}} \leq \exp\left(-\frac{1}{4\sigma^2}\nr{} \Delta_i^2\right) .
\end{align*}
\end{lemma}
Then, continuing along the lines of the frequentist proof, for any fixed $r\in[R]$ and $\bm{\mu}$, in \cref{lem:r_elimination_freq} we bound the probability that the optimal arm is eliminated at round $r$.
\begin{lemma}[Adapted Lemma $4.3$ of \citep{Karnin2013AlmostOE}]\label{lem:r_elimination_freq}
Fix instance $\bm{\mu}$ and round $r\in [R]$. Then, when $\sigma_i=\sigma$ for all $i\in[K]$ for some $\sigma>0$, there exists some $j_r\in S_r$ such that the probability that $i_*$ is eliminated at $r$ satisfies:
\begin{align*}
\prob{i_* \not \in S_{r+1}| \{i_* \in S_r\}, \bm{\mu}}
\leq 3\exp\left(-\frac{1}{4\sigma^2}\nr{} \Delta_{j_r}^2\right).
\end{align*}
\end{lemma}
Up to this point the above analysis is frequentist and imitates that of \citet{Karnin2013AlmostOE} in the frequentist setting. Finally, in the following, we deal with the randomness in $\bm{\mu}$.
\begin{align*}
&\Ex{\bm{\mu}}{\prob{J\neq i_*|\bm{\mu}}} \\
&= \int_{\bm{\mu}} \prob{J\neq i_*|\bm{\mu}} \mathbb{P}(\bm{\mu}) \,d\bm{\mu} \nonumber\\
&\leq \int_{\bm{\mu}} \sum_{r\in [R]}\prob{i_*(\bm{\mu}) \not \in S_{r+1}| \bm{\mu}, \{i_*(\bm{\mu}) \in S_r\}} \mathbb{P}(\bm{\mu}) \,d\bm{\mu} \\
&\leq 3\sum_{r\in [R]} \int_{\bm{\mu}}
\exp\left(-\frac{n_r}{4 \sigma^2}\Delta_{j_r}^2\right) \cdot \mathbb{P}(\bm{\mu}) \,d\bm{\mu} ,
\end{align*}
where the last inequality is due to \cref{lem:r_elimination_freq}.
Now, as noted before, $i_*, j_r$ are random quantities that depend on the instance $\bm{\mu}$. For any fixed $r\in[R]$, we can rewrite the above integral by grouping the instances according to the realizations of $i_*$ and $j_r$, and then upper bound it as follows:
\begin{align*}
&\int_{\bm{\mu}}
\exp\left(-\frac{n_r}{4 \sigma^2}\Delta_{j_r}^2\right)
\mathbb{P}_{\bm{Q}}(\bm{\mu}) \,d\bm{\mu} =\\
&= \sum_{i\in[K]}\sum_{j>i}
{\Bigg[}\int_{\bm{\mu}~:~ i_*=i,j_r=j}
\exp\left(-\frac{n_r}{4\sigma^2}\left( \mu_i-\mu_j\right)^2\right)
\mathbb{P}_{\bm{Q}}(\bm{\mu}) \,d\bm{\mu} \\
&\qquad\qquad\qquad +
\int_{\bm{\mu}~:~ i_*=j,j_r=i}
\exp\left(-\frac{n_r}{4\sigma^2}\left( \mu_j-\mu_i\right)^2\right)
\mathbb{P}_{\bm{Q}}(\bm{\mu}) \,d\bm{\mu} {\Bigg]}\\
&\leq \sum_{i\in[K]}\sum_{j>i}
{\Bigg[}\int_{\bm{\mu}~:~ \mu_i\geq \mu_j}
\exp\left(-\frac{n_r}{4\sigma^2}\left( \mu_i-\mu_j\right)^2\right)
\mathbb{P}_{\bm{Q}}(\bm{\mu}) \,d\bm{\mu} \\
&\qquad\qquad\qquad +
\int_{\bm{\mu}~:~ \mu_i<\mu_j}
\exp\left(-\frac{n_r}{4\sigma^2}\left( \mu_j-\mu_i\right)^2\right)
\mathbb{P}_{\bm{Q}}(\bm{\mu}) \,d\bm{\mu} {\Bigg]}\\
&= \sum_{i\in[K]}\sum_{j>i}
\int_{\bm{\mu}}
\exp\left(-\frac{n_r}{4\sigma^2}\left( \mu_i-\mu_j\right)^2\right)
\mathbb{P}_{\bm{Q}}(\bm{\mu}) \,d\bm{\mu} &\text{ (by symmetry)}\\
&= \frac{1}{\sqrt{n\frac{\sigma_0^2}{\sigma^2}\frac{2^r}{K\log_{2}(K)}+1}} \sum_{i\in[K]}\sum_{j>i}\exp\left(-\frac{n\sigma_0^2}{n\sigma_0^2+\frac{K\log_2(K)}{2^r}\sigma^2}\frac{(\nu_i-\nu_j)^2}{4\sigma_0^2}\right) &\text{ (by \cref{lem:gaussuan_integration2} in the Appendix)}
\end{align*}
This completes the proof of \cref{thm:freq_bound}.
\end{proof}
\subsection{Gaussian Integrals}
\begin{lemma}\label{lem:gaussuan_integration_1}
Let $\mu_1\sim \mathcal{N}(\nu_1,\sigma_{0}^2), \mu_2\sim \mathcal{N}(\nu_2,\sigma_{0}^2)$ and $c_1,c_2\geq 0$. We have that:
\begin{align*}
\int_{\bm{\mu}} \exp\left(-\frac{c_2}{2\sigma_0^2} \left(c_1(\nu_2-\nu_1)
+ \mu_2-\mu_1 \right)^2\right) \mathbb{P}_{\bm{Q}}(\bm{\mu})\,d\bm{\mu}
= \frac{1}{\sqrt{2c_2+1}} e^{-\frac{c_2(c_1+1)^2(\nu_1-\nu_2)^2}{2(2c_2+1)\sigma_0^2}}
\end{align*}
\end{lemma}
\begin{proof}
We have that:
\begin{align*}
&\int_{\bm{\mu}} e^{-\frac{c_2}{2\sigma_0^2} \left(c_1(\nu_2-\nu_1)
+ \mu_2-\mu_1 \right)^2} \mathbb{P}_{\bm{Q}}(\bm{\mu})\,d\bm{\mu} =\\
&= \int_{\bm{\mu}} \frac{1}{2\pi\sigma_0^2}e^{-\frac{c_2}{2\sigma_0^2} \left(c_1(\nu_2-\nu_1) + \mu_2-\mu_1 \right)^2-\frac{(\mu_1-\nu_1)^2}{2\sigma_0^2}-\frac{(\mu_2-\nu_2)^2}{2\sigma_0^2}} \,d\bm{\mu} =\\
&= \int_{\mu_1} \frac{1}{\sqrt{2}\sqrt{{\pi}}\sqrt{c_2+1}\sigma_0} e^{-\frac{(c_1^2+2c_1+1)c_2\nu_2^2+((-2c_1^2-2c_1)c_2\nu_1+(-2c_1-2)c_2\mu_1)\nu_2+((c_1^2+1)c_2+1)\nu_1^2+((2c_1-2)c_2-2)\mu_1\nu_1+(2c_2+1)\mu_1^2}{(2c_2+2)\sigma_0^2}} \, d\mu_1\\
&= \frac{1}{\sqrt{2c_2+1}} e^{-\frac{c_2(c_1+1)^2(\nu_1-\nu_2)^2}{2(2c_2+1)\sigma_0^2}}
\end{align*}
\end{proof}
\begin{lemma}\label{lem:gaussuan_integration2}
Let $\mu_i\sim \mathcal{N}(\nu_i,\sigma_{0}^2), \mu_j\sim \mathcal{N}(\nu_j,\sigma_{0}^2)$ and $C\geq 0$. We have that:
\begin{align*}
\int_{\bm{\mu}}
\exp\left(-C (\mu_i-\mu_j)^2 \right)
\mathbb{P}_{\bm{Q}}(\bm{\mu}) \,d\bm{\mu}
= \sqrt{
\frac{1}{4C\sigma_0^2+1}}
\exp\left(-\frac{C(\nu_i-\nu_j)^2}{4\sigma_0^2C+1}\right)
\end{align*}
\end{lemma}
\begin{proof}
We will use the following Gaussian integral: for $a>0$ and $b,c\in\mathbb{R}$,
\begin{align}\label{eq:gauss_integr}
\int_{x=-\infty}^{+\infty} e^{-ax^2+bx+c} \,dx = \sqrt{\frac{\pi}{a}}e^{\frac{b^2}{4a}+c}.
\end{align}
\noindent
Now, we can compute the objective as follows:
\begin{align*}
&\int_{\bm{\mu}}
\exp\left(-C (\mu_i-\mu_j)^2 \right)
\mathbb{P}_{\bm{Q}}(\bm{\mu}) \,d\bm{\mu} =\\
&= \frac{1}{2\pi \sigma_0^2} \int_{\mu_i,\mu_j} \exp\left(-C (\mu_i-\mu_j)^2 \right) \exp\left(- \frac{(\nu_i-\mu_i)^2}{2\sigma_0^2} \right) \exp\left(- \frac{(\nu_j-\mu_j)^2}{2\sigma_0^2} \right) \,d\mu_i \,d\mu_j \\
&= \frac{1}{2\pi \sigma_0^2} \int_{\mu_i,\mu_j} \exp\left(-C (\mu_i-\mu_j)^2 - \frac{(\nu_i-\mu_i)^2}{2\sigma_0^2} - \frac{(\nu_j-\mu_j)^2}{2\sigma_0^2} \right) \,d\mu_i \,d\mu_j \\
&= \frac{1}{2\pi \sigma_0^2} \int_{\mu_i,\mu_j} \exp\left( -\mu_i^2\left(C+\frac{1}{2\sigma_0^2}\right)+ \mu_i 2\left(C\mu_j+\frac{\nu_i}{2\sigma_0^2}\right)+\left(-C\mu_j^2+\frac{2\nu_j\mu_j-\nu_i^2-\mu_j^2-\nu_j^2}{2\sigma_0^2}\right)\right) \,d\mu_i \,d\mu_j \\
&= \frac{1}{2\pi \sigma_0^2} \int_{\mu_j} \sqrt{\frac{\pi}{C+\frac{1}{2\sigma_0^2}}}\exp\left(\frac{\left(C\mu_j+\frac{\nu_i}{2\sigma_0^2}\right)^2}{\left(C+\frac{1}{2\sigma_0^2}\right)}+
\left(-C\mu_j^2+\frac{2\nu_j\mu_j-\nu_i^2-\mu_j^2-\nu_j^2}{2\sigma_0^2}\right)\right) \,d\mu_j ~~~~~\text{ using \cref{eq:gauss_integr}}\\
&= \sqrt{\frac{1}{4\pi\sigma_0^4C+2\pi\sigma_0^2}} \int_{\mu_j}
\exp\left(
-
\frac{\mu_j^2}{2\sigma_0^2}
\frac{2C+\frac{1}{2\sigma_0^2}}{C+\frac{1}{2\sigma_0^2}}
+
\frac{\mu_j}{2\sigma_0^2}
\frac{2\left(C\nu_i +(C+\frac{1}{2\sigma_0^2})\nu_j\right)}{C+\frac{1}{2\sigma_0^2}}
+\frac{1}{2\sigma_0^2}\frac{\left(-\frac{\nu_j^2}{2\sigma_0^2}-C(\nu_i^2+\nu_j^2)\right) }{C+\frac{1}{2\sigma_0^2}}
\right) \,d\mu_j \\
&= \sqrt{
\frac{1}{4C\sigma_0^2+1}}
\exp\left(-\frac{C(\nu_i-\nu_j)^2}{4\sigma_0^2C+1}\right) \hspace{8.3cm}\text{ using \cref{eq:gauss_integr}}
\end{align*}
\end{proof}
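The closed form above is easy to spot-check numerically; the following Monte Carlo estimate (our illustration, not from the paper) matches it:
\begin{verbatim}
# Monte Carlo check of E[exp(-C (mu_i - mu_j)^2)] for mu_i ~ N(nu_i, s0^2),
# mu_j ~ N(nu_j, s0^2), against the closed form of the lemma above.
import numpy as np

rng = np.random.default_rng(1)
C, s0, nu_i, nu_j = 3.0, 0.7, 0.5, -0.2
mu_i = rng.normal(nu_i, s0, 2_000_000)
mu_j = rng.normal(nu_j, s0, 2_000_000)
mc = np.mean(np.exp(-C * (mu_i - mu_j) ** 2))
closed = (np.exp(-C * (nu_i - nu_j) ** 2 / (4 * C * s0 ** 2 + 1))
          / np.sqrt(4 * C * s0 ** 2 + 1))
print(mc, closed)  # the two values agree to about three decimals
\end{verbatim}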
\section{Proofs of \cref{sec:lb}}\label{app:freq_lb}
\propMinProb*
\begin{proof}
Observe that for the conditional probabilities $\Prob{}{\mathcal{I}_{i_2}},\Prob{}{\mathcal{I}_{i_1}}$ we have that
\begin{align*}
\min(\Prob{}{\mathcal{I}_{i_2}},\Prob{}{\mathcal{I}_{i_1}}) =
\begin{cases}
\frac{\exp\left(-\frac{2(\mu_{i_1}-\mu_{i_2})(\nu_{i_1}-\nu_{i_2})}{2\sigma_0^2}\right)}
{\exp\left(-\frac{2(\mu_{i_1}-\mu_{i_2})(\nu_{i_1}-\nu_{i_2})}{2\sigma_0^2}\right)
+1}, & \text{ if } \nu_{i_1}>\nu_{i_2}\\
&\\
\frac{1}
{\exp\left(-\frac{2(\mu_{i_1}-\mu_{i_2})(\nu_{i_1}-\nu_{i_2})}{2\sigma_0^2}\right)
+1}, & \text{ if } \nu_{i_1}\leq\nu_{i_2}\\
\end{cases}
\end{align*}
Moreover, if $\nu_{i_1}\leq\nu_{i_2}$ then
\begin{align*}
\frac{1}
{\exp\left(-\frac{2(\mu_{i_1}-\mu_{i_2})(\nu_{i_1}-\nu_{i_2})}{2\sigma_0^2}\right)
+1}
\geq
\frac{1}
{2\exp\left(-\frac{2(\mu_{i_1}-\mu_{i_2})(\nu_{i_1}-\nu_{i_2})}{2\sigma_0^2}\right)}
=\frac{\exp\left(-\frac{2(\mu_{i_1}-\mu_{i_2})|\nu_{i_1}-\nu_{i_2}|}{2\sigma_0^2}\right)}{2}
\end{align*}
On the other hand, when $\nu_{i_1}>\nu_{i_2}$ then
\begin{align*}
\frac{\exp\left(-\frac{2(\mu_{i_1}-\mu_{i_2})(\nu_{i_1}-\nu_{i_2})}{2\sigma_0^2}\right)}{\exp\left(-\frac{2(\mu_{i_1}-\mu_{i_2})(\nu_{i_1}-\nu_{i_2})}{2\sigma_0^2}\right)+1}
\geq
\frac{\exp\left(-\frac{2(\mu_{i_1}-\mu_{i_2})(\nu_{i_1}-\nu_{i_2})}{2\sigma_0^2}\right)}{2}
=
\frac{\exp\left(-\frac{2(\mu_{i_1}-\mu_{i_2})|\nu_{i_1}-\nu_{i_2}|}{2\sigma_0^2}\right)}{2}
\end{align*}
\end{proof}
\thmFreqLBound*
\begin{proof}
Consider the mean vector $\bm{\mu}=(\mu_1,...,\mu_K)$, where arm $i$ has rewards $X_i\sim\mathcal{N}(\mu_i,1)$ and $\mu_i\sim\mathcal{N}(\nu_i,\sigma_0^2)$. Throughout this proof, $T$ denotes the exploration budget (denoted $n$ in the statement of the theorem).
We would like to bound the probability that the learner fails to recommend the optimal arm when presented with instance $\bm{\mu}$, i.e. $\Ex{\bm{\mu}}{\Prob{}{J\neq i_*~|~\bm{\mu}}}$. We also assume that the learner is oblivious to prior information.
Instead, we consider the easier problem where the learner is required to distinguish the best arm given that the instance it faces is one of $K$ instances defined as follows:
We define $d_i = \mu_{i_*}-\mu_{i}$.
Consider $K$ pairs of corresponding Gaussian distributions $p_i= \mathcal{N}(\mu_i,1)$ and $p_i'= \mathcal{N}( \mu_i',1)$, where $\mu_i'=\mu_i + 2d_i$.
We define $K$ Gaussian bandit instances where, in instance $i\in [K]$, any arm $k\in [K]$ has distribution $p_{k}^i=p_{k}$ if $i\neq k$ and $p_{k}^i=p_{k}'$ if $k=i$, i.e. arm $i$ is uniquely the best arm in instance $i$.
We define the product distribution of instance $i \in [K]$ as $G^i=p_{1}^i\times \dots \times p_{K}^i$.
For $i\in[K]$, we use the notation $\Prob{i}{\cdot} = \Prob{G^i}{\cdot|\bm{\mu}}$ and $\Ex{i}{\cdot} = \Ex{G^i}{\cdot | \bm{\mu}}$ to denote the probability (resp. expectation) w.r.t. the randomness of sampling in instance $i$. \\
Recall the KL divergence between distributions $p,p'$:
\begin{align*}
d_{KL}(p,p') = \int_{-\infty}^{\infty} \log \left(\frac{\,d p(x)}{\,d p'(x)}\right) \,d p(x)
\end{align*}
Here, for $k\in[K]$ we define the KL divergence for arm $k$ as:
$$d_{KL}^{k} = d_{KL}(p_k,p_k') = d_{KL}(p_k',p_k) = \frac{(\mu_k-\mu_k')^2}{2} = \frac{(2d_k)^2}{2} = 2d_k^2.$$
For $t\in [T], k \in [K]$, consider $t$ samples $\{X_{k,s}\}_{s\in[t]}\sim p_k^i$ from arm $k$ in some bandit instance $i$. Moreover, let the empirical KL divergence computed from the samples of arm $k$ be:
\begin{align*}
\widehat{d}_{KL}^{k,t} &= \frac{1}{t} \sum_{s\in[t]} \log \left(\frac{\,d p_k}{\,d p_k'}(X_{k,s})\right) \\
&= \frac{1}{t} \sum_{s\in[t]} \log \left(\frac{\frac{1}{\sqrt{2\pi}} e^{\frac{-(X_{k,s}-\mu_k)^2}{2}}}{\frac{1}{\sqrt{2\pi}} e^{\frac{-(X_{k,s}-\mu'_k)^2}{2}}}\right) \\
&= \frac{1}{t} \sum_{s\in[t]} \left( -\frac{(X_{k,s}-\mu_k)^2}{2} + \frac{(X_{k,s}-\mu_k')^2}{2} \right) \\
&= \frac{1}{t} \sum_{s\in[t]} \frac{(2 X_{k,s} -(\mu_k+\mu_k')) (\mu_k'-\mu_k)}{2} \\
&= \frac{1}{t} \sum_{s\in[t]} 2(X_{k,s} -\mu_{i_*}) d_k
\end{align*}
Note that $\Ex{G^i}{\widehat{d}_{KL}^{k,t}} = 2(\mu_k-\mu_{i_*})d_k=-d_{KL}^k$ if $k\neq i$, and $\Ex{G^i}{\widehat{d}_{KL}^{k,t}} = 2(\mu_k'-\mu_{i_*})d_k=2(\mu_k+2d_k-\mu_{i_*})d_k=d_{KL}^k$ if $k=i$. Therefore, in both cases $|\Ex{G^i}{\widehat{d}_{KL}^{k,t}}| = d_{KL}^k$. Moreover, we have the following concentration result:
\begin{lemma}\label{lem:KL_concentration}
Let
\begin{align*}
\Xi = \left\{|\widehat{d}_{KL}^{k,t}| - d_{KL}^k \leq 2d_k\sqrt{\frac{2\log(6KT)}{t}}, \forall k\in [K], t\in [T]\right\}
\end{align*}
For any $i\in[K]$ we have that:
\begin{align*}
\Prob{i}{\Xi} \geq 5/6
\end{align*}
\end{lemma}
\begin{proof}
Since $X_{k,s}\sim p_k^i$, the quantity $2(X_{k,s} -\mu_{i_*}) d_k$ is a Gaussian random variable. In particular, if $k=i$ then $2(X_{k,s} -\mu_{i_*}) d_k\sim \mathcal{N}\left(2d_k^2, 2d_k\right)=\mathcal{N}\left(d_{KL}^k, 2d_k\right)$.
On the other hand, if $k\neq i$
then $2(X_{k,s} -\mu_{i_*}) d_k\sim \mathcal{N}\left(-2d_k^2, 2d_k\right)=\mathcal{N}\left(-d_{KL}^k, 2d_k\right)$.
Thus using Hoeffding's inequality for the empirical mean of subgaussian random variables:
\begin{align*}
\Prob{G^i}{|\widehat{d}_{KL}^{k,t}| - d_{KL}^k \geq 2d_k \sqrt{\frac{2\log (1/\delta)}{t}}} \leq \delta
\end{align*}
Setting $\delta=(6TK)^{-1}$ and taking a union bound over all $t\in [T]$ and $k\in [K]$, we obtain:
\begin{align*}
\Prob{G^i}{|\widehat{d}_{KL}^{k,t}| - d_{KL}^k \leq 2d_k\sqrt{\frac{2\log(6KT)}{t}}, \forall k\in [K], t\in [T]} \geq 5/6
\end{align*}
\end{proof}
We consider an algorithm that returns arm $J$ and denote by $T_i$ the number of times arm $i$ has been pulled. As in \cite{Carpentier16}, we define:
\begin{equation}\label{eq:def_times}
t_i = \Ex{i_*}{T_i}
\end{equation}
and the event:
\begin{align*}
\mathcal{E}_i = \{J= i_*\}\cap \Xi \cap\{T_i\leq 6 t_i\}.
\end{align*}
We focus on some $i\in[K]$. Using the change of measure identity, since the distributions $G_{i_*},G_i$ only differ in arm $i$, we have that:
\begin{align*}
\Prob{i}{\mathcal{E}_i} = \Ex{i_*}{\bm{1}\{\mathcal{E}_i\} \exp\left(-T_i \widehat{d}_{KL}^{i,T_i}\right)}
\end{align*}
Using \cref{lem:KL_concentration} and subsequently \cref{eq:def_times} we get that:
\begin{align*}
\Prob{i}{\mathcal{E}_i}
&= \Ex{i_*}{\bm{1}\{\mathcal{E}_i\} \exp\left(-T_i \widehat{d}_{KL}^{i,T_i}\right)}\\
&\geq \Ex{i_*}{\bm{1}\{\mathcal{E}_i\} \exp\left(-T_id_{KL}^{i} - 2d_i\sqrt{2T_i\log(6KT)}\right)} \\
&\geq \Ex{i_*}{\bm{1}\{\mathcal{E}_i\} \exp\left(-6t_id_{KL}^{i} - 2d_i\sqrt{12t_i\log(6KT)}\right)} \\
&= \Prob{i_*}{\mathcal{E}_i} \exp\left(-6t_id_{KL}^{i} - 2\sqrt{12t_id_i^2\log(6KT)}\right) \\
&= \Prob{i_*}{\mathcal{E}_i} \exp\left(-12 t_i d^2_i - 2\sqrt{12 t_i d_i^2\log(6KT)}\right) \\
\end{align*}
Recall that we are interested in bounding the probability of error, i.e. the following quantity, for any $i\in[K]$:
\begin{align*}
\Ex{\bm{\mu}}{\Prob{i}{J\neq i}}.
\end{align*}
Notice that for $i\in [K]\setminus \{i_*\}$ the probability of error in instance $i$ can be lower bounded as follows:
\begin{align*}
\Prob{i}{J\neq i}
\geq \Prob{i}{\mathcal{E}_i}
\geq \Prob{i_*}{\mathcal{E}_i} \exp\left(-12 t_i d^2_i - 2\sqrt{12 t_i d_i^2\log(6KT)}\right).
\end{align*}
Since $\sum_{i\in[K]} t_i = T$ and $\sum_{i\in [K]\setminus \{i_*\}} \frac{1}{d_i^2}=H$, there exists $i\in[K]\setminus\{i_*\}$ such that $t_i d_i^2 \leq T/H$. Thus, there exists $i\in [K]\setminus\{i_*\}$ such that:
\begin{align*}
\Prob{i}{J\neq i} \geq \Prob{i_*}{\mathcal{E}_i} \exp\left(-12\frac{T}{H} - 2\sqrt{12\frac{T}{H}\log(6KT)}\right).
\end{align*}
Using that, by Markov's inequality, $\Prob{i_*}{T_i\geq 6 t_i}\leq \frac{1}{6}$, and \cref{lem:KL_concentration}, we get that $$\Prob{i_*}{\mathcal{E}_i}\geq 1 - \Prob{i_*}{T_i\geq 6 t_i} - \Prob{i_*}{\Xi^c} - \Prob{i_*}{J\neq i_*} \geq 1-\frac{1}{6}-\frac{1}{6} - \Prob{i_*}{J\neq i_*}= \frac{2}{3}-\Prob{i_*}{J\neq i_*}.$$
Thus, for this $i\in[K]\setminus \{i_*\}$ we have that
\begin{align*}
\Prob{i}{J\neq i}
&\geq \left(\frac{2}{3}-\Prob{i_*}{J\neq i_*}\right) \exp\left(-12\frac{T}{H} - 2\sqrt{12\frac{T}{H}\log(6KT)}\right) \\
&\geq \frac{2}{3} \exp\left(-12\frac{T}{H} - 2\sqrt{12\frac{T}{H}\log(6KT)}\right) -\Prob{i_*}{J\neq i_*}.
\end{align*}
Rearranging the terms in the above inequality we get that
\begin{align*}
\Prob{i}{J\neq i} + \Prob{i_*}{J\neq i_*} \geq \frac{2}{3} \exp\left(-12\frac{T}{H} - 2\sqrt{12\frac{T}{H}\log(6KT)}\right).
\end{align*}
Then, $\exists i\in[K]$ such that:
\begin{align*}
\Prob{i}{J\neq i} \geq \frac{1}{3} \exp\left(-12\frac{T}{H} - 2\sqrt{12\frac{T}{H}\log(6KT)}\right).
\end{align*}
which also holds in expectation over $\bm{\mu}$ with $\mu_i\sim \mathcal{N}(\nu_i,\sigma_0^2)$:
\begin{align*}
\Ex{\bm{\mu}}{\Prob{i}{J\neq i}} \geq \frac{1}{3} \Ex{\bm{\mu}}{\exp\left(-12\frac{T}{H} - 2\sqrt{12\frac{T}{H}\log(6KT)}\right)}.
\end{align*}
The theorem follows by taking $\sigma_0\rightarrow 0$ in the above.
\end{proof}
\section{LOWER BOUND}\label{sec:lb}
In this section, we construct a lower bound on the probability of misidentification of any policy that recommends arm $J$ after $n$ exploration rounds.
Our lower bound construction is novel: we formulate a simpler setting in which the prior weights are easier to handle, and combine it with frequentist-like arguments for every possible problem instance.
We compare this lower bound to the guarantee of our algorithm and show that \ensuremath{\tt BayesElim}\xspace achieves optimal dependence in almost all parameters of the setting.
In addition, we give a lower bound on the expected probability of misidentification of any frequentist policy, i.e., any policy that ignores prior information, applied to the Bayesian setting. Finally, in \cref{sec:proof_lb} we sketch the proof of \cref{thm:lb}.
We are now ready to state the main lower bound. For simplicity, we focus on the case of $K=2$ arms and explore the dependence on the horizon and prior quantities. We have the following:
\begin{theorem}\label{thm:lb}
For any policy interacting with $K=2$ arms with mean rewards $\bm{\mu}=(\mu_1,\mu_2)$, reward distributions $\mathcal{N}(\mu_i,\sigma^2)$ and priors $\mu_i\sim\mathcal{N}(\nu_i,\sigma_0^2)$ for $i\in\{1,2\}$, we have that
\begin{align*}
&\Ex{\bm{\mu}}{\Prob{}{J\neq i_*|\bm{\mu}}} \\
&\geq \frac{1}{8}\Ex{\bm{\mu}}{\exp\left(-n\frac{\Delta^2}{2\sigma^2}- \frac{2\Delta|\nu_{1}-\nu_{2}|}{2\sigma_0^2}\right) },
\end{align*}
where $\Delta = |\mu_1-\mu_2|$.
\end{theorem}
In order to compare the above lower bound
to the guarantee for the expected probability of misidentification of \ensuremath{\tt BayesElim}\xspace, we use the upper bound derived in the intermediate \cref{eq:regret_bound_1}, that is, before integration over priors. Replacing the expressions of $\Sigma_r,R$ and using $K=2$ and the same notation $\Delta$ as in \cref{thm:lb}, the guarantee for \ensuremath{\tt BayesElim}\xspace in \cref{eq:regret_bound_1} becomes as follows:
\begin{corollary}
In the setting of \cref{thm:lb}, \ensuremath{\tt BayesElim}\xspace satisfies:
\begin{align*}
&\Ex{\bm{\mu}}{\Prob{}{J\neq i_*|\bm{\mu}}} \\
&\leq 3 \Ex{\bm{\mu}}{
\exp\left(-\frac{n}{8} \frac{\left(\frac{\sigma^2}{\sigma_{0}^2}\frac{\nu_{1}-\nu_{2}}{n} + \Delta\right)^2}{\sigma^2} \right)} \\
&= \mathbb{E}_{\bm{\mu}}\Bigg[
\exp\Bigg(-n\frac{\Delta^2}{8\sigma^2}- \frac{2\Delta (\nu_{1}-\nu_{2})}{8\sigma_0^2}\\
& \qquad\qquad\qquad
\qquad-\frac{\sigma^2}{\sigma_0^2}\frac{(\nu_{1}-\nu_{2})^2}{8n\sigma_0^2}\Bigg) \Bigg].
\end{align*}
\end{corollary}
From the above, it is clear that \ensuremath{\tt BayesElim}\xspace achieves \textit{optimal} guarantees in terms of the dependence on the budget $n$, prior variance $\sigma_0$, reward variance $\sigma$ and suboptimality gap $\Delta$, up to constant factors. Moreover, the upper bound has a similar dependence on the gaps between the prior means normalized by $\sigma_0^2$ as the lower bound. Notice that the term $\frac{\sigma^2}{\sigma_0^2}\frac{(\nu_{1}-\nu_{2})^2}{8n\sigma_0^2}$ vanishes as the budget grows. However, there is still a gap between the two bounds in terms of this dependence.
Finally, we formally quantify the loss incurred by using a frequentist policy in the Bayesian setting, by showing the following lower bound on the expected probability of misidentification of policies that ignore prior information:
\begin{restatable}{theorem}{thmFreqLBound}\label{thm:freq_lb}
When $\sigma_0\rightarrow 0$, there exist prior means $(\nu_1,\dots,\nu_K)$ with $\nu_1>\nu_{j}$ for all $j\in [K]\setminus{\{1\}}$, such that the expected probability of misidentification of any frequentist algorithm, that is any algorithm that is oblivious to prior information, satisfies:
\begin{align*}
&\Ex{\bm{\mu}}{\prob{J\neq i_*|\bm{\mu}}} \\
&\geq \frac{1}{6}
\exp\left(-\frac{12n}{\sum_{i>1}(\nu_1-\nu_i)^{-2}} - \sqrt{\frac{48n\log(6Kn)}{\sum_{i>1}(\nu_1-\nu_i)^{-2}}}\right).
\end{align*}
\end{restatable}
Observe that the setting of \cref{thm:freq_lb} corresponds to a trivial case for a Bayesian policy, since the means are known without any sampling; \ensuremath{\tt BayesElim}\xspace achieves zero probability of misidentification in that case. The proof of the above result is deferred to \cref{app:freq_lb}.
In the rest of this section, we outline the proof of our main lower bound in \cref{thm:lb} for the problem of Bayesian BAI.
\subsection{Proof of \cref{thm:lb}}\label{sec:proof_lb}
We consider a fixed reward vector $\bm{\mu}=(\mu_1,\mu_2)$ drawn from the prior. Let $i_1$ be the optimal and $i_2$ be the suboptimal arm of $\bm{\mu}$. We need to bound the probability that the learner fails to recommend the optimal arm when presented with vector $\bm{\mu}$, i.e. $\Ex{\bm{\mu}}{\Prob{}{J\neq i_1~|~\bm{\mu}}}$.
Instead, we consider the easier problem where the learner is also given the information that the instance it faces is one of two Gaussian bandit instances $\mathcal{I}_{i_1},\mathcal{I}_{i_2}$, where $\mathcal{I}_{i_1}$ is the original instance corresponding to the true mean reward vector $\bm{\mu}$, while in instance $\mathcal{I}_{i_2}$ the mean rewards are flipped such that the suboptimal arm, $i_2$, is optimal.
Formally, we define instance $\mathcal{I}_{i_1}$ where the realizations of the arms follow the product distribution $G_{i_1}=p_1\times p_2$ where:
$$p_1= \mathcal{N}(\mu_{1},\sigma^2), p_2= \mathcal{N}(\mu_{2},\sigma^2).$$
Similarly, we define instance $\mathcal{I}_{i_2}$ where the realizations of the arms follow the distribution $G_{i_2}=p_2\times p_1$.\\
Observe that all $\mathcal{I}_{i_1},\mathcal{I}_{i_2},G_{i_1},G_{i_2}$ are functions of $\bm{\mu}$, and thus they are random variables. Also, notice that the instances are defined such that arm $i$ is uniquely optimal in instance $\mathcal{I}_{i}$.
For $i\in\{i_1,i_2\}$, we use the notation $\Prob{i}{\cdot} = \Prob{G_i}{\cdot|\bm{\mu}}$ and $\Ex{i}{\cdot} = \Ex{G_i}{\cdot | \bm{\mu}}$ to denote the probability (resp. expectation) w.r.t. the interplay of the (possibly randomized) policy and the reward realizations within $n$ rounds in instance $\mathcal{I}_{i}$.
Let $n_{i_1},n_{i_2}$ be the number of times arms $i_1,i_2$ are played by a policy in $n$ rounds. Since we are dealing with only two arms, we can define the event $A=\{J=i_1\}$ and apply the Bretagnolle–Huber inequality (\cref{thm:huber} in \cref{app}) to obtain the following:
\begin{align*}
&\Prob{i_1}{J\neq i_1}+\Prob{i_2}{J\neq i_2}\\
&=\Prob{i_1}{A^c}+\Prob{i_2}{A} \\
&\geq \frac{1}{2}e^{-\left(\Ex{i_1}{n_{i_1}}d_{KL}(p_1,p_2)+\Ex{i_1}{n_{i_2}}d_{KL}(p_2,p_1)\right)} \\
&= \frac{1}{2}\exp\left(-\left(\Ex{i_1}{n_{i_1}}\frac{\Delta^2}{2\sigma^2}+\Ex{i_1}{n_{i_2}}\frac{\Delta^2}{2\sigma^2}\right)\right)\\
&= \frac{1}{2}\exp\left(-n\frac{\Delta^2}{2\sigma^2}\right),
\end{align*}
where we first use the Bretagnolle–Huber inequality, then the expression for the KL divergence between two Gaussian distributions with the same standard deviation $\sigma$, and finally the fact that $n_{i_1}+n_{i_2}=n$.
Thus,
\begin{align}\label{eq:tmp1}
\max(\Prob{i_1}{J\neq i_1},\Prob{i_2}{J\neq i_2}) \geq \frac{1}{4}\exp\left(-n\frac{\Delta^2}{2\sigma^2}\right).
\end{align}
However, in contrast to the frequentist setting, here instances $\mathcal{I}_{i_1},\mathcal{I}_{i_2}$ occur with possibly different probabilities, and these probabilities are available to the policy. The conditional probability of instance $\mathcal{I}_{i_1}$, given that the learner faces either $\mathcal{I}_{i_1}$ or $\mathcal{I}_{i_2}$, is:
\begin{align*}
&\Prob{}{\mathcal{I}_{i_1}} \\
&= \frac{e^{-\frac{(\mu_{i_1}-\nu_{i_1})^2+(\mu_{i_2}-\nu_{i_2})^2}{2\sigma_0^2}}}
{e^{-\frac{(\mu_{i_1}-\nu_{i_1})^2+(\mu_{i_2}-\nu_{i_2})^2}{2\sigma_0^2}}
+
e^{-\frac{(\mu_{i_1}-\nu_{i_2})^2+(\mu_{i_2}-\nu_{i_1})^2}{2\sigma_0^2}}}\\
&= \frac{\exp\left(\frac{2(\mu_{i_1}-\mu_{i_2})(\nu_{i_1}-\nu_{i_2})}{2\sigma_0^2}\right)}
{\exp\left(\frac{2(\mu_{i_1}-\mu_{i_2})(\nu_{i_1}-\nu_{i_2})}{2\sigma_0^2}\right)
+1},
\end{align*}
where the second equality follows by expanding the squares in the exponents and canceling the common factors. Similarly, for instance $\mathcal{I}_{i_2}$:
$$
\Prob{}{\mathcal{I}_{i_2}}
=\frac{1}
{\exp\left(\frac{2(\mu_{i_1}-\mu_{i_2})(\nu_{i_1}-\nu_{i_2})}{2\sigma_0^2}\right)
+1}.
$$
By distinguishing cases for the conditional probabilities above, we can show the following:
\begin{restatable}{proposition}{propMinProb}\label{prop:min_prob}
We have that
\begin{align*}
\min(\Prob{}{\mathcal{I}_{i_1}},\Prob{}{\mathcal{I}_{i_2}})
\geq \frac{1}{2}\exp\left(-\frac{2\Delta|\nu_{i_1}-\nu_{i_2}|}{2\sigma_0^2}\right).
\end{align*}
\end{restatable}
\noindent
Putting it all together, the probability of error in this setting can be lower bounded as follows:
\begin{align*}
&\Prob{}{J\neq i_1, \mathcal{I}_{i_1}} + \Prob{}{J\neq i_2, \mathcal{I}_{i_2}} \\
&= \Prob{i_1}{J\neq i_1}\Prob{}{\mathcal{I}_{i_1}} + \Prob{i_2}{J\neq i_2}\Prob{}{\mathcal{I}_{i_2}}\\
&\geq \max(\Prob{i_1}{J\neq i_1},\Prob{i_2}{J\neq i_2}) \cdot \min(\Prob{}{\mathcal{I}_{i_1}},\Prob{}{\mathcal{I}_{i_2}})\\
&\geq \frac{1}{4}\exp\left(-n\frac{\Delta^2}{2\sigma^2}\right) \min(\Prob{}{\mathcal{I}_{i_1}},\Prob{}{\mathcal{I}_{i_2}}) \\
&\geq \frac{1}{8}\exp\left(-\left(n\frac{\Delta^2}{2\sigma^2}+ \frac{2\Delta|\nu_{1}-\nu_{2}|}{2\sigma_0^2}\right) \right),
\end{align*}
where we used \cref{eq:tmp1} and subsequently \cref{prop:min_prob}.
Returning to the original problem, which is at least as hard as the two-instance problem above, and taking the expectation over $\bm{\mu}$, we conclude that:
\begin{align*}
&\Ex{\bm{\mu}}{\Prob{}{J\neq i_*|\bm{\mu}}} \\
&\geq \frac{1}{8}\Ex{\bm{\mu}}{\exp\left(-\left(n\frac{\Delta^2}{2\sigma^2}+ \frac{2\Delta|\nu_{1}-\nu_{2}|}{2\sigma_0^2}\right) \right)}.
\end{align*}
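As a numerical sanity check (ours, not part of the paper), the two-arm construction above can be simulated directly. The following Python sketch draws instances from the prior, estimates the final lower-bound expression by Monte Carlo, and compares it with the misidentification probability of a simple uniform-allocation policy that recommends the arm with the larger posterior mean; all function and variable names are illustrative.
\begin{lstlisting}[language=Python]
import numpy as np

rng = np.random.default_rng(0)

def lb_and_uniform_policy_error(nu, sigma0, sigma, n, runs=20000):
    # Monte Carlo sketch (ours): estimate the K=2 lower bound and the error of
    # a uniform-allocation policy recommending the larger posterior mean.
    nu = np.asarray(nu, dtype=float)
    mu = rng.normal(nu, sigma0, size=(runs, 2))   # instances drawn from the prior
    gap = np.abs(mu[:, 0] - mu[:, 1])             # Delta, a function of mu
    lb = np.mean(0.125 * np.exp(-(n * gap ** 2 / (2 * sigma ** 2)
                                  + 2 * gap * abs(nu[0] - nu[1]) / (2 * sigma0 ** 2))))
    m = n // 2                                    # uniform allocation: n/2 pulls per arm
    xbar = rng.normal(mu, sigma / np.sqrt(m))     # empirical mean of each arm
    post_var = 1.0 / (1.0 / sigma0 ** 2 + m / sigma ** 2)
    post_mean = post_var * (nu / sigma0 ** 2 + m * xbar / sigma ** 2)
    err = np.mean(np.argmax(post_mean, axis=1) != np.argmax(mu, axis=1))
    return lb, err

lb, err = lb_and_uniform_policy_error(nu=(0.2, 0.0), sigma0=1.0, sigma=1.0, n=100)
print(f"lower bound ~ {lb:.4f} <= policy error ~ {err:.4f}")
\end{lstlisting}
By \cref{thm:lb}, the estimated policy error should dominate the estimated lower bound.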
\usepackage{algorithm}
\usepackage{algorithmicx}
\usepackage{algpseudocode}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{amsthm}
\usepackage{bbm}
\usepackage{bm}
\usepackage{caption}
\usepackage{color}
\usepackage{dirtytalk}
\usepackage{dsfont}
\usepackage{enumerate}
\usepackage{fullpage}
\usepackage{graphicx}
\usepackage{listings}
\usepackage{mathtools}
\usepackage[round]{natbib}
\usepackage{subfigure}
\usepackage{url}
\usepackage{xspace}
\usepackage{array,xtab,ragged2e}
\newlength\mylengtha
\newlength\mylengthb
\newcolumntype{P}[1]{>{\RaggedRight}p{#1}}
\usepackage[usenames,dvipsnames]{xcolor}
\usepackage[bookmarks=false]{hyperref}
\hypersetup{
pdffitwindow=true,
pdfstartview={FitH},
pdfnewwindow=true,
colorlinks,
linktocpage=true,
linkcolor=Green,
urlcolor=Green,
citecolor=Green
}
\usepackage[capitalize]{cleveref}
\usepackage[textsize=tiny]{todonotes}
\newcommand{\todob}[2][]{\todo[color=Red!20,size=\tiny,inline,#1]{B: #2}}
\newcommand{\todos}[2][]{\todo[color=Blue!20,size=\tiny,inline,#1]{S: #2}}
\newcommand{\todoa}[2][]{\todo[color=orange!20,size=\tiny,inline,#1]{A: #2}}
\newcommand{\commentout}[1]{}
\newcommand{\junk}[1]{}
\usepackage{thmtools}
\usepackage{thm-restate}
\declaretheorem[name=Theorem,refname={Theorem,Theorems},Refname={Theorem,Theorems}]{theorem}
\declaretheorem[name=Lemma,refname={Lemma,Lemmas},Refname={Lemma,Lemmas},sibling=theorem]{lemma}
\declaretheorem[name=Corollary,refname={Corollary,Corollaries},Refname={Corollary,Corollaries},sibling=theorem]{corollary}
\declaretheorem[name=Assumption,refname={Assumption,Assumptions},Refname={Assumption,Assumptions}]{assumption}
\declaretheorem[name=Proposition,refname={Proposition,Propositions},Refname={Proposition,Propositions},sibling=theorem]{proposition}
\declaretheorem[name=Fact,refname={Fact,Facts},Refname={Fact,Facts},sibling=theorem]{fact}
\declaretheorem[name=Definition,refname={Definition,Definitions},Refname={Definition,Definitions},sibling=theorem]{definition}
\declaretheorem[name=Example,refname={Example,Examples},Refname={Example,Examples}]{example}
\declaretheorem[name=Remark,refname={Remark,Remarks},Refname={Remark,Remarks}]{remark}
\newcommand{\diag}[1]{\mathrm{diag}\left(#1\right)}
\newcommand{\domain}[1]{\mathrm{dom}\left(#1\right)}
\newcommand{\range}[1]{\mathrm{rng}\left[#1\right]}
\newcommand{\E}[1]{\mathbb{E} \left[#1\right]}
\newcommand{\condE}[2]{\mathbb{E} \left[#1 \,\middle|\, #2\right]}
\newcommand{\Et}[1]{\mathbb{E}_t \left[#1\right]}
\newcommand{\prob}[1]{\mathbb{P} \left(#1\right)}
\newcommand{\condprob}[2]{\mathbb{P} \left(#1 \,\middle|\, #2\right)}
\newcommand{\probt}[1]{\mathbb{P}_t \left(#1\right)}
\newcommand{\var}[1]{\mathrm{var} \left[#1\right]}
\newcommand{\condvar}[2]{\mathrm{var} \left[#1 \,\middle|\, #2\right]}
\newcommand{\std}[1]{\mathrm{std} \left[#1\right]}
\newcommand{\condstd}[2]{\mathrm{std} \left[#1 \,\middle|\, #2\right]}
\newcommand{\cov}[1]{\mathrm{cov} \left[#1\right]}
\newcommand{\condcov}[2]{\mathrm{cov} \left[#1 \,\middle|\, #2\right]}
\newcommand{\abs}[1]{\left|#1\right|}
\newcommand{\ceils}[1]{\left\lceil#1\right\rceil}
\newcommand{\dbar}[1]{\bar{\bar{#1}}}
\newcommand*\dif{\mathop{}\!\mathrm{d}}
\newcommand{\floors}[1]{\left\lfloor#1\right\rfloor}
\newcommand{\I}[1]{\mathds{1} \! \left\{#1\right\}}
\newcommand{\inner}[2]{\langle#1, #2\rangle}
\newcommand{\kl}[2]{D_\mathrm{KL}(#1 \,\|\, #2)}
\newcommand{\klplus}[2]{D_\mathrm{KL}^+(#1 \,\|\, #2)}
\newcommand{\maxnorm}[1]{\|#1\|_\infty}
\newcommand{\maxnormw}[2]{\|#1\|_{\infty, #2}}
\newcommand{\negpart}[1]{\left[#1\right]^-}
\newcommand{\norm}[1]{\|#1\|}
\newcommand{\normw}[2]{\|#1\|_{#2}}
\newcommand{\pospart}[1]{\left[#1\right]^+}
\newcommand{\rnd}[1]{\bm{#1}}
\newcommand{\set}[1]{\left\{#1\right\}}
\newcommand{\subreal}[0]{\preceq}
\newcommand{\supreal}[0]{\succeq}
\newcommand{\nr}[1]{n_{r_{#1}}}
\DeclareMathOperator*{\argmax}{arg\,max\,}
\DeclareMathOperator*{\argmin}{arg\,min\,}
\let\det\relax
\DeclareMathOperator{\det}{det}
\DeclareMathOperator{\poly}{poly}
\DeclareMathOperator{\rank}{rank}
\DeclareMathOperator{\sgn}{sgn}
\let\trace\relax
\DeclareMathOperator{\trace}{tr}
\mathchardef\mhyphen="2D
\newcommand\Ex[2]{\mathop{{\mathbb{E}_{#1}}\left[#2\right]}}
\newcommand\Prob[2]{\mathop{{\mathbb{P}_{#1}}\left(#2\right)}}
\newcommand{\ensuremath{\tt BayesElim}\xspace}{\ensuremath{\tt BayesElim}\xspace}
\newcommand{\ensuremath{\tt BayesElim2}\xspace}{\ensuremath{\tt BayesElim2}\xspace}
\newcommand{\ensuremath{\tt TS}\xspace}{\ensuremath{\tt TS}\xspace}
\newcommand{\ensuremath{\tt TS2}\xspace}{\ensuremath{\tt TS2}\xspace}
\newcommand{\ensuremath{\tt TTTS}\xspace}{\ensuremath{\tt TTTS}\xspace}
\newcommand{\ensuremath{\tt FreqElim}\xspace}{\ensuremath{\tt FreqElim}\xspace}
\newcommand{\ensuremath{\tt FreqElim2}\xspace}{\ensuremath{\tt FreqElim2}\xspace}
\begin{document}
\twocolumn[
\aistatstitle{Bayesian Fixed-Budget Best-Arm Identification}
\aistatsauthor{ Alexia Atsidakou \And Sumeet Katariya \And Sujay Sanghavi \And Branislav Kveton }
\aistatsaddress{ UT Austin \And Amazon \And UT Austin, Amazon \And Amazon }
]
\begin{abstract}
Fixed-budget best-arm identification (BAI) is a bandit problem where the learning agent maximizes the probability of identifying the optimal arm after a fixed number of observations. In this work, we initiate the study of this problem in the Bayesian setting. We propose a Bayesian elimination algorithm and derive an upper bound on the probability that it fails to identify the optimal arm. The bound reflects the quality of the prior and is the first such bound in this setting. We prove it using a frequentist-like argument, where we carry the prior through, and then integrate out the random bandit instance at the end. Our upper bound asymptotically matches a newly established lower bound for $2$ arms. Our experimental results show that Bayesian elimination is superior to frequentist methods and competitive with the state-of-the-art Bayesian algorithms that have no guarantees in our setting.
\end{abstract}
\input{Introduction}
\input{Setting}
\input{Algorithm}
\input{Analysis}
\input{Lower_Bound}
\input{Experiments}
\input{Conclusions}
\bibliographystyle{abbrvnat}
\section{PROBLEM SETTING}
Bayesian fixed-budget best-arm identification (BAI) involves $K$ arms with \textit{unknown} mean rewards $\bm{\mu}=(\mu_1,...,\mu_K)$ drawn from some \textit{known} prior distribution $Q$.
By playing arm $i\in [K]$ a policy observes a sample $X_i$ drawn from its reward distribution.
We focus on the Gaussian case, where the reward of each arm $i\in [K]$ follows a Gaussian distribution with known variance, i.e. $X_{i}\sim \mathcal{N}(\mu_i,\sigma_i^2)$ and the mean rewards of the arms are drawn independently from $\mu_i\sim \mathcal{N}(\nu_{i}, \sigma_{0}^2)$. We refer to $\sigma_i^2$ as the reward variance of arm $i$ and to $\nu_i$ and $\sigma_0^2$ as the prior mean and variance of the reward of arm $i$, respectively.
A policy interacts with the arms for $n$ exploration rounds, where $n$ is a known budget, with the goal of identifying an optimal arm in $[K]$, i.e. an arm $i_*$ such that $i_*=\argmax_{i\in[K]}\mu_i$. We denote by $J$ the arm that is recommended by the policy at the end of $n$ rounds. For any fixed parameter vector $\bm{\mu}$, the \textit{probability of misidentification} is
\begin{align*}
{\prob{J\neq i_*|\bm{\mu}}},
\end{align*}
where $\prob{\cdot|\bm{\mu}}$ is over the randomness of the policy and the reward realizations of each round and considering the parameter vector $\bm{\mu}$ fixed. The setting where the reward vector $\bm{\mu}$ is considered fixed corresponds to the frequentist BAI setting. There, the objective of a policy is to minimize the worst-case probability of misidentification for any possible mean reward vector. \\
In contrast to the frequentist setting, in Bayesian BAI the performance of a policy is measured in terms of its \textit{expected} {probability of misidentification}, i.e.:
\begin{align}\label{eq:objective}
\Ex{\bm{\mu}\sim Q}{\prob{J\neq i_*|\bm{\mu}}}
\end{align}
where the expectation is taken over the prior distributions of the mean rewards.
\paragraph{Notation.} We use the notation $\mu_* = \mu_{i_*}$ and $\nu_* = \nu_{i_*}$ for the mean reward and prior mean of the optimal arm, respectively. The suboptimality gap of each arm $i\in [K]$ is defined as $\Delta_{i} = \mu_* - \mu_i$. We note that $i_*, \mu_*, \nu_*$ and $\Delta_i$ are all functions of $\bm{\mu}$ and thus are random variables. For any $i\in [K]$, the posterior distribution (see \cite{Murphy07}) of its mean reward given $m$ i.i.d. observations $\{X_{i,s}\}_{s\in[m]}$ from its reward distribution, i.e. $X_{i,s}\sim \mathcal{N}(\mu_i,\sigma_i^2)$ for $s\in[m]$, is a Gaussian distribution:
\begin{align*}
\mathcal{N}\left( \bar \mu_{i,m}, \bar\sigma_{i,m}^2 \right),
\end{align*}
where the \textit{posterior mean} is given by the expression
\begin{align}
\bar \mu_{i,m}=\bar\sigma_{i,m}^2 \left(\frac{\nu_i}{\sigma_{0}^2}+\frac{\sum_{s\in [m]} X_{i,s}}{\sigma_i^2}\right)
\label{eq:mubarim}
\end{align}
while the \textit{posterior variance} is given by
$$\bar\sigma_{i,m}^2=\left(\frac{1}{\sigma_{0}^2}+\frac{m}{\sigma_i^2}\right)^{-1}.$$
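For illustration, the conjugate update above is straightforward to implement; the following Python sketch is our own (names and numbers are illustrative, not the paper's code) and simply transcribes \cref{eq:mubarim} and the posterior variance formula:
\begin{lstlisting}[language=Python]
import numpy as np

def gaussian_posterior(nu_i, sigma0_sq, sigma_i_sq, samples):
    # Conjugate Gaussian update: posterior mean and variance of an arm's mean
    # reward after m i.i.d. observations (transcribing eq:mubarim).
    m = len(samples)
    sigma_bar_sq = 1.0 / (1.0 / sigma0_sq + m / sigma_i_sq)   # posterior variance
    mu_bar = sigma_bar_sq * (nu_i / sigma0_sq + np.sum(samples) / sigma_i_sq)
    return mu_bar, sigma_bar_sq

# Illustrative usage: prior N(0, 1), reward noise sigma_i^2 = 0.25.
print(gaussian_posterior(0.0, 1.0, 0.25, np.array([0.9, 1.1, 1.0])))
\end{lstlisting}
As $m$ grows, the data term dominates the prior term $\nu_i/\sigma_0^2$ and the posterior mean approaches the sample mean.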
We summarize notation in \cref{table:1}.
\begin{table}[!ht]
\centering
\begin{tabular}{ |p{0.08\textwidth}||p{0.35\textwidth}| }
\hline
\multicolumn{2}{|c|}{Notation} \\
\hline
$K$ & Number of arms \\
$n$ & Exploration budget \\
$J$ & Arm recommended by the policy \\
$\bm{\mu}$ & Mean reward vector \\
$Q$ & Prior distribution on $\bm{\mu}$ \\
$X_i$ & Stochastic reward of arm $i$ \\
$\mu_i,\sigma_i^2$ & Mean and variance of the reward distribution of arm $i$ \\
$\nu_i,\sigma_{0}^2$ & Mean and variance of the prior distribution of arm $i$ \\
$\bar \mu_{i,m}, \bar\sigma_{i,m}^2$ & Posterior mean and variance of arm $i$ computed from $m$ i.i.d. samples \\
$i_*$ & Optimal arm of a random instance $\bm{\mu}$ \\
$\nu_*, \mu_*$ & Prior and reward mean of arm $i_*$ \\
$R$ & Number of elimination rounds \\
$S_r$ & Active set of arms in elimination round $r$ \\
$\Sigma_r$ & Sum of reward variances of active arms in round $r$ \\
$\prob{\cdot|\bm{\mu}}$ & Probability measure considering the vector $\bm{\mu}$ fixed \\
$\Ex{\bm{\mu}}{.}$ & Expectation over the randomness in $\bm{\mu}$ \\
$\prob{\bm{\mu}}$ & Probability of instance $\bm{\mu}$ \\
\hline
\end{tabular}
\caption{Notation}
\label{table:1}
\end{table}
\section{INTRODUCTION}
\label{sec:introduction}
\emph{Best-arm identification (BAI)} is a \emph{pure exploration} bandit problem where the goal is to identify the optimal arm \citep{bubeck2010pure,audibert-2010-BAI}. It has many applications in practice, such as online advertising, recommender systems, and vaccine tests \citep{lattimore-Bandit}. In the \emph{fixed-budget (FB)} setting \citep{bubeck2010pure,audibert-2010-BAI}, the goal is to accurately identify the optimal arm within a fixed budget of observations (arm pulls). This setting is common in applications where the observations are costly, such as in Bayesian optimization \citep{krause08nearoptimal}. In the \emph{fixed-confidence (FC)} setting \citep{ActionElimination-Evendar2006a,soare2014bestarm}, the goal is to find the optimal arm with a guaranteed level of confidence, while minimizing the sample complexity. Some works even studied both settings \citep{Gabillon-2012,Karnin2013AlmostOE}.
Most BAI algorithms, including all of the aforementioned, are frequentist. This means that the bandit instance is chosen potentially adversarially from some hypothesis class, such as linear models, and the goal of the agent is to identify the best arm in it by only knowing the class. While frequentist BAI algorithms have strong guarantees, they cannot be easily integrated with side information, such as a prior distribution over bandit instances, which is often available. While Bayesian BAI algorithms can naturally do that \citep{pmlr-v33-hoffman14,russo2016simple}, to the best of our knowledge the analyses of all state-of-the-art Bayesian algorithms are frequentist. So, while the algorithms can benefit from the side information, their regret bounds do not show improvement due to better side information. One recent exception is the work of \citet{komiyama2021optimal}, where the authors prove the first lower bound on a simple Bayes regret and bound the simple Bayes regret of a BAI algorithm that explores uniformly.
In this work, we set out to address an obvious gap in prior works on Bayesian BAI. Specifically, we propose the first Bayesian BAI algorithm for the fixed-budget setting that uses a prior distribution over bandit instances as a side information, and also has an error bound that improves with a more informative prior. This work parallels modern analyses of Thompson sampling in the cumulative regret setting \citep{russo14learning,russo16information,hong22thompson,hong22hierarchical}. For instance, \citet{russo14learning} showed that the $n$-round Bayes regret of linear Thompson sampling is $O(\sqrt{d})$ lower than the best known regret bound in the frequentist setting \citep{agrawal13thompson}. \citet{hong22hierarchical} showed that the shared hyper-parameter in meta- and multi-task bandits provably reduces the $n$-round Bayes regret of Thompson sampling that uses this structure. We believe that our work lays foundations for similar future improvements in Bayesian BAI.
This paper makes the following contributions. First, we formulate the setting of fixed-budget BAI with $K$ arms and propose an elimination algorithm for it. The algorithm is a variant of successive elimination \citep{Karnin2013AlmostOE} where the \emph{maximum likelihood estimate (MLE)} of the mean arm reward is replaced with a Bayesian \emph{maximum a posteriori (MAP)} estimate. We call the algorithm \ensuremath{\tt BayesElim}\xspace. Second, we prove an upper bound on the probability that \ensuremath{\tt BayesElim}\xspace fails to identify the optimal arm. The upper bound is proved using a frequentist-like analysis, where we carry the prior information through, and then integrate out the random instance at the end. Carrying the prior through yields a bound that improves upon those of frequentist algorithms. Our analysis technique is novel and very different from typical Bayesian bandit analyses in the cumulative regret setting \citep{russo14learning,russo16information,hong22thompson,hong22hierarchical}, which condition on history and bound the regret in expectation over the posterior in each round. Third, we prove a matching lower bound for the case of $K = 2$ arms. Finally, we evaluate \ensuremath{\tt BayesElim}\xspace on several synthetic bandit instances and demonstrate the benefit of using the prior in BAI.
One surprising property of our upper and lower bounds is that they are proportional to $1 / \sqrt{n}$, where $n$ is the budget. At first sight, this seems to contradict the frequentist upper \citep{Karnin2013AlmostOE} and lower \citep{Carpentier16} bounds, which are proportional to $\exp[- n\Delta^2]$, where $\Delta$ is the gap. The reason for the seeming contradiction is that the frequentist bounds are proved in a harder setting, per instance instead of integrating out the instance, yet they decay faster. The bounds are compatible though: roughly speaking, when the frequentist bounds are integrated over $\Delta$, which in our case can be viewed as $\Delta \sim \mathcal{N}(0, 1)$, the resulting integrals yield $1 / \sqrt{n}$, because the budget $n$ in $\exp[-n \Delta^2]$ plays the role of an inverse variance in the Gaussian integral. These claims are stated more rigorously in the paper.
\section{CONCLUSIONS}
\label{sec:conclusions}
While best-arm identification in the fixed-budget setting has been studied extensively in the frequentist setting, Bayesian algorithms with provable Bayesian guarantees have been lacking. In this work, we set out to address this gap and propose a Bayesian successive elimination algorithm with (almost) optimal such guarantees. The key idea in the algorithm is to eliminate arms based on the MAP estimates of their mean rewards, which take the prior distribution of arm means into account.
The performance of the algorithm improves when the prior is more informative and we also derive an upper bound on the failure probability of the algorithm that reflects that. Our bound matches our newly established lower bound for $K = 2$ arms. Our algorithm is evaluated empirically on synthetic bandit problems. We observe that it is superior to frequentist methods and competitive with the state-of-the-art Bayesian algorithms that have no guarantees in our setting.
Our work is a first step in an exciting direction of more sample-efficient Bayesian BAI algorithms, which have improved guarantees for more informative priors. The work can be extended in two obvious directions. First, our algorithm is designed for and analyzed in Gaussian bandits. We believe that both can be extended to single-parameter exponential-family distributions with conjugate priors, such as Bernoulli rewards with beta priors. Second, successive elimination of \citet{Karnin2013AlmostOE} has recently been extended to linear models by \citet{ijcai2022p388}. In the linear model, a Gaussian model parameter prior with Gaussian rewards implies a Gaussian model parameter posterior. For this conjugacy, we believe that our algorithm design and analysis can also be extended to linear models.
\section{ANALYSIS}\label{sec:ub_analysis}
In this section, we provide an upper bound for the expected probability of misidentification of \ensuremath{\tt BayesElim}\xspace. In our analysis, we deviate from the usual Bayesian bandit proofs in the literature: we consider the parameter vector $\bm{\mu}$ fixed and use frequentist-like concentration arguments for the randomness due to sampling. In our arguments, prior quantities appear as extra bias terms in the probability bounds. After performing the analysis, we integrate out the randomness due to prior information. This style of analysis is novel and enables direct comparison of the upper bound with our lower bounds, which are derived in a similarly frequentist-like manner, as well as with the guarantees achieved by applying frequentist techniques to the Bayesian setting.
We show the following upper bound on the expected probability of misidentification (as defined in \cref{eq:objective}) of our algorithm:
\begin{theorem}\label{thm:bound_with_integral}
The expected probability of misidentification for \ensuremath{\tt BayesElim}\xspace can be bounded as follows
\begin{align*}
&\Ex{\bm{\mu}}{\prob{J\neq i_*|\bm{\mu}}} \leq
3\sum_{r\in[R]} \frac{1}{\sqrt{n\frac{\sigma_0^2}{R\cdot \Sigma_r}+1}} \\
&\times \sum_{i\in[K]}\sum_{j>i} \exp\left(-\frac{n\sigma_0^2+R\cdot\Sigma_r}{n \sigma_0^2}\frac{(\nu_i-\nu_j)^2}{4\sigma_0^2}\right).
\end{align*}
\end{theorem}
We remark that the above guarantee exhibits an $O(1/\sqrt{n})$ dependence on the budget, which is in contrast to the $e^{-O(n)}$ guarantees achieved for the frequentist BAI problem \citep{Karnin2013AlmostOE}.
This fact might at first seem counterintuitive due to the additional access to prior information available to the learner. However, this dependence is inherent to the objective of expected probability of misidentification defined in \cref{eq:objective}, which integrates the probability of misidentification over the possible instances. This is in contrast to the frequentist setting, where the objective is the worst-case probability of misidentification on a single instance.
In addition, notice that integrating an optimal frequentist bound would result in an $O(1/\sqrt{n})$ dependence. This can be illustrated in the following simple example (with $\sigma_0=1$):
\begin{align*}
&\Ex{\bm{\mu}}{e^{-n(\mu_i-\mu_j)^2}}=\\
&=\frac{1}{2\pi}\int_{\mu_i, \mu_j} e^{-n(\mu_i-\mu_j)^2} e^{-\frac{(\nu_i-\mu_i)^2}{2}} e^{-\frac{(\nu_j-\mu_j)^2}{2}}\, d\mu_j \, d\mu_i\\
&=
\frac{1}{\sqrt{4n+1}}
\exp\left(-\frac{n(\nu_i-\nu_j)^2}{4n+1}\right),
\end{align*}
where the last expression results from completing the square in the exponent and computing the Gaussian integral.
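This identity is easy to check numerically; the following short Python snippet (ours, with illustrative values) compares a Monte Carlo estimate of the left-hand side with the closed form on the right:
\begin{lstlisting}[language=Python]
import numpy as np

rng = np.random.default_rng(1)
n, nu_i, nu_j = 50, 0.3, 0.0                 # illustrative values, sigma_0 = 1
mu_i = rng.normal(nu_i, 1.0, size=1_000_000)
mu_j = rng.normal(nu_j, 1.0, size=1_000_000)
mc = np.mean(np.exp(-n * (mu_i - mu_j) ** 2))
closed = np.exp(-n * (nu_i - nu_j) ** 2 / (4 * n + 1)) / np.sqrt(4 * n + 1)
print(mc, closed)   # the two values agree up to Monte Carlo error
\end{lstlisting}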
Moreover, as we show in \cref{sec:lb} the dependence of \ensuremath{\tt BayesElim}\xspace on the budget, $n$, is in fact optimal.
In addition, the upper bound in \cref{thm:bound_with_integral} decreases exponentially with the squared gaps between the prior means normalized by the prior variance, $\frac{(\nu_i-\nu_j)^2}{\sigma_0^2}$. This agrees with the intuition that as the prior mean gaps grow (or the prior variances become smaller), the uncertainty in the parameters decreases and the problem of identifying the best arm becomes easier. In the extreme case where $\sigma_0\rightarrow 0$, the algorithm has exact knowledge of the means without any sampling, and the above bound indicates that the probability of misidentification of \ensuremath{\tt BayesElim}\xspace becomes $0$.
\subsection{Comparison to Frequentist Algorithm}
We compare our guarantees for \ensuremath{\tt BayesElim}\xspace with the result obtained by applying the frequentist elimination algorithm of \cite{Karnin2013AlmostOE} in the Bayesian BAI setting.
This is an elimination algorithm similar to \ensuremath{\tt BayesElim}\xspace with the difference that it eliminates based on sample averages at the end of each round, i.e. ignoring prior information.
We remark that this algorithm gives optimal guarantees in terms of worst-case {probability of misidentification} in the frequentist version of BAI, that is, in the absence of priors. Alternatively, this algorithm can be viewed as a version of \ensuremath{\tt BayesElim}\xspace where we take $\sigma_0\rightarrow \infty$.
We first derive an upper bound for the frequentist algorithm following a similar analysis to that of \ensuremath{\tt BayesElim}\xspace and then compare the two bounds in the general case, as well as when $\sigma_0\rightarrow 0$. Since the algorithm in \citet{Karnin2013AlmostOE} is designed for equal reward variances, the discussion and results below focus on the case where $\sigma_i=\sigma$ for all $i\in[K]$.
\begin{restatable}{theorem}{thmFreqUBound}\label{thm:freq_bound}
The elimination algorithm of \citet{Karnin2013AlmostOE} satisfies
\begin{align}\label{eq:freq_bound1}
&\Ex{\bm{\mu}}{\prob{J\neq i_*|\bm{\mu}}}
\leq 3\sum_{r\in [R]} \frac{1}{\sqrt{n\frac{\sigma_0^2}{\sigma^2}\frac{2^r}{K\log_{2}(K)}+1}} \nonumber\\
&\times \sum_{i\in[K]}\sum_{j>i}\exp\left(-\frac{n\sigma_0^2}{n\sigma_0^2+\frac{K\log_2(K)}{2^r}\sigma^2}\frac{(\nu_i-\nu_j)^2}{4\sigma_0^2}\right).
\end{align}
\end{restatable}
In addition, in the special case where the reward variances are equal for all arms, the upper bound on the expected probability of misidentification for \ensuremath{\tt BayesElim}\xspace becomes as follows:
\begin{corollary}
When $\sigma_i=\sigma$ for all $i\in[K]$, \ensuremath{\tt BayesElim}\xspace satisfies
\begin{align}\label{eq:eq_var_bound}
&\Ex{\bm{\mu}}{\prob{J\neq i_*|\bm{\mu}}}
\leq 3\sum_{r\in [R]} \frac{1}{\sqrt{n\frac{\sigma_0^2}{\sigma^2}\frac{2^r}{K\log_{2}(K)}+1}}\nonumber \\ &\times\sum_{i\in[K]}\sum_{j>i}\exp\left(-\frac{n\sigma_0^2+\frac{K\log_2(K)}{2^r}\sigma^2}{n\sigma_0^2}\frac{(\nu_i-\nu_j)^2}{4\sigma_0^2}\right).
\end{align}
\end{corollary}
The bound in \cref{eq:freq_bound1} has similar dependence on $n,K$ and the prior mean gaps as the one in \cref{eq:eq_var_bound}. The only difference between the two bounds is in the multiplicative terms in the exponent, i.e. $$\frac{n\sigma_0^2}{n\sigma_0^2+\frac{K\log_2(K)}{2^r}\sigma^2}\leq 1 \leq \frac{n\sigma_0^2+\frac{K\log_2(K)}{2^r}\sigma^2}{n\sigma_0^2}.$$ Since a larger multiplicative term in the exponent yields a smaller bound, the bound in \cref{eq:eq_var_bound} is always lower than the bound in \cref{eq:freq_bound1}. In addition, we note that by taking $\sigma_0\rightarrow 0$ in \cref{eq:eq_var_bound}, the probability of misidentification for \ensuremath{\tt BayesElim}\xspace tends to $0$. On the contrary, for $\sigma_0\rightarrow 0$ the bound for the frequentist algorithm becomes constant: $$3\sum_{r\in[R]}\sum_{i\in[K]}\sum_{j>i}\exp\left(-\frac{1}{4}\frac{n{2^r}}{{K\log_2(K)}\sigma^2}(\nu_i-\nu_j)^2\right).$$ \\
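To make this comparison concrete, the following Python sketch (ours) evaluates the two upper bounds in \cref{eq:freq_bound1} and \cref{eq:eq_var_bound} on an illustrative configuration. Consistent with the discussion above, the \ensuremath{\tt BayesElim}\xspace bound is never larger, and the two expressions approach each other as $\sigma_0\rightarrow\infty$.
\begin{lstlisting}[language=Python]
import numpy as np

def bound(nu, n, sigma, sigma0, bayes):
    # Numerical transcription (ours) of eq:freq_bound1 (bayes=False) and
    # eq:eq_var_bound (bayes=True).
    K = len(nu)
    R = int(np.log2(K))
    total = 0.0
    for r in range(1, R + 1):
        c = (K * np.log2(K) / 2 ** r) * sigma ** 2   # per-round term in the exponent
        pre = 1.0 / np.sqrt(n * sigma0 ** 2 / sigma ** 2 * 2 ** r / (K * np.log2(K)) + 1)
        mult = ((n * sigma0 ** 2 + c) / (n * sigma0 ** 2) if bayes
                else (n * sigma0 ** 2) / (n * sigma0 ** 2 + c))
        s = sum(np.exp(-mult * (nu[i] - nu[j]) ** 2 / (4 * sigma0 ** 2))
                for i in range(K) for j in range(i + 1, K))
        total += pre * s
    return 3 * total

nu = np.array([2.0 ** -i for i in range(8)])    # illustrative prior means
print(bound(nu, 200, 0.5, 0.5, bayes=True))     # BayesElim bound
print(bound(nu, 200, 0.5, 0.5, bayes=False))    # frequentist bound (never smaller)
\end{lstlisting}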
In the case of the frequentist algorithm, the previous discussion of taking $\sigma_0\rightarrow 0$ applies only to an upper bound on its expected probability of misidentification. In \cref{sec:lb}, we present a formal lower bound on the expected probability of misidentification of any frequentist policy applied to the Bayesian setting, which clearly shows the loss incurred by ignoring prior information.
In the rest of this section, we outline the proof of our main result, \cref{thm:bound_with_integral}.
\subsection{Proof Sketch of \cref{thm:bound_with_integral}}
To prove \cref{thm:bound_with_integral}, we first consider the parameter vector $\bm{\mu}$ fixed. For any fixed round $r\in [R]$, in \cref{lem:posterior_mean_concentration} we bound the probability that the posterior mean of some suboptimal arm $i$ is larger than the posterior mean of the optimal arm of $\bm{\mu}$. Then, for any fixed $r\in[R]$ and $\bm{\mu}$, in \cref{lem:r_elimination} we bound the probability that the optimal arm is eliminated at round $r$. Subsequently, we put things together and bound the probability of error in instance $\bm{\mu}$. Up to this point, our analysis is frequentist-like and treats the parameter vector as fixed. Finally, in \cref{lem:final_ub}, we integrate out the randomness of $\bm{\mu}$.
To simplify the analysis, we ignore errors due to rounding. Note that the set of active arms in any round $r\in [R]$, i.e. $S_r$, is a random variable that depends on the reward realizations as well as the randomness in the parameters $\bm{\mu}$ of the reward distributions of the arms. For any fixed set of parameters $\bm{\mu}$ and round $r\in[R]$, the following lemma bounds the probability that the posterior mean $\bar\mu_{i_*,n_{r,{i_*}}}$ of the best arm $i_*$ of $\bm{\mu}$ is smaller than the posterior mean $\bar\mu_{i,n_{r,i}}$ of some arm $i\in S_r$:
\begin{restatable}{lemma}{lemPostManConc}\label{lem:posterior_mean_concentration}
Fix instance $\bm{\mu}$ and round $r \in [R]$. Suppose that $i_* \in S_r$. Then, for any $i \in S_r$ we have
\begin{align*}
&\prob{\bar\mu_{i,n_{r,i}}>\bar\mu_{i_*,n_{r,i_*}}|\bm{\mu}} \\
&\leq\exp\left(-\frac{n}{4R\cdot \Sigma_r }\left(R\cdot\Sigma_r\frac{\nu_{i_*}-\nu_i}{n\sigma_0^2} + \mu_{i_*}- \mu_i\right)^2\right).
\end{align*}
\end{restatable}
Observe that as $n$ grows, the bias due to prior information, $\frac{(\nu_{i_*}-\nu_i)}{\sigma_0^2}$, contributes less to the above bound. This is in line with the intuition that as the number of samples grows, prior information should become less important.
For a fixed parameter instance $\bm{\mu}$, we bound the probability that the optimal arm is eliminated at some round $r\in[R]$. This bound follows roughly from \cref{lem:posterior_mean_concentration} and the fact that if arm $i_*$ is eliminated at round $r$, then at least $|S_r|/2$ arms have larger posterior means than $i_*$ at the end of the round. The result follows:
\begin{restatable}{lemma}{lemRElimination}\label{lem:r_elimination}
Fix instance $\bm{\mu}$ and round $r\in [R]$. Then, there exists some $j_r\in S_r$ such that the probability that $i_*$ is eliminated at round $r$ satisfies
\begin{align*}
&\prob{i_* \not \in S_{r+1}| \{i_* \in S_r\}, \bm{\mu}}\\
&\leq 3\exp\left(-\frac{n}{4 R\cdot \Sigma_r}\left(\frac{R\cdot\Sigma_r}{n\sigma_0^2}(\nu_{i_*}-\nu_{j_r}) + \Delta_{j_r}\right)^2\right).
\end{align*}
\end{restatable}
Putting together the above lemmas, the expected probability of misidentification can be bounded as follows
\begin{align}\label{eq:regret_bound_1}
&\Ex{\bm{\mu}}{\prob{J\neq i_*|\bm{\mu}}} \nonumber\\
&= \int_{\bm{\mu}} \prob{J\neq i_*|\bm{\mu}} \mathbb{P}(\bm{\mu}) \,d\bm{\mu} \nonumber\\
&\leq \int_{\bm{\mu}} \sum_{r\in [R]}\prob{i_*(\bm{\mu}) \not \in S_{r+1}| \bm{\mu}, \{i_*(\bm{\mu}) \in S_r\}} \mathbb{P}(\bm{\mu}) \,d\bm{\mu} \nonumber\\
&\leq \sum_{r\in [R]} \int_{\bm{\mu}}
\exp\left(-\frac{n}{4 R\cdot \Sigma_r}\left(\frac{R\cdot\Sigma_r}{n\sigma_0^2}(\nu_{i_*}-\nu_{j_r}) + \Delta_{j_r}\right)^2\right)\nonumber\\
&\quad \cdot 3 \mathbb{P}(\bm{\mu}) \,d\bm{\mu} ,
\end{align}
where the last inequality is due to \cref{lem:r_elimination}.
Now, as noted before, $i_*, j_r$ are random quantities that depend on the instance $\bm{\mu}$. For any fixed $r\in[R]$, we can rewrite the above integral by grouping the instances according to the values of $i_*$ and $j_r$ in $[K]$ and then upper bound the integral to show the following:
\begin{restatable}{lemma}{lemFinalUBound}\label{lem:final_ub}
We have that
\begin{align*}
&\int_{\bm{\mu}}
e^{-\frac{n}{4 R\cdot \Sigma_r}\left(\frac{R\cdot\Sigma_r}{n\sigma_0^2}(\nu_{i_*}-\nu_{j_r}) + \Delta_{j_r}\right)^2}
\mathbb{P}(\bm{\mu}) \,d\bm{\mu} \\
&\leq \sqrt{\frac{1}{n\frac{\sigma_0^2}{R\cdot \Sigma_r}+1}} \sum_{i\in[K]}\sum_{j>i} e^{-\frac{n\sigma_0^2+R\cdot\Sigma_r}{n \sigma_0^2}\frac{(\nu_i-\nu_j)^2}{4\sigma_0^2}}.
\end{align*}
\end{restatable}
This completes the sketch.
\qed
\section{EXPERIMENTS}
\label{sec:experiments}
\begin{figure*}[t]
\centering
\begin{minipage}{.48\textwidth}
\includegraphics[width=\linewidth]{figures/plot_budget.pdf}
\end{minipage}
\begin{minipage}{.48\textwidth}
\includegraphics[width=\linewidth]{figures/plot_sigma0.pdf}
\end{minipage}
\caption{Evaluation of fixed-budget BAI algorithms on a synthetic dataset. The probability of misidentification is shown as a function of (a) the budget $n$ (left plot) and (b) the prior variance $\sigma_0$ (right plot).}
\label{fig:plot budget sigma0}
\end{figure*}
We conduct experiments on a synthetic dataset. We simulate rewards from $K=8$ Gaussian arms whose means are themselves drawn from a Gaussian prior with means $\{\nu_i = 2^{-i}: i \in \{0,1,2,3,4,5,6,7\}\}$. We choose this setting because it contains few small gaps and many large gaps. This is conducive to adaptive algorithms, as is evident from the bound in \cref{thm:freq_bound}. In all our plots, we show the mean performance and error bars over $5000$ runs. We compare the performance of seven algorithms, explained below.
\begin{itemize}
\item \ensuremath{\tt TS}\xspace\citep{russo2018tutorial}: Thompson sampling, where the recommended arm is sampled with probability proportional to its number of pulls. The probability of misidentification of \ensuremath{\tt TS}\xspace can easily be bounded by observing that an $\tilde{O}(\sqrt{n})$ cumulative regret of \ensuremath{\tt TS}\xspace implies an $\tilde{O}(1/\sqrt{n})$ simple regret for this strategy.
\item \ensuremath{\tt TS2}\xspace: Thompson sampling, where the best arm is chosen as the one with the highest posterior mean. This strategy does not have a guarantee for fixed-budget BAI. We choose it because it performed extremely well in our early experiments.
\item \ensuremath{\tt BayesElim}\xspace: \cref{alg:bayesian_successive_elimination} proposed in this paper. The probability of misidentification is bounded in \cref{thm:bound_with_integral}. A minimal simulation sketch is given after this list.
\item \ensuremath{\tt BayesElim2}\xspace: \cref{alg:bayesian_successive_elimination}, but where we do not discard previous samples at the end of each stage. This variant does not have a theoretical bound: discarding earlier observations is a standard device that enables the analysis, but it hurts practical performance.
\item \ensuremath{\tt TTTS}\xspace\citep{russo2016simple}: Top-two Thompson sampling, a state-of-the-art algorithm for BAI, where the arm with the highest posterior mean is chosen as the best arm. This does not have a theoretical bound for fixed-budget BAI.
\item \ensuremath{\tt FreqElim}\xspace\citep{Karnin2013AlmostOE}: The frequentist version of the elimination algorithm proposed in this paper, which ignores the prior. The probability of error can be bounded analytically, as shown in Theorem 4.1 of \citet{Karnin2013AlmostOE}.
\item \ensuremath{\tt FreqElim2}\xspace: The frequentist analog of \ensuremath{\tt BayesElim2}\xspace. This strategy does not have a theoretical bound.
\end{itemize}
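For concreteness, the following Python sketch is our own minimal reconstruction of the elimination scheme in the equal-variance case, not the implementation used in the experiments: in each of $R=\log_2(K)$ rounds, the per-round budget is split equally among the active arms, arms are scored by their Gaussian posterior means, and the lower half is eliminated (\ensuremath{\tt BayesElim2}\xspace would additionally retain samples across rounds).
\begin{lstlisting}[language=Python]
import numpy as np

def bayes_elim(nu, sigma0, sigma, mu, n, rng):
    # Minimal sketch (ours) of Bayesian successive elimination with equal
    # reward variances; rounding effects are ignored as in the analysis.
    K = len(mu)
    R = int(np.log2(K))
    active = list(range(K))
    for _ in range(R):
        m = max(1, n // (R * len(active)))        # per-arm budget this round
        post = []
        for i in active:
            x = rng.normal(mu[i], sigma, size=m)  # fresh samples each round
            var = 1.0 / (1.0 / sigma0 ** 2 + m / sigma ** 2)
            post.append(var * (nu[i] / sigma0 ** 2 + x.sum() / sigma ** 2))
        order = np.argsort(post)[::-1]            # rank by posterior mean
        active = [active[k] for k in order[: max(1, len(active) // 2)]]
    return active[0]

rng = np.random.default_rng(2)
nu = np.array([2.0 ** -i for i in range(8)])      # prior means as in the experiments
errs = 0
for _ in range(2000):
    mu = rng.normal(nu, 0.5)                      # instance drawn from the prior
    errs += bayes_elim(nu, 0.5, 0.5, mu, n=200, rng=rng) != int(np.argmax(mu))
print("estimated misidentification probability:", errs / 2000)
\end{lstlisting}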
For our first experiment, we study the dependence of the probability of misidentification as a function of the budget $n$. For this experiment, we fix the prior variance $\sigma_0 = 0.5$ and the reward variance $\sigma = 0.5$. The left plot in \cref{fig:plot budget sigma0} shows the log probability of misidentification as a function of the budget. As expected, the probability of misidentification decreases as the budget increases for all algorithms. We highlight two observations. First, among all algorithms that have theoretical guarantees, \ensuremath{\tt BayesElim}\xspace performs the best. Furthermore, the performance of \ensuremath{\tt BayesElim2}\xspace (which does not have a theoretical guarantee), is similar to \ensuremath{\tt TTTS}\xspace, which performs the best among all algorithms. Second, the difference between the frequentist and Bayesian elimination algorithms decreases as the budget increases. This is expected since the benefit of the prior diminishes as more samples are available.
In our second experiment, we plot the probability of misidentification as a function of the prior variance $\sigma_0$. Here again, we observe \ensuremath{\tt BayesElim}\xspace to be the best algorithm among those with theoretical guarantees, and \ensuremath{\tt BayesElim2}\xspace to be close to optimal among all algorithms. We also observe that the performance gap between the frequentist and Bayesian elimination algorithms decreases as the variance of the prior increases. This is expected because the higher the prior variance, the less informative it is.
\section{Proofs of \cref{sec:ub_analysis}}\label{app}
\begin{theorem}[Hoeffding's Inequality for Subgaussian Random Variables (from \cite{hoeffding63})]\label{thm:hoeffding}
If $X_1,\dots,X_m$ are i.i.d. $\mathcal{N}(\mu,\sigma^2)$ random variables, then for any $i\in[m]$ and any $\epsilon>0$:
\begin{align*}
&\Prob{}{X_i\geq \mu+\epsilon} \leq \exp\left(-\frac{\epsilon^2}{2\sigma^2}\right)
\text{ and } \Prob{}{\frac{1}{m}\sum_{i\in[m]}X_i\geq \mu+\epsilon} \leq \exp\left(-\frac{m\epsilon^2}{2\sigma^2}\right).
\end{align*}
\end{theorem}
\begin{theorem}[Bretagnolle–Huber inequality (from \cite{Bretagnolle1978EstimationDD})]\label{thm:huber}
Let $\mathbb{P}$ and $\mathbb{Q}$ be probability
measures on the same measurable space and $A$ a measurable event. Then,
\begin{align*}
\mathbb{P}(A) + \mathbb{Q}(A^c) \geq \frac{1}{2} \exp(-d_{KL}(\mathbb{P},\mathbb{Q}))
\end{align*}
where $A^c$ is the complement of $A$ and $d_{KL}(\mathbb{P},\mathbb{Q}) = \int_{-\infty}^{\infty} \log \left(\frac{\,d \mathbb{P}(x)}{\,d \mathbb{Q}(x)}\right) \,d \mathbb{P}(x)$.
\end{theorem}
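For intuition (this illustration is ours), both sides of \cref{thm:huber} can be evaluated in the Gaussian case used above: for $\mathbb{P}=\mathcal{N}(0,\sigma^2)$ and $\mathbb{Q}=\mathcal{N}(\Delta,\sigma^2)$ we have $d_{KL}(\mathbb{P},\mathbb{Q})=\frac{\Delta^2}{2\sigma^2}$, and the likelihood-ratio event $A=\{x\geq \Delta/2\}$ gives:
\begin{lstlisting}[language=Python]
from math import exp, sqrt, erf

def phi(z):  # standard normal CDF
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

delta, sigma = 1.0, 1.0                       # illustrative values (ours)
kl = delta ** 2 / (2 * sigma ** 2)            # KL divergence between the Gaussians
p_A = 1.0 - phi(delta / (2 * sigma))          # P(x >= delta/2) under N(0, sigma^2)
q_Ac = phi(-delta / (2 * sigma))              # Q(x < delta/2) under N(delta, sigma^2)
print(p_A + q_Ac, ">=", 0.5 * exp(-kl))       # 0.617... >= 0.303...
\end{lstlisting}
No choice of event can make the left-hand side smaller than the right-hand side, which is what drives the argument in \cref{sec:proof_lb}.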
\lemPostManConc*
\begin{proof}
The posterior distribution of any arm $i$ given $n_{r,i}$ samples, $X_{i,1},...,X_{i,n_{r,i}}$, is
$\mathcal{N}\left(\bar\mu_{i,n_{r,i}},\bar\sigma_{i,n_{r,i}}^2\right)$
where
$$\bar\sigma_{i,n_{r,i}}^2 = \left(\frac{1}{\frac{1}{\sigma_{0,i}^2}+\frac{n_{r,i}}{\sigma_i^2}}\right)
\text{ and }~
\bar\mu_{i,n_{r,i}} = \bar\sigma_{i,n_{r,i}}^2 \left(\frac{\nu_i}{\sigma_{0,i}^2}+\sum_{s\in [n_{r,i}]} \frac{X_{i,s}}{\sigma_i^2}\right).$$
We consider the probability that the posterior mean of some arm $i\in S_r$ is larger than the posterior mean of arm $i_*$ for a fixed parameter vector $\bm{\mu}$. We have that:
\begin{align}\label{eq:concentration_2}
\prob{\bar\mu_{i,n_{r,i}} > \bar\mu_{i_*,n_{r,i_*}}|\bm{\mu}}
&= \prob{ \bar\sigma_{i,n_{r,i}}^2 \left(\frac{\nu_i}{\sigma_{0,i}^2}+\sum_{s\in [n_{r,i}]} \frac{X_{i,s}}{\sigma_i^2}\right) > \bar\sigma_{i_*,n_{r,i_*}}^2 \left(\frac{\nu_{i_*}}{\sigma_{0,i_*}^2}+\sum_{s\in [n_{r,i_*}]} \frac{X_{i_*,s}}{\sigma_{i_*}^2}\right) ~|\bm{\mu}}.
\end{align}
Notice that for the values of $n_{r,i}$ selected by our algorithm, we have that:
\begin{align*}
\frac{\bar\sigma_{i,n_{r,i}}^2}{\sigma_i^2}
&= \frac{1}{\sigma_i^2} \left(\frac{1}{\frac{1}{\sigma_{0}^2}+\frac{n_{r,i}}{\sigma_i^2}}\right)
= \frac{1}{\sigma_i^2} \left(\frac{1}{\frac{1}{\sigma_{0}^2}+\frac{n \sigma_i^2}{R\cdot\sigma_i^2\Sigma_r} }\right)
= \frac{\sigma_{0}^2}{\sigma_i^2} \left(\frac{1}{1+\frac{n\sigma_0^2}{R\cdot\Sigma_r} }\right).
\end{align*}
Also, we have that:
\begin{align}\label{eq:tmp_s3}
n_{r,i}\cdot \frac{\bar\sigma_{i,n_{r,i}}^2}{\sigma_i^2}
&= {\sigma_i^2}\frac{n}{R\cdot\Sigma_r} \cdot \frac{\sigma_{0}^2}{\sigma_i^2} \left(\frac{1}{1+\frac{n\sigma_0^2}{R\cdot\Sigma_r} }\right)
= \left(\frac{\frac{n\sigma_0^2}{R\cdot\Sigma_r}}{1+\frac{n\sigma_0^2}{R\cdot\Sigma_r} }\right) = \frac{\frac{n\sigma_0^2}{R}}{\Sigma_r+\frac{n\sigma_0^2}{R}}.
\end{align}
Then, for the probability that the posterior mean of some arm $i\in S_r$ is larger than the posterior mean of arm $i_*$ for a fixed parameter vector $\bm{\mu}$ in \cref{eq:concentration_2}, we have that:
\begin{align}\label{eq:tmp_s5}
&\prob{\bar\mu_{i,n_{r,i}} > \bar\mu_{i_*,n_{r,i_*}}|\bm{\mu}}
= \prob{ \bar\sigma_{i,n_{r,i}}^2 \left(\frac{\nu_i}{\sigma_{0}^2}+\sum_{s\in [n_{r,i}]} \frac{X_{i,s}}{\sigma_i^2}\right) > \bar\sigma_{i_*,n_{r,i_*}}^2 \left(\frac{\nu_{i_*}}{\sigma_{0}^2}+\sum_{s\in [n_{r,i_*}]} \frac{X_{i_*,s}}{\sigma_{i_*}^2}\right) ~|\bm{\mu}} \nonumber \\
&= \prob{\frac{\bar\sigma_{i,n_{r,i}}^2}{\sigma_i^2}\sum_{s\in [n_{r,i}]} X_{i,s} - \frac{\bar\sigma_{i_*,n_{r,i_*}}^2}{\sigma_{i_*}^2}\sum_{s\in [n_{r,i_*}]} X_{i_*,s}> \bar\sigma_{i_*,n_{r,i_*}}^2\frac{\nu_{i_*}}{\sigma_{0}^2} - \bar\sigma_{i,n_{r,i}}^2\frac{\nu_i}{\sigma_{0}^2} ~|\bm{\mu}} \nonumber \\
&= \mathbb{P}\Bigg( \frac{\bar\sigma_{i,n_{r,i}}^2}{\sigma_i^2}\sum_{s\in [n_{r,i}]} (X_{i,s}-\mu_i) - \frac{\bar\sigma_{i_*,n_{r,i_*}}^2}{\sigma_{i_*}^2}\sum_{s\in [n_{r,i_*}]} (X_{i_*,s}-\mu_{i_*})> \nonumber\\
& \qquad\qquad \bar\sigma_{i_*,n_{r,i_*}}^2\frac{\nu_{i_*}}{\sigma_{0}^2} - \bar\sigma_{i,n_{r,i}}^2\frac{\nu_i}{\sigma_{0}^2} + \frac{\bar\sigma_{i_*,n_{r,i_*}}^2}{\sigma_{i_*}^2}n_{r,i_*} \mu_{i_*} - \frac{\bar\sigma_{i,n_{r,i}}^2}{\sigma_i^2}n_{r,i}\mu_i ~|\bm{\mu} \Bigg) \nonumber \\
&= \mathbb{P}\Bigg(\frac{\bar\sigma_{i,n_{r,i}}^2}{\sigma_i^2}\sum_{s\in [n_{r,i}]} (X_{i,s}-\mu_i) - \frac{\bar\sigma_{i_*,n_{r,i_*}}^2}{\sigma_{i_*}^2}\sum_{s\in [n_{r,i_*}]} (X_{i_*,s}-\mu_{i_*})> \nonumber\\
&\qquad\qquad \frac{n_{r,i_*}\bar\sigma_{i_*,n_{r,i_*}}^2}{\sigma_{i_*}^2}\left(\frac{\sigma_{i_*}^2 \nu_{i_*}}{\sigma_{0}^2n_{r,i_*}} + \mu_{i_*}\right)- \frac{n_{r,i}\bar\sigma_{i,n_{r,i}}^2}{\sigma_i^2}\left(\frac{\sigma_i^2 \nu_i}{\sigma_{0}^2n_{r,i}} + \mu_i\right) ~|\bm{\mu} \Bigg) \nonumber \\
&= \mathbb{P}\Bigg(\frac{\bar\sigma_{i,n_{r,i}}^2}{\sigma_i^2}\sum_{s\in [n_{r,i}]} (X_{i,s}-\mu_i) - \frac{\bar\sigma_{i_*,n_{r,i_*}}^2}{\sigma_{i_*}^2}\sum_{s\in [n_{r,i_*}]} (X_{i_*,s}-\mu_{i_*})>\nonumber\\
&\qquad\qquad\frac{\frac{n\sigma_0^2}{R}}{\Sigma_r+\frac{n\sigma_0^2}{R}}\left(\left(\frac{\sigma_{i_*}^2 \nu_{i_*}}{\sigma_{0}^2n_{r,i_*}} + \mu_{i_*}\right)- \left(\frac{\sigma_i^2 \nu_i}{\sigma_{0}^2n_{r,i}} + \mu_i\right)\right) ~|\bm{\mu}\Bigg)\nonumber\\
&= \prob{\frac{\bar\sigma_{i,n_{r,i}}^2}{\sigma_i^2}\sum_{s\in [n_{r,i}]} (X_{i,s}-\mu_i) - \frac{\bar\sigma_{i_*,n_{r,i_*}}^2}{\sigma_{i_*}^2}\sum_{s\in [n_{r,i_*}]} (X_{i_*,s}-\mu_{i_*})> \frac{\frac{n\sigma_0^2}{R}}{\Sigma_r+\frac{n\sigma_0^2}{R}}\left(\frac{R\cdot\Sigma_r}{n\sigma_0^2}(\nu_{i_*}-\nu_i) + \mu_{i_*}- \mu_i\right) ~|\bm{\mu}}
\end{align}
where the second-to-last equality holds because, according to \cref{eq:tmp_s3}, we have that $\frac{n_{r,i_*}\bar\sigma_{i_*,n_{r,i_*}}^2}{\sigma_{i_*}^2}=\frac{n_{r,i}\bar\sigma_{i,n_{r,i}}^2}{\sigma_i^2}=\frac{\frac{n\sigma_0^2}{R}}{\Sigma_r+\frac{n\sigma_0^2}{R}}$, and the last equality follows by substituting the expressions for $n_{r,i}, n_{r,i_*}$ and reordering terms.
Since $X_{i,s}\sim\mathcal{N}(\mu_i,\sigma_i^2)$, the sum
$\frac{\bar\sigma_{i,n_{r,i}}^2}{\sigma_i^2}\sum_{s\in [n_{r,i}]} (X_{i,s}-\mu_i)$
is a zero mean Gaussian random variable with variance:
\begin{align*}
\frac{\bar\sigma_{i,n_{r,i}}^4}{\sigma_i^4} \cdot n_{r,i} \cdot \sigma_i^2
= \frac{\sigma_{0}^4}{\sigma_i^4} \frac{1}{(1+\frac{n\sigma_0^2}{R\cdot\Sigma_r} )^2} \cdot \frac{n}{R} \frac{\sigma_i^2}{\Sigma_r} \cdot \sigma_i^2
= \sigma_{0}^2 \frac{\frac{n\sigma_0^2}{R\cdot\Sigma_r}}{(1+\frac{n\sigma_0^2}{R\cdot\Sigma_r} )^2} .
\end{align*}
Therefore, the difference on the LHS of the above inequality: $\frac{\bar\sigma_{i,n_{r,i}}^2}{\sigma_i^2}\sum_{s\in [n_{r,i}]} (X_{i,s}-\mu_i) - \frac{\bar\sigma_{i_*,n_{r,i_*}}^2}{\sigma_{i_*}^2}\sum_{s\in [n_{r,i_*}]} (X_{i_*,s}-\mu_{i_*})$, is a zero mean Gaussian with variance:
\begin{align*}
\frac{\bar\sigma_{i,n_{r,i}}^4}{\sigma_i^4} \cdot n_{r,i} \cdot \sigma_i^2 + \frac{\bar\sigma_{i_*,n_{r,i_*}}^4}{\sigma_{i_*}^4} \cdot n_{r,i_*} \cdot \sigma_{i_*}^2
= 2\sigma_{0}^2 \frac{\frac{n\sigma_0^2}{R\cdot\Sigma_r}}{(1+\frac{n\sigma_0^2}{R\cdot\Sigma_r} )^2}.
\end{align*}
Thus, by Hoeffding's inequality (\cref{thm:hoeffding}) we get that:
\begin{align*}
\cref{eq:tmp_s5}
&\leq \exp\left(-\frac{\frac{\left(\frac{n\sigma_0^2}{R}\right)^2}{(\Sigma_r+\frac{n\sigma_0^2}{R})^2}\left(\frac{R\cdot\Sigma_r}{n\sigma_0^2}(\nu_{i_*}-\nu_i) + \mu_{i_*}- \mu_i\right)^2 }{4\sigma_0^2 \frac{\frac{n\sigma_0^2}{R\cdot\Sigma_r}}{(1+\frac{n\sigma_0^2}{R\cdot\Sigma_r} )^2}}\right)
= \exp\left(-\frac{n}{4R\cdot \Sigma_r }\left(\frac{R\cdot\Sigma_r}{n\sigma_0^2}(\nu_{i_*}-\nu_i) + \mu_{i_*}- \mu_i\right)^2\right).
\end{align*}
In the case that $\sigma_i=\sigma$ for all $i\in[K]$, the per-phase budget is distributed equally among all active arms, i.e. $n_{r,i} = \frac{n}{|S_r| \log_2(K)} = n_r$ for all $i\in S_r$, and thus:
\begin{align*}
\cref{eq:concentration_2}
&\leq \prob{\sum_{s\in [\nr{}]} \frac{X_{i,s}-X_{i_*,s}}{\sigma^2} - \nr{} \frac{\mu_i-\mu_{i_*}}{\sigma^2} > \frac{\nu_{i_*}-\nu_{i}}{\sigma_{0}^2} - \nr{} \frac{\mu_i-\mu_{i_*}}{\sigma^2} ~|\bm{\mu}}\\
&= \prob{\sum_{s\in [\nr{}]} ({X_{i,s}-X_{i_*,s}-(\mu_i-\mu_{i_*})}) > \sigma^2\frac{\nu_{i_*}-\nu_{i}}{\sigma_{0}^2} + \nr{} (\mu_{i_*}-\mu_{i}) ~|\bm{\mu}}\\
&\leq \exp\left(-\frac{1}{4}\frac{\left(\sigma^2\frac{\nu_{i_*}-\nu_{i}}{\sigma_{0}^2} + \nr{} (\mu_{i_*}-\mu_{i})\right)^2}{\sigma^2\nr{}}\right) = \exp\left(-\frac{1}{4\sigma^2}\nr{} \left(\sigma^2\frac{\nu_{i_*}-\nu_{i}}{\sigma_{0}^2 \nr{}} + \Delta_i\right)^2\right) ,
\end{align*}
where the last inequality is due to the fact that
${X_{i,s}-X_{i_*,s}-(\mu_i-\mu_{i_*})}\sim \mathcal{N}(0,2\sigma^2)$.
\end{proof}
\lemRElimination*
\begin{proof}
We define $S'_r$ to be the set of $\frac{3|S_r|}{4}$ arms in $S_r$ with smallest posterior means.
Let us consider the following quantity: \begin{align}\label{eq:tmp_s6}
\E{\sum_{i\in S'_r} \mathbb{I}(\bar\mu_{i,n_{r,i}} >\bar\mu_{i_*,n_{r,i_*}}) | \bm{\mu}, \{i_* \in S_r\}}
&= \sum_{i\in S'_r} \prob{ \bar\mu_{i,n_{r,i}} > \bar\mu_{i_*,n_{r,i_*}} | \bm{\mu}, \{i_* \in S_r\}} \nonumber\\
&\leq \sum_{i\in S'_r} \exp\left(-\frac{n}{2R\cdot \Sigma_r (\sigma_{0,i}^2+\sigma_{0,i_*}^2)}\left(\frac{R\cdot\Sigma_r}{n}(\nu_{i_*}-\nu_i) + \mu_{i_*}- \mu_i\right)^2\right).
\end{align}
Let $j_r\in S_r$ be such that for all $i\in S'_r$ we have $\left(\frac{R\cdot\Sigma_r}{n}(\nu_{i_*}-\nu_i) + \mu_{i_*}- \mu_i\right)^2 \geq \left(\frac{R\cdot\Sigma_r}{n}(\nu_{i_*}-\nu_{j_r}) + \mu_{i_*}- \mu_{j_r}\right)^2$. We can upper bound \cref{eq:tmp_s6} as follows:
\begin{align}\label{eq:tmp_s7}
\cref{eq:tmp_s6}
&= \sum_{i\in S'_r} \exp\left(-\frac{n}{2 R\cdot \Sigma_r (\sigma_{0,i}^2+\sigma_{0,i_*}^2)}\left(\frac{R\cdot\Sigma_r}{n}(\nu_{i_*}-\nu_i) + \mu_{i_*}- \mu_i\right)^2\right) \nonumber\\
&\leq \sum_{i\in S'_r} \exp\left(-\frac{n}{4 R \cdot\Sigma_r \sigma_{0,\max}^2}\left(\frac{R\cdot\Sigma_r}{n}(\nu_{i_*}-\nu_{j_r}) + \Delta_{j_r}\right)^2\right) \nonumber\\
&=|S'_r| \exp\left(-\frac{n}{4 R\cdot \Sigma_r \sigma_{0,\max}^2}\left(\frac{R\cdot\Sigma_r}{n}(\nu_{i_*}-\nu_{j_r}) + \Delta_{j_r}\right)^2\right) .
\end{align}
Now, if the best arm is eliminated at the end of round $r$, then at least $\frac{|S'_r|}{3} = \frac{|S_r|}{4}$ arms in $S'_r$ must have larger posterior means than $i_*$. Using this fact we get:
\begin{align*}
&\prob{i_*(\bm{\mu}) \not \in S_{r+1}| \bm{\mu}, \{i_*(\bm{\mu}) \in S_r\}} \\
&\leq \prob{\sum_{i\in S'_r} \mathbb{I}( \bar\mu_{i,n_{r,i}} >\bar\mu_{i_*,n_{r,i_*}}) > \frac{|S'_r|}{3} ~\Big|~ \bm{\mu}, \{i_*(\bm{\mu}) \in S_r\}} & {} \\
&\leq \frac{3\cdot \E{\sum_{i\in S'_r} \mathbb{I}( \bar\mu_{i,n_{r,i}} >\bar\mu_{i_*,n_{r,i_*}}) | \bm{\mu}, \{i_*(\bm{\mu}) \in S_r\}}}{|S'_r|} &\text{(by Markov's inequality)}\\
&\leq 3\exp\left(-\frac{n}{4 R\cdot \Sigma_r \sigma_{0,\max}^2}\left(\frac{R\cdot\Sigma_r}{n}(\nu_{i_*}-\nu_{j_r}) + \Delta_{j_r}\right)^2\right). &\text{(by \cref{eq:tmp_s7})}
\end{align*}
For equal variances, we let $j_r\in S_r$ be such that for all $i\in S'_r$ we have $\left(\sigma^2\frac{\nu_{i_*}-\nu_{i}}{\sigma_{0}^2 \nr{}} + \Delta_{i}\right)\geq \left(\sigma^2\frac{\nu_{i_*}-\nu_{j_r}}{\sigma_{0}^2 \nr{}} + \Delta_{j_r}\right)$. Then we have that:
\begin{align}\label{eq:Sr_total_bound_1}
\E{\sum_{i\in S'_r} \mathbb{I}(\bar\mu_{i,n_{r,i}} >\bar\mu_{i_*,n_{r,i_*}}) | \bm{\mu}, \{i_* \in S_r\}} &= \sum_{i\in S'_r} \prob{ \bar\mu_{i,n_{r,i}} > \bar\mu_{i_*,n_{r,i_*}} | \bm{\mu}, \{i_* \in S_r\}} \nonumber\\
&\leq \sum_{i\in S'_r} \exp\left(-\frac{1}{4\sigma^2}\nr{} \left(\sigma^2\frac{\nu_{i_*}-\nu_{i}}{\sigma_{0}^2 \nr{}} + \Delta_i\right)^2\right) \nonumber\\
&\leq |S'_r| \exp\left(-\frac{1}{4\sigma^2}\nr{} \left(\sigma^2\frac{\nu_{i_*}-\nu_{j_r}}{\sigma_{0}^2 \nr{}} + \Delta_{j_r} \right)^2\right)
\end{align}
where the first inequality is due to \cref{lem:posterior_mean_concentration}, and the second follows from the definition of $j_r$.
Similarly, we get:
\begin{align*}
\prob{i_*(\bm{\mu}) \not \in S_{r+1}| \bm{\mu}, \{i_*(\bm{\mu}) \in S_r\}}
&\leq \frac{3\cdot \E{\sum_{i\in S'_r} \mathbb{I}( \bar\mu_{i,n_{r,i}} >\bar\mu_{i_*,n_{r,i_*}}) | \bm{\mu}, \{i_*(\bm{\mu}) \in S_r\}}}{|S'_r|} &\text{(by Markov's inequality)}\\
&\leq 3\exp\left(-\frac{1}{4\sigma^2}\nr{} \left(\sigma^2\frac{\nu_{i_*}-\nu_{j_r}}{\sigma_{0}^2 \nr{}} + \Delta_{j_r} \right)^2\right) &\text{(by \cref{eq:Sr_total_bound_1})}
\end{align*}
\end{proof}
\lemFinalUBound*
\begin{proof}
We have that:
\begin{align*}
&\int_{\bm{\mu}}
\exp\left(-\frac{n}{4 R\cdot \Sigma_r}\left(\frac{R\cdot\Sigma_r}{n\sigma_0^2}(\nu_{i_*}-\nu_{j_r}) + \Delta_{j_r}\right)^2\right)
\mathbb{P}_{\bm{Q}}(\bm{\mu}) \,d\bm{\mu} =\\
&= \sum_{i\in[K]}\sum_{j>i}
{\Bigg[}\int_{\bm{\mu}~:~ i_*=i,j_r=j}
\exp\left(-\frac{n}{4 R\cdot \Sigma_r}\left(\frac{R\cdot\Sigma_r}{n\sigma_0^2}(\nu_{i}-\nu_{j}) + \mu_i-\mu_j\right)^2\right)
\mathbb{P}_{\bm{Q}}(\bm{\mu}) \,d\bm{\mu} \\
&\qquad\qquad\qquad +
\int_{\bm{\mu}~:~ i_*=j,j_r=i}
\exp\left(-\frac{n}{4 R\cdot \Sigma_r}\left(\frac{R\cdot\Sigma_r}{n\sigma_0^2}(\nu_{j}-\nu_{i}) + \mu_j-\mu_i\right)^2\right)
\mathbb{P}_{\bm{Q}}(\bm{\mu}) \,d\bm{\mu} {\Bigg]}\\
&\leq \sum_{i\in[K]}\sum_{j>i}
{\Bigg[}\int_{\bm{\mu}~:~ \mu_i\geq \mu_j}
\exp\left(-\frac{n}{4 R\cdot \Sigma_r}\left(\frac{R\cdot\Sigma_r}{n\sigma_0^2}(\nu_{i}-\nu_{j}) + \mu_i-\mu_j\right)^2\right)
\mathbb{P}_{\bm{Q}}(\bm{\mu}) \,d\bm{\mu} \\
&\qquad\qquad\qquad +
\int_{\bm{\mu}~:~ \mu_i<\mu_j}
\exp\left(-\frac{n}{4 R\cdot \Sigma_r}\left(\frac{R\cdot\Sigma_r}{n\sigma_0^2}(\nu_{j}-\nu_{i}) + \mu_j-\mu_i\right)^2\right)
\mathbb{P}_{\bm{Q}}(\bm{\mu}) \,d\bm{\mu} {\Bigg]}\\
&= \sum_{i\in[K]}\sum_{j>i}
\int_{\bm{\mu}}
\exp\left(-\frac{n}{4 R\cdot \Sigma_r}\left(\frac{R\cdot\Sigma_r}{n\sigma_0^2}(\nu_{i}-\nu_{j}) + \mu_i-\mu_j\right)^2\right)
\mathbb{P}_{\bm{Q}}(\bm{\mu}) \,d\bm{\mu} &\text{ (by symmetry)}\\
&= \sqrt{\frac{1}{n\frac{\sigma_0^2}{R\cdot \Sigma_r}+1}} \sum_{i\in[K]}\sum_{j>i} \exp\left(-\frac{1}{4\sigma_0^2}\frac{n\sigma_0^2+R\cdot\Sigma_r}{n \sigma_0^2}(\nu_i-\nu_j)^2\right). &\text{ (by \cref{lem:gaussuan_integration_1} in the Appendix)}
\end{align*}
\end{proof}
\thmFreqUBound*
\begin{proof}
This proof follows the lines of \cref{thm:bound_with_integral}. To simplify the analysis, we ignore errors due to rounding. Note that the set of active arms in any round $r\in [R]$, i.e. $S_r$, is a random variable that depends on the reward realizations as well as the randomness in the parameters $\bm{\mu}$ of the reward distributions of the arms. We first consider the parameter vector $\bm{\mu}$ fixed. For any fixed round $r\in [R]$, \cref{lem:posterior_mean_concentration_freq} bounds the probability that the empirical mean of some suboptimal arm $i$ is larger than the empirical mean of the optimal arm of $\bm{\mu}$. Recall that when $\sigma_i=\sigma$ for all $i\in[K]$ for some $\sigma>0$, the per-phase budget is distributed equally among active arms, i.e. $\nr{i}=\nr{}=\frac{n}{R|S_r|},\forall i\in S_r$. Let $\hat\mu_{i,n_r}$ be the empirical mean from $n_r$ samples of arm $i$. The following is an adaptation of Lemma $4.2$ of \citet{Karnin2013AlmostOE} for Gaussian reward distributions:
\begin{lemma}[Adapted from Lemma $4.2$ of \citet{Karnin2013AlmostOE}]\label{lem:posterior_mean_concentration_freq}
Fix instance $\bm{\mu}$ and round $r \in [R]$. Suppose that $i_* \in S_r$. Then, when $\sigma_i=\sigma$ for all $i\in[K]$ for some $\sigma>0$, for any $i \in S_r$:
\begin{align*}
\prob{\hat\mu_{i,n_r}>\hat\mu_{i_*,n_r}|\bm{\mu}} \leq \exp\left(-\frac{1}{4\sigma^2}\nr{} \Delta_i^2\right) .
\end{align*}
\end{lemma}
Then, continuing along the lines of the frequentist proof, for any fixed $r\in[R]$ and $\bm{\mu}$, in \cref{lem:r_elimination_freq} we bound the probability that the optimal arm is eliminated at round $r$.
\begin{lemma}[Adapted Lemma $4.3$ of \citep{Karnin2013AlmostOE}]\label{lem:r_elimination_freq}
Fix instance $\bm{\mu}$ and round $r\in [R]$. Then, when $\sigma_i=\sigma$ for all $i\in[K]$ for some $\sigma>0$, there exists some $j_r\in S_r$ such that the probability that $i_*$ is eliminated at $r$ satisfies:
\begin{align*}
\prob{i_* \not \in S_{r+1}| \{i_* \in S_r\}, \bm{\mu}}
\leq 3\exp\left(-\frac{1}{4\sigma^2}\nr{} \Delta_{j_r}^2\right).
\end{align*}
\end{lemma}
Up to this point, the analysis is frequentist and imitates that of \citet{Karnin2013AlmostOE}. Finally, in the following, we deal with the randomness in $\bm{\mu}$.
\begin{align*}
&\Ex{\bm{\mu}}{\prob{J\neq i_*|\bm{\mu}}} \\
&= \int_{\bm{\mu}} \prob{J\neq i_*|\bm{\mu}} \mathbb{P}(\bm{\mu}) \,d\bm{\mu} \nonumber\\
&\leq \int_{\bm{\mu}} \sum_{r\in [R]}\prob{i_*(\bm{\mu}) \not \in S_{r+1}| \bm{\mu}, \{i_*(\bm{\mu}) \in S_r\}} \mathbb{P}(\bm{\mu}) \,d\bm{\mu} \\
&\leq 3\sum_{r\in [R]} \int_{\bm{\mu}}
\exp\left(-\frac{n_r}{4 \sigma^2}\Delta_{j_r}^2\right) \cdot \mathbb{P}(\bm{\mu}) \,d\bm{\mu} ,
\end{align*}
where the last inequality is due to \cref{lem:r_elimination_freq}.
Now, as noted before, $i_*, j_r$ are random quantities that depend on the instance $\bm{\mu}$. For any fixed $r\in[R]$, we can rewrite the above integral by grouping the instances according to the realizations of $i_*$ and $j_r$, and then upper bound it as follows:
\begin{align*}
&\int_{\bm{\mu}}
\exp\left(-\frac{n_r}{4 \sigma^2}\Delta_{j_r}^2\right)
\mathbb{P}_{\bm{Q}}(\bm{\mu}) \,d\bm{\mu} =\\
&= \sum_{i\in[K]}\sum_{j>i}
{\Bigg[}\int_{\bm{\mu}~:~ i_*=i,j_r=j}
\exp\left(-\frac{n_r}{4\sigma^2}\left( \mu_i-\mu_j\right)^2\right)
\mathbb{P}_{\bm{Q}}(\bm{\mu}) \,d\bm{\mu} \\
&\qquad\qquad\qquad +
\int_{\bm{\mu}~:~ i_*=j,j_r=i}
\exp\left(-\frac{n_r}{4\sigma^2}\left( \mu_j-\mu_i\right)^2\right)
\mathbb{P}_{\bm{Q}}(\bm{\mu}) \,d\bm{\mu} {\Bigg]}\\
&\leq \sum_{i\in[K]}\sum_{j>i}
{\Bigg[}\int_{\bm{\mu}~:~ \mu_i\geq \mu_j}
\exp\left(-\frac{n_r}{4\sigma^2}\left( \mu_i-\mu_j\right)^2\right)
\mathbb{P}_{\bm{Q}}(\bm{\mu}) \,d\bm{\mu} \\
&\qquad\qquad\qquad +
\int_{\bm{\mu}~:~ \mu_i<\mu_j}
\exp\left(-\frac{n_r}{4\sigma^2}\left( \mu_j-\mu_i\right)^2\right)
\mathbb{P}_{\bm{Q}}(\bm{\mu}) \,d\bm{\mu} {\Bigg]}\\
&= \sum_{i\in[K]}\sum_{j>i}
\int_{\bm{\mu}}
\exp\left(-\frac{n_r}{4\sigma^2}\left( \mu_i-\mu_j\right)^2\right)
\mathbb{P}_{\bm{Q}}(\bm{\mu}) \,d\bm{\mu} &\text{ (by symmetry)}\\
&= \frac{1}{\sqrt{n\frac{\sigma_0^2}{\sigma^2}\frac{2^r}{K\log_{2}(K)}+1}} \sum_{i\in[K]}\sum_{j>i}\exp\left(-\frac{n\sigma_0^2}{n\sigma_0^2+\frac{K\log_2(K)}{2^r}\sigma^2}\frac{(\nu_i-\nu_j)^2}{4\sigma_0^2}\right). &\text{ (by \cref{lem:gaussuan_integration2} in the Appendix)}
\end{align*}
This completes the proof of \cref{thm:freq_bound}.
\end{proof}
\subsection{Gaussian Integrals}
\begin{lemma}\label{lem:gaussuan_integration_1}
Let $\mu_1\sim \mathcal{N}(\nu_1,\sigma_{0}^2), \mu_2\sim \mathcal{N}(\nu_2,\sigma_{0}^2)$ and $c_1,c_2\geq 0$. We have that:
\begin{align*}
\int_{\bm{\mu}} \exp\left(-\frac{c_2}{2\sigma_0^2} \left(c_1(\nu_2-\nu_1)
+ \mu_2-\mu_1 \right)^2\right) \mathbb{P}_{\bm{Q}}(\bm{\mu})\,d\bm{\mu}
= \frac{1}{\sqrt{2c_2+1}} e^{-\frac{c_2(c_1+1)^2(\nu_1-\nu_2)^2}{2(2c_2+1)\sigma_0^2}}
\end{align*}
\end{lemma}
\begin{proof}
We have that:
\begin{align*}
&\int_{\bm{\mu}} e^{-\frac{c_2}{2\sigma_0^2} \left(c_1(\nu_2-\nu_1)
+ \mu_2-\mu_1 \right)^2} \mathbb{P}_{\bm{Q}}(\bm{\mu})\,d\bm{\mu} =\\
&= \int_{\bm{\mu}} \frac{1}{2\pi\sigma_0^2}e^{-\frac{c_2}{2\sigma_0^2} \left(c_1(\nu_2-\nu_1) + \mu_2-\mu_1 \right)^2-\frac{(\mu_1-\nu_1)^2}{2\sigma_0^2}-\frac{(\mu_2-\nu_2)^2}{2\sigma_0^2}} \,d\bm{\mu} =\\
&= \int_{\mu_1} \frac{1}{\sqrt{2}\sqrt{{\pi}}\sqrt{c_2+1}\sigma_0} e^{-\frac{(c_1^2+2c_1+1)c_2\nu_2^2+((-2c_1^2-2c_1)c_2\nu_1+(-2c_1-2)c_2\mu_1)\nu_2+((c_1^2+1)c_2+1)\nu_1^2+((2c_1-2)c_2-2)\mu_1\nu_1+(2c_2+1)\mu_1^2}{(2c_2+2)\sigma_0^2}} \, d\mu_1\\
&= \frac{1}{\sqrt{2c_2+1}} e^{-\frac{c_2(c_1+1)^2(\nu_1-\nu_2)^2}{2(2c_2+1)\sigma_0^2}}
\end{align*}
\end{proof}
\begin{lemma}\label{lem:gaussuan_integration2}
Let $\mu_i\sim \mathcal{N}(\nu_i,\sigma_{0}^2), \mu_j\sim \mathcal{N}(\nu_j,\sigma_{0}^2)$ and $C\geq 0$. We have that:
\begin{align*}
\int_{\bm{\mu}}
\exp\left(-C (\mu_i-\mu_j)^2 \right)
\mathbb{P}_{\bm{Q}}(\bm{\mu}) \,d\bm{\mu}
= \sqrt{
\frac{1}{4C\sigma_0^2+1}}
\exp\left(-\frac{C(\nu_i-\nu_j)^2}{4\sigma_0^2C+1}\right)
\end{align*}
\end{lemma}
\begin{proof}
We will use the following.
Let $a>0$ and $b,c\in\mathbb{R}$; then:
\begin{align}\label{eq:gauss_integr}
\int_{x=-\infty}^{+\infty} e^{-ax^2+bx+c} \,dx = \sqrt{\frac{\pi}{a}}e^{\frac{b^2}{4a}+c}.
\end{align}
\noindent
Now, we can compute the objective as follows:
\begin{align*}
&\int_{\bm{\mu}}
\exp\left(-C (\mu_i-\mu_j)^2 \right)
\mathbb{P}_{\bm{Q}}(\bm{\mu}) \,d\bm{\mu} =\\
&= \frac{1}{2\pi \sigma_0^2} \int_{\mu_i,\mu_j} \exp\left(-C (\mu_i-\mu_j)^2 \right) \exp\left(- \frac{(\nu_i-\mu_i)^2}{2\sigma_0^2} \right) \exp\left(- \frac{(\nu_j-\mu_j)^2}{2\sigma_0^2} \right) \,d\mu_i \,d\mu_j \\
&= \frac{1}{2\pi \sigma_0^2} \int_{\mu_i,\mu_j} \exp\left(-C (\mu_i-\mu_j)^2 - \frac{(\nu_i-\mu_i)^2}{2\sigma_0^2} - \frac{(\nu_j-\mu_j)^2}{2\sigma_0^2} \right) \,d\mu_i \,d\mu_j \\
&= \frac{1}{2\pi \sigma_0^2} \int_{\mu_i,\mu_j} \exp\left( -\mu_i^2\left(C+\frac{1}{2\sigma_0^2}\right)+ \mu_i 2\left(C\mu_j+\frac{\nu_i}{2\sigma_0^2}\right)+\left(-C\mu_j^2+\frac{2\nu_j\mu_j-\nu_i^2-\mu_j^2-\nu_j^2}{2\sigma_0^2}\right)\right) \,d\mu_i \,d\mu_j \\
&= \frac{1}{2\pi \sigma_0^2} \int_{\mu_j} \sqrt{\frac{\pi}{C+\frac{1}{2\sigma_0^2}}}\exp\left(\frac{\left(C\mu_j+\frac{\nu_i}{2\sigma_0^2}\right)^2}{\left(C+\frac{1}{2\sigma_0^2}\right)}+
\left(-C\mu_j^2+\frac{2\nu_j\mu_j-\nu_i^2-\mu_j^2-\nu_j^2}{2\sigma_0^2}\right)\right) \,d\mu_j ~~~~~\text{ using \cref{eq:gauss_integr}}\\
&= \sqrt{\frac{1}{4\pi\sigma_0^4C+2\pi\sigma_0^2}} \int_{\mu_j}
\exp\left(
-
\frac{\mu_j^2}{2\sigma_0^2}
\frac{2C+\frac{1}{2\sigma_0^2}}{C+\frac{1}{2\sigma_0^2}}
+
\frac{\mu_j}{2\sigma_0^2}
\frac{2\left(C\nu_i +(C+\frac{1}{2\sigma_0^2})\nu_j\right)}{C+\frac{1}{2\sigma_0^2}}
+\frac{1}{2\sigma_0^2}\frac{\left(-\frac{\nu_j^2}{2\sigma_0^2}-C(\nu_i^2+\nu_j^2)\right) }{C+\frac{1}{2\sigma_0^2}}
\right) \,d\mu_j \\
&= \sqrt{
\frac{1}{4C\sigma_0^2+1}}
\exp\left(-\frac{C(\nu_i-\nu_j)^2}{4\sigma_0^2C+1}\right) \hspace{8.3cm}\text{ using \cref{eq:gauss_integr}}
\end{align*}
\end{proof}
\section{Proofs of \cref{sec:lb}}\label{app:freq_lb}
\propMinProb*
\begin{proof}
Observe that for the conditional probabilities $\Prob{}{\mathcal{I}_{i_2}},\Prob{}{\mathcal{I}_{i_1}}$ we have that
\begin{align*}
\min(\Prob{}{\mathcal{I}_{i_2}},\Prob{}{\mathcal{I}_{i_1}}) =
\begin{cases}
\frac{\exp\left(-\frac{2(\mu_{i_1}-\mu_{i_2})(\nu_{i_1}-\nu_{i_2})}{2\sigma_0^2}\right)}
{\exp\left(-\frac{2(\mu_{i_1}-\mu_{i_2})(\nu_{i_1}-\nu_{i_2})}{2\sigma_0^2}\right)
+1}, & \text{ if } \nu_{i_1}>\nu_{i_2}\\
&\\
\frac{1}
{\exp\left(-\frac{2(\mu_{i_1}-\mu_{i_2})(\nu_{i_1}-\nu_{i_2})}{2\sigma_0^2}\right)
+1}, & \text{ if } \nu_{i_1}\leq\nu_{i_2}\\
\end{cases}
\end{align*}
Moreover, if $\nu_{i_1}\leq\nu_{i_2}$ then
\begin{align*}
\frac{1}
{\exp\left(-\frac{2(\mu_{i_1}-\mu_{i_2})(\nu_{i_1}-\nu_{i_2})}{2\sigma_0^2}\right)
+1}
\geq
\frac{1}
{2\exp\left(-\frac{2(\mu_{i_1}-\mu_{i_2})(\nu_{i_1}-\nu_{i_2})}{2\sigma_0^2}\right)}
=\frac{\exp\left(-\frac{2(\mu_{i_1}-\mu_{i_2})|\nu_{i_1}-\nu_{i_2}|}{2\sigma_0^2}\right)}{2}
\end{align*}
On the other hand, when $\nu_{i_1}>\nu_{i_2}$ then
\begin{align*}
\frac{\exp\left(-\frac{2(\mu_{i_1}-\mu_{i_2})(\nu_{i_1}-\nu_{i_2})}{2\sigma_0^2}\right)}{\exp\left(-\frac{2(\mu_{i_1}-\mu_{i_2})(\nu_{i_1}-\nu_{i_2})}{2\sigma_0^2}\right)+1}
\geq
\frac{\exp\left(-\frac{2(\mu_{i_1}-\mu_{i_2})(\nu_{i_1}-\nu_{i_2})}{2\sigma_0^2}\right)}{2}
=
\frac{\exp\left(-\frac{2(\mu_{i_1}-\mu_{i_2})|\nu_{i_1}-\nu_{i_2}|}{2\sigma_0^2}\right)}{2}
\end{align*}
\end{proof}
\thmFreqLBound*
\begin{proof}
Consider the mean vector $\bm{\mu}=(\mu_1,\dots,\mu_K)$, where $X_i\sim\mathcal{N}(\mu_i,1)$ and $\mu_i\sim\mathcal{N}(\nu_i,\sigma_0^2)$.
We would like to bound the probability that the learner fails to recommend the optimal arm when presented with instance $\bm{\mu}$, i.e. $\Ex{\bm{\mu}}{\Prob{}{J\neq i_*~|~\bm{\mu}}}$. We also assume that the learner is oblivious to prior information.
Instead, we consider the easier problem where the learner is required to distinguish the best arm given that the instance it faces is one of $K$ instances defined as follows:
We define $d_i = \mu_{i_*}-\mu_{i}$.
Consider $K$ pairs of corresponding Gaussian distributions $p_i= \mathcal{N}(\mu_i,1)$ and $p_i'= \mathcal{N}( \mu_i',1)$, where $\mu_i'=\mu_i + 2d_i$.
We define $K$ Gaussian bandit instances where, in instance $i\in [K]$, any arm $k\in [K]$ has distribution $p_{k}^i=p_{k}$ if $i\neq k$ and $p_{k}^i=p_{k}'$ if $k=i$, i.e. arm $i$ is uniquely the best arm in instance $i$.
We define the product distribution of instance $i \in [K]$ as $G^i=p_{1}^i\times \dots \times p_{K}^i$.
For $i\in[K]$, we use the notation $\Prob{i}{\cdot} = \Prob{G^i}{\cdot|\bm{\mu}}$ and $\Ex{i}{\cdot} = \Ex{G^i}{\cdot | \bm{\mu}}$ to denote the probability (resp. expectation) w.r.t. the randomness of sampling in instance $i$. \\
Let the KL divergence between distributions $p,p'$ be:
\begin{align*}
d_{KL}(p,p') = \int_{-\infty}^{\infty} \log \left(\frac{\,d p(x)}{\,d p'(x)}\right) \,d p(x)
\end{align*}
Here, for $k\in[K]$ we define the KL divergence for arm $k$ as:
$$d_{KL}^{k} = d_{KL}(p_k,p_k') = d_{KL}(p_k',p_k) = \frac{(\mu_k-\mu_k')^2}{2} = \frac{(2d_k)^2}{2} = 2d_k^2.$$
For $t\in [T], k \in [K]$, let $\{X_{k,s}\}_{s\in[t]}\sim p_k^i$ be $t$ samples from arm $k$ in some bandit instance $i$. Moreover, let the empirical KL divergence computed from the samples of arm $k$ be:
\begin{align*}
\widehat{d}_{KL}^{k,t} &= \frac{1}{t} \sum_{s\in[t]} \log \left(\frac{\,d p_k}{\,d p_k'}(X_{k,s})\right) \\
&= \frac{1}{t} \sum_{s\in[t]} \log \left(\frac{\frac{1}{\sqrt{2\pi}} e^{\frac{-(X_{k,s}-\mu_k)^2}{2}}}{\frac{1}{\sqrt{2\pi}} e^{\frac{-(X_{k,s}-\mu'_k)^2}{2}}}\right) \\
&= \frac{1}{t} \sum_{s\in[t]} \left( -\frac{(X_{k,s}-\mu_k)^2}{2} + \frac{(X_{k,s}-\mu_k')^2}{2} \right) \\
&= \frac{1}{t} \sum_{s\in[t]} \frac{(2 X_{k,s} -(\mu_k+\mu_k')) (\mu_k'-\mu_k)}{2} \\
&= \frac{1}{t} \sum_{s\in[t]} 2(X_{k,s} -\mu_{i_*}) d_k
\end{align*}
Note that $\Ex{G^i}{\widehat{d}_{KL}^{k,t}} = 2(\mu_k-\mu_{i_*})d_k=-d_{KL}^k$ if $k\neq i$, or $\Ex{G^i}{\widehat{d}_{KL}^{k,t}} = 2(\mu_k'-\mu_{i_*})d_k=2(\mu_k+2d_k-\mu_{i_*})d_k=d_{KL}^k$ if $k=i$. Therefore, $\widehat{d}_{KL}^{k,t}$ is an unbiased estimator of $\pm d_{KL}^k$, with the sign determined by whether arm $k$ is the modified arm of the instance. Moreover, we have the following concentration result:
\begin{lemma}\label{lem:KL_concentration}
Let
\begin{align*}
\Xi = \left\{|\widehat{d}_{KL}^{k,t}| - d_{KL}^k \leq 2d_k\sqrt{\frac{2\log(6KT)}{t}}, \forall k\in [K], t\in [T]\right\}
\end{align*}
For any $i\in[K]$ we have that:
\begin{align*}
\Prob{i}{\Xi} \geq 5/6
\end{align*}
\end{lemma}
\begin{proof}
Since $X_{k,s}\sim p_k^i$, the quantity $2(X_{k,s} -\mu_{i_*}) d_k$ is a Gaussian random variable. In particular, if $k=i$ then $2(X_{k,s} -\mu_{i_*}) d_k\sim \mathcal{N}\left(2d_k^2, 4d_k^2\right)=\mathcal{N}\left(d_{KL}^k, 4d_k^2\right)$.
On the other hand, if $k\neq i$
then $2(X_{k,s} -\mu_{i_*}) d_k\sim \mathcal{N}\left(-2d_k^2, 4d_k^2\right)=\mathcal{N}\left(-d_{KL}^k, 4d_k^2\right)$.
Thus, using Hoeffding's inequality for the empirical mean of sub-Gaussian random variables:
\begin{align*}
\Prob{G^i}{|\widehat{d}_{KL}^{k,t}| - d_{KL}^k \geq 2d_k \sqrt{\frac{2\log (1/\delta)}{t}}} \leq \delta
\end{align*}
Using $\delta=(6TK)^{-1}$ and a union bound over all $t\in [T]$ and $k\in [K]$, we obtain:
\begin{align*}
\Prob{G^i}{|\widehat{d}_{KL}^{k,t}| - d_{KL}^k \leq 2d_k\sqrt{\frac{2\log(6KT)}{t}}, \forall k\in [K], t\in [T]} \geq 5/6
\end{align*}
\end{proof}
We consider an algorithm that returns arm $J$ and denote by $T_i$ the number of times arm $i$ has been pulled. As in \cite{Carpentier16}, we define:
\begin{equation}\label{eq:def_times}
t_i = \Ex{i_*}{T_i}
\end{equation}
and the event:
\begin{align*}
\mathcal{E}_i = \{J= i_*\}\cap \Xi \cap\{T_i\leq 6 t_i\}.
\end{align*}
We focus on some $i\in[K]$. Using the change of measure identity, since the distributions $G^{i_*},G^{i}$ only differ in arm $i$, we have that:
\begin{align*}
\Prob{i}{\mathcal{E}_i} = \Ex{i_*}{\bm{1}\{\mathcal{E}_i\} \exp\left(-T_i \widehat{d}_{KL}^{i,T_i}\right)}
\end{align*}
Using \cref{lem:KL_concentration} and subsequently \cref{eq:def_times} we get that:
\begin{align*}
\Prob{i}{\mathcal{E}_i}
&= \Ex{i_*}{\bm{1}\{\mathcal{E}_i\} \exp\left(-T_i \widehat{d}_{KL}^{i,T_i}\right)}\\
&\geq \Ex{i_*}{\bm{1}\{\mathcal{E}_i\} \exp\left(-T_id_{KL}^{i} - 2d_i\sqrt{2T_i\log(6KT)}\right)} \\
&\geq \Ex{i_*}{\bm{1}\{\mathcal{E}_i\} \exp\left(-6t_id_{KL}^{i} - 2d_i\sqrt{12t_i\log(6KT)}\right)} \\
&= \Prob{i_*}{\mathcal{E}_i} \exp\left(-6t_id_{KL}^{i} - 2\sqrt{12t_id_i^2\log(6KT)}\right) \\
&= \Prob{i_*}{\mathcal{E}_i} \exp\left(-12 t_i d^2_i - 2\sqrt{12 t_i d_i^2\log(6KT)}\right) \\
\end{align*}
Recall that we are interested in bounding the probability of error, i.e. the following quantity, for any $i\in[K]$:
\begin{align*}
\Ex{\bm{\mu}}{\Prob{i}{J\neq i}}.
\end{align*}
Notice that for $i\in [K]\setminus \{i_*\}$ the probability of error in instance $i$ can be lower bounded as follows:
\begin{align*}
\Prob{i}{J\neq i}
\geq \Prob{i}{\mathcal{E}_i}
\geq \Prob{i_*}{\mathcal{E}_i} \exp\left(-12 t_i d^2_i - 2\sqrt{12 t_i d_i^2\log(6KT)}\right).
\end{align*}
Since $\sum_{i\in[K]} t_i = T$ and $\sum_{i\in [K]\setminus \{i_*\}} \frac{1}{d_i^2}=H$, there exists $i\in[K]\setminus\{i_*\}$ such that $t_i d_i^2 \leq T/H$; otherwise, summing over $i\neq i_*$ would give $\sum_{i\neq i_*} t_i > T$, a contradiction. Thus, there exists $i\in [K]\setminus\{i_*\}$ such that:
\begin{align*}
\Prob{i}{J\neq i} \geq \Prob{i_*}{\mathcal{E}_i} \exp\left(-12\frac{T}{H} - 2\sqrt{12\frac{T}{H}\log(6KT)}\right).
\end{align*}
Using that, by Markov's inequality, $\Prob{i_*}{T_i\geq 6 t_i}\leq \frac{1}{6}$, together with \cref{lem:KL_concentration}, we get that $$\Prob{i_*}{\mathcal{E}_i}\geq 1 - \Prob{i_*}{T_i\geq 6 t_i} - \Prob{i_*}{\Xi^c} - \Prob{i_*}{J\neq i_*} \geq 1-\frac{1}{6}-\frac{1}{6} - \Prob{i_*}{J\neq i_*}= \frac{2}{3}-\Prob{i_*}{J\neq i_*}.$$
Thus, for this $i\in[K]\setminus \{i_*\}$ we have that
\begin{align*}
\Prob{i}{J\neq i}
&\geq \left(\frac{2}{3}-\Prob{i_*}{J\neq i_*}\right) \exp\left(-12\frac{T}{H} - 2\sqrt{12\frac{T}{H}\log(6KT)}\right) \\
&\geq \frac{2}{3} \exp\left(-12\frac{T}{H} - 2\sqrt{12\frac{T}{H}\log(6KT)}\right) -\Prob{i_*}{J\neq i_*}.
\end{align*}
Rearranging the terms in the above inequality we get that
\begin{align*}
\Prob{i}{J\neq i} + \Prob{i_*}{J\neq i_*} \geq \frac{2}{3} \exp\left(-12\frac{T}{H} - 2\sqrt{12\frac{T}{H}\log(6KT)}\right).
\end{align*}
Then there exists $i\in[K]$ such that:
\begin{align*}
\Prob{i}{J\neq i} \geq \frac{1}{3} \exp\left(-12\frac{T}{H} - 2\sqrt{12\frac{T}{H}\log(6KT)}\right).
\end{align*}
This bound also holds in expectation over $\mu_i\sim \mathcal{N}(\nu_i,\sigma_0^2)$:
\begin{align*}
\Ex{\bm{\mu}}{\Prob{i}{J\neq i}} \geq \frac{1}{3} \Ex{\bm{\mu}}{\exp\left(-12\frac{T}{H} - 2\sqrt{12\frac{T}{H}\log(6KT)}\right)}.
\end{align*}
The theorem follows by taking $\sigma_0\rightarrow 0$ in the above (here the budget is denoted by $T$, which corresponds to $n$ in the theorem statement).
\end{proof}
\section{CONCLUSIONS}
\label{sec:conclusions}
While best-arm identification in the fixed-budget setting has been studied extensively in the frequentist setting, Bayesian algorithms with provable Bayesian guarantees do not exist. In this work, we set out to address this gap and propose a Bayesian successive elimination algorithm with (almost) optimal such guarantees. The key idea in the algorithm is to eliminate arms based on their MAP estimates of mean rewards, which take the prior distribution of arm means into account.
The performance of the algorithm improves when the prior is more informative and we also derive an upper bound on the failure probability of the algorithm that reflects that. Our bound matches our newly established lower bound for $K = 2$ arms. Our algorithm is evaluated empirically on synthetic bandit problems. We observe that it is superior to frequentist methods and competitive with the state-of-the-art Bayesian algorithms that have no guarantees in our setting.
Our work is a first step in an exciting direction of more sample-efficient Bayesian BAI algorithms, which have improved guarantees for more informative priors. The work can be extended in two obvious directions. First, our algorithm is designed for and analyzed in Gaussian bandits. We believe that both can be extended to single-parameter exponential-family distributions with conjugate priors, such as Bernoulli rewards with beta priors. Second, successive elimination of \citet{Karnin2013AlmostOE} has recently been extended to linear models by \citet{ijcai2022p388}. In the linear model, a Gaussian model parameter prior with Gaussian rewards implies a Gaussian model parameter posterior. For this conjugacy, we believe that our algorithm design and analysis can also be extended to linear models.
\section{EXPERIMENTS}
\label{sec:experiments}
\begin{figure*}[t]
\centering
\begin{minipage}{.48\textwidth}
\includegraphics[width=\linewidth]{figures/plot_budget.pdf}
\end{minipage}
\begin{minipage}{.48\textwidth}
\includegraphics[width=\linewidth]{figures/plot_sigma0.pdf}
\end{minipage}
\caption{Evaluation of fixed-budget BAI algorithms on a synthetic dataset. The probability of misidentification as a function of a) budget $n$ is shown in the left plot, and b) prior standard deviation $\sigma_0$ is shown in the right plot.}
\label{fig:plot budget sigma0}
\end{figure*}
We conduct experiments on a synthetic dataset. We simulate rewards from $K=8$ Gaussian arms whose means are themselves drawn from a Gaussian prior with means $\{\nu_i = 2^{-i}: i \in \{0,1,2,3,4,5,6,7\}\}$. We choose this setting because it contains few small gaps and many large gaps. This is conducive to adaptive algorithms, as is evident from the bound in \cref{thm:freq_bound}. In all our plots, we show the mean performance and error bars over $5000$ runs. We compare the performance of seven algorithms, explained below.
\begin{itemize}
\item \ensuremath{\tt TS}\xspace~\citep{russo2018tutorial}: Thompson sampling, where the recommended arm is sampled with probability proportional to its number of pulls. The probability of misidentification of \ensuremath{\tt TS}\xspace can easily be bounded by observing that an $\tilde{O}(\sqrt{n})$ cumulative regret of \ensuremath{\tt TS}\xspace implies an $\tilde{O}(1/\sqrt{n})$ simple regret for this strategy.
\item \ensuremath{\tt TS2}\xspace: Thompson sampling, where the best arm is chosen as the one with the highest posterior mean. This strategy does not have a guarantee for fixed-budget BAI. We choose it because it performed extremely well in our early experiments.
\item \ensuremath{\tt BayesElim}\xspace: \cref{alg:bayesian_successive_elimination} proposed in this paper; a minimal sketch is given after this list. The probability of misidentification is bounded in \cref{thm:bound_with_integral}.
\item \ensuremath{\tt BayesElim2}\xspace: \cref{alg:bayesian_successive_elimination}, but where we do not discard previous samples at the end of each stage. This variant does not have a theoretical bound. It is well known that discarding earlier observations is needed for the analysis, but it hurts practical performance.
\item \ensuremath{\tt TTTS}\xspace\citep{russo2016simple}: Top-two Thompson sampling, a state-of-the-art algorithm for BAI, where the arm with the highest posterior mean is chosen as the best arm. This does not have a theoretical bound for fixed-budget BAI.
\item \ensuremath{\tt FreqElim}\xspace\citep{Karnin2013AlmostOE}: The frequentist version of the elimination algorithm proposed in this paper, which ignores the prior. The probability of error can be bounded analytically, as shown in Theorem 4.1 of \citet{Karnin2013AlmostOE}.
\item \ensuremath{\tt FreqElim2}\xspace: The frequentist analog of \ensuremath{\tt BayesElim2}\xspace. This strategy does not have a theoretical bound.
\end{itemize}
For our first experiment, we study the probability of misidentification as a function of the budget $n$. For this experiment, we fix the prior standard deviation $\sigma_0 = 0.5$ and the reward standard deviation $\sigma = 0.5$. The left plot in \cref{fig:plot budget sigma0} shows the log probability of misidentification as a function of the budget. As expected, the probability of misidentification decreases as the budget increases for all algorithms. We highlight two observations. First, among all algorithms that have theoretical guarantees, \ensuremath{\tt BayesElim}\xspace performs the best. Furthermore, the performance of \ensuremath{\tt BayesElim2}\xspace (which does not have a theoretical guarantee) is similar to that of \ensuremath{\tt TTTS}\xspace, which performs the best among all algorithms. Second, the difference between the frequentist and Bayesian elimination algorithms decreases as the budget increases. This is expected since the benefit of the prior diminishes as more samples are available.
In our second experiment, we plot the probability of misidentification as a function of the prior standard deviation $\sigma_0$. Here again, we observe \ensuremath{\tt BayesElim}\xspace to be the best algorithm among those with theoretical guarantees, and \ensuremath{\tt BayesElim2}\xspace to be close to optimal among all algorithms. We also observe that the performance gap between the frequentist and Bayesian elimination algorithms decreases as the variance of the prior increases. This is expected because the higher the prior variance, the less informative the prior is.
\section{INTRODUCTION}
\label{sec:introduction}
\emph{Best-arm identification (BAI)} is a \emph{pure exploration} bandit problem where the goal is to identify the optimal arm \citep{bubeck2010pure,audibert-2010-BAI}. It has many applications in practice, such as online advertising, recommender systems, and vaccine tests \citep{lattimore-Bandit}. In the \emph{fixed-budget (FB)} setting \citep{bubeck2010pure,audibert-2010-BAI}, the goal is to accurately identify the optimal arm within a fixed budget of observations (arm pulls). This setting is common in applications where the observations are costly, such as in Bayesian optimization \citep{krause08nearoptimal}. In the \emph{fixed-confidence (FC)} setting \citep{ActionElimination-Evendar2006a,soare2014bestarm}, the goal is to find the optimal arm with a guaranteed level of confidence, while minimizing the sample complexity. Some works even studied both settings \citep{Gabillon-2012,Karnin2013AlmostOE}.
Most BAI algorithms, including all of the aforementioned, are frequentist. This means that the bandit instance is chosen potentially adversarially from some hypothesis class, such as linear models, and the goal of the agent is to identify the best arm in it by only knowing the class. While frequentist BAI algorithms have strong guarantees, they cannot be easily integrated with side information, such as a prior distribution over bandit instances, which is often available. While Bayesian BAI algorithms can naturally do that \citep{pmlr-v33-hoffman14,russo2016simple}, to the best of our knowledge the analyses of all state-of-the-art Bayesian algorithms are frequentist. So, while the algorithms can benefit from the side information, their regret bounds do not show improvement due to better side information. One recent exception is the work of \citet{komiyama2021optimal}, where the authors prove the first lower bound on a simple Bayes regret and bound the simple Bayes regret of a BAI algorithm that explores uniformly.
In this work, we set out to address an obvious gap in prior works on Bayesian BAI. Specifically, we propose the first Bayesian BAI algorithm for the fixed-budget setting that uses a prior distribution over bandit instances as a side information, and also has an error bound that improves with a more informative prior. This work parallels modern analyses of Thompson sampling in the cumulative regret setting \citep{russo14learning,russo16information,hong22thompson,hong22hierarchical}. For instance, \citet{russo14learning} showed that the $n$-round Bayes regret of linear Thompson sampling is $O(\sqrt{d})$ lower than the best known regret bound in the frequentist setting \citep{agrawal13thompson}. \citet{hong22hierarchical} showed that the shared hyper-parameter in meta- and multi-task bandits provably reduces the $n$-round Bayes regret of Thompson sampling that uses this structure. We believe that our work lays foundations for similar future improvements in Bayesian BAI.
This paper makes the following contributions. First, we formulate the setting of fixed-budget BAI with $K$ arms and propose an elimination algorithm for it. The algorithm is a variant of successive elimination \citep{Karnin2013AlmostOE} where the \emph{maximum likelihood estimate (MLE)} of the mean arm reward is replaced with a Bayesian \emph{maximum a posteriori (MAP)} estimate. We call the algorithm \ensuremath{\tt BayesElim}\xspace. Second, we prove an upper bound on the probability that \ensuremath{\tt BayesElim}\xspace fails to identify the optimal arm. The upper bound is proved using a frequentist-like analysis, where we carry the prior information through, and then integrate out the random instance at the end. The carried prior shows reduced regret when compared to frequentist algorithms. Our analysis technique is novel and very different from typical Bayesian bandit analyses in the cumulative regret setting \citep{russo14learning,russo16information,hong22thompson,hong22hierarchical}, which condition on history and bound the regret in expectation over the posterior in each round. Third, we prove a matching lower bound for the case of $K = 2$ arms. Finally, we evaluate \ensuremath{\tt BayesElim}\xspace on several synthetic bandit instances and demonstrate the benefit of using the prior in BAI.
One surprising property of our upper and lower bounds is that they are proportional to $1 / \sqrt{n}$, where $n$ is the budget. At first sight, this seems to contradict the frequentist upper \citep{Karnin2013AlmostOE} and lower \citep{Carpentier16} bounds, which are proportional to $\exp[- n\Delta^2]$, where $\Delta$ is the gap. The reason for the seeming contradiction is that the frequentist bounds are proved in a harder setting, per instance instead of integrating out the instance, yet they decay faster. The bounds are compatible though: roughly speaking, when the frequentist bounds are integrated over $\Delta$, which in our case can be viewed as $\Delta \sim \mathcal{N}(0, 1)$, the resulting integrals yield $1 / \sqrt{n}$, because the budget $n$ in $\exp[-n \Delta^2]$ plays the role of an inverse variance in the Gaussian integral. These claims are stated more rigorously in the paper.
\section{LOWER BOUND}\label{sec:lb}
In this section, we construct a lower bound on the probability of misidentification of any policy that recommends arm $J$ after $n$ exploration rounds.
Our lower bound construction is novel: we formulate a simpler setting where it is easier to handle prior weights and combine this fact with frequentist-like arguments for every possible problem instance.
We compare this lower bound to the guarantee of our algorithm and show that \ensuremath{\tt BayesElim}\xspace achieves optimal dependence in almost all parameters of the setting.
In addition, we give a lower bound on the expected probability of misidentification of any frequentist policy, i.e., a policy that ignores prior information, applied to the Bayesian setting. Finally, in \cref{sec:proof_lb} we sketch the proof of our \cref{thm:lb}.
We are now ready to state the main lower bound. For simplicity, we focus on the case of $K=2$ arms and explore the dependence on the horizon and prior quantities. We have the following:
\begin{theorem}\label{thm:lb}
For any policy interacting with $K=2$ arms with mean rewards $\bm{\mu}=(\mu_1,\mu_2)$, reward distributions $\mathcal{N}(\mu_i,\sigma^2)$ and priors $\mu_i\sim\mathcal{N}(\nu_i,\sigma_0^2)$ for $i\in\{1,2\}$, we have that
\begin{align*}
&\Ex{\bm{\mu}}{\Prob{}{J\neq i_*|\bm{\mu}}} \\
&\geq \frac{1}{8}\Ex{\bm{\mu}}{\exp\left(-n\frac{\Delta^2}{2\sigma^2}- \frac{2\Delta|\nu_{1}-\nu_{2}|}{2\sigma_0^2}\right) },
\end{align*}
where $\Delta = |\mu_1-\mu_2|$.
\end{theorem}
In order to compare the above lower bound
to the guarantee for the expected probability of misidentification of \ensuremath{\tt BayesElim}\xspace, we use the upper bound derived in the intermediate \cref{eq:regret_bound_1}, that is, before integration over priors. Replacing the expressions of $\Sigma_r,R$ and using $K=2$ and the same notation $\Delta$ as in \cref{thm:lb}, the guarantee for \ensuremath{\tt BayesElim}\xspace in \cref{eq:regret_bound_1} becomes as follows:
\begin{corollary}
In the setting of \cref{thm:lb}, \ensuremath{\tt BayesElim}\xspace satisfies:
\begin{align*}
&\Ex{\bm{\mu}}{\Prob{}{J\neq i_*|\bm{\mu}}} \\
&\leq 3 \Ex{\bm{\mu}}{
\exp\left(-\frac{n}{8} \frac{\left(\frac{\sigma^2}{\sigma_{0}^2}\frac{\nu_{1}-\nu_{2}}{n} + \Delta\right)^2}{\sigma^2} \right)} \\
&= \mathbb{E}_{\bm{\mu}}\Bigg[
\exp\Bigg(-n\frac{\Delta^2}{8\sigma^2}- \frac{2\Delta (\nu_{1}-\nu_{2})}{8\sigma_0^2}\\
& \qquad\qquad\qquad
\qquad-\frac{\sigma^2}{\sigma_0^2}\frac{(\nu_{1}-\nu_{2})^2}{8n\sigma_0^2}\Bigg) \Bigg].
\end{align*}
\end{corollary}
From the above, it is clear that \ensuremath{\tt BayesElim}\xspace achieves \textit{optimal} guarantees in terms of the dependence on the budget $n$, the prior standard deviation $\sigma_0$, the reward standard deviation $\sigma$, and the suboptimality gap $\Delta$, up to constant factors. Moreover, the upper bound has a dependence on the gap between the prior means, normalized by $\sigma_0^2$, that is similar to the lower bound. Notice that the term $\frac{\sigma^2}{\sigma_0^2}\frac{(\nu_{1}-\nu_{2})^2}{8n\sigma_0^2}$ vanishes as the budget grows. However, there is still a gap between the two bounds in terms of this dependence.
Finally, we formally quantify the loss incurred by using a frequentist policy in the Bayesian setting, by showing the following lower bound on the expected probability of misidentification of policies that ignore prior information:
\begin{restatable}{theorem}{thmFreqLBound}\label{thm:freq_lb}
When $\sigma_0\rightarrow 0$, there exist prior means $(\nu_1,\dots,\nu_K)$ with $\nu_1>\nu_{j}$ for all $j\in [K]\setminus{\{1\}}$, such that the expected probability of misidentification of any frequentist algorithm, that is any algorithm that is oblivious to prior information, satisfies:
\begin{align*}
&\Ex{\bm{\mu}}{\prob{J\neq i_*|\bm{\mu}}} \\
&\geq \frac{1}{6}
\exp\left(-\frac{12n}{\sum_{i>1}(\nu_1-\nu_i)^{-2}} - \sqrt{\frac{48n\log(6Kn)}{\sum_{i>1}(\nu_1-\nu_i)^{-2}}}\right).
\end{align*}
\end{restatable}
Observe that the setting of \cref{thm:freq_lb} corresponds to a trivial case for a Bayesian policy, since the means are known without any sampling; \ensuremath{\tt BayesElim}\xspace achieves zero probability of misidentification in that case. The proof of the above result is deferred to \cref{app:freq_lb}.
In the rest of this section, we outline the proof of our main lower bound in \cref{thm:lb} for the problem of Bayesian BAI.
\subsection{Proof of \cref{thm:lb}}\label{sec:proof_lb}
We consider a fixed reward vector $\bm{\mu}=(\mu_1,\mu_2)$ drawn from the prior. Let $i_1$ be the optimal and $i_2$ be the suboptimal arm of $\bm{\mu}$. We need to bound the probability that the learner fails to recommend the optimal arm when presented with vector $\bm{\mu}$, i.e. $\Ex{\bm{\mu}}{\Prob{}{J\neq i_1~|~\bm{\mu}}}$.
Instead, we consider the easier problem where the learner is also given the information that the instance it faces is one of two Gaussian bandit instances $\mathcal{I}_{i_1},\mathcal{I}_{i_2}$, where $\mathcal{I}_{i_1}$ is the original instance corresponding to the true mean reward vector $\bm{\mu}$, while in instance $\mathcal{I}_{i_2}$ the mean rewards are flipped such that the suboptimal arm, $i_2$, is optimal.
Formally, we define instance $\mathcal{I}_{i_1}$ where the realizations of the arms follow the product distribution $G_{i_1}=p_1\times p_2$ where:
$$p_1= \mathcal{N}(\mu_{1},\sigma^2), p_2= \mathcal{N}(\mu_{2},\sigma^2).$$
Similarly, we define instance $\mathcal{I}_{i_2}$ where the realizations of the arms follow the distribution $G_{i_2}=p_2\times p_1$.\\
Observe that all $\mathcal{I}_{i_1},\mathcal{I}_{i_2},G_{i_1},G_{i_2}$ are functions of $\bm{\mu}$, and thus they are random variables. Also, notice that the instances are defined such that arm $i$ is uniquely optimal in instance $\mathcal{I}_{i}$.
For $i\in\{i_1,i_2\}$, we use the notation $\Prob{i}{\cdot} = \Prob{G_i}{\cdot|\bm{\mu}}$ and $\Ex{i}{\cdot} = \Ex{G_i}{\cdot | \bm{\mu}}$ to denote the probability (resp. expectation) w.r.t. the interplay of the (possibly randomized) policy and the reward realizations within $n$ rounds in instance $\mathcal{I}_{i}$.
Let $n_{i_1},n_{i_2}$ be the number of times arms $i_1,i_2$ are played by a policy in $n$ rounds. Since we are dealing with only two arms, we can define the event $A=\{J=i_1\}$ and apply Bretagnolle-Huber inequality (\cref{thm:huber} in \cref{app}) to obtain the following:
\begin{align*}
&\Prob{i_1}{J\neq i_1}+\Prob{i_2}{J\neq i_2}\\
&=\Prob{i_1}{A^c}+\Prob{i_2}{A} \\
&\geq \frac{1}{2}e^{-\left(\Ex{i_1}{n_{i_1}}d_{KL}(p_1,p_2)+\Ex{i_1}{n_{i_2}}d_{KL}(p_2,p_1)\right)} \\
&= \frac{1}{2}\exp\left(-\left(\Ex{i_1}{n_{i_1}}\frac{\Delta^2}{2\sigma^2}+\Ex{i_1}{n_{i_2}}\frac{\Delta^2}{2\sigma^2}\right)\right)\\
&= \frac{1}{2}\exp\left(-n\frac{\Delta^2}{2\sigma^2}\right),
\end{align*}
where we first use the Bretagnolle--Huber inequality, then the expression for the KL divergence between two Gaussian distributions with the same standard deviation $\sigma$, and finally the fact that $n_{i_1}+n_{i_2}=n$.
Thus,
\begin{align}\label{eq:tmp1}
\max(\Prob{i_1}{J\neq i_1},\Prob{i_2}{J\neq i_2}) \geq \frac{1}{4}\exp\left(-n\frac{\Delta^2}{2\sigma^2}\right).
\end{align}
However, in contrast to the frequentist setting, here instances $\mathcal{I}_{i_1},\mathcal{I}_{i_2}$ have possibly different probabilities of happening and these probabilities are available to the policy. The conditional probability of instance $\mathcal{I}_{i_1}$ given that the learner faces either $\mathcal{I}_{i_1}$ or $\mathcal{I}_{i_2}$ is:
\begin{align*}
&\Prob{}{\mathcal{I}_{i_1}} = \\
&= \frac{e^{-\frac{(\mu_{i_1}-\nu_{i_1})^2+(\mu_{i_2}-\nu_{i_2})^2}{2\sigma_0^2}}}
{e^{-\frac{(\mu_{i_1}-\nu_{i_1})^2+(\mu_{i_2}-\nu_{i_2})^2}{2\sigma_0^2}}
+
e^{-\frac{(\mu_{i_1}-\nu_{i_2})^2+(\mu_{i_2}-\nu_{i_1})^2}{2\sigma_0^2}}}\\
&= \frac{\exp\left(-\frac{2(\mu_{i_1}-\mu_{i_2})(\nu_{i_1}-\nu_{i_2})}{2\sigma_0^2}\right)}
{\exp\left(-\frac{2(\mu_{i_1}-\mu_{i_2})(\nu_{i_1}-\nu_{i_2})}{2\sigma_0^2}\right)
+1}
\end{align*}
and similarly for instance $\mathcal{I}_{i_2}$:
$$
\Prob{}{\mathcal{I}_{i_2}}
=\frac{1}
{\exp\left(-\frac{2(\mu_{i_1}-\mu_{i_2})(\nu_{i_1}-\nu_{i_2})}{2\sigma_0^2}\right)
+1}.
$$
By distinguishing cases for the conditional probabilities above, we can show the following:
\begin{restatable}{proposition}{propMinProb}\label{prop:min_prob}
We have that
\begin{align*}
\min(\Prob{}{\mathcal{I}_{i_1}},\Prob{}{\mathcal{I}_{i_2}})
\geq \frac{1}{2}\exp\left(-\frac{2\Delta|\nu_{i_1}-\nu_{i_2}|}{2\sigma_0^2}\right).
\end{align*}
\end{restatable}
\noindent
Putting it all together, the probability of error in this setting can be written as follows:
\begin{align*}
&\Prob{}{J\neq i_1, \mathcal{I}_{i_1}} + \Prob{}{J\neq i_2, \mathcal{I}_{i_2}} \\
&= \Prob{i_1}{J\neq i_1}\Prob{}{\mathcal{I}_{i_1}} + \Prob{i_2}{J\neq i_2}\Prob{}{\mathcal{I}_{i_2}}\\
&\geq \max(\Prob{i_1}{J\neq i_1},\Prob{i_2}{J\neq i_2}) \cdot \min(\Prob{}{\mathcal{I}_{i_1}},\Prob{}{\mathcal{I}_{i_2}})\\
&\geq \frac{1}{4}\exp\left(-n\frac{\Delta^2}{2\sigma^2}\right) \min(\Prob{}{\mathcal{I}_{i_1}},\Prob{}{\mathcal{I}_{i_2}}) \\
&\geq \frac{1}{8}\exp\left(-\left(n\frac{\Delta^2}{2\sigma^2}+ \frac{2\Delta|\nu_{1}-\nu_{2}|}{2\sigma_0^2}\right) \right),
\end{align*}
where we used \cref{eq:tmp1} and subsequently \cref{prop:min_prob}.
Returning to our original problem and taking the expectation over $\bm{\mu}$, we conclude that:
\begin{align*}
&\Ex{\bm{\mu}}{\Prob{}{J\neq i_*|\bm{\mu}}} \\
&\geq \frac{1}{8}\Ex{\bm{\mu}}{\exp\left(-\left(n\frac{\Delta^2}{2\sigma^2}+ \frac{2\Delta|\nu_{1}-\nu_{2}|}{2\sigma_0^2}\right) \right)}.
\end{align*}
\section{PROBLEM SETTING}
Bayesian Fixed-Budget best-arm identification (BAI) involves $K$ arms with \textit{unknown} mean rewards $\bm{\mu}=(\mu_1,...,\mu_K)$ drawn from some \textit{known} prior distribution $Q$.
By playing arm $i\in [K]$ a policy observes a sample $X_i$ drawn from its reward distribution.
We focus on the Gaussian case, where the reward of each arm $i\in [K]$ follows a Gaussian distribution with known variance, i.e. $X_{i}\sim \mathcal{N}(\mu_i,\sigma_i^2)$ and the mean rewards of the arms are drawn independently from $\mu_i\sim \mathcal{N}(\nu_{i}, \sigma_{0}^2)$. We refer to $\sigma_i^2$ as the reward variance of arm $i$ and to $\nu_i$ and $\sigma_0^2$ as the prior mean and variance of the reward of arm $i$, respectively.
A policy interacts with the arms for $n$ exploration rounds, where $n$ is a known budget, with the goal of identifying an optimal arm in $[K]$, i.e. an arm $i_*$ such that $i_*=\argmax_{i\in[K]}\mu_i$. We denote by $J$ the arm that is recommended by the policy at the end of $n$ rounds. For any fixed parameter vector $\bm{\mu}$, the \textit{probability of misidentification} is
\begin{align*}
{\prob{J\neq i_*|\bm{\mu}}},
\end{align*}
where $\prob{\cdot|\bm{\mu}}$ is over the randomness of the policy and the reward realizations of each round and considering the parameter vector $\bm{\mu}$ fixed. The setting where the reward vector $\bm{\mu}$ is considered fixed corresponds to the frequentist BAI setting. There, the objective of a policy is to minimize the worst-case probability of misidentification for any possible mean reward vector. \\
In contrast to the frequentist setting, in Bayesian BAI the performance of a policy is measured in terms of its \textit{expected} {probability of misidentification}, i.e.:
\begin{align}\label{eq:objective}
\Ex{\bm{\mu}\sim Q}{\prob{J\neq i_*|\bm{\mu}}},
\end{align}
where the expectation is taken over the prior distributions of the mean rewards.
\paragraph{Notation.} We use the notation $\mu_* = \mu_{i_*}$ and $\nu_* = \nu_{i_*}$ for the mean reward and prior mean of the optimal arm, respectively. The suboptimality gap of each arm $i\in [K]$ is defined as $\Delta_{i} = \mu_* - \mu_i$. We note that $i_*, \mu_*, \nu_*$ and $\Delta_i$ are all functions of $\bm{\mu}$ and thus are random variables. For any $i\in [K]$, the posterior distribution (see \cite{Murphy07}) of its mean reward given $m$ i.i.d. observations $\{X_{i,s}\}_{s\in[m]}$ from its reward distribution, i.e. $X_{i,s}\sim \mathcal{N}(\mu_i,\sigma_i^2)$ for $s\in[m]$, is a Gaussian distribution:
\begin{align*}
\mathcal{N}\left( \bar \mu_{i,m}, \bar\sigma_{i,m}^2 \right),
\end{align*}
where the \textit{posterior mean} is given by the expression
\begin{align}
\bar \mu_{i,m}=\bar\sigma_{i,m}^2 \left(\frac{\nu_i}{\sigma_{0}^2}+\frac{\sum_{s\in [m]} X_{i,s}}{\sigma_i^2}\right)
\label{eq:mubarim}
\end{align}
while the \textit{posterior variance} is given by
$$\bar\sigma_{i,m}^2=\left(\frac{1}{\sigma_{0}^2}+\frac{m}{\sigma_i^2}\right)^{-1}.$$
We summarize notation in \cref{table:1}.
\begin{table}[!ht]
\centering
\begin{tabular}{ |p{0.08\textwidth}||p{0.35\textwidth}| }
\hline
\multicolumn{2}{|c|}{Notation} \\
\hline
$K$ & Number of arms \\
$n$ & Exploration budget \\
$J$ & Arm recommended by the policy \\
$\bm{\mu}$ & Mean reward vector \\
$Q$ & Prior distribution on $\bm{\mu}$ \\
$X_i$ & Stochastic reward of arm $i$ \\
$\mu_i,\sigma_i^2$ & Mean and Variance of the reward distribution of arm $i$ \\
$\nu_i,\sigma_{0}^2$ & Mean and variance of the prior distribution of arm $i$ \\
$\bar \mu_{i,m}, \bar\sigma_{i,m}^2$ & Posterior mean and variance of arm $i$ computed from $m$ i.i.d. samples \\
$i_*$ & Optimal arm of a random instance $\bm{\mu}$ \\
$\nu_*, \mu_*$ & Prior and reward mean of arm $i_*$ \\
$R$ & Number of elimination rounds \\
$S_r$ & Active set of arms in elimination round $r$ \\
$\Sigma_r$ & Sum of reward variances of active arms in round $r$ \\
$\prob{\cdot|\bm{\mu}}$ & Probability measure considering the vector $\bm{\mu}$ fixed \\
$\Ex{\bm{\mu}}{.}$ & Expectation over the randomness in $\bm{\mu}$ \\
$\prob{\bm{\mu}}$ & Prior density of instance $\bm{\mu}$ \\
\hline
\end{tabular}
\caption{Notation}
\label{table:1}
\end{table}
\usepackage{algorithm}
\usepackage{algorithmicx}
\usepackage{algpseudocode}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{amsthm}
\usepackage{bbm}
\usepackage{bm}
\usepackage{caption}
\usepackage{color}
\usepackage{dirtytalk}
\usepackage{dsfont}
\usepackage{enumerate}
\usepackage{fullpage}
\usepackage{graphicx}
\usepackage{listings}
\usepackage{mathtools}
\usepackage[round]{natbib}
\usepackage{subfigure}
\usepackage{url}
\usepackage{xspace}
\usepackage{array,xtab,ragged2e}
\newlength\mylengtha
\newlength\mylengthb
\newcolumntype{P}[1]{>{\RaggedRight}p{#1}}
\usepackage[usenames,dvipsnames]{xcolor}
\usepackage[bookmarks=false]{hyperref}
\hypersetup{
pdffitwindow=true,
pdfstartview={FitH},
pdfnewwindow=true,
colorlinks,
linktocpage=true,
linkcolor=Green,
urlcolor=Green,
citecolor=Green
}
\usepackage[capitalize]{cleveref}
\usepackage[textsize=tiny]{todonotes}
\newcommand{\todob}[2][]{\todo[color=Red!20,size=\tiny,inline,#1]{B: #2}}
\newcommand{\todos}[2][]{\todo[color=Blue!20,size=\tiny,inline,#1]{S: #2}}
\newcommand{\todoa}[2][]{\todo[color=orange!20,size=\tiny,inline,#1]{A: #2}}
\newcommand{\commentout}[1]{}
\newcommand{\junk}[1]{}
\usepackage{thmtools}
\usepackage{thm-restate}
\declaretheorem[name=Theorem,refname={Theorem,Theorems},Refname={Theorem,Theorems}]{theorem}
\declaretheorem[name=Lemma,refname={Lemma,Lemmas},Refname={Lemma,Lemmas},sibling=theorem]{lemma}
\declaretheorem[name=Corollary,refname={Corollary,Corollaries},Refname={Corollary,Corollaries},sibling=theorem]{corollary}
\declaretheorem[name=Assumption,refname={Assumption,Assumptions},Refname={Assumption,Assumptions}]{assumption}
\declaretheorem[name=Proposition,refname={Proposition,Propositions},Refname={Proposition,Propositions},sibling=theorem]{proposition}
\declaretheorem[name=Fact,refname={Fact,Facts},Refname={Fact,Facts},sibling=theorem]{fact}
\declaretheorem[name=Definition,refname={Definition,Definitions},Refname={Definition,Definitions},sibling=theorem]{definition}
\declaretheorem[name=Example,refname={Example,Examples},Refname={Example,Examples}]{example}
\declaretheorem[name=Remark,refname={Remark,Remarks},Refname={Remark,Remarks}]{remark}
\newcommand{\diag}[1]{\mathrm{diag}\left(#1\right)}
\newcommand{\domain}[1]{\mathrm{dom}\left(#1\right)}
\newcommand{\range}[1]{\mathrm{rng}\left[#1\right]}
\newcommand{\E}[1]{\mathbb{E} \left[#1\right]}
\newcommand{\condE}[2]{\mathbb{E} \left[#1 \,\middle|\, #2\right]}
\newcommand{\Et}[1]{\mathbb{E}_t \left[#1\right]}
\newcommand{\prob}[1]{\mathbb{P} \left(#1\right)}
\newcommand{\condprob}[2]{\mathbb{P} \left(#1 \,\middle|\, #2\right)}
\newcommand{\probt}[1]{\mathbb{P}_t \left(#1\right)}
\newcommand{\var}[1]{\mathrm{var} \left[#1\right]}
\newcommand{\condvar}[2]{\mathrm{var} \left[#1 \,\middle|\, #2\right]}
\newcommand{\std}[1]{\mathrm{std} \left[#1\right]}
\newcommand{\condstd}[2]{\mathrm{std} \left[#1 \,\middle|\, #2\right]}
\newcommand{\cov}[1]{\mathrm{cov} \left[#1\right]}
\newcommand{\condcov}[2]{\mathrm{cov} \left[#1 \,\middle|\, #2\right]}
\newcommand{\abs}[1]{\left|#1\right|}
\newcommand{\ceils}[1]{\left\lceil#1\right\rceil}
\newcommand{\dbar}[1]{\bar{\bar{#1}}}
\newcommand*\dif{\mathop{}\!\mathrm{d}}
\newcommand{\floors}[1]{\left\lfloor#1\right\rfloor}
\newcommand{\I}[1]{\mathds{1} \! \left\{#1\right\}}
\newcommand{\inner}[2]{\langle#1, #2\rangle}
\newcommand{\kl}[2]{D_\mathrm{KL}(#1 \,\|\, #2)}
\newcommand{\klplus}[2]{D_\mathrm{KL}^+(#1 \,\|\, #2)}
\newcommand{\maxnorm}[1]{\|#1\|_\infty}
\newcommand{\maxnormw}[2]{\|#1\|_{\infty, #2}}
\newcommand{\negpart}[1]{\left[#1\right]^-}
\newcommand{\norm}[1]{\|#1\|}
\newcommand{\normw}[2]{\|#1\|_{#2}}
\newcommand{\pospart}[1]{\left[#1\right]^+}
\newcommand{\rnd}[1]{\bm{#1}}
\newcommand{\set}[1]{\left\{#1\right\}}
\newcommand{\subreal}[0]{\preceq}
\newcommand{\supreal}[0]{\succeq}
\newcommand{\nr}[1]{n_{r_{#1}}}
\DeclareMathOperator*{\argmax}{arg\,max\,}
\DeclareMathOperator*{\argmin}{arg\,min\,}
\let\det\relax
\DeclareMathOperator{\det}{det}
\DeclareMathOperator{\poly}{poly}
\DeclareMathOperator{\rank}{rank}
\DeclareMathOperator{\sgn}{sgn}
\let\trace\relax
\DeclareMathOperator{\trace}{tr}
\mathchardef\mhyphen="2D
\newcommand\Ex[2]{\mathop{{\mathbb{E}_{#1}}\left[#2\right]}}
\newcommand\Prob[2]{\mathop{{\mathbb{P}_{#1}}\left(#2\right)}}
\begin{document}
\twocolumn[
\aistatstitle{Bayesian Fixed-Budget Best-Arm Identification}
\aistatsauthor{ Alexia Atsidakou \And Sumeet Katariya \And Sujay Sanghavi \And Branislav Kveton }
\aistatsaddress{ UT Austin \And Amazon \And UT Austin, Amazon \And Amazon }
]
\begin{abstract}
Fixed-budget best-arm identification (BAI) is a bandit problem where the learning agent maximizes the probability of identifying the optimal arm after a fixed number of observations. In this work, we initiate the study of this problem in the Bayesian setting. We propose a Bayesian elimination algorithm and derive an upper bound on the probability that it fails to identify the optimal arm. The bound reflects the quality of the prior and is the first such bound in this setting. We prove it using a frequentist-like argument, where we carry the prior through, and then integrate out the random bandit instance at the end. Our upper bound asymptotically matches a newly established lower bound for $2$ arms. Our experimental results show that Bayesian elimination is superior to frequentist methods and competitive with the state-of-the-art Bayesian algorithms that have no guarantees in our setting.
\end{abstract}
\input{Introduction}
\input{Setting}
\input{Algorithm}
\input{Analysis}
\input{Lower_Bound}
\input{Experiments}
\input{Conclusions}
\bibliographystyle{abbrvnat}
\end{document}
| {
"timestamp": "2022-11-17T02:05:09",
"yymm": "2211",
"arxiv_id": "2211.08572",
"language": "en",
"url": "https://arxiv.org/abs/2211.08572",
"abstract": "Fixed-budget best-arm identification (BAI) is a bandit problem where the agent maximizes the probability of identifying the optimal arm within a fixed budget of observations. In this work, we study this problem in the Bayesian setting. We propose a Bayesian elimination algorithm and derive an upper bound on its probability of misidentifying the optimal arm. The bound reflects the quality of the prior and is the first distribution-dependent bound in this setting. We prove it using a frequentist-like argument, where we carry the prior through, and then integrate out the bandit instance at the end. We also provide a lower bound on the probability of misidentification in a $2$-armed Bayesian bandit and show that our upper bound (almost) matches it for any budget. Our experiments show that Bayesian elimination is superior to frequentist methods and competitive with the state-of-the-art Bayesian algorithms that have no guarantees in our setting.",
"subjects": "Machine Learning (cs.LG); Machine Learning (stat.ML)",
"title": "Bayesian Fixed-Budget Best-Arm Identification",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9752018362008348,
"lm_q2_score": 0.8104789178257654,
"lm_q1q2_score": 0.7903805288657519
} |
https://arxiv.org/abs/1211.2877 | How a nonconvergent recovered Hessian works in mesh adaptation | Hessian recovery has been commonly used in mesh adaptation for obtaining the required magnitude and direction information of the solution error. Unfortunately, a recovered Hessian from a linear finite element approximation is nonconvergent in general as the mesh is refined. It has been observed numerically that adaptive meshes based on such a nonconvergent recovered Hessian can nevertheless lead to an optimal error in the finite element approximation. This also explains why Hessian recovery is still widely used despite its nonconvergence. In this paper we develop an error bound for the linear finite element solution of a general boundary value problem under a mild assumption on the closeness of the recovered Hessian to the exact one. Numerical results show that this closeness assumption is satisfied by the recovered Hessian obtained with commonly used Hessian recovery methods. Moreover, it is shown that the finite element error changes gradually with the closeness of the recovered Hessian. This provides an explanation on how a nonconvergent recovered Hessian works in mesh adaptation. | \section{Introduction}
\label{sect:introduction}
Gradient and Hessian recovery has been commonly used in mesh adaptation for the numerical solution of partial differential equations (PDEs); e.g.,\ see~\cite{AinOde00,BabStr01,HuaRus11,Tang07,ZhaNag05,ZieZhu92,ZieZhu92a}.
The use typically involves the approximation of solution derivatives based on a computed solution defined on the current mesh (recovery), the generation of a new mesh using the recovered derivatives, and the solution of the physical PDE on the new mesh.
These steps are often repeated several times until a suitable mesh and a numerical solution defined thereon are obtained.
As the mesh is refined, a sequence of adaptive meshes, derivative approximations, and numerical solutions results.
A theoretical and also practical question is whether this sequence of numerical solutions converges to the exact solution.
Naturally, this question is linked to the convergence of the recovered derivatives used to generate the meshes.
It is known that recovered gradient through the least squares fitting~\cite{ZieZhu92,ZieZhu92a} or polynomial preserving techniques~\cite{ZhaNag05} is convergent for uniform or quasi-uniform meshes~\cite{ZhaNag05,ZhaZhu95} and superconvergent for mildly structured meshes~\cite{XuZha04} as well for a type of adaptive mesh~\cite{WuZha07}.
For the Hessian, it has been observed that, unfortunately, a convergent recovery cannot be obtained from linear finite element approximations for general nonuniform meshes~\cite{AgoLipVas10,Kam09,PicAlaBorGeo11}, although Hessian recovery is known to converge when the numerical solution exhibits superconvergence or supercloseness for some special meshes~\cite{BanXu03,BanXu03a,Ova07}.
On the other hand, numerical experiments also show that the numerical solution obtained with an adaptive mesh generated using a nonconvergent recovered Hessian is often not only convergent but also has an error comparable to that obtained with the exact analytical Hessian.
To demonstrate this, we consider a Dirichlet boundary value problem (BVP) for the Poisson equation
\begin{equation}
\begin{cases}
-\Delta u = f, &\text{in $\Omega= (0,1)\times(0,1)$}, \\
u = g, &\text{on $\partial\Omega$},
\end{cases}
\label{eq:bvp}
\end{equation}
where $f$ and $g$ are chosen such that the exact solution of the BVP is given by
\begin{equation}
u(x,y) = x^2 + 25 y^2.
\end{equation}
Two Hessian recovery methods, QLS (quadratic least squares fitting) and WF (weak formulation), are used (see~\cref{sect:recovery:methods} for the description of these and other Hessian recovery techniques).
\Cref{fig:x2y2:25:intro} shows the error in recovered Hessian and the linear finite element solution with exact and recovered Hessian.
One can see that the finite element error is convergent and almost indistinguishable for the exact and approximate Hessian (\cref{fig:x2y2:25:solution}) whereas the error of the Hessian recovery remains $\mathcal{O}(1)$ (\cref{fig:x2y2:25:recovery}).
Obviously, this indicates that a convergent recovered Hessian is not necessary for the purpose of mesh adaptation.
Of course, a badly recovered Hessian does not serve the purpose either.
\begin{figure}[t]
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[clip]{{{1211.2877f-intro-fe-error}}}
\caption{finite element error $\Abs{u-u_h}_{H^1(\Omega)}$\label{fig:x2y2:25:solution}}
\end{subfigure}%
%
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[clip]{{{1211.2877f-intro-h-error}}}
\caption{recovery error $\max_K \max_{\boldsymbol{x}} \norm{R_{K} - H(\boldsymbol{x})}_\infty$\label{fig:x2y2:25:recovery}}
\end{subfigure}
\caption{Finite element and Hessian recovery errors
as a function of $N$\label{fig:x2y2:25:intro}}
\end{figure}
How accurate should a recovered Hessian be for the purpose of mesh adaptation?
This issue has been studied by Agouzal et al.~\cite{AgoLipVas99} and Vassilevski and Lipnikov~\cite{VasLip99}.
In particular, they show in \cite[Theorem~3.2]{AgoLipVas99} that a mesh based on an approximation $R$ of the Hessian $H$ is quasi-optimal if there exist small (with respect to one) positive numbers $\varepsilon$ and $\delta$ such that
\begin{align}
\max_{\boldsymbol{x} \in \omega_i} \Norm{ H(\boldsymbol{x}) - H_{\omega_i} }_\infty & \leq \delta \lambda_{\min} \bigl(R(\boldsymbol{x}_i)\bigr),
\label{eq:AgLiVa99:1} \\
\Norm{ R(\boldsymbol{x}_i) - H_{\omega_i} }_\infty &\leq \varepsilon \lambda_{\min} \bigl(R(\boldsymbol{x}_i)\bigr)
\label{eq:AgLiVa99:2}
\end{align}
hold for any mesh vertex $\boldsymbol{x}_i$ and its patch $\omega_i$, where $H_{\omega_i}$ is the Hessian at a point in $\omega_i$ where $\Abs{\det H(\boldsymbol{x}) }$ attains its maximum and $\lambda_{\min}(\cdot)$ denotes the minimum eigenvalue of a matrix.
Notice that \cref{eq:AgLiVa99:2} does not require $R$ to converge to $H$ as the mesh is refined.
Instead, it requires the eigenvalues of $R^{-1} H$ to be around one (cf.~\cref{sect:analysis:1}).
Unfortunately, it is still too restrictive to be satisfied in most of the examples we tested; see~\cref{sect:examples}.
Thus, the work~\cite{AgoLipVas99,VasLip99} does not fully explain why a nonconvergent recovered Hessian works in mesh adaptation.
The objective of the paper is to present a study on this issue.
To be specific, we consider a BVP and its linear finite element solution with adaptive anisotropic meshes generated from a recovered Hessian.
We adopt the $M$-uniform mesh approach~\cite{Hua07,HuaRus11} to view any adaptive mesh as a uniform one in some metric depending on the computed solution.
An advantage of the approach is that the relation between the recovered Hessian and an adaptive anisotropic mesh generated using it can be fully characterized through the so-called alignment and equidistribution conditions (see \cref{eq:equi,eq:ali} in~\cref{sect:analysis:1}).
This characterization plays a crucial role in the development of a bound for the $H^1$ semi-norm of the finite element error.
The bound converges at a first order rate in terms of the average element diameter, $N^{-\frac{1}{d}}$, where $N$ is the number of elements and $d$ is the dimension of the physical domain.
Moreover, the bound is valid under a condition on the closeness of the recovered Hessian to the exact one; see \cref{eq:CRs} or \cref{eq:CRplus:general,eq:CRminus:general}.
This closeness condition is much weaker than \cref{eq:AgLiVa99:2}.
Roughly speaking, \cref{eq:AgLiVa99:2} requires the eigenvalues of $R^{-1} H$ to be around one whereas the new condition only requires them to be bounded away from zero and bounded above.
Numerical results in~\cref{sect:examples} show that the new closeness condition is satisfied in all examples for four commonly used Hessian recovery techniques considered in this paper whereas \cref{eq:AgLiVa99:2} is satisfied only in some examples.
Furthermore, the error bound is linearly proportional to the ratio of the maximum (over the physical domain) of the largest eigenvalues of $R^{-1} H$ to the minimum of the smallest eigenvalues.
Since the ratio is a measure of the closeness of the recovered Hessian to the exact one, the dependence indicates that the finite element error changes gradually with the closeness of the recovered Hessian.
Hence, the error for the linear finite element approximation of the BVP is convergent for the considered Hessian recovery techniques and insensitive to the closeness of the recovered Hessian to the exact one.
This provides an explanation of how a nonconvergent recovered Hessian works for mesh adaptation.
An outline of the paper is as follows. Convergence analysis of the linear finite element approximation is given in~\cref{sect:analysis:1,sect:general} for the cases with positive definite and general Hessian, respectively.
A brief description of four common Hessian recovery techniques is given in~\cref{sect:recovery:methods} followed by numerical examples in~\cref{sect:examples}.
Finally, \cref{sect:conclusion} contains conclusions and further comments.
\section{Convergence of~linear finite element approximation for~positive definite Hessian}
\label{sect:analysis:1}
We consider the BVP
\begin{equation}
\begin{cases}
\mathcal{L} u = f, & \text{ in $\Omega$}, \\
u = g, & \text{ on $\partial\Omega$},
\end{cases}
\label{eq:bvp-2}
\end{equation}
where $\Omega$ is a polygonal or polyhedral domain of $\mathbb{R}^d$ ($d \ge 1$), $\mathcal{L}$ is an elliptic second-order differential operator, and $f$ and $g$ are given functions.
We are concerned with the adaptive mesh solution of this BVP using the conventional linear finite element method.
Denote a family of simplicial meshes for $\Omega$ by $\{ \mathcal{T}_h\}$ and the corresponding reference element by $\hat{K}$, which is chosen to have unit volume.
For each mesh $\mathcal{T}_h$, we denote the corresponding finite element solution by $u_h$.
C\'{e}a's lemma implies that the finite element error is bounded by the interpolation error, i.e.,
\begin{equation}
\Abs{u-u_h}_{H^1(\Omega)} \le C \Abs{u - \Pi_h u}_{H^1(\Omega)} ,
\label{eq:cea-1}
\end{equation}
where $C$ is a constant independent of $u$ and $\mathcal{T}_h$ and $\Pi_h$ is the nodal interpolation operator associated with the linear finite element space defined on $\mathcal{T}_h$.
Note that \cref{eq:cea-1} is valid for any mesh.
\subsection{\texorpdfstring{Quasi-$M$-uniform meshes}{Quasi-M-uniform meshes}}
In this paper we consider adaptive meshes generated based on a recovered Hessian $R$ and use the $M$-uniform mesh approach with which any adaptive mesh is viewed as a uniform one in some metric $M$ (defined in terms of $R$ in our current situation).
It is known~\cite{Hua07,HuaRus11} that such an $M$-uniform mesh satisfies the equidistribution and alignment conditions,
\begin{align}
\Abs{K} {\det(M_K)}^{\frac{1}{2}}
&= \frac{1}{N} \sum_{{\tilde{K}}\in\mathcal{T}_h} \Abs{{\tilde{K}}} {\det(M_{{\tilde{K}}})}^{\frac{1}{2}},
\quad \forall K \in \mathcal{T}_h,
\label{eq:equi} \\
\frac{1}{d} \tr\left( {(F_K')}^T M_K F_K' \right)
&= {\det\left( {(F_K')}^T M_K F_K' \right)}^{\frac{1}{d}},
\quad \forall K \in \mathcal{T}_h,
\label{eq:ali}
\end{align}
where $N$ is the number of mesh elements, $M_K$ is an average of $M$ over $K$, $F_K \colon \hat{K} \to K$ is the affine mapping from the reference element $\hat{K}$ to a mesh element $K$, $F_K'$ is the Jacobian matrix of $F_K$ (which is constant on $K$), and $\det(\cdot)$ and $\tr(\cdot)$ denote the determinant and trace of a matrix, respectively.
In practice, it is more realistic to generate less restrictive quasi-$M$-uniform meshes which satisfy
\begin{align}
\Abs{K} {\det(M_K)}^{\frac{1}{2}}
& \leq C_{eq} \frac{1}{N} \sum_{{\tilde{K}}\in\mathcal{T}_h}
\Abs{{\tilde{K}}} {\det(M_{{\tilde{K}}})}^{\frac{1}{2}},
\quad \forall K \in \mathcal{T}_h,
\label{eq:equi:approx}
\\
\frac{1}{d} \tr\left( {(F_K')}^T M_K F_K' \right)
& \leq C_{ali} \Abs{K}^{\frac{2}{d}} {\det(M_K)}^{\frac{1}{d}},
\quad \forall K \in \mathcal{T}_h,
\label{eq:ali:approx}
\end{align}
where $C_{eq}, C_{ali} \geq 1$ are some constants independent of $K$, $N$, and $\mathcal{T}_h$.
Numerical experiments in~\cite{Hua05a} and~\cref{sect:examples} (\cref{fig:flower:ceq,fig:flower:cali,fig:tanh:ceq,fig:tanh:cali}) show that quasi-$M$-uniform meshes with relatively small $C_{eq}$ and $C_{ali}$ can be generated in practice.
For this reason, we use quasi-$M$-uniform meshes in our analysis and numerical experiments.
We would like to point out that conditions \cref{eq:equi:approx,eq:ali:approx} with $C_{eq} = C_{ali} = 1$ imply \cref{eq:equi,eq:ali}.
Indeed, the inequality \cref{eq:ali:approx} with $C_{ali} = 1$ becomes the equality \cref{eq:ali} because the left-hand side of it (the arithmetic mean of the eigenvalues of ${(F_K')}^T M_K F_K'$) cannot be smaller than the right-hand side (the geometric mean of the eigenvalues).
Further, if $C_{eq} = 1$ then \cref{eq:equi:approx} becomes
\[
\Abs{K} {\det(M_K)}^{\frac{1}{2}}
\le \frac{1}{N} \sum_{{\tilde{K}}\in\mathcal{T}_h}
\Abs{{\tilde{K}}} {\det(M_{{\tilde{K}}})}^{\frac{1}{2}},
\quad \forall K \in \mathcal{T}_h.
\]
This implies
\begin{align*}
\max_{K\in\mathcal{T}_h} \Abs{K} {\det(M_K)}^{\frac{1}{2}}
&\leq \frac{1}{N} \sum_{K\in\mathcal{T}_h} \Abs{K} {\det(M_{K})}^{\frac{1}{2}}\\
&\leq \frac{1}{N} \left( (N-1)\max_{K\in\mathcal{T}_h} \Abs{K} {\det(M_K)}^{\frac{1}{2}}
+ \min_{K\in\mathcal{T}_h} \Abs{K} {\det(M_K)}^{\frac{1}{2}}
\right)
\end{align*}
and therefore
\[
\max_{K\in\mathcal{T}_h} \Abs{K} {\det(M_K)}^{\frac{1}{2}}
\leq \min_{K\in\mathcal{T}_h} \Abs{K} {\det(M_K)}^{\frac{1}{2}},
\]
which can be valid only if $\Abs{K} {\det(M_K)}^{\frac{1}{2}}$ has the same value for all $K$.
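To make the quasi-$M$-uniformity conditions concrete, the following sketch evaluates the element-wise quantities $C_{eq,K}$ and $C_{ali,K}$ from \cref{eq:equi:approx,eq:ali:approx}; the array inputs are illustrative assumptions rather than the interface of any particular mesh package.
\begin{verbatim}
import numpy as np

def quasi_uniformity_constants(jacobians, metrics):
    # jacobians: (N, d, d) array of F_K' (the reference element has unit
    # volume, so |K| = |det F_K'|); metrics: (N, d, d) SPD averages M_K.
    d = jacobians.shape[-1]
    vols = np.abs(np.linalg.det(jacobians))        # element volumes |K|
    q = vols * np.sqrt(np.linalg.det(metrics))     # |K| det(M_K)^{1/2}
    C_eq_K = q / q.mean()                          # element-wise C_{eq,K}

    # tr((F_K')^T M_K F_K')/d  versus  |K|^{2/d} det(M_K)^{1/d}
    FtMF = np.einsum('kji,kjl,klm->kim', jacobians, metrics, jacobians)
    lhs = np.trace(FtMF, axis1=1, axis2=2) / d
    rhs = vols ** (2.0 / d) * np.linalg.det(metrics) ** (1.0 / d)
    C_ali_K = lhs / rhs                            # element-wise C_{ali,K}
    return C_eq_K, C_ali_K
\end{verbatim}
The constants $C_{eq}$ and $C_{ali}$ are then the maxima of the returned arrays; by the arithmetic-mean/geometric-mean argument above, $C_{ali,K} \geq 1$ always.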
\subsection{Main result}
In this section we consider a special case where the Hessian of the solution is uniformly positive definite in $\Omega$; i.e.,
\begin{equation}
\exists \gamma > 0 \colon H(\boldsymbol{x}) \geq \gamma I, \quad \forall \boldsymbol{x} \in \Omega,
\label{eq:Hpd}
\end{equation}
where the greater-than-or-equal sign means that the difference between the left-hand side and right-hand side terms is positive semidefinite.
We also assume that the recovered Hessian $R$ is uniformly positive definite in $\Omega$.
This assumption is not essential and will be dropped for the general situation discussed in~\cref{sect:general}.
Recall from \cref{eq:cea-1} that the finite element error is bounded by the $H^1$ semi-norm of the interpolation error of the exact solution.
A metric tensor corresponding to the $H^1$ semi-norm can be defined as
\begin{equation}
M_K = {\det(R_K)}^{- \frac{1}{d+2}} \Norm{R_K}_2^{\frac{2}{d+2}} R_K,
\quad \forall K \in \mathcal{T}_h,
\label{eq:M:H1}
\end{equation}
where $R_K$ is an average of $R$ over $K$~\cite{Hua05a}.
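As a small illustration (a sketch only, with the averaging of $R$ over $K$ assumed done elsewhere), \cref{eq:M:H1} translates directly into code:
\begin{verbatim}
import numpy as np

def metric_H1(R_K, d):
    # Metric tensor (eq:M:H1) for the H^1 seminorm; R_K is assumed to be
    # a symmetric positive definite d x d matrix.
    det_R = np.linalg.det(R_K)
    norm_R = np.linalg.norm(R_K, 2)                # spectral norm of R_K
    return det_R ** (-1.0 / (d + 2)) * norm_R ** (2.0 / (d + 2)) * R_K

# e.g., for the introductory example u = x^2 + 25 y^2 the exact Hessian
# is the constant matrix diag(2, 50):
M = metric_H1(np.diag([2.0, 50.0]), d=2)
\end{verbatim}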
For this metric tensor, mesh conditions \cref{eq:equi:approx,eq:ali:approx} become
\begin{align}
\Abs{K} {\det(R_K)}^{\frac{1}{d+2}} \Norm{R_K}_2^{\frac{d}{d+2}}
&\leq C_{eq} \frac{1}{N}
\sum_{\tilde{K}} \abs{{\tilde{K}}} {\det(R_{\tilde{K}})}^{\frac{1}{d+2}} \Norm{R_{\tilde{K}}}_2^{\frac{d}{d+2}},
\quad \forall K \in \mathcal{T}_h,
\label{eq:equi:H1}
\\
\frac{1}{d} \tr\left( {(F_K')}^T R_K F_K' \right)
&\leq C_{ali} \Abs{K}^{\frac{2}{d}} {\det(R_K)}^{\frac{1}{d}},
\quad \forall K \in \mathcal{T}_h.
\label{eq:ali:H1}
\end{align}
Note that the alignment condition \cref{eq:ali:H1} implies the inverse alignment condition
\begin{equation}
\frac{1}{d} \tr\left( {(F_K')}^{-T} R_K^{-1} {(F_K')}^{-1} \right)
<
{\left( \frac{d}{d-1} C_{ali}\right)}^{d-1}
\Abs{K}^{-\frac{2}{d}} {\det(R_K)}^{-\frac{1}{d}},
\quad \forall K \in \mathcal{T}_h .
\label{eq:ali:inverse}
\end{equation}
To show this, we denote the eigenvalues of ${(F_K')}^T R_K F_K'$ by $0 < \lambda_1 \le \cdots \le \lambda_d$ and rewrite \cref{eq:ali:H1} as
\[
\sum_i \lambda_i
\le d C_{ali} {\left(\prod_i \lambda_i\right)}^{\frac{1}{d}}.
\]
Then \cref{eq:ali:inverse} follows from
\begin{align*}
\frac{1}{d} \sum_i \lambda_i^{-1}
&= \prod_i \lambda_i^{-1} \cdot \frac{1}{d}
\sum_i \prod_{j\neq i} \lambda_j \\
&\leq \prod_i \lambda_i^{-1}
\cdot \frac{1}{d} \sum_i
{\left(\frac{\sum_{j\neq i} \lambda_j}{d-1} \right)}^{d-1}\\
&< \prod_i \lambda_i^{-1}
\cdot \frac{1}{d} \sum_i
{\left( \frac{ \sum_{j} \lambda_j }{d-1}\right)}^{d-1}
= \prod_i \lambda_i^{-1}
{\left( \frac{\sum_{j} \lambda_j }{d-1} \right)}^{d-1}\\
&\leq {\left( \frac{d}{d-1} C_{ali} \right)}^{d-1}
{\left(\prod_i \lambda_i\right)}^{-\frac{1}{d}}
.
\end{align*}
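This chain of inequalities is easy to sanity-check numerically, e.g., by drawing random symmetric positive definite matrices and taking $C_{ali}$ to be the smallest admissible constant; a sketch:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d = 3
for _ in range(1000):
    B = rng.standard_normal((d, d))
    lam = np.linalg.eigvalsh(B @ B.T + 1e-6 * np.eye(d))  # SPD spectrum
    C_ali = lam.sum() / (d * lam.prod() ** (1.0 / d))     # equality in (eq:ali:H1)
    lhs = (1.0 / lam).sum() / d
    rhs = (d * C_ali / (d - 1)) ** (d - 1) * lam.prod() ** (-1.0 / d)
    assert lhs < rhs                                      # (eq:ali:inverse)
\end{verbatim}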
\begin{theorem}[Positive definite Hessian]
\label{thm:H1}
Assume that $H(\boldsymbol{x})$ and the recovered Hessian $R$ are uniformly positive definite in $\Omega$ and that $R$ satisfies
\begin{equation}
C_{R-,K} I \leq R_K^{-1} H (\bx) \leq C_{R+, K} I,
\quad \forall \boldsymbol{x} \in K,
\quad \forall K \in \mathcal{T}_h
\label{eq:CRs}
\end{equation}
where $C_{R-,K}$ and $C_{R+,K}$ are element-wise constants satisfying
\begin{equation}
C_{R-} \le \min_{K \in \mathcal{T}_h} C_{R-,K}
\qquad \text{and} \qquad
\sqrt{\frac{1}{N} \sum_{K\in \mathcal{T}_h} C_{R+,K}^2} \le C_{R+}
\label{CR+}
\end{equation}
with some mesh-independent positive constants $C_{R-}$ and $C_{R+}$.
If the solution of the BVP \cref{eq:bvp-2} is in $H^2(\Omega)$,
then for any quasi-$M$-uniform mesh associated with the metric tensor \cref{eq:M:H1} and satisfying \cref{eq:equi:approx,eq:ali:approx} the linear finite element error for the BVP is bounded by
\begin{equation}
\Abs{u-u_h}_{H^1(\Omega)}
\leq C
\cdot C_{ali}^{\frac{d+1}{2}} C_{eq}^{\frac{d+2}{2d}}
\cdot \frac{C_{R+}}{C_{R-}}
\cdot N^{-\frac{1}{d}}
\Norm{ {\det(H)}^{\frac{1}{d}} H }_{L^{\frac{d}{d+2}}(\Omega)}^{\frac{1}{2}}.
\label{eq:thm:H1}
\end{equation}
\end{theorem}
\begin{proof}
The nodal interpolation error of a function $u \in H^2(\Omega)$ on $K$ is bounded by
\begin{equation}
\Abs{u - \Pi_h u}_{H^1(K)}
\leq C \Norm{ {(F_K')}^{-1}}_2
{\left( \int_K \Norm{{(F_K')}^T \Abs{H (\bx)} F_K'}_2^2 \,d\bx \right)}^{\frac{1}{2}} ,
\label{eq:HR11:1}
\end{equation}
where $\Abs{H(\boldsymbol{x})} = \sqrt{ {H(\boldsymbol{x})}^2 }$~\cite[Theorem~5.1.5]{HuaRus11} (the interested reader is referred to, for example,~\cite{CheSunXu07,ForPer01,Hua05a,HuaSun03,Mir12} for anisotropic error estimates for interpolation with linear and higher order finite elements).
Notice that $\Abs{H(\boldsymbol{x})} = H(\boldsymbol{x})$ in the current situation (symmetric and positive definite $H(\boldsymbol{x})$).
Further,
\begin{align*}
\Norm{ {(F_K')}^T H(\boldsymbol{x}) F_K' }_2
&= \Norm{ {H(\boldsymbol{x})}^{\frac{1}{2}} F_K' }_2^2
= \Norm{ {H(\boldsymbol{x})}^{\frac{1}{2}} R_K^{-\frac{1}{2}} R_K^{\frac{1}{2}} F_K' }_2^2
\\
&\le \Norm{ {H(\boldsymbol{x})}^{\frac{1}{2}} R_K^{-\frac{1}{2}} }_2^2
\Norm{ R_K^{\frac{1}{2}} F_K' }_2^2
\\
&= \Norm{ R_K^{-\frac{1}{2}} H(\boldsymbol{x}) R_K^{-\frac{1}{2}} }_2
\Norm{ {(F_K')}^T R_K F_K' }_2
\\
&= \lambda_{\max}\bigl( R_K^{-\frac{1}{2}} H(\boldsymbol{x}) R_K^{-\frac{1}{2}} \bigr)
\Norm{ {(F_K')}^T R_K F_K' }_2
\\
&= \lambda_{\max} \bigl( R_K^{-1} H(\boldsymbol{x}) \bigr)
\Norm{ {(F_K')}^T R_K F_K' }_2
\\
&\le \Norm{ R_K^{-1} H(\boldsymbol{x}) }_2
\Norm{ {(F_K')}^T R_K F_K' }_2.
\end{align*}
Similarly,
\[
\Norm{ {(F_K')}^{-1}}_2^2
= \Norm{ {(F_K')}^{-1} {(F_K')}^{-T} }_2
\le \Norm{ {(F_K')}^{-T} R_K^{-1} {(F_K')}^{-1} }_2 \Norm{R_K}_2.
\]
Thus, \cref{eq:HR11:1} yields
\[
\Abs{u - \Pi_h u}_{H^1(K)}^2
\leq C \Norm{ {(F_K')}^{-T} R_K^{-1} {(F_K')}^{-1} }_2 \Norm{R_K}_2
\int_K \Norm{ {(F_K')}^T R_K F_K' }_2^2 \Norm{ R_K^{-1} H(\boldsymbol{x})}_2^2 \,d\bx .
\]
Using this, \cref{eq:ali:approx}, \cref{eq:ali:inverse}, \cref{eq:CRs},
the fact that the trace of any $d\times d$ symmetric and positive definite matrix $A$ is equivalent to its $l^2$ norm, viz., $\Norm{A}_2 \le \tr(A) \le d \Norm{A}_2$, and absorbing powers of $d$ into the generic constant $C$, we get
\begin{align*}
\Abs{u - \Pi_h u}_{H^1(\Omega)}^2
&= \sum_K \Abs{u - \Pi_h u}_{H^1(K)}^2 \\
&\leq C \sum_K C_{ali}^{d-1}
\Abs{K}^{-\frac{2}{d}} {\det(R_K)}^{-\frac{1}{d}}\Norm{R_K}_2
\times \Abs{K} C_{ali}^2 \Abs{K}^{\frac{4}{d}}
{\det(R_K)}^{\frac{2}{d}} C_{R+,K}^2 \\
& = C C_{ali}^{d+1}
\sum_K \Abs{K}^{\frac{d+2}{d}} {\det(R_K)}^{\frac{1}{d}} \Norm{R_K}_2 C_{R+,K}^2\\
&= C C_{ali}^{d+1}
\sum_K {\left( \Abs{K} {\det(R_K)}^{\frac{1}{d+2}}
\Norm{R_K}_2^{\frac{d}{d+2}} \right)}^\frac{d+2}{d} C_{R+,K}^2.
\end{align*}
Applying \cref{eq:equi:approx} to the above result and using \cref{CR+} gives
\begin{align*}
\Abs{u - \Pi_h u}_{H^1(\Omega)}^2
&\leq C C_{ali}^{d+1}
\sum_K {\left( \frac{C_{eq}}{N} \sum_{\tilde{K}} \abs{{\tilde{K}}}
{\det(R_{\tilde{K}})}^{\frac{1}{d+2}} \Norm{R_{\tilde{K}}}_2^{\frac{d}{d+2}}
\right)}^{\frac{d+2}{d}} C_{R+,K}^2 \\
&= C C_{ali}^{d+1} C_{eq}^{\frac{d+2}{d}} N^{-\frac{2}{d}} \left (\frac{1}{N} \sum_{K\in \mathcal{T}_h} C_{R+,K}^2\right )
{\left(\sum_{\tilde{K}} \abs{{\tilde{K}}}
{\det(R_{\tilde{K}})}^{\frac{1}{d+2}} \Norm{R_{\tilde{K}}}_2^{\frac{d}{d+2}}
\right)}^{\frac{d+2}{d}}\\
&\le C C_{ali}^{d+1} C_{eq}^{\frac{d+2}{d}} N^{-\frac{2}{d}} C_{R+}^2
{\left(\sum_K \abs{K}
{\det(R_K)}^{\frac{1}{d+2}} \Norm{R_K}_2^{\frac{d}{d+2}}
\right)}^{\frac{d+2}{d}}\\
&= C C_{ali}^{d+1} C_{eq}^{\frac{d+2}{d}} N^{-\frac{2}{d}} C_{R+}^2
{\left( \sum_{K} \int_K {\det(R_K)}^{\frac{1}{d+2}}
\Norm{R_K}_2^{\frac{d}{d+2}} \,d\bx \right)}^{\frac{d+2}{d}} .
\end{align*}
Further, assumption~\cref{eq:CRs} implies
\begin{equation}
\det(R_K)
\le \det\bigl(H(\boldsymbol{x})\bigr) \Norm{H^{-1}(\boldsymbol{x}) R_K}_2^d
\leq C_{R-}^{-d} \det\bigl(H(\boldsymbol{x})\bigr)
\label{eq:detR:detH}
\end{equation}
and
\begin{equation}
\Norm{R_K}_2
= \Norm{H(\boldsymbol{x}) H^{-1}(\boldsymbol{x}) R_K }_2
\le \Norm{H(\boldsymbol{x})}_2 \Norm{H^{-1}(\boldsymbol{x}) R_K}_2
\le C_{R-}^{-1} \Norm{H(\boldsymbol{x})}_2 .
\label{eq:detR:detH-1}
\end{equation}
Thus,
\begin{align*}
\Abs{u - \Pi_h u}_{H^1(\Omega)}^2
&\leq C C_{ali}^{d+1} C_{R+}^2 C_{eq}^{\frac{d+2}{d}} N^{-\frac{2}{d}}
{\left( C_{R-}^{\frac{-2d}{d+2}}
\int_\Omega {\left({\det(H(\boldsymbol{x}))}^{\frac{1}{d}} \Norm{H(\boldsymbol{x})}_2
\right)}^{\frac{d}{d+2}} \,d\bx \right)}^{\frac{d+2}{d}} \\
&= C C_{ali}^{d+1} C_{eq}^{\frac{d+2}{d}}
{\left(\frac{C_{R+}}{C_{R-}}\right)}^2 N^{-\frac{2}{d}}
{\left( \int_\Omega \Norm{ {\det(H(\boldsymbol{x}))}^{\frac{1}{d}}
H(\boldsymbol{x})}_2^{\frac{d}{d+2}} \,d\bx \right)}^{\frac{d+2}{d}},
\end{align*}
which, together with \cref{eq:cea-1}, gives \cref{eq:thm:H1}.
\qquad
\end{proof}
\subsection{Remarks}
\Cref{thm:H1} shows how a nonconvergent recovered Hessian works in mesh adaptation.
The error bound \cref{eq:thm:H1} is linearly proportional to the ratio $C_{R+}/C_{R-}$, which is a measure for the closeness of $R$ to $H$.
Thus, the finite element error changes gradually with the closeness of the recovered Hessian.
If $R$ is a good approximation to $H$ (but not necessarily convergent), then $C_{R+}/C_{R-} = \mathcal{O}(1)$ and the solution-dependent factor in the error bound is
\begin{equation}
\Norm{{\det(H)}^{\frac{1}{d}} H}_{L^{\frac{d}{d+2}}(\Omega)}^{\frac{1}{2}}.
\label{eq:factor:1}
\end{equation}
On the other hand, if $R$ is not a good approximation to $H$, the solution-dependent factor in the error bound will be larger.
For example, consider $R = I$ (the identity matrix), which leads to uniform mesh refinement.
In this case the condition \cref{eq:CRs} is satisfied with
\[
C_{R+} = C_{R+,K}
= \max_{\boldsymbol{x}\in \Omega} \lambda_{\max} \bigl(H(\boldsymbol{x})\bigr)
\quad \text{and} \quad
C_{R-}
= \min_{\boldsymbol{x}\in \Omega} \lambda_{\min} \bigl(H(\boldsymbol{x})\bigr),
\]
where $\lambda_{\max} \bigl(H(\boldsymbol{x})\bigr)$ and $\lambda_{\min} \bigl(H(\boldsymbol{x})\bigr)$ denote the maximum and minimum eigenvalues of $H(\boldsymbol{x})$, respectively.
Thus, for $R=I$ the solution-dependent factor in the bound \cref{eq:thm:H1} becomes
\[
\frac{\max_{\boldsymbol{x}\in \Omega} \lambda_{\max} \bigl(H(\boldsymbol{x})\bigr)}
{ \min_{\boldsymbol{x}\in \Omega} \lambda_{\min} \bigl(H(\boldsymbol{x})\bigr)}
\Norm{{\det(H)}^{\frac{1}{d}} H}_{L^{\frac{d}{d+2}}(\Omega)}^{\frac{1}{2}},
\]
which is obviously larger than \cref{eq:factor:1}.
Next, we study the relation between \cref{eq:CRs} and \cref{eq:AgLiVa99:1}--\cref{eq:AgLiVa99:2}.
In practical computation, the Hessian is typically recovered at mesh nodes (see~\cref{sect:recovery:methods}) and a recovered Hessian can be considered on the whole domain as a piecewise linear matrix-valued function.
In this case, the average $R_K$ of $R$ over any given element $K$ can be expressed as a linear combination of the nodal values of $R$.
Applying the triangle inequality to \cref{eq:AgLiVa99:1,eq:AgLiVa99:2} we get
\[
\Norm{R(\boldsymbol{x}_i)-H(\boldsymbol{x})}_\infty
\le \left( \delta + \varepsilon \right) \lambda_{\min}(R_{\boldsymbol{x}_i}),
\quad \forall \boldsymbol{x} \in \omega_i
\]
and, since $R_K$ is a linear combination of $R(\boldsymbol{x}_i)$,
\[
\Norm{R_K - H(\boldsymbol{x})}_\infty
\leq \left(\delta + \varepsilon \right) \lambda_{\min} ( R_K ) .
\]
Since $R_K - H(\boldsymbol{x})$ is symmetric, $\Norm{R_K - H(\boldsymbol{x})}_2 \le \Norm{R_K - H(\boldsymbol{x})}_\infty$.
Thus, conditions \cref{eq:AgLiVa99:1,eq:AgLiVa99:2} imply
\begin{equation}
\Norm{ R_K - H(\boldsymbol{x}) }_2
\leq \left( \delta + \varepsilon \right) \lambda_{\min} (R_K),
\quad \forall \boldsymbol{x} \in K, \quad \forall K \in \mathcal{T}_h
\label{eq:R:H:eps}
\end{equation}
and
\begin{equation}
\Norm{ R_K^{-1} H(\boldsymbol{x}) - I }_2 \leq \left( \delta + \varepsilon \right),
\quad \forall \boldsymbol{x} \in K, \quad \forall K \in \mathcal{T}_h
\label{eq:R:H:I}
\end{equation}
which in turn implies \cref{eq:CRs} with $C_{R+,K} = 1+ \left( \delta + \varepsilon \right)$ and $C_{R-} = 1 - \left( \delta + \varepsilon \right)$, if $\left(\delta + \varepsilon\right) < 1$.
Condition \cref{eq:R:H:I}, and therefore \cref{eq:AgLiVa99:2}, requires the eigenvalues of $R_K^{-1} H (\bx)$ to stay close to one.
On the other hand, condition \cref{eq:CRs} only requires the eigenvalues of $R_K^{-1} H (\bx)$ to be bounded away from zero and bounded above, which is weaker than \cref{eq:AgLiVa99:2}.
If $R$ converges to $H(\boldsymbol{x})$, both \cref{eq:AgLiVa99:2,eq:CRs} can be satisfied.
However, if $R$ does not converge to $H(\boldsymbol{x})$, as is the case for most adaptive computation, the situation is different.
As we shall see in~\cref{sect:examples}, condition \cref{eq:CRs} is satisfied for all of the examples tested whereas condition \cref{eq:AgLiVa99:2} is not satisfied by either of the examples.
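In computations, both closeness conditions can be monitored element by element. The sketch below (assuming the averages $R_K$ and sampled values of the exact Hessian are available) obtains $C_{R-,K}$, $C_{R+,K}$, and the $\varepsilon$ of \cref{eq:R:H:eps} from generalized eigenvalues:
\begin{verbatim}
import numpy as np
from scipy.linalg import eigvalsh

def closeness_constants(R_K, H_samples):
    # R_K: (d, d) SPD average of the recovered Hessian on K;
    # H_samples: (m, d, d) values of the exact Hessian sampled on K.
    # The eigenvalues of R_K^{-1} H coincide with those of the symmetric
    # generalized problem H x = lambda R_K x.
    C_minus, C_plus, eps = np.inf, 0.0, 0.0
    lam_min_R = np.linalg.eigvalsh(R_K)[0]
    for H in H_samples:
        lam = eigvalsh(H, R_K)             # ascending eigenvalues
        C_minus = min(C_minus, lam[0])     # element-wise C_{R-,K}
        C_plus = max(C_plus, lam[-1])      # element-wise C_{R+,K}
        eps = max(eps, np.linalg.norm(R_K - H, 2) / lam_min_R)
    return C_minus, C_plus, eps
\end{verbatim}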
We would like to point out that it is unclear if the considered monitor function \cref{eq:M:H1} (and the corresponding bound \cref{eq:thm:H1}) is optimal, although it seems to be the best we can get.
For example, if we choose the monitor function to be
\begin{equation}
M_K = {\det(R_K)}^{- \frac{1}{d+4}} R_K, \quad \forall K \in \mathcal{T}_h
\label{eq:M:L}
\end{equation}
which is optimal for the $L^2$ norm~\cite{Hua05a}, the error bound becomes
\begin{equation}
\Abs{u - u_h}_{H^1(\Omega)}
\leq C
\cdot C_{ali}^{\frac{d+1}{2}} C_{eq}^{\frac{d+4}{4d}}
\cdot \frac{C_{R+}}{C_{R-}}
\cdot N^{-\frac{1}{d}}
\Norm{{\det(H)}^{\frac{1}{d}}}_{L^{\frac{2d}{d+4}}(\Omega)}^{\frac{2}{d+4}}
\Norm{ {\det(H)}^{\frac{1}{d+4}} H }_{L^1(\Omega)}^{\frac{1}{2}}.
\label{thm:error:H1:2}
\end{equation}
This bound has a larger solution-dependent factor than \cref{eq:thm:H1} since Hölder's inequality yields
\[
\Norm{ {\det(H)}^{\frac{1}{d}} H}_{L^{\frac{d}{d+2}}(\Omega)}^{\frac{1}{2}}
\le \Norm{{\det(H)}^{\frac{1}{d}}}_{L^{\frac{2d}{d+4}}(\Omega)}^{\frac{2}{d+4}}
\Norm{ {\det(H)}^{\frac{1}{d+4}} H}_{L^1(\Omega)}^{\frac{1}{2}} .
\]
It is worth mentioning that when the metric tensor \cref{eq:M:L} is used, the $L^2$ norm of the piecewise linear interpolation error is bounded by
\begin{equation}
\Norm{u - \Pi_h u}_{L^2(\Omega)}
\leq C
\cdot C_{ali} C_{eq}^{\frac{d+4}{2 d}}
\cdot \frac{C_{R+}}{C_{R-}}
\cdot N^{-\frac{2}{d}}
\Norm{ {\det(H)}^{\frac{1}{d}}}_{L^{\frac{2d}{d+4}}(\Omega)} ,
\label{eq:error:L2}
\end{equation}
which is optimal in terms of convergence order and solution-dependent factor, e.g.,\ see~\cite{CheSunXu07,HuaSun03}.
Note that \cref{thm:H1} holds for $u \in H^2(\Omega)$ although the estimate \cref{eq:thm:H1} only requires
\begin{equation}
\Norm{ {\det(H)}^{\frac{1}{d}} H }_{L^{\frac{d}{d+2}}(\Omega)} < \infty.
\label{reg-1}
\end{equation}
Since
\[
\Norm{ {\det(H)}^{\frac{1}{d}} H }_{L^{\frac{d}{d+2}}(\Omega)}
\le
\Norm{ \frac{1}{d} \tr(H) \cdot H }_{L^{\frac{d}{d+2}}(\Omega)} ,
\]
\cref{reg-1} can be satisfied when $u \in W^{2, \frac{2d}{d+2}}(\Omega)$.
Thus, there is a gap between the sufficient requirement $u\in H^2(\Omega)$ and the necessary requirement $u \in W^{2, \frac{2d}{d+2}}(\Omega)$.
The stronger requirement $u \in H^2(\Omega)$ comes from the estimation of the interpolation error in~\cite[Theorem~5.1.5]{HuaRus11}.
It is unclear to the authors whether or not this requirement can be weakened.
We point out that $u \in H^2(\Omega)$ may not hold when $\partial \Omega$ is not smooth.
For example, in 2D, if $\partial \Omega$ has a corner with an angle $\omega \in (0,2\pi)$, the solution of the BVP \cref{eq:bvp-2} with smooth $f$ and $g$ basically has the following form near the corner,
\[
u(r, \theta) = r^{\frac{\pi}{\omega}} u_0(\theta) + u_1(r, \theta),
\]
where $(r,\theta)$ denote the polar coordinates and $u_0(\theta)$ and $u_1(r, \theta)$ are some smooth functions.
Then,
\[
\Abs{u}_{H^2(\Omega)}^2
\sim \int_0^{b} {\left( r^{\frac{\pi}{\omega}-2} \right)}^2 r \,dr
\sim \left. r^{\frac{2 \pi}{\omega}-2} \right\vert_{0}^{b}
\]
for some constant $b>0$.
This implies that $u \notin H^2(\Omega)$ if $\omega > \pi$.
On the other hand, $W^{2, \frac{2d}{d+2}}(\Omega) = W^{2, 1}(\Omega)$ for $d = 2$ and
\[
\Abs{u}_{W^{2,1}(\Omega)}^2
\sim \int_0^{b} \left( r^{\frac{\pi}{\omega}-2} \right) r \,dr \\
\sim \left. r^{\frac{\pi}{\omega}} \right |_{0}^{b},
\]
which indicates that $u \in W^{2,1}(\Omega)$ for all $\omega \in (0,2\pi)$.
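Both corner integrals are easy to check symbolically; a short sketch with \texttt{sympy}, comparing a convex and a re-entrant corner:
\begin{verbatim}
import sympy as sp

r = sp.symbols('r', positive=True)
for omega in (sp.pi / 2, 3 * sp.pi / 2):    # convex vs. re-entrant corner
    p = sp.pi / omega
    I_H2  = sp.integrate(r ** (2 * p - 3), (r, 0, 1))  # ~ |u|_{H^2}^2
    I_W21 = sp.integrate(r ** (p - 1), (r, 0, 1))      # ~ |u|_{W^{2,1}}
    print(omega, I_H2, I_W21)   # I_H2 is infinite for omega > pi
\end{verbatim}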
\section{Convergence of the linear finite element approximation for a general Hessian}
\label{sect:general}
In this section we consider the general situation where $H(\boldsymbol{x})$ is symmetric but not necessarily positive definite.
In this case, it is unrealistic to require the recovered Hessian $R$ to be positive definite.
Thus, we cannot use $R$ directly to define the metric tensor which is required to be positive definite.
A commonly used strategy is to replace $R$ by $\Abs{R} = \sqrt{R^2}$ since $\Abs{R}$ retains the eigensystem of $R$.
However, $\Abs{R}$ can become singular locally.
To avoid this difficulty, we regularize $\Abs{R}$ with a regularization parameter $\alpha_h > 0$ (to be determined).
From \cref{eq:M:H1}, we define the regularized metric tensor as
\begin{equation}
M_K = {\det(\alpha_h I + \Abs{R_K})}^{- \frac{1}{d+2}}
\Norm{\alpha_h I + \Abs{R_K} }_2^{\frac{2}{d+2}}
\left(\alpha_h I + \Abs{R_K}\right),
\quad \forall K \in \mathcal{T}_h,
\label{eq:M:H1:2}
\end{equation}
and obtain the following theorem with a proof similar to that of \cref{thm:H1}.
\begin{theorem}[General Hessian]
\label{thm:H1:general}
For a given positive parameter $\alpha_h > 0$, we assume that the recovered Hessian $R$ satisfies
\begin{align}
& C_{R-,K} I \le {\left( \alpha_h I + \Abs{R_K} \right)}^{-1} \left(\alpha_h I + \Abs{H(\boldsymbol{x})} \right),
\quad \forall \boldsymbol{x} \in K,
\quad \forall K \in \mathcal{T}_h,
\label{eq:CRminus:general}
\\
& {\left( \alpha_h I + \Abs{R_K} \right)}^{-1} \Abs{H(\boldsymbol{x})}
\leq C_{R+,K} I,
\quad \forall \boldsymbol{x} \in K,
\quad \forall K \in \mathcal{T}_h,
\label{eq:CRplus:general}
\end{align}
where $C_{R-,K}$ and $C_{R+,K}$ are element-wise constants satisfying \cref{CR+}.
If the solution of the BVP \cref{eq:bvp-2} is in $H^2(\Omega)$, then for any quasi-$M$-uniform mesh associated with metric tensor \cref{eq:M:H1:2} and satisfying \cref{eq:equi:approx,eq:ali:approx} the linear finite element error for the BVP is bounded by
\begin{equation}
\Abs{u - u_h}_{H^1(\Omega)}
\leq C
\cdot C_{ali}^{\frac{d+1}{2}} C_{eq}^{\frac{d+2}{2d}}
\cdot \frac{C_{R+}}{C_{R-}}
\cdot N^{-\frac{1}{d}}
\Norm{ {\det(\alpha_h I + \Abs{H})}^{\frac{1}{d}}
\left(\alpha_h I + \Abs{H}\right)
}_{L^{\frac{d}{d+2}}(\Omega)}^{\frac{1}{2}}.
\label{eq:thm:H1:general}
\end{equation}
\end{theorem}
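A minimal sketch of the regularized metric \cref{eq:M:H1:2}, with $\Abs{R_K}$ built from the eigendecomposition so that the eigensystem of $R_K$ is retained:
\begin{verbatim}
import numpy as np

def metric_H1_regularized(R_K, alpha_h, d):
    # |R_K| = sqrt(R_K^2): keep the eigenvectors of R_K, replace the
    # eigenvalues by their absolute values, then shift by alpha_h.
    lam, Q = np.linalg.eigh(R_K)
    A = Q @ np.diag(alpha_h + np.abs(lam)) @ Q.T   # alpha_h I + |R_K|
    det_A = np.linalg.det(A)
    norm_A = np.linalg.norm(A, 2)
    return det_A ** (-1.0 / (d + 2)) * norm_A ** (2.0 / (d + 2)) * A
\end{verbatim}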
From \cref{eq:CRplus:general,eq:CRminus:general,eq:thm:H1:general} we see that the greater $\alpha_h$ is, the more easily the recovered Hessian satisfies \cref{eq:CRplus:general,eq:CRminus:general}; however, the error bound increases as well.
For example, consider the extreme case of $\alpha_h \to \infty$.
In this case, \cref{eq:CRplus:general,eq:CRminus:general} can be satisfied with $C_{R+} = C_{R-} = 1$ for any $R$.
At the same time, the metric tensor defined in \cref{eq:M:H1:2} has the asymptotic behavior $M_K \to \alpha_h^{\frac{4}{d+2}} I$ and the corresponding $M$-uniform mesh is a uniform mesh.
Obviously, the right-hand side of \cref{eq:thm:H1:general} is large for this case.
Another extreme case is $\alpha_h \to 0$ where \cref{eq:thm:H1:general} reduces to \cref{eq:thm:H1} if both $R$ and $H(\boldsymbol{x})$ are positive definite.
We now consider the choice of $\alpha_h$.
We define a parameter $\alpha$ through the implicit equation
\begin{equation}
\Norm{ \sqrt[d]{\det(\alpha I + \Abs{H})}
\cdot (\alpha I + \Abs{H}) }_{L^{\frac{d}{d+2}}(\Omega)}
= 2 \Norm{ \sqrt[d]{\det(\Abs{H})} \cdot H}_{L^{\frac{d}{d+2}}(\Omega)} .
\label{eq:alpha-3}
\end{equation}
The left-hand-side term is an increasing function of $\alpha$.
Moreover, the term is equal to half of the right-hand-side term when $\alpha = 0$ and tends to infinity as $\alpha \to \infty$.
Thus, from the intermediate value theorem we know that \cref{eq:alpha-3} has a unique solution $\alpha > 0$ if $\Norm{ \sqrt[d]{\det(\Abs{H})} \cdot H}_{L^{\frac{d}{d+2}}(\Omega)} > 0$.
If we choose $\alpha_h = \alpha$, then the finite element error is bounded by
\begin{equation}
\Abs{u - u_h}_{H^1(\Omega)}
\leq C
\cdot C_{ali}^{\frac{d+1}{2}} C_{eq}^{\frac{d+2}{2d}}
\cdot \frac{C_{R+}}{C_{R-}}
\cdot 2N^{-\frac{1}{d}}
\Norm{ \sqrt[d]{\det(\Abs{H})} \cdot H
}_{L^{\frac{d}{d+2}}(\Omega)}^{\frac{1}{2}},
\label{eq:thm:H1:general-2}
\end{equation}
which is essentially the same as \cref{eq:thm:H1}.
Note that \cref{eq:alpha-3} is impractical since it requires the prior knowledge of $H(\boldsymbol{x})$.
In practice it can be replaced by
\begin{equation}
\sum_K \Abs{K} {\det\left( \alpha_h I + \Abs{R_K}\right)}^{\frac{1}{d+2}}
\Norm{\alpha_h I + \Abs{R_K}}_2^{\frac{d}{d+2}}
= 2^{\frac{d}{d+2}} \sum_K \Abs{K}
{\det(\Abs{R_K})}^{\frac{1}{d+2}}\Norm{R_K}_2^{\frac{d}{d+2}} .
\label{alpha-2}
\end{equation}
This equation can be solved efficiently using the bisection method.
Numerical results show that $\alpha_h$ is close to $\alpha$ (\cref{fig:tanh:alpha}).
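Since the left-hand side of \cref{alpha-2} is increasing in $\alpha_h$ and equals $2^{-\frac{d}{d+2}}$ times the right-hand side at $\alpha_h = 0$, a simple bracketing bisection suffices; a sketch, with element volumes and Hessian averages assumed given:
\begin{verbatim}
import numpy as np

def solve_alpha(volumes, R_avgs, d, tol=1e-10):
    # Bisection for (alpha-2); assumes not all R_K vanish.
    spectra = [np.abs(np.linalg.eigvalsh(R)) for R in R_avgs]

    def lhs(alpha):
        # sum_K |K| det(alpha I+|R_K|)^{1/(d+2)} ||alpha I+|R_K|||^{d/(d+2)}
        return sum(w * np.prod(alpha + s) ** (1.0 / (d + 2))
                     * (alpha + s.max()) ** (d / (d + 2))
                   for w, s in zip(volumes, spectra))

    target = 2.0 ** (d / (d + 2)) * lhs(0.0)
    lo, hi = 0.0, 1.0
    while lhs(hi) < target:        # bracket the root
        hi *= 2.0
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if lhs(mid) < target else (lo, mid)
    return 0.5 * (lo + hi)
\end{verbatim}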
\section{A selection of commonly used Hessian recovery methods}
\label{sect:recovery:methods}
In this section we give a brief description of four commonly used Hessian recovery algorithms for two-dimensional mesh adaptation.
The interested reader is referred to~\cite{Kam09,ValManDomDufGui07} for a more detailed description of these Hessian recovery techniques.
Recall that the goal of Hessian recovery in the current context is to find an approximation of the Hessian at mesh nodes using the linear finite element solution $u_h$.
The approximation of the Hessian on an element is calculated as the average of the nodal approximations of the Hessian at the vertices of the element.
\subsection*{QLS:\ quadratic least squares fitting to nodal values}
This method involves fitting a quadratic polynomial, in the least squares sense, to nodal values of $u_h$ at a selection of neighboring nodes, followed by differentiation.
The original purpose of QLS was gradient recovery (e.g.,\ see Zhang and Naga~\cite{ZhaNag05}).
However, it is easily adapted for Hessian recovery by simply differentiating the fitting polynomial twice.
More specifically, for a given node (say $\boldsymbol{x}_0$) at least five neighboring nodes are selected.
A quadratic polynomial (denoted by $p$) is found by least squares fitting to the values of $u_h$ at the selected nodes.
The linear system associated with the least squares problem usually has full rank and a unique solution.
If it does not, additional nodes from the neighborhood of $\boldsymbol{x}_0$ are added to the selection until the system has full rank.
An approximation to the Hessian of the solution $u$ at $\boldsymbol{x}_{0}$ is defined as the Hessian of $p$, viz.,
\[
R^{QLS}(\boldsymbol{x}_{0}) = H(p)(\boldsymbol{x}_{0}) .
\]
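A compact two-dimensional sketch of QLS follows; it assumes the node selection has already been made and that at least six data points (including $\boldsymbol{x}_0$ itself) are supplied for the six coefficients of a quadratic:
\begin{verbatim}
import numpy as np

def qls_hessian(x0, nodes, values):
    # Fit p(x,y) = c0 + c1 x + c2 y + c3 x^2 + c4 x y + c5 y^2 to nodal
    # values of u_h (coordinates shifted to x0 for conditioning), then
    # return the (constant) Hessian of p.
    dx = nodes - x0
    A = np.column_stack([np.ones(len(dx)), dx[:, 0], dx[:, 1],
                         dx[:, 0]**2, dx[:, 0]*dx[:, 1], dx[:, 1]**2])
    c, *_ = np.linalg.lstsq(A, values, rcond=None)
    return np.array([[2*c[3], c[4]], [c[4], 2*c[5]]])
\end{verbatim}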
\subsection*{DLF:\ double linear least squares fitting}
\label{ssect:DLF}
The DLF method computes the Hessian by using linear least squares fitting twice.
First, least squares fitting of the nodal values of $u_h$ in a neighborhood of $\boldsymbol{x}_0$ is employed to find a linear fitting polynomial $p$.
The recovered gradient of the function $u$ at $\boldsymbol{x}_0$ is defined as the gradient of $p$ at $\boldsymbol{x}_{0}$, i.e.,
\[
\nabla_h^{DLF} u(\boldsymbol{x}_{0}) = \nabla p(\boldsymbol{x}_{0}).
\]
Second-order derivatives are then obtained by subsequent application of this linear fitting to the calculated first-order derivatives.
Mixed derivatives are averaged in order to obtain a symmetric recovered Hessian.
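A sketch of the two passes, reusing the same linear fit (the function names are illustrative):
\begin{verbatim}
import numpy as np

def linear_fit(x0, nodes, data):
    # Linear least squares fit around x0; returns the slope coefficients.
    # With data = nodal values of u_h, this is the first DLF pass and the
    # result is the recovered gradient at x0.
    dx = nodes - x0
    A = np.column_stack([np.ones(len(dx)), dx[:, 0], dx[:, 1]])
    c, *_ = np.linalg.lstsq(A, data, rcond=None)
    return c[1:3]          # d/dx and d/dy of the fitted plane(s)

def dlf_hessian(x0, nodes, grads):
    # Second DLF pass: fit the recovered nodal gradients linearly; the
    # slopes form the Jacobian of the gradient field, symmetrized so that
    # the mixed derivatives are averaged.
    G = linear_fit(x0, nodes, grads).T     # 2 x 2 matrix
    return 0.5 * (G + G.T)
\end{verbatim}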
\subsection*{LLS:\ linear least squares fitting to first-order derivatives}
This method is similar to DLF except that the first-order derivatives at nodes are calculated in a different way.
In this method, the first-order derivatives are first calculated at element centers and then at nodes by linear least squares fitting to their values at element centers.
\subsection*{WF:\ weak formulation}
This approach recovers the Hessian by means of a variational formulation~\cite{Dol98}.
More specifically, let $\phi_0$ be a canonical piecewise linear basis function at node $\boldsymbol{x}_0$.
Then the nodal approximation $u_{xx,h}$ to the second-order derivative $u_{xx}$ at $\boldsymbol{x}_0$ is defined through
\[
u_{xx,h}(\boldsymbol{x}_0) \int_\Omega \phi_0(\boldsymbol{x}) \,d\bx
= -\int_\Omega \frac{\partial u_h}{\partial x}
\frac{\partial \phi_0}{\partial x} \,d\bx .
\]
The same approach is used to compute $u_{xy,h}$ and $u_{yy,h}$.
Since $\phi_0$ is piecewise linear and vanishes outside the patch associated with $\boldsymbol{x}_0$, the integrals involved can be computed efficiently with appropriate quadrature formulas over a single patch.
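On a triangulation, both integrands are piecewise constant over the patch, so each recovered second derivative reduces to a small weighted sum over the patch triangles; a one-component sketch:
\begin{verbatim}
import numpy as np

def wf_uxx(areas, grads_uh, grads_phi0):
    # Weak-form recovery of u_xx at a node x0 (sketch).  On each triangle
    # T of the patch, grad u_h and grad phi_0 are constant, and
    # int_T phi_0 dx = |T|/3 for the hat function of a vertex of T.
    mass = sum(areas) / 3.0                       # int_Omega phi_0 dx
    rhs = -sum(a * gu[0] * gp[0]                  # -(du_h/dx)(dphi_0/dx)
               for a, gu, gp in zip(areas, grads_uh, grads_phi0))
    return rhs / mass
# u_xy and u_yy follow by replacing gu[0]*gp[0] with gu[0]*gp[1] and
# gu[1]*gp[1], respectively.
\end{verbatim}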
\section{Numerical examples}
\label{sect:examples}
In this section we present two numerical examples to verify the analysis given in the previous sections.
We use \bamg{}~\cite{bamg} to generate adaptive meshes as quasi-$M$-uniform meshes for the regularized metric tensor \cref{eq:M:H1:2}.
Special attention will be paid to mesh conditions \cref{eq:equi:approx,eq:ali:approx} and closeness conditions \cref{eq:AgLiVa99:2,eq:CRplus:general,eq:CRminus:general}.
For the recovery closeness condition \cref{eq:AgLiVa99:2} we compare the regularized recovered and exact Hessians, i.e.,\ we compute $\varepsilon$ for
\begin{equation}
\Norm{(\alpha_h I + \Abs{R_K}) - (\alpha I + \abs{H_K}) }_\infty
\leq \varepsilon \lambda_{\min} (\alpha_h I + \Abs{R_K}),
\qquad \forall K \in \mathcal{T}_h,
\label{eq:eps:regularized}
\end{equation}
where $H_K$ is an average of the exact Hessian on the element $K$ and $\alpha_h$ and $\alpha$ are the regularization parameters for the recovered and the exact Hessians, respectively.
\vspace{0.5ex}
\begin{example}[{\cite[Example~4.3]{Hua05a}}]
\label{ex:flower}
\normalfont{}
The first example is in the form of BVP~\cref{eq:bvp} with $f$ and $g$ chosen such that the exact solution is given by
\begin{align*}
u(x,y) =&
\tanh \left[ 30 \left( x^2 + y^2 - 0.125 \right) \right] \\
&+ \tanh \left[ 30 \left( {(x-0.5)}^2 + {(y-0.5)}^2 - 0.125 \right) \right] \\
&+ \tanh \left[ 30 \left( {(x-0.5)}^2 + {(y+0.5)}^2 - 0.125 \right) \right] \\
&+ \tanh \left[ 30 \left( {(x+0.5)}^2 + {(y-0.5)}^2 - 0.125 \right) \right] \\
&+ \tanh \left[ 30 \left( {(x+0.5)}^2 + {(y+0.5)}^2 - 0.125 \right) \right]
.
\end{align*}
A typical plot of element-wise constants $C_{eq,K}$ and $C_{ali,K}$ in mesh quasi-$M$-uniformity conditions \cref{eq:equi:approx,eq:ali:approx} is shown in \cref{fig:flower:ceq,fig:flower:cali}, demonstrating that these conditions hold with relatively small $C_{eq}$ and $C_{ali}$.
For the given mesh example we have $0.5 \le C_{eq,K} \le 1.5$ and $1 \le C_{ali,K} \le 1.3$, which gives $C_{eq} = 1.5$ and $C_{ali} = 1.3$.
In fact, we found that $C_{eq} \le 2.0$ and $C_{ali} \le 2.1$ for all computations in this paper, indicating that \bamg{} does a good job in generating quasi-$M$-uniform meshes for a given metric tensor.
\Cref{fig:flower:epsk,fig:flower:eps} show a typical distribution of element-wise values of $\varepsilon$ in \cref{eq:eps:regularized} and its values for a sequence of adaptive grids.
We observe that for all methods $\varepsilon$ is not small with respect to one, which violates the condition \cref{eq:AgLiVa99:2}.
Typical element-wise values $C_{R+,K}/C_{R-}$ and values of $C_{R+}/C_{R-}$ for a sequence of adaptive grids are shown in \cref{fig:flower:crk,fig:flower:CR}.
Notice that $C_{R+} / C_{R-}$ stays relatively small and bounded, thus satisfying the closeness conditions \cref{eq:CRplus:general,eq:CRminus:general}.
For this example, the finite element error $\Abs{u-u_h}_{H^1(\Omega)}$ is almost indistinguishable for meshes obtained by means of the exact and recovered Hessians (\cref{fig:flower:error}) and the approximate $\alpha_h$, computed through \cref{alpha-2}, is very close to the value for the exact Hessian (\cref{fig:flower:alpha}).
\begin{figure}[p]\centering
\begin{subfigure}[t]{0.5\linewidth}
\centering
\includegraphics[clip]{{{1211.2877f-flower-ce}}}
\caption{element-wise $C_{eq,K}$ for \cref{eq:equi:approx}\label{fig:flower:ceq}}
\end{subfigure}%
%
\begin{subfigure}[t]{0.5\linewidth}
\centering
\includegraphics[clip]{{{1211.2877f-flower-ca}}}
\caption{element-wise $C_{ali,K}$ for \cref{eq:ali:approx}\label{fig:flower:cali}}
\end{subfigure}\\[1em]
%
\begin{subfigure}[t]{0.5\linewidth}
\centering
\includegraphics[clip]{{{1211.2877f-flower-epsk}}}
\caption{element-wise $\varepsilon$
for \cref{eq:eps:regularized}\label{fig:flower:epsk}}
\end{subfigure}%
%
\begin{subfigure}[t]{0.5\linewidth}
\centering
\includegraphics[clip]{{{1211.2877f-flower-crk}}}
\caption{element-wise $C_{R+,K}/C_{R-}$\label{fig:flower:crk}}
\end{subfigure}\\[1em]
%
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[clip]{{{1211.2877f-flower-eps}}}
\caption{$\varepsilon$ for
\cref{eq:eps:regularized}\label{fig:flower:eps}}
\end{subfigure}%
%
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[clip]{{{1211.2877f-flower-cr}}}
\caption{$C_{R+}/C_{R-}$ for
\cref{eq:CRplus:general,eq:CRminus:general}\label{fig:flower:CR}}
\end{subfigure}\\[1em]
%
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[clip]{{{1211.2877f-flower-fe-error}}}
\caption{finite element error $\Abs{u-u_h}_{H^1(\Omega)}$\label{fig:flower:error}}
\end{subfigure}%
%
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[clip]{{{1211.2877f-flower-alpha}}}
\caption{comparison of $\alpha_h$ and $\alpha$\label{fig:flower:alpha}}
\end{subfigure}%
%
\caption{Numerical results for \cref{ex:flower}\label{fig:flower}}
\end{figure}
\end{example}
\vspace{0.5ex}
\begin{example}[Strong anisotropy]
\label{ex:tanh}
\normalfont{}
The second example is in the form of BVP~\cref{eq:bvp} with $f$ and $g$ chosen such that the exact solution is given by
\[
u(x,y) = \tanh ( 60y ) - \tanh \bigl( 60 (x - y) - 30 \bigr).
\]
This solution exhibits a very strong anisotropic behavior and describes the interaction between a boundary layer along the $x$-axis and a steep shock wave along the line $y = x - 1/2$.
\Cref{fig:tanh:epsk,fig:tanh:epsilon} show that $\varepsilon \approx 60$
and is therefore not small with respect to one, violating the condition \cref{eq:AgLiVa99:2} for all meshes in the considered range of $N$ and for all four recovery techniques.
On the other hand, \cref{fig:tanh:CR} shows that the ratio $C_{R+} / C_{R-}$ is large ($\approx 10^2$) but, nevertheless, it seems to stay bounded with increasing $N$, confirming that \cref{eq:CRplus:general,eq:CRminus:general} are satisfied by the recovered Hessian.
The fact that the ratio $C_{R+} / C_{R-}$ has different values in this and the previous examples indicates that the accuracy or closeness of the four Hessian recovery techniques depends on the behavior and especially the anisotropy of the solution.
Fortunately, as shown by \cref{thm:H1,thm:H1:general}, the finite element error is insensitive to the closeness of the recovered Hessian.
The finite element solution error is shown in \cref{fig:tanh:error} as a function of $N$.
Finally, \cref{fig:tanh:alpha} shows that $\alpha_h$, computed through \cref{alpha-2}, is close to the exact value $\alpha$ defined in \cref{eq:alpha-3}.
\begin{figure}[p]\centering
%
\begin{subfigure}[t]{0.5\linewidth}
\centering
\includegraphics[clip]{{{1211.2877f-tanh-ce}}}
\caption{element-wise $C_{eq,K}$ for \cref{eq:equi:approx}\label{fig:tanh:ceq}}
\end{subfigure}%
%
\begin{subfigure}[t]{0.5\linewidth}
\centering
\includegraphics[clip]{{{1211.2877f-tanh-ca}}}
\caption{element-wise $C_{ali,K}$ for \cref{eq:ali:approx}\label{fig:tanh:cali}}
\end{subfigure}\\[1em]
%
\begin{subfigure}[t]{0.5\linewidth}
\centering
\includegraphics[clip]{{{1211.2877f-tanh-epsk}}}
\caption{element-wise $\varepsilon$
for \cref{eq:eps:regularized}\label{fig:tanh:epsk}}
\end{subfigure}%
%
\begin{subfigure}[t]{0.5\linewidth}
\centering
\includegraphics[clip]{{{1211.2877f-tanh-crk}}}
\caption{element-wise $C_{R+,K}/C_{R-}$}
\end{subfigure}\\[1em]
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[clip]{{{1211.2877f-tanh-eps}}}
\caption{$\varepsilon$
for \cref{eq:eps:regularized}\label{fig:tanh:epsilon}}
\end{subfigure}%
%
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[clip]{{{1211.2877f-tanh-cr}}}
\caption{$C_{R+}/C_{R-}$ for
\cref{eq:CRplus:general,eq:CRminus:general}\label{fig:tanh:CR}}
\end{subfigure}\\[1em]
%
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[clip]{{{1211.2877f-tanh-fe-error}}}
\caption{finite element error $\Abs{u-u_h}_{H^1(\Omega)}$\label{fig:tanh:error}}
\end{subfigure}%
%
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[clip]{{{1211.2877f-tanh-alpha}}}
\caption{comparison of $\alpha_h$ and $\alpha$\label{fig:tanh:alpha}}
\end{subfigure}%
%
\caption{Numerical results for \cref{ex:tanh}\label{fig:tanh}}
\end{figure}
\end{example}
\section{Conclusion and further comments}
\label{sect:conclusion}
In the previous sections we have investigated how a nonconvergent recovered Hessian works in mesh adaptation.
Our main results are \cref{thm:H1,thm:H1:general} where an error bound for the linear finite element solution of BVP \cref{eq:bvp-2} is given for quasi-$M$-uniform meshes corresponding to a metric depending on a recovered Hessian.
As with conventional error estimates for the $H^1$ semi-norm of the error in linear finite element approximations, our error bound is of first order in terms of the average element diameter, $N^{-\frac{1}{d}}$, where $N$ is the number of elements and $d$ is the dimension of the physical domain.
This error bound is valid under the closeness condition \cref{eq:CRs} (or \cref{eq:CRplus:general,eq:CRminus:general}), which is weaker than \cref{eq:AgLiVa99:2} used by Agouzal et al.~\cite{AgoLipVas99} and Vassilevski and Lipnikov~\cite{VasLip99}.
Numerical results in~\cref{sect:examples} show that the new closeness condition is satisfied by the recovered Hessian obtained with commonly used Hessian recovery algorithms.
The error bound also shows that the finite element error changes gradually with the closeness of the recovered Hessian to the exact one.
These results provide an explanation of how a nonconvergent recovered Hessian works in mesh adaptation.
In this work the closeness conditions \cref{eq:CRplus:general,eq:CRminus:general} have been verified only numerically.
Developing a theoretical proof of the condition for some Hessian recovery techniques is an interesting topic for further investigations.
\section*{Acknowledgment}
The authors are grateful to the anonymous referees for their comments and suggestions for improving the quality of this paper, particularly for the helpful comments on improving the proof of \cref{thm:H1}.
| {
"timestamp": "2014-04-11T02:10:33",
"yymm": "1211",
"arxiv_id": "1211.2877",
"language": "en",
"url": "https://arxiv.org/abs/1211.2877",
"abstract": "Hessian recovery has been commonly used in mesh adaptation for obtaining the required magnitude and direction information of the solution error. Unfortunately, a recovered Hessian from a linear finite element approximation is nonconvergent in general as the mesh is refined. It has been observed numerically that adaptive meshes based on such a nonconvergent recovered Hessian can nevertheless lead to an optimal error in the finite element approximation. This also explains why Hessian recovery is still widely used despite its nonconvergence. In this paper we develop an error bound for the linear finite element solution of a general boundary value problem under a mild assumption on the closeness of the recovered Hessian to the exact one. Numerical results show that this closeness assumption is satisfied by the recovered Hessian obtained with commonly used Hessian recovery methods. Moreover, it is shown that the finite element error changes gradually with the closeness of the recovered Hessian. This provides an explanation on how a nonconvergent recovered Hessian works in mesh adaptation.",
"subjects": "Numerical Analysis (math.NA)",
"title": "How a nonconvergent recovered Hessian works in mesh adaptation",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.975201841245846,
"lm_q2_score": 0.810478913248044,
"lm_q1q2_score": 0.7903805284904247
} |
https://arxiv.org/abs/2006.10816 | Inequalities from Lorentz-Finsler norms | We show that Lorentz-Finsler geometry offers a powerful tool in obtaining inequalities. With this aim, we first point out that a series of famous inequalities such as: the (weighted) arithmetic-geometric mean inequality, Aczél's, Popoviciu's and Bellman's inequalities, are all particular cases of a reverse Cauchy-Schwarz, respectively, of a reverse triangle inequality holding in Lorentz-Finsler geometry. Then, we use the same method to prove some completely new inequalities, including two refinements of Aczél's inequality. | \section{Introduction}
The Cauchy-Schwarz inequality on the Euclidean space $\mathbb{R}^{n}:$
\begin{equation}
\left( \sum_{i=1}^{n}v_{i}^{2}\right) \cdot \left(
\sum_{i=1}^{n}w_{i}^{2}\right) \geq \left( \sum_{i=1}^{n}v_{i}w_{i}\right)
^{2}, \label{1_0}
\end{equation}
$\forall v=(v_{1},...,v_{n}),$ $w=(w_{1},...,w_{n})\in \mathbb{R}^{n}$, is a basic result, with applications in almost all branches of mathematics.
In 1956, Acz\'{e}l \cite{AC} introduced the following inequality
\begin{equation}
\left( v_{0}^{2}-v_{1}^{2}-...-v_{n}^{2}\right) \left(
w_{0}^{2}-w_{1}^{2}-...-w_{n}^{2}\right) \leq \left(
v_{0}w_{0}-v_{1}w_{1}-...-v_{n}w_{n}\right) ^{2}, \label{1_1}
\end{equation}
(holding for all $v=(v_{0},v_{1},...,v_{n}),$ $w=(w_{0},w_{1},...,w_{n})\in \mathbb{R}^{n+1}$ such that $v_{0}^{2}-v_{1}^{2}-...-v_{n}^{2}>0,$ $w_{0}^{2}-w_{1}^{2}-...-w_{n}^{2}>0$), in relation to the theory of functional equations in one variable. The Acz\'{e}l inequality (\ref{1_1}), together with its generalization to Lorentzian manifolds, known by the name of \textit{reverse Cauchy-Schwarz inequality}, proved to be crucial to relativity theory and to theories of physical fields.
Indeed, from a geometric standpoint, the two inequalities above are known to
be two sides of the same coin; while the usual Cauchy-Schwarz inequality
\ref{1_0}) is extended to positive definite inner product spaces - and
further on, to Riemannian manifolds - leading to the triangle inequality
\left\Vert v+w\right\Vert \leq \left\Vert v\right\Vert +\left\Vert
w\right\Vert ,$ (\ref{1_1}) is naturally extended to spaces with a
Lorentzian inner product (and more generally, to Lorentzian manifolds),
leading to a reverse triangle inequality (see, e.g., \cite{BeemEhrlich,ONeil
).
\bigskip
In this article we discuss a further generalization of the above picture
which has not been exploited so far. The Cauchy-Schwarz inequality and its
Lorentzian-reversed version can be extended to Finsler, \cite{Bao},
respectively, to Lorentz-Finsler spaces, \cite{Aazami:2014ata,Javaloyes2019,Minguzzi2014,Minguzzi:2013sxa}. Roughly speaking, while a Riemannian manifold is a space equipped with a smoothly varying family of inner products, a Finsler manifold is a space equipped with a family of \textit{norms}\footnote{Actually, the notion of Finsler norm is slightly more general than the usual one, as it is only required to be positively homogeneous instead of absolutely homogeneous.} that do not necessarily arise from a scalar product. Similarly, a Lorentz-Finsler manifold is equipped with a smoothly varying family of so-called \textit{Lorentz-Finsler norms} of vectors, which do not necessarily arise as the square root of any quadratic expression - they are just positively 1-homogeneous in the considered vectors.
\bigskip
Usually, the Finslerian Cauchy-Schwarz inequality (also called the \textit{fundamental inequality}, \cite{Bao}) is proven under the assumption that the Finsler norm $F$ of vectors has the property that the Hessian $Hess(F^{2})$ is positive definite; in the particular case of Riemannian spaces, this turns into the condition that the metric tensor $g$ is positive definite. Similarly, its Lorentzian-reversed counterpart, \cite{Aazami:2014ata,Javaloyes2019,Minguzzi2014,Minguzzi:2013sxa}, is proven under the assumption that $Hess(F^{2})$ has Lorentzian signature for all vectors in a strictly convex set. Under these assumptions, the obtained inequalities are \textit{strict}, i.e., equality holds only when the vectors $v$ and $w$ are collinear.
\bigskip
As a preliminary step, we show that the above conditions can be relaxed.
Namely, the respective inequalities still hold - just non-strictly - if we allow $Hess(F^{2})$ to be degenerate along some directions; also, for practical matters, we replace the strict convexity assumption on the set of interest with the more relaxed one that $F$ is defined on a convex conic domain $\mathcal{T}$. Moreover, we prove two refinements of the
domain\textbf{\ }$\mathcal{T}$. Moreover, we prove two refinements of the
reverse triangle inequality holding in general Lorentz-Finsler spaces in
Section \ref{Section degenerate case}.
\bigskip
While the above generalization is not spectacular in itself, it allows us
much more freedom in choosing the range of examples and applications.
Indeed, we show in Section~\ref{sec:ex} that some of the most famous
inequalities on $\mathbb{R}^{n}$ are nothing but reverse Cauchy-Schwarz
inequalities for conveniently chosen (possibly, degenerate) Lorentz-Finsler
norms (a quick numerical check is sketched after the list):
\begin{enumerate}
\item The usual arithmetic-geometric mean inequality: $\dfrac{1}{n}\sum_{i=1}^{n}v_{i}\geq \left( \prod_{i=1}^{n}v_{i}\right) ^{1/n},$ $\forall v_{i}\geq 0,$ $i=\overline{1,n}.$
\item The weighted arithmetic-geometric mean inequality:
\begin{equation}
\dfrac{1}{a}\sum_{i=1}^{n}a_{i}v_{i}\geq \lbrack
(v_{1})^{a_{1}}...(v_{n})^{a_{n}}]^{1/a}, \label{weighted_a}
\end{equation}
for all $a,a_{i},v_{i}\in \mathbb{R}_{+}^{\ast }$, such that $\sum_{i=1}^{n}a_{i}=a.$
\item Popoviciu's inequality, \cite{Pop}:
\begin{equation}
\left( v_{0}^{p}-v_{1}^{p}-...-v_{n}^{p}\right) ^{1/p}\left(
w_{0}^{q}-w_{1}^{q}-...-w_{n}^{q}\right) ^{1/q}\leq
v_{0}w_{0}-v_{1}w_{1}-...-v_{n}w_{n}, \label{1_2}
\end{equation}
holding for all $v_{i},w_{i}>0$ such that $v_{0}^{p}-v_{1}^{p}-...-v_{n}^{p}>0$ and $w_{0}^{q}-w_{1}^{q}-...-w_{n}^{q}>0;$ the powers $p,q>1$ are such that $\dfrac{1}{p}+\dfrac{1}{q}=1$. In particular, for $p=q=2,$ Popoviciu's inequality yields Acz\'{e}l's inequality.
\item Another result, due to Bellman \cite{Bel, Mitrinovic1970,
Mitrinovic1993}:
\begin{equation}
\left( v_{0}^{p}-v_{1}^{p}-...-v_{n}^{p}\right) ^{1/p}+\left(
w_{0}^{p}-w_{1}^{p}-...-w_{n}^{p}\right) ^{1/p}\leq \lbrack
(v_{0}+w_{0})^{p}-(v_{1}+w_{1})^{p}-...-(v_{n}+w_{n})^{p}]^{1/p},
\label{Bellman}
\end{equation}
(with $v_{i},w_{i}$ as above and $p>1$) is just the reverse triangle inequality corresponding to (\ref{1_2}). \newline
\end{enumerate}
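All of the above are straightforward to test numerically. The following sketch checks the weighted arithmetic-geometric mean inequality (\ref{weighted_a}), Popoviciu's inequality (\ref{1_2}) and Bellman's inequality (\ref{Bellman}) on random admissible data, with the $0$-th components enlarged so that the Lorentzian gaps are positive:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
p = 3.0
q = p / (p - 1.0)                      # conjugate exponent, 1/p + 1/q = 1
for _ in range(1000):
    a = rng.uniform(0.1, 2.0, 5)       # weights
    v = rng.uniform(0.1, 2.0, 5)
    # weighted arithmetic-geometric mean inequality
    assert (a @ v) / a.sum() >= np.prod(v ** a) ** (1 / a.sum()) - 1e-12

    v = rng.uniform(0.1, 1.0, 6); w = rng.uniform(0.1, 1.0, 6)
    v[0] = 1 + v[1:].sum(); w[0] = 1 + w[1:].sum()  # admissible vectors
    Pv = v[0]**p - np.sum(v[1:]**p); Qw = w[0]**q - np.sum(w[1:]**q)
    # Popoviciu
    assert Pv**(1/p) * Qw**(1/q) <= v[0]*w[0] - v[1:] @ w[1:] + 1e-12
    # Bellman
    Pw = w[0]**p - np.sum(w[1:]**p); s = v + w
    assert Pv**(1/p) + Pw**(1/p) <= (s[0]**p - np.sum(s[1:]**p))**(1/p) + 1e-12
\end{verbatim}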
Further, we use two particular classes of Lorentz-Finsler norms (the
so-called \textit{bimetric} and \textit{Kropina} norms) in order to prove
some new inequalities in Sections~\ref{Section_bimetric} and \ref{Section_Kropina}.
In Section \ref{Aczel section}, we study the following class of inequalities
on $\mathbb{R}^{n+1}$
\begin{equation}
\lbrack v_{0}w_{0}-\bar{g}_{\vec{v}}(\vec{v},\vec{w})]^{2}-[v_{0}^{2}-\left\Vert \vec{v}\right\Vert ^{2}][w_{0}^{2}-\left\Vert \vec{w}\right\Vert ^{2}]\geq 0 \label{Aczel_Finsler}
\end{equation}
(for all $\vec{v}=\left( v_{1},...,v_{n}\right) ,\vec{w}=\left( w_{1},...,w_{n}\right) \in \mathbb{R}^{n},$ $v_{0},w_{0}>0$ such that $v_{0}^{2}-\left\Vert \vec{v}\right\Vert ^{2}\geq 0,$ $w_{0}^{2}-\left\Vert \vec{w}\right\Vert ^{2}\geq 0$), where $\left\Vert \vec{v}\right\Vert =\bar{F}(\vec{v})$ is an arbitrary Finsler norm on $\mathbb{R}^{n}$ and $\bar{g}_{\vec{v}}=\dfrac{1}{2}Hess_{\vec{v}}(\bar{F}^{2})$ is the corresponding Finsler metric tensor - thus generalizing the usual Acz\'{e}l inequality.
Using the positive definite version of the Finslerian Cauchy-Schwarz
inequality, we then find two refinements thereof:
\begin{equation}
\lbrack v_{0}w_{0}-\bar{g}_{\vec{v}}(\vec{v},\vec{w})]^{2}-[v_{0}^{2}-\left\Vert \vec{v}\right\Vert ^{2}][w_{0}^{2}-\left\Vert \vec{w}\right\Vert ^{2}]\geq \dfrac{\left( w^{0}\right) ^{2}-\left\Vert \vec{w}\right\Vert ^{2}}{\left\Vert \vec{w}\right\Vert ^{2}}\left( \left\Vert \vec{v}\right\Vert ^{2}\left\Vert \vec{w}\right\Vert ^{2}-{\bar{g}_{\vec{v}}(\vec{v},\vec{w})}^{2}\right) ; \label{refinement1}
\end{equation}
\begin{equation}
\lbrack v_{0}w_{0}-\bar{g}_{\vec{v}}(\vec{v},\vec{w})]^{2}-[v_{0}^{2}-\left\Vert \vec{v}\right\Vert ^{2}][w_{0}^{2}-\left\Vert \vec{w}\right\Vert ^{2}]\geq \left[ w^{0}\dfrac{\bar{g}_{\vec{v}}(\vec{v},\vec{w})}{\left\Vert \vec{w}\right\Vert }-v^{0}\left\Vert \vec{w}\right\Vert \right] ^{2}.
\label{refinement2}
\end{equation}
\section{Reverse inequalities for Lorentzian bilinear forms\label{sec:classineq}}
Before we study the extended Finslerian case, we briefly recall the
classical inequalities for bilinear forms.
Throughout the paper, we denote by $V$ a real $\left( n+1\right)$-dimensional space. We will use Einstein's summation convention, if not otherwise explicitly stated: whenever in an expression an index $i$ appears both as a superscript and as a subscript, we will automatically understand summation over all possible values of $i$, i.e., instead of $\sum_{i=0}^{n}a_{i}b^{i},$ we will write simply $a_{i}b^{i}$. This is why we will typically number components of vectors with superscripts from $0$ to $n$, rather than as subscripts; this way, the expression of a vector $v\in V$ in the basis $\left\{ e_{i}\right\} _{i=\overline{0,n}}$ will be written as
\begin{equation*}
v=v^{i}e_{i}.
\end{equation*}
Unless otherwise specified, by ``smooth'' we will always mean $\mathcal{C}^{\infty }$ (though usually, differentiability of some finite order is sufficient). We will denote by $i,j,k,...$ indices running from $0$ to $n$ and by Greek letters $\alpha ,\beta ,\gamma ,...$ indices running from $1$ to $n.$
\bigskip
A \textit{Lorentzian scalar product}, \cite{BeemEhrlich, Minguzzi2019, ONeil}, on the $\left( n+1\right) $-dimensional vector space $V$ is a symmetric bilinear form $g:V\times V\rightarrow \mathbb{R}$ of index $n$. If $V$ admits a Lorentzian scalar product, it is called an $(n+1)$-dimensional \textit{Minkowski spacetime}. Choosing an arbitrary basis, we have:
\begin{equation}
g(v,v)=g_{ij}v^{i}v^{j}, \label{Lorentzian bilinear form general}
\end{equation}
where $(g_{ij})$ is a matrix with constant entries. In particular, in a $g$-orthonormal basis of $V,$ the bilinear form $g$ has the expression
\begin{equation}
g(v,v)=\eta _{ij}v^{i}v^{j}=\left( v^{0}\right) ^{2}-\left( v^{1}\right) ^{2}-...-\left( v^{n}\right) ^{2}, \label{Minkowski metric}
\end{equation}
where $\left( \eta _{ij}\right) =diag(1,-1,-1,...,-1).$
A nonzero vector $v\in V$ is called \textit{timelike} if $g(v,v)>0$ and
\textit{causal}, if $g(v,v)\geq 0$. The set of causal vectors consists of
two connected components, corresponding to the choices $v^{0}>0$ and $v^{0}<0$, respectively, in a given (arbitrary) $g$-orthonormal basis.
In the following, we will denote by $C$ one of these two connected components. By conveniently choosing the basis, we can assume that, for all $v\in C,$ we have $v^{0}>0.$ The elements of $C$ are called \textit{future-directed causal vectors}.
Denote
\begin{equation}
F(v):=\sqrt{g(v,v)},~\ \ \forall v\in C. \label{pseudo-norm}
\end{equation}
The function $F:C\rightarrow \mathbb{R}^{+}$ defined by the above relation is sometimes called, by analogy with the Euclidean case, the Lorentzian (pseudo-)\textit{norm} associated to the Lorentzian scalar product $g$.
We will denote by
\begin{equation}
\mathcal{T}=\left\{ v\in C~|~F(v)>0\right\} ,
\label{definition_T}
\end{equation}
the subset of $C$ consisting of timelike vectors. Elements of $\mathcal{T}$
are called \textit{future-directed timelike vectors}. The set $\mathcal{T}$
is always convex.
On a Minkowski spacetime $(V,g),$ the following inequalities hold (see, e.g., \cite[Proposition 30]{ONeil}):
\begin{itemize}
\item \textbf{Reverse Cauchy-Schwarz inequality:}
\begin{equation}
g(v,w)\geq F(v)F(w),~\ \ \ \forall v,w\in C;
\label{classical reverse CS ineq}
\end{equation}
\item \textbf{Reverse triangle inequality:}
\begin{equation*}
F(v+w)\geq F(v)+F(w),~\ \forall v,w\in C.
\end{equation*}
These inequalities are \textit{strict}, in the sense that equality holds if and only if $v$ and $w$ are collinear.
\end{itemize}
\bigskip
\textbf{Particular case (Acz\'{e}l's inequality). }For $V=\mathbb{R}^{n+1}$
equipped with a $g$-orthonormal basis, the reverse Cauchy-Schwarz inequality
\begin{equation}
(v^{0}w^{0}-v^{1}w^{1}-...-v^{n}w^{n})^{2}\geq \lbrack \left( v^{0}\right) ^{2}-\left( v^{1}\right) ^{2}-...-\left( v^{n}\right) ^{2}][\left( w^{0}\right) ^{2}-\left( w^{1}\right) ^{2}-...-\left( w^{n}\right) ^{2}],
\label{Aczel}
\end{equation}
$\forall v,w\in C$, becomes Acz\'{e}l's inequality (\ref{1_1}).
\bigskip
\begin{remark}
\textbf{(Positive definite bilinear forms):} In the case when the
metric $g$ is positive definite, we have $C=V$ and $\mathcal{T}=V\backslash
\{0\}.$ The usual, non-reversed Cauchy-Schwarz inequality
\begin{equation}
g(v,w)\leq F(v)F(w)  \label{non-reversed CS}
\end{equation}
and the usual triangle inequality
\begin{equation}
F(v+w)\leq F(v)+F(w)  \label{non-reversed triangle ineq}
\end{equation}
hold strictly on the entire space $V,$ see for example \cite[Proposition 18]{ONeil}.
\end{remark}
\section{\label{ineq_Finsler norms}Finsler and Lorentz-Finsler functions on
a vector space}
We saw in the previous section that the famous reverse and non-reversed
triangle inequalities, thus in particular Acz\'{e}l's inequality, are
closely connected to the geometric concepts of (pseudo-)Riemannian geometry.
Here we show that further famous inequalities are also related to a
geometric concept, namely to the concept of \textit{(pseudo-)Finsler geometry}.
\subsection{Finsler structures on a vector space}
Let $V$ be a real $\left( n+1\right) $-dimensional space as above.
A \textit{Finsler} norm on $V$ is "almost" a norm in the usual sense; the
difference consists in the fact that it is only \textit{positively
homogeneous}, instead of absolutely homogeneous. The precise definition is
given below.
\begin{definition}
(\cite{Bao}): A Finsler norm on the vector space $V$ is a function
$F:V\rightarrow \lbrack 0,\infty )$ with the following properties:

1) $F$ is smooth on $\mathcal{T}:=V\backslash \{0\}$ and
continuous at $v=0;$

2) $F$ is positively homogeneous of degree 1, i.e., $F(\lambda v)=\lambda
F(v),$ $\forall \lambda >0;$

3) For every $v\in \mathcal{T},$ the fundamental tensor $g_{v}:V\times
V\rightarrow \mathbb{R},$
\begin{equation}
g_{v}(u,w):=\dfrac{1}{2}\dfrac{\partial ^{2}F^{2}}{\partial t\partial s}(v+tu+sw)\Big| _{t=s=0}  \label{g_ definition}
\end{equation}
is positive definite.
\end{definition}
\bigskip
\textbf{Note: }In the Finsler geometry literature, a function $F$ with the
above properties is called a \textit{Minkowski norm}. Yet, in order to avoid
confusion with the \textit{Minkowski metric} $\eta $ as defined above, we
will avoid this terminology here and call $F$ instead a \textit{Finsler norm}.
\bigskip
\textbf{Particular case. }Euclidean spaces are recovered for $F(v)=\sqrt{a_{ij}v^{i}v^{j}},$ where $\left( a_{ij}\right) $ is a constant, symmetric
and positive definite matrix (i.e., $g_{v}=a$ does not depend on $v$). In
this case, the Finsler norm $F$ is a Euclidean one, since it arises from a
scalar product.
\bigskip
Yet, in general, a Finsler norm does not arise from a scalar
product. Nevertheless, there exists a notion similar to a scalar product -
namely, the \textit{fundamental }(or \textit{metric})\textit{\ tensor }$g_{v}$
- but, in general, $g_{v}$ has a nontrivial dependence on the vector $v.$
More precisely, the fundamental tensor of the Finsler space $(V,F)$ is the
mapping $g:V\backslash \{0\}\rightarrow T_{2}^{0}(V),$ $v\mapsto g_{v},$
which associates to each vector $v$ the symmetric and positive definite
bilinear form $g_{v}$ defined above. With respect to an arbitrary basis
$\left\{ e_{i}\right\} _{i=\overline{0,n}}$ of $V,$ the fundamental tensor
$g_{v}$ has the matrix
\begin{equation}
g_{ij}(v)=\dfrac{1}{2}\dfrac{\partial ^{2}F^{2}}{\partial v^{i}\partial v^{j}}(v),  \label{metric tensor}
\end{equation}
that is,
\begin{equation}
g_{v}(u,w)=g_{ij}(v)u^{i}w^{j}.  \label{g_v bilinear}
\end{equation}
Hence, for each $v\in V\backslash \{0\},~g_{v}$ is a scalar product (with
"reference vector" $v$) on $V.$ Moreover, due to the homogeneity of $F,$ a
formula similar to the one in Euclidean geometry holds:
\begin{equation}
F(v)=\sqrt{g_{v}(v,v)}. \label{F in terms of g_v}
\end{equation}
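
Indeed, (\ref{F in terms of g_v}) is a direct consequence of Euler's theorem
for homogeneous functions: as $F^{2}$ is positively 2-homogeneous, its first
derivatives are 1-homogeneous, hence
\begin{equation*}
g_{ij}(v)v^{i}v^{j}=\dfrac{1}{2}v^{j}v^{i}\dfrac{\partial ^{2}F^{2}}{\partial v^{i}\partial v^{j}}(v)=\dfrac{1}{2}v^{j}\dfrac{\partial F^{2}}{\partial v^{j}}(v)=F^{2}(v).
\end{equation*}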
\bigskip
In the following, we denote the derivatives of $F$ with subscripts: $F_{i}:=\dfrac{\partial F}{\partial v^{i}},$ $F_{ij}:=\dfrac{\partial ^{2}F}{\partial
v^{i}\partial v^{j}}$ etc.
At any $v\in V\backslash \{0\},$ the Hessian of $F$:
\begin{equation}
F_{ij}(v)=\dfrac{1}{F(v)}[g_{ij}(v)-F_{i}(v)F_{j}(v)],  \label{angular metric}
\end{equation}
is positive semidefinite, with radical spanned by $v.$ This fact serves to
prove (see \cite{Bao}, pp. 8-9):

1)\ \textit{the fundamental (or Cauchy-Schwarz) inequality:}
\begin{equation}
dF_{v}(w)\leq F(w),~\ \forall v,w\in V\backslash \{0\};
\label{pos def CS Finsler}
\end{equation}

2)\ \textit{the triangle inequality:}
\begin{equation}
F(v+w)\leq F(v)+F(w),~~\forall v,w\in V.
\label{pos def triangle ineq Finsler}
\end{equation}
The above inequalities are strict, i.e., equality only holds when $v$ and $w$
are collinear.
\bigskip
With respect to a given basis, the fundamental inequality takes the form
\begin{equation}
F_{i}(v)w^{i}\leq F(w).  \label{pos_def_CS_FInsler_coords}
\end{equation}
The name of Cauchy-Schwarz inequality for (\ref{pos_def_CS_FInsler_coords})
is justified by the following. Noticing that
\begin{equation}
dF_{v}(w)=F_{i}(v)w^{i}=\dfrac{g_{ij}(v)v^{j}w^{i}}{F(v)}=\dfrac{g_{v}(v,w)}{F(v)},  \label{F_w}
\end{equation}
this inequality can be equivalently written as
\begin{equation}
g_{v}(v,w)\leq F(v)F(w),  \label{pos def CS Finsler detailed}
\end{equation}
i.e., the fundamental inequality (\ref{pos def CS Finsler}) is just a
generalization of the usual Cauchy-Schwarz inequality \eqref{non-reversed CS}.
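
In the Euclidean particular case $F(v)=\sqrt{a_{ij}v^{i}v^{j}},$ the
fundamental tensor is constant, $g_{v}=a,$ and (\ref{pos def CS Finsler detailed})
becomes precisely the classical Cauchy-Schwarz inequality
\begin{equation*}
a(v,w)\leq \sqrt{a(v,v)}\sqrt{a(w,w)}.
\end{equation*}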
\subsection{Lorentz-Finsler structures}
An important feature of Lorentz-Finsler functions is that, typically, they
can only be defined on a conic subset of $V.$
In the following, by a \textit{conic domain} of $V,$ we will mean
an open connected subset $\mathcal{Q}$ of $V\backslash \{0\}$ with the
\textit{conic property}:
\begin{equation*}
\forall v\in \mathcal{Q},\forall \lambda >0:~\lambda v\in \mathcal{Q}.
\end{equation*}
The definition below is slightly more general than the one by Javaloyes and
S\'{a}nchez \cite[p. 21]{Javaloyes2019}:
\begin{definition}
Let $\mathcal{T}\subset V\backslash \{0\}$ be a conic domain. We call a
Lorentz-Finsler norm on $\mathcal{T}$ a smooth function $F:\mathcal{T}\rightarrow (0,\infty )$ such that:

1) $F$ is positively homogeneous of degree 1: $F(\lambda v)=\lambda F(v),$
$\forall \lambda >0,$ $\forall v\in \mathcal{T}.$

2) For every $v\in \mathcal{T},$ the fundamental tensor $g_{v}:V\times
V\rightarrow \mathbb{R},$
\begin{equation*}
g_{v}(u,w):=\dfrac{1}{2}\dfrac{\partial ^{2}F^{2}}{\partial t\partial s}(v+tu+sw)\Big| _{t=s=0}
\end{equation*}
has Lorentzian signature $(+,-,-,...,-)$.
\end{definition}
A Lorentz-Finsler norm can always be extended to $v=0$ by setting $F(0)=0.$
\bigskip
\textbf{Notes:}
1)\ The difference between the above introduced notion and the one of
\textit{Lorentz-Minkowski norm} presented in \cite{Javaloyes2019} is that we
will \textit{not} require $F$ to be extendable by 0 on $\partial \mathcal{T};$
while this requirement is important for applications in physical theories, in
our case it would just unnecessarily limit the range of allowed examples (see,
e.g., Section \ref{Section_Popoviciu}). Actually, as we will see in the next
section, we will even allow $F$ to be degenerate (i.e., $g_{v}$ to have a
nontrivial radical) at some vectors $v\in \mathcal{T}$.
2) Equipping a differentiable manifold $M$ with a smooth family of
Lorentz-Finsler norms $p\mapsto F_{p}$, one on each tangent space $T_{p}M,$
$p\in M$, and demanding that $F|_{\partial \mathcal{T}}=0,$ makes the pair
$(M,L=F^{2})$ a Finsler spacetime, \cite{cosmo-Berwald,Javaloyes2019}.
Finsler spacetimes have gained attention in the
application to gravitational physics \cite{Hohmann:2018rpp,Pfeifer:2019wus},
as well as in the mathematical community, as generalizations of Lorentzian
manifolds \cite{Bernal:2020bul}.
3)\ If, in the above definition, one replaces the condition of Lorentzian
signature with positive definiteness, one obtains the notion of (positive
definite) \textit{conic Finsler metric} \cite{Javaloyes2019}. Thus, a usual
Finsler metric is a conic Finsler metric with $\mathcal{T}=V\backslash \{0\}.$
\bigskip
For Lorentz-Finsler norms $F$, the matrix $g_{ij}(v)$ is defined by the same
formula (\ref{metric tensor}) (but this time, it has Lorentzian signature)
and the relation $F(v)=\sqrt{g_{v}(v,v)}$ still holds. The Hessian $F_{ij}$
is negative semidefinite with radical spanned by $v,$ i.e.,
\begin{equation}
F_{ij}(v)w^{i}w^{j}\leq 0,  \label{Hess(F)_ineq}
\end{equation}
for all $v,w\in \mathcal{T},$ where equality implies that $w$ is collinear
to $v$. Conversely, if the radical of $(F_{ij}(v))$ is 1-dimensional, then
$g_{v}$ has $(+,-,-,...,-)$ signature (see\footnote{The proof of (\ref{Hess(F)_ineq}) inside the open conic set $\mathcal{T}$ in
the cited paper does not require $F$ to be extendable on $\partial \mathcal{T},$
hence the result holds with no modification in our case.} \cite{Javaloyes2019}, Proposition 4.8 and, respectively, Lemma 4.7).
\bigskip
\textbf{Examples of Lorentz-Finsler norms. }Here we just briefly list some
examples $F:\mathcal{T}\rightarrow \mathbb{R}$ (defined on conic subsets
$\mathcal{T}\subset \mathbb{R}^{n+1}$), to be examined in the following
sections.

1) The $(n+1)$-dimensional \textit{Minkowski metric}: $F(v)=\sqrt{\eta _{ij}v^{i}v^{j}}.$

2)\ \textit{The Kropina spacetime metric}: $F(v)=\dfrac{\eta _{ij}v^{i}v^{j}}{v^{0}}$.

3) \textit{The }$p$\textit{-pseudo-norm}: $F(v)=\left[ \left(
v^{0}\right) ^{p}-\left( v^{1}\right) ^{p}-...-\left( v^{n}\right) ^{p}\right] ^{\tfrac{1}{p}}.$

4)\ The $(n+1)$-dimensional Berwald-Mo\'{o}r metric:
$F(v)=(v^{0}v^{1}...v^{n})^{\tfrac{1}{n+1}}.$

5)\ Bimetric spaces: $F(v)=[(\eta _{ij}v^{i}v^{j})(h_{kl}v^{k}v^{l})]^{\tfrac{1}{4}}$, where $h$ is a bilinear form of Lorentzian signature.

The latter three examples belong to a wider class of Lorentz-Finsler
functions $F$, called $m$\textit{-th root metrics}, expressed as the $m$-th
root of some polynomial of degree $m>2$ in $v^{i}$.
\subsection{The degenerate case\label{Section degenerate case}}
In previous works on the topic, such as \cite{Aazami:2014ata,Javaloyes2019,Minguzzi2014,Minguzzi:2013sxa}, the Finslerian
generalizations of the reverse Cauchy-Schwarz inequality and of the reverse
triangle inequality were proven under the hypothesis that the set $B(1):=\mathcal{T}\cap F^{-1}([1,\infty ))$ is strictly convex (a sufficient
condition thereof is that $F$ vanishes on $\partial \mathcal{T}$ - which, as
we mentioned above, is not assumed here). These inequalities are strict,
i.e., equality happens if and only if $v$ and $w$ are collinear. Also, in
\cite{Javaloyes2019}, it is proven that, if $B(1)$ is just (non-strictly)
convex, then the inequalities hold non-strictly.
\bigskip
In the following, we will prove that these inequalities still hold if we
relax both the nondegeneracy condition on $F$ and the convexity requirement;
while requiring that the open cone $\mathcal{T}$ is (non-strictly) convex
does not affect the strictness of the inequalities, allowing $F\ $to be
degenerate does indeed affect it. With this goal in mind, let us first prove
a lemma, which extends (\ref{Hess(F)_ineq}) to degenerate Finsler structures.
\begin{lemma}
\label{signature_Hess(F)} Consider a smooth, 1-homogeneous function $F:\mathcal{T}\rightarrow \mathbb{R}$ defined on an arbitrary conic domain
$\mathcal{T}\subset V\backslash \{0\}$ and denote, with respect to an
arbitrary basis:
\begin{equation*}
g_{ij}(v)=\dfrac{1}{2}\dfrac{\partial ^{2}F^{2}}{\partial v^{i}\partial v^{j}}(v).
\end{equation*}
Then:
(i) If, at some $v\in \mathcal{T},$ the matrix $g_{ij}(v)$ has only one
positive eigenvalue, then the Hessian $(F_{ij}(v))$ is negative semidefinite.
(ii) If, at some $v\in \mathcal{T},$ the matrix $g_{ij}(v)$ is positive
semidefinite, then the Hessian $(F_{ij}(v))$ is positive semidefinite.
\end{lemma}
\begin{proof}
\textit{(i) }Fix $v\in \mathcal{T}$. Then, using
$g_{ij}(v)=F(v)F_{ij}(v)+F_{i}(v)F_{j}(v),$ we find, for any $u\in V$:
\begin{equation}
F(v)F_{ij}(v)u^{i}u^{j}=g_{ij}(v)u^{i}u^{j}-(F_{i}(v)u^{i})^{2}.
\label{ineq_F_g}
\end{equation}
As the signature of $g_{v}$ does not depend on the choice of the basis
$\left\{ e_{i}\right\} _{i=\overline{0,n}}$, we can freely choose this basis.
For instance, we can choose an orthogonal basis for $g_{v},$ with $e_{0}=v.$
Since $g_{v}(e_{0},e_{0})=g_{ij}(v)v^{i}v^{j}=F^{2}(v)>0,$ it follows from
the hypothesis that all the other diagonal entries $g_{v}(e_{\alpha},e_{\alpha })$ are
nonpositive. Setting $u=e_{\alpha }$ for $\alpha \not=0,$ the orthogonality
condition is written, taking into account (\ref{F_w}), as $F_{i}(v)u^{i}=0;$
therefore, $F_{ij}(v)u^{i}u^{j}=\dfrac{1}{F(v)}g_{ij}(v)u^{i}u^{j}\leq 0.$
On the other hand, $F_{ij}(v)e_{0}^{i}e_{0}^{j}=F_{ij}(v)v^{i}v^{j}=0$ and,
by the polarized form of (\ref{ineq_F_g}) together with the orthogonality of
the basis, the bilinear form $F_{ij}(v)$ is diagonal in the basis $\left\{
e_{i}\right\}$; as all its diagonal entries are nonpositive, $F_{ij}(v)$ is
negative semidefinite for any $v\in \mathcal{T}$.

\textit{(ii) }is proven similarly, taking into account that, this time,
$F_{ij}(v)u^{i}u^{j}=\dfrac{1}{F(v)}g_{ij}(v)u^{i}u^{j}\geq 0.$
\end{proof}
From this lemma, we obtain two statements:
\begin{theorem}
\label{prop_degenerate Finsler case}\textbf{(The degenerate-Lorentzian case)}
Let $\mathcal{T}\subset V\backslash \{0\}$ be a convex conic domain and $F:\mathcal{T}\rightarrow (0,\infty )$ a smooth positively 1-homogeneous
function such that the Hessian $g_{v}$ of $\frac{1}{2}F^{2}$ has only one positive
eigenvalue for all $v\in \mathcal{T}$. Then, there hold:

(i) The fundamental (or reverse Cauchy-Schwarz) inequality:
\begin{equation}
dF_{v}(w)\geq F(w),~\ \forall v,w\in \mathcal{T};  \label{reverse_CS_coord_free}
\end{equation}

(ii) The reverse triangle inequality:
\begin{equation}
F(v+w)\geq F(v)+F(w),~\ \forall v,w\in \mathcal{T}.  \label{reverse triangle}
\end{equation}
If the fundamental tensor $g_{v}$ is everywhere nondegenerate (i.e.,
Lorentzian), the above inequalities are strict.
\end{theorem}
\begin{proof}
\textit{(i)} The technique follows roughly the same steps as in the positive
definite case (see, e.g., \cite{Bao}, pp. 8-9). Consider two arbitrary
vectors $u,v\in \mathcal{T}.$ Since $\mathcal{T}$ is convex, it follows that
$\dfrac{u+v}{2}\in \mathcal{T};$ but, as it is also conic, we find $u+v\in
\mathcal{T},$ which means that it makes sense to speak about $F(u+v)$. Now,
perform a Taylor expansion around $v$, with the remainder in Lagrange form:
\begin{equation}
F(u+v)=F(v)+F_{i}(v)u^{i}+\dfrac{1}{2}F_{ij}(v+\varepsilon u)u^{i}u^{j},
\label{Taylor1}
\end{equation}
for some $\varepsilon \in (0,1)$ (note that $v+\varepsilon u\in \mathcal{T},$
again by convexity and conicity).
From the above Lemma, we obtain that $F_{ij}$ is negative semidefinite, that
is, $F_{ij}(v+\varepsilon u)u^{i}u^{j}\leq 0$ and therefore
\begin{equation}
F(u+v)\leq F(v)+F_{i}(v)u^{i}. \label{aux}
\end{equation}
Then, denoting $w:=u+v,$ the above becomes $F(w)\leq
F(v)+F_{i}(v)(w^{i}-v^{i}),$ which, using the 1-homogeneity of $F,$ leads
to: $F(w)\leq F_{i}(v)w^{i},$ which is the coordinate form of \textit{(i).}
\textit{(ii)} The reverse triangle inequality now follows similarly to the
nondegenerate case, see \cite{Minguzzi2014}. Choose any $v,w\in \mathcal{T}$
and set $\xi :=v+w.$ Since $\mathcal{T}$ is conic and convex, we have $\xi
\in \mathcal{T}$ and, by 1-homogeneity,
\begin{equation*}
F(\xi )=F_{i}(\xi )\xi ^{i}=F_{i}(\xi )(v^{i}+w^{i})=F_{i}(\xi
)v^{i}+F_{i}(\xi )w^{i}.
\end{equation*}
Using the fundamental inequality twice in the right hand side, we get
\begin{equation*}
F(v+w)=F(\xi )\geq F(v)+F(w).
\end{equation*}
\textit{Strictness:\ }If $g_{v}$ is nondegenerate, then the equality
F_{ij}(v+\varepsilon u)u^{i}u^{j}=0$ can only happen when $v$ and $u$ are
collinear; in turn, this means that in (\ref{aux}), equality also happens
only if $u$ and $v$ are collinear. This leads to the strictness of
(\ref{reverse_CS_coord_free}) and (\ref{reverse triangle}).
\end{proof}
\bigskip
Using (\ref{F_w}), the fundamental inequality can be equivalently written as
\begin{equation}
F_{i}(v)w^{i}\geq F(w), \label{CS_coords}
\end{equation}
or as
\begin{equation*}
g_{v}(v,w)\geq F(v)F(w).
\end{equation*}
\bigskip
\textbf{Example. }To convince ourselves that the reverse Cauchy-Schwarz
inequality becomes non-strict if $F$ is degenerate, consider
\begin{equation*}
F:\mathcal{T}\rightarrow \mathbb{R},~~F(v)=\sqrt{\left( v^{0}\right)
^{2}-\left( v^{1}\right) ^{2}-...-\left( v^{k}\right) ^{2}};
\end{equation*}
here, $n>3,~k\leq n-2$ and the cone $\mathcal{T}\subset \mathbb{R}^{n+1}$ is
the Cartesian product $\mathcal{T}=\mathcal{T}_{k}\times \mathbb{R}^{n-k},$
where $\mathcal{T}_{k}=\{u\in \mathbb{R}^{k+1}~|~\left( u^{0}\right)
^{2}-\left( u^{1}\right) ^{2}-...-\left( u^{k}\right) ^{2}>0,~u^{0}>0\}.$
Since $\mathcal{T}_{k}$ is a convex cone in $\mathbb{R}^{k+1}$, it follows
that $\mathcal{T}$ is also convex. The corresponding metric tensor is
$g_{v}=diag(1,-1,...,-1,0,...,0),$ where the number of $-1$ entries is $k.$
Picking $v=(1,0,...,0,1,0)$ and $w=(1,0,...,0,1),$ we get:
$g_{v}(v,w)=1,$ $F(v)=1$ and $F(w)=1,$ which means that $g_{v}(v,w)=F(v)F(w),$
while, obviously, $v$ and $w$ are not collinear.

In the maximally degenerate case, when $g_{v}$ has everywhere signature
$(+,0,0,...,0),$ the matrix $(F_{ij}(v))$ is the zero matrix, hence
(\ref{reverse_CS_coord_free}) and (\ref{reverse triangle}) become equalities for
\textit{all }$v,w\in \mathcal{T}.$
\bigskip
Similarly, the following holds:

\begin{proposition}
\label{positive semidefinite case}\textbf{(The positive semidefinite case):}
Let $\mathcal{T}\subset V\backslash \{0\}$ be a convex conic domain and $F:\mathcal{T}\rightarrow (0,\infty )$ a smooth positively 1-homogeneous
function such that the Hessian $g_{v}$ of $\frac{1}{2}F^{2}$ is positive semidefinite
for all $v\in \mathcal{T}$. Then, the Cauchy-Schwarz inequality (\ref{pos
def CS Finsler}) and the triangle inequality (\ref{pos def triangle ineq
Finsler}) still hold, but they are generally non-strict.
\end{proposition}
The proof is identical to the one of Theorem \ref{prop_degenerate
Finsler case}, with the only difference that, in (\ref{Taylor1}), the matrix
$F_{ij}(v+\varepsilon u)$ is positive semidefinite, which leads to the
opposite inequality $F(u+v)\geq F(v)+F_{i}(v)u^{i},$ hence to the usual
(non-reversed) Cauchy-Schwarz and triangle inequalities.
\bigskip
Here are two refinements of the reverse triangle inequality, holding for
(possibly degenerate) Lorentz-Finsler functions.
\begin{theorem}
If the smooth, positively 1-homogeneous function $F:\mathcal{T}\rightarrow (0,\infty )$ defined on a convex conic domain $\mathcal{T}\subset
V\backslash \{0\}$ has the property that $g_{v}=\dfrac{1}{2}Hess(F^{2})$
has everywhere a single positive eigenvalue, then, for all $v,w\in \mathcal{T}$ and for any $0<a\leq b,$ we have
\begin{equation}
a\left[ F(v+w)-F(v)-F(w)\right] \leq F(av+bw)-aF(v)-bF(w)\leq
b[F(v+w)-F(v)-F(w)].  \label{reverse triangle_1}
\end{equation}
Moreover, if $Hess(F^{2})$ is nondegenerate, i.e., $F$ is a Lorentz-Finsler
norm, then the inequalities are strict.
\end{theorem}
\begin{proof}
The first inequality is equivalent (after canceling out the $-aF(v)$ terms
and grouping the $F(w)$ ones into the left hand side) to:
\begin{equation*}
aF(v+w)+(b-a)F(w)\leq F(av+bw).
\end{equation*}
But, since $F$ is positively homogeneous and $b-a\geq 0,$ we get
$aF(v+w)=F(av+aw)$ and $(b-a)F(w)=F(bw-aw)$. Then, the reverse triangle
inequality yields:
\begin{equation*}
aF(v+w)+(b-a)F(w)=F(av+aw)+F(bw-aw)\leq F(av+bw),
\end{equation*}
as required.
The second inequality is proven in a completely similar way to be equivalent
to: $F(av+bw)+F(bv-av)\leq F(bv+bw),$ which, again, holds by virtue of the
reverse triangle inequality.
\end{proof}
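
As a numerical illustration, take the two-dimensional Minkowski metric,
$v=(2,1),$ $w=(2,-1)$ and $a=1,$ $b=2$: then $F(v)=F(w)=\sqrt{3},$
$F(v+w)=4$ and $F(av+bw)=F((6,-1))=\sqrt{35},$ so that (\ref{reverse triangle_1}) reads
\begin{equation*}
4-2\sqrt{3}\approx 0.54\leq \sqrt{35}-3\sqrt{3}\approx 0.72\leq 2(4-2\sqrt{3})\approx 1.07.
\end{equation*}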
\begin{proposition}
With the assumptions of the above Theorem, we have:
\begin{equation}
F(v)+F(w)\leq 2\int_{0}^{1}F(tv+(1-t)w)dt\leq F\left( v+w\right) ,
\label{reverse triangle_3}
\end{equation}
for all $v,w\in \mathcal{T}\subset V\backslash \{0\}$.
\end{proposition}
\begin{proof}
We use the same idea as in \cite{Min-Pal}. As we have seen above, the
Hessian $\left( F_{ij}\right) $ of $F$ is negative semidefinite, i.e., the
function $F$ is concave. Therefore:
\begin{equation*}
F(tv+(1-t)w)\geq tF(v)+(1-t)F(w),
\end{equation*}
for every $v,w\in \mathcal{T}$, $t\in \lbrack 0,1]$. Integrating with
respect to $t,$ from $0$ to $1$, we obtain:
\begin{equation*}
\frac{F(v)+F(w)}{2}\leq \int_{0}^{1}F(tv+(1-t)w)dt,
\end{equation*}
i.e., the first inequality (\ref{reverse triangle_3}). Further, using the
reverse triangle inequality, we have $F(v+w)=F{\large (}tv+(1-t)w+(1-t)v+tw{\large )}\geq F(tv+(1-t)w)+F((1-t)v+tw)$. Integrating from $0$ to $1$ and
noticing that $\int_{0}^{1}F((1-t)v+tw)dt=\int_{0}^{1}F(tv+(1-t)w)dt$, we
deduce: $F(v+w)\geq
\int_{0}^{1}F(tv+(1-t)w)dt+\int_{0}^{1}F((1-t)v+tw)dt=2\int_{0}^{1}F(tv+(1-t)w)dt$, which is just the second inequality (\ref{reverse triangle_3}).
\end{proof}
The two above results trivially hold when one of the vectors $v,w$ is zero.
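
For instance, for the two-dimensional Minkowski metric with $v=(2,1)$ and
$w=(2,-1)$, one computes $\int_{0}^{1}F(tv+(1-t)w)dt=\int_{0}^{1}\sqrt{4-(2t-1)^{2}}dt=\frac{\sqrt{3}}{2}+\frac{\pi }{3}\approx 1.91,$ and
(\ref{reverse triangle_3}) reads $2\sqrt{3}\approx 3.46\leq 3.83\leq 4.$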
\section{Lorentz-Finsler norms and their inequalities}
\label{sec:ex}
The set of Lorentz-Finsler norms is rich in interesting examples whose
reverse Cauchy-Schwarz or reverse triangle inequalities immediately yield
famous inequalities from the literature and open a pathway to revealing
further interesting inequalities in a simple way.
We already pointed out in Section \ref{sec:classineq} that, for the simplest
example of Lorentz-Finsler structure on $\mathbb{R}^{n+1}$, the Minkowski
metric $F(v)=\sqrt{\eta _{ij}v^{i}v^{j}}$, for which $\mathcal{T}=\left\{
v\in V~|~\eta _{ij}v^{i}v^{j}>0,v^{0}>0\right\} $, the reverse
Cauchy-Schwarz inequality led directly to Acz\'{e}l's inequality (\ref{1_1}).
In the following, we explore some nontrivial Finslerian cases.
\subsection{Popoviciu's inequality\label{Section_Popoviciu}}
\begin{proposition}
Let $\mathcal{T}\subset \mathbb{R}^{n+1}$ be the conic domain:
\begin{equation*}
\mathcal{T}:=\{v\in \mathbb{R}^{n+1}~|~v^{0},v^{1},...,v^{n}>0,\left(
v^{0}\right) ^{p}-\left( v^{1}\right) ^{p}-...-\left( v^{n}\right) ^{p}>0\}.
\end{equation*}
Moreover, let $F:\mathcal{T}\rightarrow \mathbb{R}^{+}$ be the Finsler
structure defined by
\begin{equation}
F(v)=H(v)^{\frac{1}{p}},\quad H(v)=\left( v^{0}\right) ^{p}-\left(
v^{1}\right) ^{p}-...-\left( v^{n}\right) ^{p}\,,  \label{p-pseudo-norm}
\end{equation}
where $p>1$. Then:

(i) the fundamental inequality $F_{i}(v)w^{i}\geq F(w)$ is Popoviciu's
inequality:
\begin{equation}
\eta _{ij}a^{i}b^{j}\geq \left[ (a^{0})^{q}-\left( a^{1}\right)
^{q}-...-\left( a^{n}\right) ^{q}\right] ^{\frac{1}{q}}\left[ \left(
b^{0}\right) ^{p}-\left( b^{1}\right) ^{p}-...-\left( b^{n}\right) ^{p}\right] ^{\frac{1}{p}},
\label{Popoviciu inequality}
\end{equation}
for all $a,b$ with positive entries such that $(a^{0})^{q}-\left( a^{1}\right)
^{q}-...-\left( a^{n}\right) ^{q}>0$ and $b\in \mathcal{T}$,
where $\dfrac{1}{p}+\dfrac{1}{q}=1$;

(ii) the reverse triangle inequality of $F$ is Bellman's inequality:
\begin{equation}
\left[ (v^{0})^{p}-(v^{1})^{p}-...-(v^{n})^{p}\right] ^{1/p}+\left[
(w^{0})^{p}-(w^{1})^{p}-...-(w^{n})^{p}\right] ^{1/p}\leq \left[
(v^{0}+w^{0})^{p}-(v^{1}+w^{1})^{p}-...-(v^{n}+w^{n})^{p}\right] ^{1/p},
\label{Bellmann}
\end{equation}
for all $v,w\in \mathcal{T}$.
\end{proposition}
\begin{proof}
\textit{(i) }To see that the fundamental inequality holds, we notice that
the Hessian of $H$ is
\begin{equation*}
H_{ij}(v)=p(p-1)diag\left(
(v^{0})^{p-2},-(v^{1})^{p-2},...,-(v^{n})^{p-2}\right)
\end{equation*}
and thus has Lorentzian signature on $\mathcal{T}$. Moreover, $\mathcal{T}$ is convex
(but not strictly convex), as it can be identified with the (strict) epigraph of the
convex function $\tilde{H}(v^{1},...,v^{n})=\left[ (v^{1})^{p}+...+\left(
v^{n}\right) ^{p}\right] ^{1/p}$, defined for all $v^{\alpha }>0$.

By Proposition~\ref{prop:signHg} (see Appendix), we obtain that $g_{ij}(v)$
is Lorentzian for all $v\in \mathcal{T}$ and hence, the fundamental
inequality (\ref{CS_coords}) holds.

By a straightforward calculation, we get:
\begin{equation*}
F_{i}(v)=F(v)^{1-p}\eta _{ij}(v^{j})^{p-1};
\end{equation*}
therefore, the fundamental inequality $F_{i}(v)w^{i}\geq F(w)$ becomes:
\begin{equation*}
\eta _{ij}(v^{j})^{p-1}w^{i}\geq F(v)^{p-1}F(w)=H(v)^{\frac{p-1}{p}}H(w)^{\frac{1}{p}}.
\end{equation*}
Rewriting the above inequality with the notation
\begin{equation*}
q:=\dfrac{p}{p-1},\quad a^{i}:=\left( v^{i}\right) ^{p-1},\quad
b^{j}:=w^{j}
\end{equation*}
(in particular, $\dfrac{1}{p}+\dfrac{1}{q}=1$), yields Popoviciu's
inequality.

Statement \textit{(ii) }follows directly from the reverse triangle
inequality for $F$.
\end{proof}
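
For $p=q=2,$ squaring (\ref{Popoviciu inequality}) recovers Acz\'{e}l's
inequality (\ref{Aczel}). As a numerical illustration with $p=3,$ $q=\frac{3}{2},$ $n=1$ and $a=b=(2,1)$: the left hand side of (\ref{Popoviciu inequality})
is $4-1=3,$ while the right hand side is $(2^{3/2}-1)^{2/3}(2^{3}-1)^{1/3}\approx 2.86.$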
\begin{remark}
Similarly, H\"{o}lder's inequality
\begin{equation*}
\delta _{ij}a^{i}b^{j}\leq \left[ \left( a^{0}\right) ^{q}+...+\left(
a^{n}\right) ^{q}\right] ^{\frac{1}{q}}\left[ \left( b^{0}\right)
^{p}+...+\left( b^{n}\right) ^{p}\right] ^{\frac{1}{p}},\ \ \ \ \ \forall
a^{i},b^{i}>0,i=\overline{0,n},
\end{equation*
where $\delta _{ij}$ is the Kronecker symbol, can be treated as fundamental
inequality of the Finsler norm $F(v)=[\left( v^{0}\right) ^{p}+...+\left(
v^{n}\right) ^{p}]^{\tfrac{1}{p}}$ - which is positive definite for $v^{i}>0,
$ $i=\overline{0,n}$ and Minkowski's inequalit
\begin{equation*}
\left[ \left( a^{0}+b^{0}\right) ^{p}+...+\left( a^{n}+b^{n}\right) ^{p
\right] ^{\frac{1}{p}}\leq \left[ \left( a^{0}\right) ^{p}+...+\left(
a^{n}\right) ^{p}\right] ^{\frac{1}{p}}+\left[ \left( b^{0}\right)
^{p}+...+\left( b^{n}\right) ^{p}\right] ^{\frac{1}{p}},
\end{equation*
$\forall a^{i},b^{i}>0,i=\overline{0,n},p>1$ is just the corresoponding
triangle inequality.
\end{remark}
\subsection{The arithmetic-geometric mean inequality\label{Section_BM}}
\begin{proposition}
Let $\mathcal{T}\subset \mathbb{R}^{n+1}$ be the convex conic domain
\begin{equation*}
\mathcal{T}:=\left\{ v\in \mathbb{R}^{n+1}~|~v^{0},v^{1},...,v^{n}>0\right\}
\subset \mathbb{R}^{n+1}\,.
\end{equation*}
Moreover, let $F:\mathcal{T}\rightarrow \mathbb{R}^{+}$ be the Berwald-Mo\'{o}r
Finsler structure defined by
\begin{equation*}
F(v)=(v^{0}v^{1}...v^{n})^{\tfrac{1}{n+1}}\,.
\end{equation*}
Then, the fundamental inequality $F_{i}(v)w^{i}\geq F(w)$ is the
arithmetic-geometric mean inequality:
\begin{equation}
\dfrac{a_{0}+...+a_{n}}{n+1}\geq \left( a_{0}a_{1}...a_{n}\right) ^{\tfrac{1}{n+1}},~\ \ \forall a_{i}\in \mathbb{R}_{+}^{\ast }\,.
\label{arithmetic-geometric mean}
\end{equation}
\end{proposition}
\begin{proof}
The $(n+1)$-dimensional Berwald-Mo\'{o}r metric is known, \cite{Asanov1980}, to
be of Lorentzian signature. Yet, for the sake of completeness, we sketch a
proof of this fact below. To this aim, we will use Proposition \ref{prop:signHg}.

The Hessian of the $(n+1)$-th power $H(v):=v^{0}v^{1}...v^{n}$ of $F$ is:
\begin{equation}
H_{ij}(v)=\left\{
\begin{array}{cl}
0, & if~\ i=j, \\
\dfrac{H(v)}{v^{i}v^{j}}, & if~\ i\not=j.
\end{array}
\right.  \label{BM_H}
\end{equation}
On $\mathcal{T},$ the matrix $(H_{ij}(v))$ has Lorentzian signature. To see
this, fix an arbitrary $v\in \mathcal{T}$ and introduce the
vectors $e_{0}:=v$ and $\{e_{\alpha }\},$ $\alpha =\overline{1,n},$ as
follows:
\begin{equation*}
e_{\alpha }^{i}=A_{\alpha }^{i}v^{i},~\ i=\overline{0,n}
\end{equation*}
(where no summation is understood over $i$), such that:
\begin{equation*}
\sum_{i=0}^{n}A_{\alpha }^{i}=A_{\alpha }^{0}+\sum_{\beta =1}^{n}A_{\alpha
}^{\beta }=0,~\ \ \ \ \ \det (A_{\alpha }^{\beta })_{\alpha ,\beta =\overline{1,n}}\not=0.
\end{equation*}
The vectors $\left\{ e_{0},e_{\alpha }\right\} $ are linearly independent,
as the matrix with the columns $e_{0},e_{\alpha }$ has the determinant
$(n+1)H(v)\det (A_{\alpha }^{\beta })\not=0$. Moreover, $e_{\alpha }$ span the
$(H_{ij}(v))$-orthogonal complement of $e_{0}=v$, since, using (\ref{BM_H}),
we find:
\begin{equation*}
H_{ij}(v)v^{i}e_{\alpha }^{j}=0,~\ \ \forall \alpha =1,...,n.
\end{equation*}
Then, on one hand, we have
\begin{equation*}
H_{ij}(v)v^{i}v^{j}=n(n+1)H(v)>0
\end{equation*}
and, on the other hand, $H_{ij}(v)$ is negative definite on $Span\{e_{\alpha
}\}$, since:
\begin{equation*}
H_{ij}(v)e_{\alpha }^{i}e_{\alpha }^{j}=H(v)\sum_{i\neq j}A_{\alpha
}^{i}A_{\alpha }^{j}=H(v)\left( \left(
\sum_{i=0}^{n}A_{\alpha }^{i}\right) ^{2}-\sum_{i=0}^{n}(A_{\alpha
}^{i})^{2}\right) =-H(v)\sum_{i=0}^{n}(A_{\alpha }^{i})^{2}<0
\end{equation*}
(the same computation, applied to an arbitrary nonzero combination
$u^{i}=(c^{\alpha }A_{\alpha }^{i})v^{i}$, gives $H_{ij}(v)u^{i}u^{j}<0$).
Consequently, $H_{ij}(v)$ has Lorentzian signature. Then, by Proposition
\ref{prop:signHg}, also $g_{ij}(v)$ has Lorentzian signature on $\mathcal{T}$
and the fundamental inequality $F_{i}(v)w^{i}\geq F(w)$ holds $\forall
v,w\in \mathcal{T}$. We easily find:
\begin{equation*}
F_{i}(v)=\frac{1}{n+1}H(v)^{\frac{1}{n+1}-1}\frac{H(v)}{v^{i}}=\frac{F(v)}{n+1}\frac{1}{v^{i}}
\end{equation*}
and thus
\begin{equation*}
\frac{F(v)}{n+1}\sum_{i=0}^{n}\frac{w^{i}}{v^{i}}\geq F(w),
\end{equation*}
or equivalently,
\begin{equation*}
\frac{1}{n+1}\sum_{i=0}^{n}\frac{w^{i}}{v^{i}}\geq \frac{F(w)}{F(v)}=\left(
\dfrac{w^{0}}{v^{0}}\dfrac{w^{1}}{v^{1}}...\dfrac{w^{n}}{v^{n}}\right) ^{\tfrac{1}{n+1}}\,.
\end{equation*}
Setting $a_{i}:=\dfrac{w^{i}}{v^{i}},\ i=\overline{0,n}$, the numbers $a_{i}$
take all possible values in $\mathbb{R}_{+}^{\ast }$ and the fundamental
inequality becomes the arithmetic-geometric mean inequality
\eqref{arithmetic-geometric mean}.
\end{proof}
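
Note that it is actually enough to evaluate the fundamental inequality at
$v=(1,1,...,1)$: there, $F(v)=1$ and $F_{i}(v)=\frac{1}{n+1},$ so (\ref{CS_coords})
reads directly
\begin{equation*}
\frac{w^{0}+...+w^{n}}{n+1}\geq (w^{0}w^{1}...w^{n})^{\tfrac{1}{n+1}},~\ \ \forall w\in \mathcal{T}.
\end{equation*}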
\subsection{Weighted arithmetic-geometric mean inequality\label{Section_weighted}}
\begin{proposition}
Let $\mathcal{T}\subset \mathbb{R}^{n+1}$ be the convex conic domain
\begin{equation*}
\mathcal{T}:=\left\{ v\in \mathbb{R}^{n+1}~|~v^{0},v^{1},...,v^{n}>0\right\}
\subset \mathbb{R}^{n+1}\,.
\end{equation*}
Then, the function $F:\mathcal{T}\rightarrow \mathbb{R}^{+},$ defined by
\begin{equation*}
F(v)=(v^{0})^{a_{0}}(v^{1})^{a_{1}}...(v^{n})^{a_{n}},\quad \sum_{i=0}^{n}a_{i}=1,\quad a_{i}>0\,,
\end{equation*}
is a Lorentz-Finsler norm whose fundamental inequality is the weighted
arithmetic-geometric mean inequality
\begin{equation*}
\sum_{i=0}^{n}a_{i}v^{i}\geq
(v^{0})^{a_{0}}(v^{1})^{a_{1}}...(v^{n})^{a_{n}},~\ \ \ \ \ \ \ \ \ \ \
\forall v^{i}\in \mathbb{R}_{+}^{\ast }\,.
\end{equation*}
\end{proposition}
\begin{proof}
Fix $u\in \mathcal{T}.$ The components of the fundamental tensor are (no sum
convention is employed in the following expression)
\begin{equation*}
g_{ij}(u)=\frac{1}{2}\frac{\partial ^{2}F^{2}}{\partial u^{i}\partial u^{j}}=F(u)^{2}\left( \frac{2a_{i}a_{j}}{u^{i}u^{j}}-\frac{a_{i}\delta _{ij}}{(u^{i})^{2}}\right) \,.
\end{equation*}
Its signature can be determined similarly to the previous case. Introducing
the vectors $e_{0}=u$ and $e_{\alpha },\alpha =\overline{1,n},$ with
components
\begin{equation*}
e_{\alpha }^{i}=B_{\alpha }^{i}u^{i}
\end{equation*}
such that
\begin{equation}
\sum_{i=0}^{n}a_{i}B_{\alpha }^{i}=a_{0}B_{\alpha }^{0}+\sum_{\beta
=1}^{n}a_{\beta }B_{\alpha }^{\beta }=0,  \label{B}
\end{equation}
and $\det (B_{\alpha }^{\beta })\not=0,$ the vectors $\left\{
e_{0},e_{\alpha }\right\} $ are linearly independent and
\begin{equation*}
g_{ij}(u)e_{0}^{i}e_{\alpha }^{j}=0\,.
\end{equation*}
Thus, $\{e_{\alpha }\}_{\alpha =\overline{1,n}}$ span the orthogonal
complement of $e_{0}=u$. Moreover, $g_{ij}(u)$ is negative definite on
$Span\{e_{\alpha }\}$, since, by (\ref{B}), we have:
\begin{equation*}
g_{ij}(u)e_{\alpha }^{i}e_{\alpha }^{j}=F(u)^{2}\left( 2\left(
\sum_{i=0}^{n}a_{i}B_{\alpha }^{i}\right) ^{2}-\sum_{i=0}^{n}a_{i}(B_{\alpha
}^{i})^{2}\right) =-F(u)^{2}\sum_{i=0}^{n}a_{i}(B_{\alpha }^{i})^{2}<0;
\end{equation*}
taking into account that $g_{ij}(u)e_{0}^{i}e_{0}^{j}=F(u)^{2}>0,$ we obtain
that $g_{ij}(u)$ is Lorentzian. Calculating
\begin{equation*}
F_{i}(u)=a_{i}\frac{F(u)}{u^{i}},
\end{equation*}
the fundamental inequality \eqref{CS_coords} becomes
\begin{equation*}
F(u)\sum_{i=0}^{n}\frac{a_{i}w^{i}}{u^{i}}\geq F(w)\,.
\end{equation*}
Introducing $v^{i}:=\frac{w^{i}}{u^{i}}\in \mathbb{R}_{+}^{\ast },$ it can be
rewritten as: $\sum\limits_{i=0}^{n}a_{i}v^{i}\geq
(v^{0})^{a_{0}}(v^{1})^{a_{1}}...(v^{n})^{a_{n}}\,.$ The generalization
(\ref{weighted_a}) then follows immediately.
\end{proof}
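
In particular, for $n=1,$ $a_{0}=\frac{1}{p},$ $a_{1}=\frac{1}{q}$ with
$\frac{1}{p}+\frac{1}{q}=1,$ the above fundamental inequality is just Young's
inequality:
\begin{equation*}
\frac{v^{0}}{p}+\frac{v^{1}}{q}\geq (v^{0})^{\frac{1}{p}}(v^{1})^{\frac{1}{q}},~\ \ \forall v^{0},v^{1}>0.
\end{equation*}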
\subsection{\label{Section_bimetric}Bimetric structures on $\mathbb{R}^{n+1}$}
On 4-dimensional spacetime manifolds, Finsler functions of the type
\begin{equation*}
F(v)=[(g_{ij}v^{i}v^{j})(h_{lk}v^{k}v^{l})]^{\frac{1}{4}}\,,
\end{equation*}
where $g_{ij}$ and $h_{kl}$ are Lorentzian metrics of the same signature type
$(+,-,-,...,-)$, are relevant in physics when one describes the propagation
of light in birefringent crystals, see for example \cite{PerlickBook,Pfeifer:2016har,Punzi:2007di}. We can always choose a basis of the
tangent spaces of the manifold such that one of the bilinear forms $h$ or $g$
assumes its normal form, i.e., it is locally diagonal with entries
$g_{ij}=\eta _{ij}$.
\begin{proposition}
Consider $\mathbb{R}^{n+1}$ equipped with the Minkowski metric $\eta $ and
another bilinear form $h$ of Lorentzian signature. Let $\mathcal{T}\subset
\mathbb{R}^{n+1}$ be the convex conic domain given by the intersection of
the sets of future-pointing timelike vectors of $\eta $ and $h$:
\begin{equation*}
\mathcal{T}:=\left\{ v\in \mathbb{R}^{n+1}~|~\eta _{ij}v^{i}v^{j}>0,\
h_{ij}v^{i}v^{j}>0,v^{0}>0\right\} \subset \mathbb{R}^{n+1},
\end{equation*}
and let $F:\mathcal{T}\rightarrow \mathbb{R}^{+}$ be the bimetric Finsler
structure defined by
\begin{equation*}
F(v)=[(\eta _{ij}v^{i}v^{j})(h_{lk}v^{k}v^{l})]^{\frac{1}{4}}\,.
\end{equation*}
Then, the fundamental inequality $F_{i}(v)w^{i}\geq F(w)$ is
\begin{equation}
\frac{1}{2}\left( \frac{\eta _{ij}w^{i}v^{j}}{\eta _{kl}v^{k}v^{l}}+\frac{h_{ij}w^{i}v^{j}}{h_{kl}v^{k}v^{l}}\right) \geq \frac{F(w)}{F(v)},~\ \ \ \ \
\ \forall v,w\in \mathcal{T}\,.
\end{equation}
\end{proposition}
\begin{proof}
The Finsler structure is built from a fourth order polynomial $H(v):=(\eta
_{ij}v^{i}v^{j})(h_{lk}v^{k}v^{l}),$ whose Hessian is given by
\begin{equation*}
H_{ij}=2\eta _{ij}(h_{lk}v^{k}v^{l})+4(\eta _{ik}h_{jl}+\eta
_{jk}h_{il})v^{k}v^{l}+2h_{ij}(\eta _{kl}v^{k}v^{l});
\end{equation*}
it was proven, see \cite{Pfeifer:2011tk,Pfeifer:2013gha}, that $H_{ij}$ has
Lorentzian signature on $\mathcal{T}.$ Consequently, again using Proposition
\ref{prop:signHg}, $g_{ij}$ is of Lorentzian signature on $\mathcal{T}$ and
the fundamental inequality $F_{i}(v)w^{i}\geq F(w)$ holds. We easily
calculate:
\begin{equation*}
F_{i}(v)=\frac{1}{2}\frac{1}{H(v)^{\frac{3}{4}}}\left( \eta _{ij}v^{j}\
(h_{lk}v^{k}v^{l})+(\eta _{kl}v^{k}v^{l})\ h_{ij}v^{j}\right) \,,
\end{equation*}
and thus the inequality
\begin{equation*}
\frac{1}{2}\frac{1}{H(v)^{\frac{3}{4}}}\left( \eta _{ij}w^{i}v^{j}\
(h_{lk}v^{k}v^{l})+(\eta _{kl}v^{k}v^{l})\ h_{ij}w^{i}v^{j}\right) \geq
H(w)^{\frac{1}{4}}
\end{equation*}
holds. Multiplying both sides of this inequality by the positive factor
$H(v)^{-\frac{1}{4}},$ we obtain
\begin{equation*}
\frac{1}{2}\left( \frac{\eta _{ij}w^{i}v^{j}}{\eta _{kl}v^{k}v^{l}}+\frac{h_{ij}w^{i}v^{j}}{h_{kl}v^{k}v^{l}}\right) \geq \left( \frac{H(w)}{H(v)}\right) ^{\frac{1}{4}}\,.
\end{equation*}
\end{proof}
\bigskip
As an explicit example, take $\mathbb{R}^{2}$ with $v=(v^{0},v^{1}),$
$w=(w^{0},w^{1})$ and set $h_{kl}v^{k}v^{l}:=2(v^{0})^{2}-(v^{1})^{2}.$ As
$\eta _{ij}v^{i}v^{j}=(v^{0})^{2}-(v^{1})^{2}$, we find, for all
$v^{0},v^{1},w^{0},w^{1}$ with $v^{0},w^{0}>0$ such that $(v^{0})^{2}>(v^{1})^{2}$
and $(w^{0})^{2}>(w^{1})^{2}$:
\begin{equation}
\frac{1}{16}\left( \frac{v^{0}w^{0}-v^{1}w^{1}}{(v^{0})^{2}-(v^{1})^{2}}+\frac{2v^{0}w^{0}-v^{1}w^{1}}{2(v^{0})^{2}-(v^{1})^{2}}\right) ^{4}\geq
\frac{\left( (w^{0})^{2}-(w^{1})^{2}\right) \left( 2(w^{0})^{2}-(w^{1})^{2}\right) }{\left( (v^{0})^{2}-(v^{1})^{2}\right) \left( 2(v^{0})^{2}-(v^{1})^{2}\right) }\,.
\end{equation}
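
For instance, for $v=(2,1)$ and $w=(3,1),$ the left hand side equals
$\frac{1}{16}\left( \frac{5}{3}+\frac{11}{7}\right) ^{4}\approx 6.87,$ while
the right hand side is $\frac{8\cdot 17}{3\cdot 7}\approx 6.48.$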
\subsection{\label{Section_Kropina}Kropina metric}
We will use a Kropina-type deformation of the Minkowski metric $\eta $ in
order to derive an inequality involving $\eta .$
\begin{proposition}
Let $\mathcal{T}\subset \mathbb{R}^{n+1}$ be the convex conic domain
\begin{equation*}
\mathcal{T}:=\left\{ v\in \mathbb{R}^{n+1}~|~\eta
_{ij}v^{i}v^{j}>0,v^{0}>0\right\} \subset \mathbb{R}^{n+1}
\end{equation*}
and let the smooth, 1-homogeneous function $F:\mathcal{T}\rightarrow \mathbb{R}^{+}$ be defined by
\begin{equation*}
F(v)=\frac{\eta _{ij}v^{i}v^{j}}{v^{0}}=\dfrac{1}{v^{0}}[(v^{0})^{2}-(v^{1})^{2}-...-(v^{n})^{2}].
\end{equation*}
Then, $F$ obeys the non-strict fundamental inequality (\ref{reverse_CS_coord_free}), which becomes
\begin{equation}
2\eta (v,w)\geq \frac{w^{0}}{v^{0}}\eta (v,v)+\dfrac{v^{0}}{w^{0}}\eta
(w,w),~\ \ \ \ \forall v,w\in \mathcal{T}. \label{eq:kropineq}
\end{equation}
\end{proposition}
\begin{proof}
Let us rewrite $F$ as
\begin{equation*}
F(v)=v^{0}-\dfrac{\vec{v}\cdot \vec{v}}{v^{0}},
\end{equation*}
where $v=(v^{0},\vec{v}),$ $\vec{v}=(v^{1},...,v^{n})$ and $\vec{v}\cdot
\vec{v}=\delta _{\alpha \beta }v^{\alpha }v^{\beta }$ denotes the standard
Euclidean product on $\mathbb{R}^{n}.$ We easily get the derivatives of $F$
as:
\begin{eqnarray}
F_{0}(v) &=&1+\dfrac{\vec{v}\cdot \vec{v}}{(v^{0})^{2}},~\ \ \ F_{\alpha
}(v)=-2\dfrac{\delta _{\alpha \beta }v^{\beta }}{v^{0}},~\ \ \alpha =1,...,n;
\label{derivs_F_1} \\
F_{00}(v) &=&-2\dfrac{\vec{v}\cdot \vec{v}}{(v^{0})^{3}},~\ \ F_{\alpha 0}=2\dfrac{\delta _{\alpha \beta }v^{\beta }}{(v^{0})^{2}},~\ \ \ F_{\alpha
\beta }=-2\dfrac{\delta _{\alpha \beta }}{v^{0}}.  \label{derivs_F_2}
\end{eqnarray}
Now, we prove that the matrix $g_{ij}(v)$ has exactly one positive
eigenvalue for all $v\in \mathcal{T},$ i.e., $F$ satisfies the hypotheses of
Theorem \ref{prop_degenerate Finsler case}.

First, we notice that $g_{v}(v,v)=g_{ij}(v)v^{i}v^{j}=F(v)^{2}>0$ on
$\mathcal{T}$.

Second, we show that $g_{v}$ is negative semidefinite on the $g_{v}$-orthogonal complement of $v.$ This complement is defined by
$g_{ij}(v)v^{j}w^{i}=0$, which is equivalent to: $F_{i}(v)w^{i}=0;$ taking
into account the identity $g_{ij}(v)=F(v)F_{ij}(v)+F_{i}(v)F_{j}(v)$, for
$g_{v}$-orthogonal vectors $w,$ we can write
\begin{equation}
g_{ij}(v)w^{i}w^{j}=F(v)(F_{ij}(v)w^{i}w^{j}).  \label{g_rel_Kropina}
\end{equation}
Taking into account (\ref{derivs_F_2}) and the Euclidean Cauchy-Schwarz
inequality $\vec{w}\cdot \vec{v}\leq \sqrt{(\vec{v}\cdot \vec{v})(\vec{w}\cdot \vec{w})}$, we get:
\begin{eqnarray*}
F_{ij}(v)w^{i}w^{j} &=&\dfrac{-2}{(v^{0})^{3}}\left[ (\vec{v}\cdot \vec{v})\left( w^{0}\right) ^{2}-2\left( \vec{w}\cdot \vec{v}\right)
v^{0}w^{0}+\left( \vec{w}\cdot \vec{w}\right) (v^{0})^{2}\right] \\
&\leq &\dfrac{-2}{(v^{0})^{3}}\left[ w^{0}\sqrt{\vec{v}\cdot \vec{v}}-v^{0}\sqrt{\vec{w}\cdot \vec{w}}\right] ^{2}\leq 0.
\end{eqnarray*}
Since $F(v)>0$ on $\mathcal{T},$ we obtain from (\ref{g_rel_Kropina}) that
$g_{ij}(v)w^{i}w^{j}\leq 0$ on the $g_{v}$-orthogonal complement of $v$.

The fundamental inequality can quickly be calculated with the help of
\begin{equation*}
F_{i}(v)=2\frac{\eta _{ij}v^{j}}{v^{0}}-\frac{\eta _{jk}v^{j}v^{k}}{(v^{0})^{2}}\delta _{i}^{0},
\end{equation*}
which eventually leads to the desired inequality \eqref{eq:kropineq}.
\end{proof}
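
In this particular case, (\ref{eq:kropineq}) can also be verified directly:
dividing it by $v^{0}w^{0}>0,$ it becomes $\eta (u-z,u-z)\leq 0$ for
$u:=v/v^{0},$ $z:=w/w^{0};$ since $u^{0}=z^{0}=1,$ the difference $u-z$ has
vanishing $0$-th component, hence $\eta (u-z,u-z)=-\left\Vert \vec{u}-\vec{z}\right\Vert ^{2}\leq 0.$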
\section{\label{Aczel section}Acz\'{e}l inequality: generalizations and
refinements}
In the following, we will use a particular class of Lorentzian Finsler
metrics to obtain a generalization and some refinements of the Acz\'{e}l
inequality (\ref{1_1}).
Consider, on $\mathbb{R}^{n+1}\backslash \{0\},$ a smooth, positive definite
Finsler norm $\bar{F}$ and set
\begin{equation}
F(v):=\sqrt{\left( v^{0}\right) ^{2}-\bar{F}^{2}(\vec{v})},  \label{smooth F}
\end{equation}
where $v=\left( v^{0},\vec{v}\right) $ belongs to the open conic subset of
$\mathbb{R}^{n+1}\backslash \{0\}$:
\begin{equation}
\mathcal{T}=\left\{ v=\left( v^{0},\vec{v}\right) \in \mathbb{R}^{n+1}~|~v^{0}>\bar{F}(\vec{v})\right\} .  \label{T_set_Aczel}
\end{equation}
Such functions $F$ belong to the class of stationary Lorentz-Finsler norms
studied in \cite{Lammerzahl:2012kw,Stancarone}. As the function $\bar{F}$
is convex, its epigraph $\mathcal{T}$ is convex. Moreover, it is open, since
it can be characterized as the preimage of $(0,\infty )$ through the
continuous function $v\mapsto v^{0}-\bar{F}(\vec{v}).$ Hence,
$\mathcal{T}$ is a convex conic domain.

The metric tensor $g=\dfrac{1}{2}Hess(F^{2})$ is Lorentzian on $\mathcal{T},$
except for the half-line $\left\{ \left( v^{0},0,0,...,0\right)
~|~v^{0}>0\right\} .$ That is, $F$ will obey on $\mathcal{T}$ the non-strict
version of the reverse Cauchy-Schwarz inequality. As $g_{00}=1,$ $g_{0\alpha
}=0$ and $g_{\alpha \beta }=-\dfrac{1}{2}\dfrac{\partial ^{2}\bar{F}^{2}}{\partial v^{\alpha }\partial v^{\beta }},$ we immediately obtain:
\begin{proposition}
\textbf{(Finslerian Acz\'{e}l inequality)}: Let $\bar{F}:\mathbb{R}^{n}\rightarrow \mathbb{R}$ be an arbitrary Finsler norm on $\mathbb{R}^{n}$
and set
\begin{equation}
\left\Vert \vec{v}\right\Vert :=\bar{F}(\vec{v}),~\ \ \ \ \ \bar{g}_{\vec{v}}(\vec{v},\vec{w}):=\dfrac{1}{2}\dfrac{\partial ^{2}\bar{F}^{2}}{\partial v^{\alpha
}\partial v^{\beta }}(\vec{v})v^{\alpha }w^{\beta }.  \label{alpha_norm}
\end{equation}
Then, for all $v^{0},w^{0}>0$ and for all $\vec{v},\vec{w}\in \mathbb{R}^{n}$
such that $\left( v^{0}\right) ^{2}-\left\Vert \vec{v}\right\Vert ^{2}>0,$
$\left( w^{0}\right) ^{2}-\left\Vert \vec{w}\right\Vert ^{2}>0$, there holds
\begin{equation}
\left[ v^{0}w^{0}-\bar{g}_{\vec{v}}(\vec{v},\vec{w})\right] ^{2}-[\left(
v^{0}\right) ^{2}-\left\Vert \vec{v}\right\Vert ^{2}][\left( w^{0}\right)
^{2}-\left\Vert \vec{w}\right\Vert ^{2}]\geq 0.  \label{Finslerian Aczel}
\end{equation}
\end{proposition}
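
In particular, when $\bar{F}$ is a Euclidean norm, $\bar{g}_{\vec{v}}(\vec{v},\vec{w})=\vec{v}\cdot \vec{w}$ for all $\vec{v},$ and (\ref{Finslerian Aczel})
reduces to the classical Acz\'{e}l inequality (\ref{Aczel}).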
\bigskip
In the following, we will obtain two refinements of the above inequality,
based on the following lemma:
\begin{lemma}
For all $v,w\in \mathbb{R}^{n+1},$ $v=(v^{0},\vec{v}),$ $w=(w^{0},\vec{w}),$
with $\vec{w}\not=0,$ there holds
\begin{eqnarray}
&&\left[ v^{0}w^{0}-\bar{g}_{\vec{v}}(\vec{v},\vec{w})\right] ^{2}-[\left(
v^{0}\right) ^{2}-\left\Vert \vec{v}\right\Vert ^{2}][\left( w^{0}\right)
^{2}-\left\Vert \vec{w}\right\Vert ^{2}]=  \label{lemma} \\
&=&\left[ w^{0}\dfrac{\bar{g}_{\vec{v}}(\vec{v},\vec{w})}{\left\Vert \vec{w}\right\Vert }-v^{0}\left\Vert \vec{w}\right\Vert \right] ^{2}+\dfrac{\left(
w^{0}\right) ^{2}-\left\Vert \vec{w}\right\Vert ^{2}}{\left\Vert \vec{w}\right\Vert ^{2}}\left[ \left\Vert \vec{v}\right\Vert ^{2}\left\Vert \vec{w}\right\Vert ^{2}-\bar{g}_{\vec{v}}(\vec{v},\vec{w})^{2}\right] .  \notag
\end{eqnarray}
\end{lemma}
\begin{proof}
By direct computation, we find that both sides are actually equal to:
\begin{equation}
\left[ \left\Vert \vec{v}\right\Vert ^{2}\left( w^{0}\right) ^{2}+\left(
v^{0}\right) ^{2}\left\Vert \vec{w}\right\Vert ^{2}-2v^{0}w^{0}\bar{g}_{\vec{v}}(\vec{v},\vec{w})\right] +\left[ \bar{g}_{\vec{v}}(\vec{v},\vec{w})^{2}-\left\Vert \vec{v}\right\Vert ^{2}\left\Vert \vec{w}\right\Vert ^{2}\right]
.  \label{lhs_Aczel}
\end{equation}
\end{proof}
\begin{remark}
The Finslerian Acz\'{e}l inequality is actually strict. This can be seen as
follows. From the (non-reversed) Cauchy-Schwarz inequality for $\bar{F},$ we
find that both terms on the right hand side of (\ref{lemma}) are
nonnegative. If both of them vanish, then $\left\Vert \vec{v}\right\Vert
^{2}\left\Vert \vec{w}\right\Vert ^{2}-\bar{g}_{\vec{v}}(\vec{v},\vec{w})^{2}=0$
implies: $\vec{w}=\alpha \vec{v};$ the vanishing of the first term then
leads to $w^{0}=\alpha v^{0},$ i.e., the vectors $v,w\in \mathbb{R}^{n+1}$
are collinear.
\end{remark}
\bigskip
From the above lemma, we find:
\begin{proposition}
\textbf{(Refinements of the Finslerian Acz\'{e}l inequality)}. Let
$\bar{g}_{\vec{v}}$ be defined by a Finslerian
norm $\bar{F}=\left\Vert \cdot \right\Vert $ on $\mathbb{R}^{n}$ as in
(\ref{alpha_norm}). Then, for any $v^{0},w^{0}\in \mathbb{R}$ and any $\vec{v},
\vec{w}\in \mathbb{R}^{n}$, $\vec{w}\not=0,$ such that
$\left( v^{0}\right) ^{2}-\left\Vert \vec{v}\right\Vert ^{2}>0,$ $\left(
w^{0}\right) ^{2}-\left\Vert \vec{w}\right\Vert ^{2}>0,$ there
hold the inequalities:

(i) $\left[ v^{0}w^{0}-\bar{g}_{\vec{v}}(\vec{v},\vec{w})\right]
^{2}-[\left( v^{0}\right) ^{2}-\left\Vert \vec{v}\right\Vert ^{2}][\left(
w^{0}\right) ^{2}-\left\Vert \vec{w}\right\Vert ^{2}]\geq \dfrac{\left(
w^{0}\right) ^{2}-\left\Vert \vec{w}\right\Vert ^{2}}{\left\Vert \vec{w}\right\Vert ^{2}}\left( \left\Vert \vec{v}\right\Vert ^{2}\left\Vert \vec{w}\right\Vert ^{2}-\bar{g}_{\vec{v}}(\vec{v},\vec{w})^{2}\right) ;$

(ii) $\left[ v^{0}w^{0}-\bar{g}_{\vec{v}}(\vec{v},\vec{w})\right]
^{2}-[\left( v^{0}\right) ^{2}-\left\Vert \vec{v}\right\Vert ^{2}][\left(
w^{0}\right) ^{2}-\left\Vert \vec{w}\right\Vert ^{2}]\geq \left[ w^{0}\dfrac{\bar{g}_{\vec{v}}(\vec{v},\vec{w})}{\left\Vert \vec{w}\right\Vert }-v^{0}\left\Vert \vec{w}\right\Vert \right] ^{2}.$
\end{proposition}
\bigskip
\textbf{Acknowledgements.} C.P. was supported by the Estonian Ministry for
Education and Science through the Personal Research Funding Grant PSG489,
as well as by the European Regional Development Fund through the Center of
Excellence TK133 \textquotedblleft The Dark Side of the
Universe\textquotedblright . N.V. was supported by a local grant of
Transilvania University.
\begin{appendices}
\section{The signature of $m$-th root metrics}
\label{app:mthroot} Among the Lorentz-Finsler norms in Section~\ref{sec:ex},
we encountered $m$-th root metrics, which are functions of the type:
\begin{equation*}
F:\mathcal{T}\rightarrow \mathbb{R},\quad F(v)=H(v)^{\tfrac{1}{m}},\quad
\text{with}\quad H(v):=a_{i_{1}...i_{m}}v^{i_{1}}...v^{i_{m}}\,,
\end{equation*}
where $a_{i_{1}...i_{m}}$ are constants and $\mathcal{T}$ is a convex conic
domain in $\mathbb{R}^{n+1}$ where $H(v)>0$. Here, $m>2$ is fixed.

To determine the signature of $g_{ij}=\frac{1}{2}\frac{\partial ^{2}F^{2}}{\partial v^{i}\partial v^{j}}$ in some of our examples, we relate it to the
signature of the Hessian $H_{ij}:=\frac{\partial ^{2}H}{\partial v^{i}\partial
v^{j}}$ of $H$. By direct calculation, we get (see also \cite{Pfeifer:2011tk}, \cite{Brinzei-projective}):
\begin{equation}
H_{ij}=mF^{m-2}[g_{ij}+(m-2)F_{i}F_{j}].  \label{P_ij}
\end{equation}
The following result will greatly help us simplify calculations in concrete
examples.
\begin{proposition}
\label{prop:signHg} If, for a vector $v\in \mathcal{T},$ the matrix
$H_{ij}(v)$ has Lorentzian signature $\left( +,-,-,...,-\right) ,$ then
$g_{ij}(v)$ also has Lorentzian signature.
\end{proposition}

\begin{proof}
Assume that $H_{ij}(v)$ has index $n$; therefore, there exists an $n$-dimensional subspace of $\mathbb{R}^{n+1}$ on which it is negative definite.
Pick any vector $w$ in this subspace, so that $H_{ij}(v)w^{i}w^{j}<0$.
From (\ref{P_ij}) we find:
$g_{ij}(v)w^{i}w^{j}+(m-2)(F_{i}(v)w^{i})^{2}<0.$ Since $m>2,$ this implies
$g_{ij}(v)w^{i}w^{j}<0,$ that is, $(g_{ij})$ is also negative definite on at
least the same $n$-dimensional subspace.

But $g_{ij}(v)v^{i}v^{j}=F^{2}(v)>0,$ which means that $g_{ij}$ cannot be
negative (semi-)definite on the whole $V.$ Consequently, it must be
Lorentzian.
\end{proof}
\end{appendices}
| {
"timestamp": "2020-06-22T02:02:53",
"yymm": "2006",
"arxiv_id": "2006.10816",
"language": "en",
"url": "https://arxiv.org/abs/2006.10816",
"abstract": "We show that Lorentz-Finsler geometry offers a powerful tool in obtaining inequalities. With this aim, we first point out that a series of famous inequalities such as: the (weighted) arithmetic-geometric mean inequality, Aczél's, Popoviciu's and Bellman's inequalities, are all particular cases of a reverse Cauchy-Schwarz, respectively, of a reverse triangle inequality holding in Lorentz-Finsler geometry. Then, we use the same method to prove some completely new inequalities, including two refinements of Aczél's inequality.",
"subjects": "Differential Geometry (math.DG)",
"title": "Inequalities from Lorentz-Finsler norms",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517424466175,
"lm_q2_score": 0.8080672204860316,
"lm_q1q2_score": 0.7903315530103583
} |
https://arxiv.org/abs/0706.3707 | Comparing powers and symbolic powers of ideals | We develop tools to study the problem of containment of symbolic powers $I^{(m)}$ in powers $I^r$ for a homogeneous ideal $I$ in a polynomial ring $k[{\bf P}^N]$ in $N+1$ variables over an algebraically closed field $k$. We obtain results on the structure of the set of pairs $(r,m)$ such that $I^{(m)}\subseteq I^r$. As corollaries, we show that $I^2$ contains $I^{(3)}$ whenever $S$ is a finite generic set of points in ${\bf P}^2$ (thereby giving a partial answer to a question of Huneke), and we show that the containment theorems of Ein-Lazarsfeld-Smith and Hochster-Huneke are optimal for every fixed dimension and codimension. | \section{Introduction}\label{intro}
Consider a homogeneous ideal $I$ in a polynomial
ring $k[{\bf P}^N]$. Taking powers of $I$ is a natural
algebraic construction, but it can be
difficult to understand their structure
geometrically (for example, knowing generators
of $I^r$ does not make it easy to know its
primary decomposition). On the other hand,
symbolic powers of $I$ are more natural geometrically
than algebraically. For example, if $I$ is a radical ideal
defining a finite set of points $p_1,\ldots,p_s\in{\bf P}^N$, then
its $m$th symbolic power $I^{(m)}$ is generated by all
forms vanishing to order at least $m$ at each point $p_i$,
but it is not easy to write down specific generators
for $I^{(m)}$, even if one has generators for $I$.
Thus it is of interest to compare the two constructions,
and a good deal of work has been done recently comparing
powers of ideals with symbolic powers in various ways.
See for example, \cite{refHo}, \cite{refS}, \cite{refK}, \cite{refELS}, \cite{refHH1},
\cite{refCHHT} and \cite{refLS}.
Here we ask when a power of $I$ contains
a symbolic power, or vice versa. The second question
has an easy answer: if $I$ is nontrivial (i.e., not $(0)$ or $(1)$),
then $I^r\subseteq I^{(m)}$ if and only if $m\le r$ \cite[Lemma 8.1.4]{refPSC}.
Thus here we focus on the first question,
and for that question all that it is easy to say
is that if $I$ is nontrivial and $I^{(m)}\subseteq I^r$, then $m\ge r$ (Lemma \ref{GGPlem}(a)).
The problem of precisely for which $m\ge r$ we have
$I^{(m)}\subseteq I^r$ is largely open.
As a stepping stone, we introduce an asymptotic
quantity which we refer to as
the {\it resurgence}, namely
$\rho(I)=\hbox{sup}\{m/r : I^{(m)}\not\subseteq I^r\}$.
In particular, if $m > \rho(I)r$, then one is guaranteed
that $I^{(m)}\subseteq I^r$. Until recently it would not have been clear
that the sup always exists, but results of \cite{refS}
imply, for radical ideals at least, that it does,
and \cite{refHH1}, generalizing the result of \cite{refELS},
shows in fact that $\rho(I)\le N$
and hence for a nontrivial homogeneous ideal $I$ we have $1\le \rho(I)\le N$ (see Lemma \ref{postcrit1}(b)).
There are still, however, very few cases for which the actual value of
$\rho(I)$ is known, and they are almost all cases for which
$\rho(I)=1$. For example, by Macaulay's unmixedness theorem
it follows that $\rho(I)=1$ when $I$ is a complete intersection
(also see \cite{refHo} and \cite{refLS}). And if $I$ is a monomial ideal,
it is sometimes possible to compute $\rho(I)$ directly; for example,
if $I$ defines three noncollinear points in ${\bf P}^2$, then
one can show $\rho(I)=4/3$ (see \cite{refBH2}).
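For instance, for the ideal $I=(xy,xz,yz)\subseteq k[{\bf P}^2]$ of three
coordinate points, the form $xyz$ vanishes to order 2 at each of the three
points and hence lies in $I^{(2)}$, but $I^2$ is generated in degree 4, so
$xyz\not\in I^2$; this shows concretely how a symbolic power can strictly
exceed the corresponding ordinary power.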
In this paper we give the first results
regarding the structure of the set of pairs
$(r,m)$ for which $I^{(m)}\subseteq I^r$.
These results are in terms of numerical invariants
of $I$. In particular, let $\alpha(I)$ be the
least degree of a generator in any set of homogeneous generators of $I$,
let $\omega(I)$ be the least degree $t$ such that $I$
is generated by forms of degree $t$ and less,
and let ${\rm reg}(I)$ be the regularity of $I$.
We also define an invariant $\gamma(I)$,
which is like a Seshadri constant.
We then obtain the following structural results.
If $m/r\le \alpha(I)/\gamma(I)$, we prove that
$I^{(mt)}\not\subseteq I^{rt}$ for all $t\gg0$ (Lemma \ref{postcrit1}).
If in addition $I$ defines a zero dimensional subscheme,
then we show $m/r\ge {\rm reg}(I)/\gamma(I)$ implies that
$I^{(m)}\subseteq I^r$ (Corollary \ref{PCcor}), and we show that
$m/r > \omega(I)/\gamma(I)$ implies that
$I^{(mt)}\subseteq I^{rt}$ for all $t\gg0$ (Corollary \ref{asympcor}).
From these results it follows that $\alpha(I)/\gamma(I)\le \rho(I)$, and,
when $I$ defines a zero-dimensional subscheme of ${\bf P}^N$,
that $\rho(I)\le {\rm reg}(I)/\gamma(I)$ (see Theorem \ref{SCthm}).
By applying these results we give the first
determinations of $\rho(I)$ in cases for which $\rho(I)>1$
and $I$ is not monomial (see Theorem \ref{skeletonThm}(a)
and Proposition \ref{coneprop}(a)).
As a corollary, it follows that the upper bounds
on $\rho$ coming from \cite{refELS} and \cite{refHH1} are sharp
(see Corollary \ref{LESHHmaxThm}).
Our original motivation for this work was a question
of Huneke's which is still open: if $I=I(S)$ is the ideal defining
any finite set $S$ of points in ${\bf P}^2$, is it true that $I^{(3)}\subseteq I^2$? This
question was prompted by the results of \cite{refHH1} and \cite{refELS},
which guarantee that $I^{(4)}\subseteq I^2$. The question of
the containment $I^{(3)}\subseteq I^2$ turns out to be quite delicate.
Here we show that containment holds at least when $S$ is a set
of generic points (Theorem \ref{8}).
\subsection{Comparison Invariants}
As mentioned above, given any homogeneous ideal $0\ne I\subsetneq R=k[{\bf P}^N]$,
we define the {\it resurgence}, $\rho(I)$, of $I$ to be the supremum of all ratios $m/r$ such that
$I^r$ does not contain $I^{(m)}$, where by
$I^{(m)}$ we mean, as in \cite{refHH1}, the contraction of $I^mR_A$ to $R$,
where $R_A$ is the localization of $R$ by the multiplicative system $A$, and $A$ is
the complement of the union of the
associated primes of $I$.
We refer to the maximum height among the associated primes of $I$
as the codimension, $\hbox{cod}(I)$, of $I$.
The saturation $\hbox{sat}(I)$ of a homogeneous ideal $I$ is
the ideal generated by all forms $F$ such that
$(x_0,\ldots,x_N)^tF\subseteq I$ for some $t$ sufficiently large.
If $I=\hbox{sat}(I)$, we say
$I$ is saturated. In any case, there is always a $t$ such that
$I_j=\hbox{sat}(I)_j$ for all $j\ge t$. The least such $t$ is
the saturation degree, $\hbox{satdeg}(I)$, of $I$.
In case $I$ is saturated and thus we have $I=I(X)$ for a subscheme
$X\subseteq {\bf P}^N$, we may write $\rho(X)$ to mean
$\rho(I)$.
A case of particular interest to us here is when $I(X)$
is an intersection $I=I(X)=\cap_i I(L_i)^{m_i}$
of powers of ideals of linear subspaces $L_i\subseteq{\bf P}^N$,
none of which contains another, in which case we refer to $X$
as a {\it fat flat} subscheme. Taking symbolic powers of $I(X)$ is then
straightforward; since $I(L_i)^{m_i}$ is primary, $I^{(m)}=\cap_i I(L_i)^{mm_i}$.
A special case of particular importance is when each
space $L_i$ is a single reduced point $p_i$. In this case $X$ is known as
a {\it fat point subscheme} and $I^{(m)}$ is just
the saturation of $I^m$.
Now let $\rho(N,d)$ denote the supremum of $\rho(I)$ over
homogeneous ideals $0\ne I\subsetneq k[{\bf P}^N]$ of codimension $d$.
(Since $d=N$ is an important special
case, we will just write $\rho(N)$ for $\rho(N,N)$.) The main theorem of
\cite{refHH1} implies that $\rho(N,d)\le d$.
Here we show that in fact $\rho(N,d)=d$.
Although, as far as we know, it has not previously been shown for any $N$ or $d>1$
that $\rho(N,d)=d$, examples of Ein (see Section \ref{Optsubsect} and \cite{refHH2})
show that $\hbox{lim}_{N\to\infty}\rho(N,d)=d$.
We obtain:
\begin{cor}\label{LESHHmaxThm}
For each $N\ge 1$ and $1\le d\le N$, we have $\rho(N,d)=d$.
\end{cor}
Our proof of Corollary \ref{LESHHmaxThm} involves finding, for each $N$ and $d$,
a sequence of subschemes $S_N(d,i)\subsetneq {\bf P}^N$ such that
$\lim_{i\to \infty}\rho(S_N(d,i))=d$. These subschemes
can be taken to be fat flat subschemes, and, in fact, reduced.
Our main technical tool involves developing bounds, as discussed above, on $\rho(Z)$
for subschemes $Z\subsetneq {\bf P}^N$,
mostly in terms of postulational invariants of $I(Z)$; i.e., invariants
that are determined by the Hilbert functions of $I(Z)^{(m)}$.
Thus these bounds are the same
for any $Z$ for which the Hilbert functions of $I(Z)$ and its symbolic powers
remain the same. This is useful since postulational data is reasonably accessible,
either computationally or theoretically
(for example, \cite{refGuardoHar} and \cite{refGHM} classify all sets of up to 8 points
in ${\bf P}^2$ according to the postulational data of fat point subschemes
supported at the points).
\subsection{Postulational Bounds and Seshadri Constants}
We now discuss in detail the postulational invariants we will use.
Given a homogeneous ideal $0\ne I\subseteq R=k[{\bf P}^N]$, let
$\alpha(I)$ be the least degree $t$ such that the homogeneous
component $I_t$ in degree $t$ is not zero. Thus $\alpha$ is, so to speak, the degree
in which the ideal begins. It is also the degree of a generator of least degree,
and it is the $M$-adic order of $I$ (i.e., the largest $t$ such that
$I\subseteq M^t$), where $M$ is the maximal homogeneous ideal.
If $Z\subseteq {\bf P}^{N-1}\subsetneq {\bf P}^N$ is a subscheme
contained in a hyperplane, in cases which are not clear from context
we will use
$\alpha_{N-1}(I(Z))$ or $\alpha_N(I(Z))$ to distinguish
whether we are considering $\alpha$ for the ideal defining $Z$ in
${\bf P}^{N-1}$ or in ${\bf P}^N$.
Let $\tau(I)$ be the least degree such that
the Hilbert function becomes equal to the Hilbert polynomial of $I$
and let $\sigma(I)=\tau(I)+1$.
Given a minimal free resolution $0\to F_N\to\cdots\to F_0\to I\to0$
of $I$ over $R$, where $F_i$ as a graded $R$-module is
$\oplus R[-b_{ij}]$, the {\it Castelnuovo-Mumford regularity} ${\rm reg}(I)$
of $I$ is the maximum over all $i$ and $j$ of $b_{ij}-i$.
If $I$ defines a 0-dimensional subscheme of ${\bf P}^N$
(i.e., $I$ has codimension $N$),
then ${\rm reg}(I)$ is the maximum
of $\hbox{satdeg}(I)$ and $\sigma(\hbox{sat}(I))$, hence if $I$ is already
saturated (and so is the ideal of a 0-dimensional subscheme), then
${\rm reg}(I)=\sigma(I)$ (see \cite{refGGP}).
(We will only be concerned with the regularity in case $I$ defines a
0-dimensional subscheme.)
Our results depend on our developing bounds on $\rho(I)$.
Our bounds involve the quantity
$\gamma(I)=\hbox{lim}_{m\to \infty}\alpha(I^{(m)})/m$
for a homogeneous ideal $0\ne I\subsetneq k[{\bf P}^N]$.
Because of the subadditivity of $\alpha$,
this limit exists (see Remark III.7 of \cite{refHR2} or Lemma \ref{subadd}).
Moreover, $\gamma(I)>0$ (see Lemma \ref{postcrit1}).
Given a subscheme $Z\subsetneq {\bf P}^N$,
we will write $\gamma(Z)$ for $\gamma(I(Z))$.
Since $\alpha(I^m)$ is linear in $m$,
note that $\alpha(I)/\gamma(I)=\hbox{lim}_{m\to \infty} \alpha(I^m)/\alpha(I^{(m)})$.
Thus $\alpha(I)/\gamma(I)$ gives an asymptotic measure of the growth
of $I^{(m)}$ compared to $I^m$.
Our next result thus shows that $\rho(I)$ measures additional growth,
in comparison to $\alpha(I)/\gamma(I)$
(hence the term resurgence for $\rho$).
\begin{thm}\label{SCthm}
Let $0\ne I\subsetneq k[{\bf P}^N]$ be a homogeneous ideal.
\begin{itemize}
\item[(a)] Then ${\alpha(I)/\gamma(I)}\le\rho(I)$.
\item[(b)] If in addition $I$ defines a 0-dimensional subscheme,
then $\rho(I)\le {\rm reg}(I)/\gamma(I)$.
\end{itemize}
\end{thm}
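As the simplest sanity check, if $I=I(p)$ for a single point $p\in{\bf P}^N$, then $I^{(m)}=I^m$ for all $m$ (powers of the ideal of a linear space are primary), so $\alpha(I^{(m)})=m$, $\gamma(I)=1$, and $\alpha(I)={\rm reg}(I)=1$; both bounds in the theorem then collapse to give $\rho(I)=1$, as they must.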
Thus, for example, given $I=I(Z)$ for a fat point subscheme $Z$ with $\alpha(I)=\sigma(I)$, this theorem shows that
computing $\rho(Z)$ is equivalent to computing $\gamma(Z)$.
The quantity $\gamma$ is in that case essentially a uniform version of a multi-point Seshadri constant.
Indeed, if $Z$ is a reduced finite generic set of $n$ points in ${\bf P}^N$, then $\gamma(Z)=
n(\varepsilon(N,Z))^{N-1}$ (see Lemma \ref{subadd}), where,
following the exposition of \cite{refHR2, refHR3}, $\varepsilon(N,Z)$ is
the codimension 1 multipoint
Seshadri constant for $Z=\{p_1, \dots, p_n\}$; i.e., the real number
$$\varepsilon(N,Z)=\root{N-1}\of {\hbox{inf}
\left\{\frac{\hbox{deg}(H)}{\sum_{i=1}^n \hbox{mult}_{p_i}H}\right\}},$$
where the infimum is taken with respect to
all hypersurfaces $H$, through at least one of the points (see \cite{refD}
and \cite{refXb}). We also define
$\varepsilon(N,n)$ to be ${\hbox{sup}\{\varepsilon(N,Z)\}},$ where
the supremum is taken with respect to
all choices $Z$ consisting of $n$ distinct points $p_i$ of ${\bf P}^{N}$.
In case $N$ is clear from context, we will write $\varepsilon(Z)$
for $\varepsilon(N,Z)$.
While it is in any case obvious from the definitions that $\gamma(Z)\ge
n(\varepsilon(N,Z))^{N-1}$, equality can fail since the latter takes notice of
hypersurfaces whose multiplicities at the points $p_i$
need not all be the same. (For example, if $Z$ is the reduced scheme
consisting of $n=4$ points in ${\bf P}^2$, 3 of them on a line and one off,
then $5/3=\gamma(Z) >n\varepsilon(2,Z)=4/3$.)
\subsection{Application to generic points}
As an interesting example, consider
${\bf P}^N$ and some $s$, and let $I$ be the ideal of
$n=\binom{s+N}{N}$ generic points of ${\bf P}^N$; then in
Theorem \ref{SCthm} we have $\alpha(I)=s+1=\sigma(I)={\rm reg}(I)$.
Although, in the case of $N=2$, $\varepsilon(2,n)$ (and hence $\gamma(I)$) is known for $n<10$,
a famous and still open conjecture of Nagata \cite{refNag} is equivalent
to asserting that $\varepsilon(2,n)=1/\sqrt{n}$ for $n\ge 10$.
For no nonsquare $n\ge10$ is $\varepsilon(2,n)$ currently known.
However, it is not hard to show that $\varepsilon(2,n)=1/\sqrt{n}$ if
$n$ is any square. Thus we have the following corollary.
\begin{cor}\label{SCcor}
If $n=\binom{s+N}{N}$, then
for the subscheme $Z\subset {\bf P}^N$ consisting of the union of
$n$ distinct generic points we have $\rho(Z)=\frac{s+1}{n(\varepsilon(N,Z))^{N-1}}$.
If in addition $N=2$ and $n$ is a square, then
$$\rho(Z)={\frac{s+1}{\sqrt{n}}}=\sqrt{2}\sqrt{\frac{s+1}{s+2}}.$$
\end{cor}
We remark that there are infinitely many integers $n$ which are
at the same time a square and of the form $\binom{s+2}{2}$.
(An easy argument shows that $n=\binom{s+2}{2}$ is a square if and only if
either $s+1=2x^2$ where $x,y$ satisfy $y^2-2x^2=1$,
or $s+2=2x^2$ where $x,y$ satisfy $y^2-2x^2=-1$. The fact that there are
infinitely many such $x$ follows from the theory of Pell's equation.
The first few $s$ that arise are 0, 7, 48, 287, 1680, 9799, etc.)
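For instance, $s=7$ gives $n=\binom{9}{2}=36=6^2$ (here $s+1=8=2\cdot 2^2$, with $3^2-2\cdot 2^2=1$), and Corollary \ref{SCcor} then yields $\rho(Z)=8/6=4/3=\sqrt{2}\sqrt{8/9}$.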
\section{Preliminaries}\label{prelims}
In this section we establish our postulational criteria
for containment. We use two basic but surprisingly powerful
ideas.
\subsection{The Containment Principles}
The first idea, given homogeneous
ideals $I$ and $J$ in $k[{\bf P}^N]$, is that by examining
the zero loci of $I_t$ and $J_t$ (called $t$ degree envelopes in \cite{refTe})
we get a necessary criterion for
containment. In particular, if $I\subseteq J$, then the zero locus
of $I_t$ must contain the zero locus of $J_t$ in every degree $t$.
This is useful when trying to show that containment fails.
The second idea uses the obvious fact
that $I^{(m)}\subseteq I^{(r)}$ if $r\le m$, and the fact (when $I$ defines a
0-dimensional subscheme) that
$(I^{(r)})_t=(I^r)_t$ for $t$ large enough. Given $r$,
if we pick $m\ge r$ large enough, then $\alpha(I^{(m)})$
will be large enough so that $(I^{(r)})_t=(I^r)_t$ for all $t\ge \alpha(I^{(m)})$,
and hence $(I^{(m)})_t\subseteq (I^{(r)})_t=(I^r)_t$ for $t\ge \alpha(I^{(m)})$.
Since $(I^{(m)})_t=(0)\subseteq (I^r)_t$ for $t<\alpha(I^{(m)})$, we obtain
$I^{(m)}\subseteq I^{r}$.
Given a homogeneous ideal $J\subseteq k[{\bf P}^N]$,
let $h_J(t) = \dim J_t$ denote its Hilbert function.
Let $P_J$ denote the Hilbert polynomial. Thus $\alpha(J)$,
defined when $J\ne 0$,
is the least $t\ge0$ such that $h_J(t)>0$, and $\tau(J)$
is the least $t$ such that $h_J(t)=P_J(t)$.
\subsection{Some Notation for Fat Flats}
We now recall a convenient notation for denoting fat flats.
Let $I\subseteq k[{\bf P}^N]$ be any ideal of the form
$I=\cap_iI(L_i)^{m_i}$, where each $L_i\subsetneq {\bf P}^N$ is a proper linear subspace,
with no $L_i$ containing $L_j$, $j\ne i$, and where each $m_i$ is a nonnegative integer.
The fat flat subscheme $Z$ defined by $I$ depends only on
the spaces $L_i$, the integers $m_i$ and the space
${\bf P}^N$ containing $Z$. Since the latter is usually clear from context, it is
convenient to denote the subscheme formally by $Z=m_1L_1+\cdots+m_nL_n$
and write $I=I(Z)$ for the defining ideal. In particular,
$I(mZ)=I(Z)^{(m)}$ for each positive integer $m$.
Given a fat flat subscheme $Z=m_1L_1+\cdots+m_nL_n\subsetneq {\bf P}^N$,
the set $\hbox{Supp}(Z)=\{L_i : m_i>0\}$ is called the {\it support\/} of $Z$. In case
$Z$ is a fat point subscheme,
we denote the sum $\sum_i \binom{m_i+N-1}{N}$ by $\hbox{deg}(Z)$;
as is well known, $P_{I(Z)}(t) = \binom{t+N}{N} - \hbox{deg}(Z)$.
It is easy to see that $\hbox{deg}(rZ)$ is a strictly increasing function of $r$.
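For example, a single double point $Z=2p$ in ${\bf P}^2$ has $\hbox{deg}(Z)=\binom{3}{2}=3$, reflecting the three linear conditions (the value and both first partials) that vanishing to order 2 at $p$ imposes on a form.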
\subsection{Preliminary Lemmas}
We begin by considering $\gamma(I)$.
\begin{lem}\label{subadd} For any homogeneous ideal $0\ne I\subseteq k[{\bf P}^N]$,
the limit
$$\gamma(I)=\lim_{m\to\infty}\alpha(I^{(m)})/m$$
exists. Moreover, if $I=I(Z)$, where $Z$ is the reduced subscheme $Z\subsetneq {\bf P}^N$
consisting of a finite generic set of $n$ points, we have $\gamma(Z) =
n(\varepsilon(N,Z))^{N-1}$.
\end{lem}
\begin{proof} This is proved in Remark III.7 of \cite{refHR2}.
For the reader's convenience we recall the proof here.
First we show $\gamma(I)$ is defined.
Note that $\alpha$ is subadditive
(i.e., $\alpha(I^{(m_1+m_2)})\le \alpha(I^{(m_1)})+\alpha(I^{(m_2)})$),
and hence $\alpha(I^{(n)})/n \le (qm/n)\alpha(I^{(m)})/m+\alpha(I^{(r)})/n\le
\alpha(I^{(m)})/m+\alpha(I^{(r)})/n$ for any positive integers
$n=mq+r$, and $\alpha(I^{(n)})/n \le \alpha(I^{(m)})/m$ if $r=0$.
Thus $\alpha(I^{(n!)})/n!\le \alpha(I^{(m)})/m$ whenever $m$ divides $n!$.
Thus $\alpha(I^{(n!)})/n!$ is a non-increasing sequence, and
hence has some limit $c$. In addition, for all $d\ge n!$, using integer division
to write $d=q(n!)+r$ with $0\le r< n!$, we have $\alpha(I^{(d)})/d \le
\alpha(I^{(n!)})/(n!)+\alpha(I^{(r)})/d$. It follows that the limit exists and is equal to $c$.
For the second statement, argue as in the proof of Corollary 5 of \cite{refR}
to reduce to the case that the multiplicities are all equal. This uses
the fact that the points are generic and thus one can, essentially, average over the points.
Now we see that $n(\varepsilon(N,Z))^{N-1}$ is, by definition, the infimum
of the sequence $\alpha(I^{(m)})/m$ whose limit defines $\gamma(I)$, but it is clear from our
argument above that $\gamma(I)\le \alpha(I^{(m)})/m$ for every $m$, and hence $\gamma(I)$ is the infimum.
\end{proof}
We now give a criterion for containment to fail, and thence a lower bound for $\rho(I)$:
\begin{lem}[Postulational Criterion 1]\label{postcrit1}
Let $0\ne I\subsetneq k[{\bf P}^N]$ be a homogeneous ideal.
Then $\gamma(I)\ge1$ and we have:
\begin{itemize}
\item[(a)] If $r\alpha(I)>\alpha(I^{(m)})$, then $I^r$ does not contain $I^{(m)}$.
\item[(b)] If $m/r< \alpha(I)/\gamma(I)$, then, for all $t\gg 0$, $I^{rt}$ does not contain $I^{(mt)}$.
In particular, $1\le\alpha(I)/\gamma(I)\le \rho(I)$.
\end{itemize}
\end{lem}
\begin{proof} For $\gamma(I)\ge1$, see \cite[Lemma 8.2.2]{refPSC}.
(a) This is because $(I^r)_t = 0$ but $(I^{(m)})_t \ne 0$
for $t = \alpha(I^{(m)})$, since $\alpha(I^r)=r\alpha(I)>\alpha(I^{(m)})$.
(b) Suppose $m/r<\alpha(I)/\gamma(I)$. Let $0< \delta$ be such that
$m/r < \alpha(I)/(\delta+\gamma(I))$.
By definition, $\alpha(I^{(mt)})/(mt) \le \gamma(I)+\delta$ for $t\gg0$,
so $\alpha(I^{(mt)}) \le mt(\gamma(I)+\delta)<rt\alpha(I)$ for $t\gg0$,
and hence $I^{rt}$ does not contain $I^{(mt)}$ for $t\gg 0$, which now
implies $\alpha(I)/\gamma(I)\le \rho(I)$. Finally, by subadditivity, as in the proof of
Lemma \ref{subadd}, we have $\gamma(I)\le \alpha(I)$, hence
$1\le \alpha(I)/\gamma(I)$.
\end{proof}
It is possible to give refined versions of Lemma \ref{postcrit1},
in which both $(I^r)_t$ and $(I^{(m)})_t$ may be nonzero,
but in which the zero locus of the former is bigger than that of the latter.
These refined versions are useful in doing examples and
will be the topic of a subsequent paper, \cite{refBH2}.
We next develop our criteria for containment to hold.
First we recall a few well known facts.
\begin{lem}\label{GGPlem} Let $0\ne I\subsetneq k[{\bf P}^N]$ be a homogeneous ideal.
\begin{itemize}
\item[(a)] If $I^{(m)} \subseteq I^r$, then $r\le m$.
\item[(b)] We have $\alpha(I^{(m)})\le m\alpha(I)$ and $\alpha(I)\le {\rm reg}(I)$.
\item[(c)] If $I$ defines a 0-dimensional subscheme and $t \ge r{\rm reg}(I)$,
then $(I^r)_t = (\hbox{sat}(I^r))_t$;
in particular, if $I$ is saturated and defines a 0-dimensional subscheme,
then $t \ge r\sigma(I)$ implies $(I^r)_t = (I^{(r)})_t$.
\end{itemize}
\end{lem}
\begin{proof}
(a) We have $I^m\subseteq I^{(m)} \subseteq I^r$, hence
$m\alpha(I) = \alpha(I^m) \ge \alpha(I^r) = r\alpha(I)$
so $m\ge r$.
(b) The claim $\alpha(I^{(m)})\le m\alpha(I)$ follows by the subadditivity
of $\alpha$.
The second claim is immediate from the definition of regularity,
since ${\rm reg}(I)$ is at least as much as the degree of the
homogeneous generator of greatest degree in any minimal set of homogeneous generators of $I$,
while $\alpha(I)$ is the degree of the generator of least degree.
(c) We argue as in the proof of Proposition 2.1 of \cite{refAV}.
By Theorem 1.1 of \cite{refGGP}, $r{\rm reg}(I)\ge {\rm reg}(I^r)\ge \hbox{satdeg}(I^r)$, hence
$t\ge r{\rm reg}(I)$ implies $(I^r)_t=(\hbox{sat}(I^r))_t$.
The second statement is just an instance of the first.
\end{proof}
Here we give a criterion for containment to hold:
\begin{lem}[Postulational Criterion 2]\label{postcrit2}
Let $I\subseteq k[{\bf P}^N]$ be a homogeneous ideal
(not necessarily saturated) defining
a 0-dimensional subscheme.
If $r{\rm reg}(I) \le \alpha(I^{(m)})$, then $I^{(m)} \subseteq I^r$.
\end{lem}
\begin{proof} First, $r{\rm reg}(I) \le \alpha(I^{(m)})\le m\alpha(I)\le m{\rm reg}(I)$,
so $r\le m$, hence $(I^{(m)})_t \subseteq (I^{(r)})_t$ for all $t\ge0$.
Moreover, if $I$ is not saturated, then the maximal homogeneous ideal
$M$ is an associated prime, so $I^{(m)}=I^m$ for all
$m\ge 1$, hence $I^{(m)}=I^m\subseteq I^r$. Thus
we may as well assume that $I$ is saturated.
But $(I^r)_t= (I^{(r)})_t$ by Lemma \ref{GGPlem}(c)
for $t \ge r{\rm reg}(I)$,
while $r{\rm reg}(I) \le \alpha(I^{(m)})$
implies $(I^{(m)})_t=0 \subseteq (I^r)_t$ for $t<r{\rm reg}(I)$.
\end{proof}
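To illustrate the criterion: if $I$ is the ideal of 3 generic points of ${\bf P}^2$, then ${\rm reg}(I)=\sigma(I)=2$ while $\alpha(I^{(3)})=5$, so taking $r=2$ and $m=3$ gives $r\,{\rm reg}(I)=4\le 5=\alpha(I^{(3)})$ and hence $I^{(3)}\subseteq I^2$; this is exactly how the criterion gets used in Section \ref{AECQ}.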
As an application of Postulational Criterion 2 we have:
\begin{cor}\label{PCcor} Let $I\subseteq k[{\bf P}^N]$ be a homogeneous ideal
(not necessarily saturated) defining
a 0-dimensional subscheme. If $c$ is a positive
real number such that $mc \leq \alpha(I^{(m)})$ for all $m \ge 1$,
then $I^{(m)} \subseteq I^r$ if $m/r \ge {\rm reg}(I)/c$; in particular,
$\rho(I) \le {\rm reg}(I)/c$.
\end{cor}
\begin{proof} Since $mc\le\alpha(I^{(m)})$ by hypothesis, Lemma \ref{postcrit2} shows that
$r{\rm reg}(I)\le mc$, or equivalently
${\rm reg}(I)/c\le m/r$, implies $I^{(m)} \subseteq I^r$.
\end{proof}
\begin{Rmk}\label{SCrem}\rm
Let $I\subseteq k[{\bf P}^N]$ be a homogeneous ideal defining a 0-dimensional subscheme.
Since we can evaluate limits on subsequences
and since by subadditivity the sequence $\alpha(I^{(i!m)})/(i!m)$ is non-increasing,
we see that $\gamma(I)\le \alpha(I^{(m)})/m$ for all $m\ge 1$.
Thus the $c$ in Corollary \ref{PCcor} can be taken to be $\gamma(I)$.
It is reasonable to ask: why not just take
$c=\gamma(I)$? Unfortunately, the exact value of $\gamma(I)$ is rarely known
even if $I=I(Z)$ for a fat point subscheme
$Z=m_1p_1+\cdots+m_np_n$ in ${\bf P}^2$,
so it is useful that the statement not be in terms of $\gamma(I)$.
On the other hand, good lower bounds are known for $\gamma(Z)$
in certain cases
(see for example \cite{refB}, \cite{refH1}, \cite{refHR}, \cite{refST}
and \cite{refT}, among many others). Also, exact values are known in some cases,
such as when $\hbox{Supp}(Z)$ consists of any $n\le 8$ points
in ${\bf P}^2$. (Since the subsemigroup of classes of effective divisors for a blow up of ${\bf P}^2$ at
$n\le 8$ points is polyhedral and the postulation for any such $Z$ is known,
one can explicitly determine $\gamma(Z)$ in this situation if one knows
the effective subsemigroup.
The effective subsemigroups for all subsets of $n\le 8$ points of the plane
are now known, as a consequence of the classification of
the configuration types of $n\le 8$ points of ${\bf P}^2$, given in
\cite{refGuardoHar} for $n\le 6$ and
\cite{refGHM} for $7\le n\le 8$.)
\end{Rmk}
\begin{cor}\label{Bndscor} Let $I=I(Z)$ for a nontrivial fat point subscheme $Z\subset{\bf P}^N$.
If $\alpha(I) = \sigma(I)$, then $\rho(I) = \alpha(I)/\gamma(I)$.
\end{cor}
\begin{proof} This is immediate from Corollary \ref{PCcor},
Remark \ref{SCrem} and Lemma \ref{postcrit1}.
\end{proof}
\begin{Rmk}\label{Omegarem}\rm
One can sometimes do better using non-postulational data.
The paper \cite{refEHU} gives various bounds
on the regularity under various assumptions.
For another example that we will refer to in Section \ref{AECQ},
let $I\subset k[{\bf P}^N]$ be a homogeneous
ideal defining a 0-dimensional subscheme. Then ${\rm reg}(I^r)\le r\omega(I)+
2({\rm reg}(I)-\omega(I))$ for any $r\ge 2$ by Theorem 0.4 of \cite{refCh}
(or see Section 6 of \cite{refCh2}),
where $\omega(I)$ is the maximum degree
of a generator in any minimal set of homogeneous generators of $I$.
Replacing $r{\rm reg}(I)$ by $r\omega(I)+
2({\rm reg}(I)-\omega(I))$ in the argument
of the proof of Lemma \ref{postcrit2} and then arguing as in
Lemma \ref{PCcor}, keeping in mind Remark \ref{SCrem},
gives $I^{(m)} \subseteq I^r$ if
$m/r\ge (\omega(I)+2({\rm reg}(I)-\omega(I))/r)/\gamma(I)$.
\end{Rmk}
\begin{cor}\label{asympcor} Let $I$ define a 0-dimensional subscheme of ${\bf P}^N$
and let $c>\omega(I)/\gamma(I)$.
Then $I^{(m)} \subseteq I^r$ for all but finitely many pairs $(m,r)$ with $m/r\ge c$.
In particular, if $m/r > \omega(I)/\gamma(I)$, then $I^{(mt)} \subseteq I^{rt}$
for all $t\gg 0$.
\end{cor}
\begin{proof} By Remark \ref{Omegarem}, we have $I^{(m)} \subseteq I^r$
if $(m,r)$ is on or above the line
$m = (\omega(I)/\gamma(I))r+2({\rm reg}(I)-\omega(I))/\gamma(I)$.
But $c$ is greater than the slope $\omega(I)/\gamma(I)$ of this line, so there are only
finitely many pairs $(m,r)$ with $m/r \ge c$ below this line.
The second statement is now immediate.
\end{proof}
\subsection{Constructions showing Optimality}\label{Optsubsect}
To prove Corollary \ref{LESHHmaxThm}, it suffices to find
subschemes $Z\subseteq {\bf P}^N$ for which
$\rho(Z)$ is large. Lemma \ref{postcrit1} suggests where to look.
We want a scheme $Z$ such that
$\alpha(I(Z))$ is as large as possible, which means that $I(Z)$
should behave generically, from a postulational point of view.
On the other hand, we want $\gamma(Z)$ to be small, so among all $I(Z)$
with generic Hilbert function we want to examine those
for which the Hilbert function of $I(Z)^{(m)}$ is as large as possible
(and hence $\alpha(I(Z)^{(m)})$ is as small as possible).
This problem was studied in \cite{refGMS} in characteristic 0
in the case that $N=m=2$ with $Z=p_1+\cdots+p_n$
a reduced set of points $p_i$; i.e., double points in the plane.
They prove that the set of singular points of a union of
$s$ general lines (i.e., the pair-wise intersections of $s$ general
lines) is a configuration of points in the plane
having generic Hilbert function but for which
the Hilbert function of the symbolic square of the ideal is as
large as possible. This suggests looking, more generally, at the set of
$N$-wise intersections of $s\ge N+1$ general hyperplanes in ${\bf P}^N$.
More generally yet, for $1\le e\le N$ and $s\ge e$,
let $S_N(e, s, {\bf d})$ denote the
reduced scheme consisting of the $e$-wise intersections of
$s$ general hypersurfaces $H_1,\ldots,H_s$ in ${\bf P}^N$
of respective degrees $d_i$ where ${\bf d} = (d_1,\ldots,d_s)$.
If $d_i=d$ for all $i$, we will write $S_N(e, s, d)$ for $S_N(e, s, {\bf d})$.
If $d=1$, we will write simply $S_N(e, s)$.
Thus $S_N(N,N+1)$ can be taken to be
the set of coordinate vertices of ${\bf P}^N$,
and $S_N(1,N+1)$ to be the union of the coordinate hyperplanes.
In this notation, the examples of Ein having large $\rho$
are the codimension $e$ skeleta $S_N(e,N+1)$
of the coordinate simplex
in ${\bf P}^N$ (hence $d_i=1$ for all $i$); i.e.,
the $e$-wise intersections of $s=N+1$ general hyperplanes
in ${\bf P}^N$. The case $e=N$ (i.e., of the coordinate vertices
in ${\bf P}^N$) is treated by Arsie and Vatne (see Theorem 4.5
of \cite{refAV}).
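Note that $S_N(e,s)$ is the union of the $\binom{s}{e}$ codimension-$e$ linear subspaces obtained by intersecting $e$ of the $s$ hyperplanes; for instance, $S_3(2,4)$ consists of the $\binom{4}{2}=6$ edge lines of the coordinate tetrahedron in ${\bf P}^3$.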
It is easy to see that a general hyperplane section
$H\cap S_N(e, s,{\bf d})$ is $S_{N-1}(e, s, {\bf d})$, defined
by the $e$-wise intersections of the hypersurfaces $H\cap H_i\subseteq H$.
We will denote $\alpha(I(mS_N(e,s,{\bf d})))$ by $\alpha_N(m,e,s,{\bf d})$, where
$mS_N(e,s,{\bf d})\subseteq {\bf P}^N$ is the subscheme consisting of the $e$-wise intersections of the
$s$ hypersurfaces $H_i$, where each $e$-wise intersection is taken with multiplicity $m$.
In order to apply our bounds to $S_N(e, s,{\bf d})$, we need to determine
the least degree among hypersurfaces that vanish on $mS_N(e, s, {\bf d})$.
\begin{lem}\label{alphaLemma} Let $1\le e\le N$, $s\ge e$,
and let ${\bf d}=(d_1,d_2,\ldots,d_s)$. Let
$I=I(mS_N(e, s, {\bf d}))\subseteq k[{\bf P}^N]$.
If $m=re$ for some $r$, then $r(d_1+\cdots+d_s)\ge \alpha(I)$.
If $d_1=\cdots=d_s=1$, then for any $m\ge1$ we have
$ms/e\le \alpha(I)$, and hence we have equality if
$m=re$.
\end{lem}
\begin{proof}
First consider the case that $m=re$ is a multiple of $e$.
Then the divisor $r(H_1+\cdots+H_s)$ has degree
$r(d_1+\cdots+d_s)=m(d_1+\cdots+d_s)/e$ and vanishes
on each component of $S_N(e,s, {\bf d})$ with multiplicity
$m$ (since each component of $S_N(e,s, {\bf d})$ is contained
in exactly $e$ of the hypersurfaces $H_i$). Thus
$r(d_1+\cdots+d_s)\ge \alpha_N(m,e,s,{\bf d})$.
Now assume $d_1=\cdots=d_s=1$.
To show $\alpha_N(m,e,s,{\bf d})\ge ms/e$, it is enough to show
$\alpha_e(m,e,s,{\bf d})\ge ms/e$, since by taking
general hyperplane sections we have:\newline
$\alpha_N(m,e,s,{\bf d})\ge \alpha_{N-1}(m,e,s,{\bf d})
\ge \cdots \ge \alpha_{e}(m,e,s,{\bf d})$.
Suppose it were true that
$\alpha_{e}(m,e,s,{\bf d})< ms/e$ for some $m$. Let $F$ be a form of
degree $d=\alpha_{e}(m,e,s,{\bf d})$
vanishing with multiplicity at least $m$ at each point of
$S_e(e,s, {\bf d})$.
Then $F$ restricts to give a form on $H_1$
with $d < ms/e \le m(s-1)/(e-1)$, but $H_1\cap S_{e}(e,s,{\bf d})
=S_{e-1}(e-1,s-1,{\bf d}')$, where ${\bf d}'=(d_2,\cdots,d_s)$,
and, by induction on the dimension (where dimension 1 is easy),
we have $m(s-1)/(e-1)\le \alpha_{e-1}(m,e-1,s-1,{\bf d}')$.
Hence $F$ vanishes identically on $H_1$. By symmetry,
$F$ vanishes on all of the hyperplanes $H_i$. Dividing out by the
linear forms defining the hyperplanes gives a form $F'$ of degree
$d-s$ vanishing with multiplicity $m-e$ at each point of
$S_{e}(e,s,{\bf d})$, and hence
$\alpha_{e}(m-e,e,s,{\bf d})\le d-s < ms/e-s = (m-e)s/e$,
hence again $F'$ vanishes on all $H_i$. Continuing in this way,
we eventually obtain a nonzero form of degree less than $s$
that vanishes on all $s$ hyperplanes $H_i$, which is a contradiction,
since $F\ne0$ by the choice of $d=\alpha_{e}(m,e,s,{\bf d})$.
\end{proof}
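For instance, with $e=2$, $s=3$ and $d_i=1$ (three general lines in ${\bf P}^2$, meeting in three points), taking $m=2$ (so $r=1$) gives $\alpha(I^{(2)})=ms/e=3$: the product of the three linear forms vanishes doubly at each point, and by the lemma no conic does.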
We still need to know $\alpha(I(S_N(e, s, {\bf d})))$.
\begin{lem}\label{regLem} Let $1\le e\le N$, $e\le s$
and $d_1\le d_2\le \cdots\le d_s$.
For $S=S_N(e,s,{\bf d})$ we have $\alpha(I(S)) = d_1+\cdots+d_{s-e+1}$.
If $e=N$ and $d_i=1$ for all $i$, we have
$\alpha(I(S)) =\sigma(I(S)) = s-N+1$.
\end{lem}
\begin{proof} Clearly, $\alpha(I(S)) \le d_1+\cdots+d_{s-e+1}$,
since every intersection of $e$ of the hypersurfaces must involve
one of the hypersurfaces $H_1,\ldots,H_{s-e+1}$.
For the rest, let us refer to the union of the $e$-wise intersections
of the hypersurfaces $H_i$ as the codimension $e$ skeleton of the $H_i$,
or just the $e$-skeleton. We will now show that
any hypersurface $H$ of degree $d<d_1+\cdots+d_{s-e+1}$ which vanishes
on the $e$-skeleton also vanishes on the $(e-1)$-skeleton.
Since $d<d_1+\cdots+d_{s-e+1}\le d_1+\cdots+d_{s-(e-1)+1}$,
the claim can then be applied again at the next level, so $H$ also vanishes on the $(e-2)$-skeleton, and so on,
and thus vanishes on the 1-skeleton and indeed the 0-skeleton
(i.e., the whole space, since a form of degree $d$
cannot contain hypersurfaces whose degrees sum to more than $d$).
Thus $H$ is 0, and this shows
$\alpha(I(S)) \ge d_1+\cdots+d_{s-e+1}$ which gives equality.
So suppose $H$ has degree $d<d_1+\cdots+d_{s-e+1}$ and vanishes
on the $e$-skeleton. Thus for any indices $i_1<\cdots<i_{e-1}$ and any $j$
not one of these indices,
$H$ vanishes on $H_{i_1}\cap \cdots\cap H_{i_{e-1}}\cap H_j$.
By Bertini (Theorem II.8.18 of \cite{refHt}, taking hyperplane sections
after uple embeddings), intersections of general hypersurfaces
are smooth and, in dimension 2 or more, irreducible. Thus
$H_{i_1}\cap \cdots\cap H_{i_{e-1}}$ is irreducible.
If it were not already contained in $H$, we could
intersect with $H$ and do a degree calculation:
$H\cap H_{i_1}\cap \cdots\cap H_{i_{e-1}}$ has degree $dd_{i_1}\cdots d_{i_{e-1}}$
whereas the union of the intersections of $H_{i_1}\cap \cdots\cap H_{i_{e-1}}$ with all
possible $H_j$ (i.e., for all $j$ not among the indices
$i_1,\ldots,i_{e-1}$), has degree $d_{i_1}\cdots d_{i_{e-1}}\sum_jd_j$, where the sum
is over all $j$ not among $i_1,\ldots,i_{e-1}$.
Clearly $d<d_1+\cdots+d_{s-e+1}\le \sum_jd_j$ since the $d_i$ are assumed to be
nondecreasing. Since the total degree of the intersection
$H\cap H_{i_1}\cap \cdots\cap H_{i_{e-1}}$ would then be less than the sum of the degrees of the
divisors it must contain, we have a contradiction; it follows that
each component $H_{i_1}\cap \cdots\cap H_{i_{e-1}}$
of the $(e-1)$-skeleton is contained in $H$, as claimed.
Finally, suppose $e=N$ and $d_i=1$ for all $i$. Then, as we have just seen,
$\alpha(I(S))=s-e+1$. But there are $\binom{s}{e}$ points and
$\binom{(s-e)+N}{N}=\binom{s}{e}$ forms of degree $s-e$ in $N+1$ variables.
Thus the number of conditions imposed by the points equals the number
of points, hence $\tau(I)=s-e$ so $\sigma(I)=\alpha(I)=s-N+1$.
\end{proof}
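The two extreme cases of the lemma are worth making explicit: for $e=s$ (the complete intersection $H_1\cap\cdots\cap H_s$) the formula gives $\alpha(I(S))=d_1$, realized by $H_1$ itself, while for $e=1$ (the union $H_1\cup\cdots\cup H_s$) it gives $\alpha(I(S))=d_1+\cdots+d_s$, realized by the product of the defining forms.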
We now can obtain some results on $\rho(S_N(e,s))$.
As noted above, Theorem \ref{skeletonThm}(b) in the case $s=N+1$
is due to L. Ein; Theorem 4.5 of \cite{refAV} implies
$2-1/N\le \rho(S_N(N,N+1))$, using as Ein
did the fact that the ideal is monomial.
\begin{thm}\label{skeletonThm}
Let $1\le e\le N$ and $e\le s$. Then:
\begin{itemize}
\item[(a)] $\rho(S_N(N,s))=N(s-N+1)/s$;
\item[(b)] $e(s-e+1)/s\le \rho(S_N(e,s))$; and
\item[(c)] More generally, given ${\bf d}=(d_1,\ldots,d_s)$
with $d_1\le \cdots\le d_s$,
we have $$e(d_1+\cdots+d_{s-e+1})/(d_1+\cdots+d_s)\le \rho(S_N(e,s,{\bf d})).$$
\end{itemize}
\end{thm}
\begin{proof} By Lemma \ref{regLem},
$\alpha(I(S_N(e,s)))=s-e+1$, $\sigma(I(S_N(N,s)))=s-N+1$,
and $\alpha(I(S_N(e,s,{\bf d})))=d_1+\cdots+d_{s-e+1}$,
while by Lemma \ref{alphaLemma}, we see that
$$\gamma(I(S_N(e,s)))=\lim_{m\to\infty}\frac{\alpha(I(meS_N(e,s)))}{(me)}=s/e$$
and similarly $\gamma(I(S_N(e,s,{\bf d})))\le (d_1+\cdots+d_s)/e$.
(a) By Corollary \ref{Bndscor}, we thus have $\rho(S_N(N,s))=N(s-N+1)/s$.
(b) By Lemma \ref{postcrit1} we have $e(s-e+1)/s\le \rho(S_N(e,s))$.
(c) By Lemma \ref{postcrit1} we have $e(d_1+\cdots+d_{s-e+1})/(d_1+\cdots+d_s)
\le \rho(S_N(e,s,{\bf d}))$.
\end{proof}
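As a consistency check, for $N=2$ and $s=3$ part (a) gives $\rho(S_2(2,3))=2(3-2+1)/3=4/3$; since any three non-collinear points of ${\bf P}^2$ are projectively equivalent to the coordinate points, this agrees with the value $\rho(S_3)=4/3$ for three generic points obtained in Section \ref{AECQ}.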
\subsection{General Facts about $\rho$}
Here we take note of some general behavior of $\rho(I)$.
To state the results, let $R=k[{\bf P}^N]$, let
$x$ be an indeterminate with respect to which we have
$R\subseteq R[x]=k[{\bf P}^{N+1}]$, and let the quotient
$q: R[x]\to R$ correspond to the inclusion ${\bf P}^N\subseteq {\bf P}^{N+1}$.
If $I\subseteq R$ is a homogeneous ideal, let $I'=IR[x]$ be the extended ideal.
In case $I=I(Z)$ for some subscheme $Z\subseteq {\bf P}^N$, we will denote by $C(Z)$
the subscheme defined by $I'$; we note that $C(Z)$ is
just the projective cone over $Z$.
\begin{prop}\label{coneprop} In the notation of the preceding paragraph, we have:
\begin{itemize}
\item[(a)] $\rho(I)=\rho(I')$, hence $\rho(Z)=\rho(C(Z))$ for any
nontrivial subscheme $Z\subsetneq {\bf P}^N$;
\item[(b)] $\rho(I) =\rho(q^{-1}(I))$, hence if $I=I(Z)$ for a nontrivial subscheme $Z\subsetneq {\bf P}^N\subseteq {\bf P}^{N+1}$,
then $\rho(Z)$ is well defined, whether we regard $Z$ as being in ${\bf P}^N$ or ${\bf P}^{N+1}$; and
\item[(c)] $\rho(mZ)\le \rho(Z)$ for any fat flat subscheme $Z$.
\end{itemize}
\end{prop}
\begin{proof} (a) Since $R\to R[x]$ is flat, primary decompositions
of ideals in $R$ extend to primary decompositions in $R[x]$ (see \cite{refMa}, Theorem 13,
or Exercise 7, \cite{refAM}).
Since $I$ and $I'$ have the same generators,
whenever $I$ and $J$ are ideals in $R$, we have $I\subseteq J$ if and only if
$I'\subseteq J'$. Taken together, this means $I^{(m)}\subseteq I^r$ if and only if
$(I')^{(m)}\subseteq (I')^r$, and hence that $\rho(I)=\rho(I')$.
(b) Note that $q^{-1}(I)=I'+(x)$, and use the facts that $(q^{-1}(I))^r=\sum_i(x^i)(I')^{r-i}$
and $(q^{-1}(I))^{(m)}=\sum_j(x^j)(I')^{(m-j)}$. If $(q^{-1}(I))^{(m)}\subseteq (q^{-1}(I))^{r}$,
setting $x=0$ gives $I^{(m)}\subseteq I^r$, and hence $\rho(q^{-1}(I))\ge \rho(I)$.
And if $m/r>\rho(I)=\rho(I')$, then $(m-j)/(r-j)\ge m/r$ for $0\le j<r$, so
$x^j(I')^{(m-j)}\subseteq x^j(I')^{r-j}$ hence $(q^{-1}(I))^{(m)}\subseteq (q^{-1}(I))^r$,
so $\rho(I)\ge \rho(q^{-1}(I))$.
(c) By definition we can find a ratio $s/r < \rho(I^{(m)})$
arbitrarily close to $\rho(I^{(m)})$ such that $(I^{(m)})^r$ does not contain
$(I^{(m)})^{(s)}=I^{(sm)}$. Since $I^{rm}\subseteq (I^{(m)})^r$, also $I^{rm}$ does not contain
$I^{(sm)}$, so $s/r=sm/(rm) \le \rho(I)$; letting $s/r$ tend to $\rho(I^{(m)})$ gives $\rho(I^{(m)})\le\rho(I)$.
\end{proof}
Equality in Proposition \ref{coneprop}(c) can fail.
For example, if $Z$ is the reduced union of three general points in ${\bf P}^2$,
then $\rho(mZ)= 1$ if $m$ is even, while $\rho(mZ)=(3m+1)/(3m)$ if $m$ is odd
\cite{refBH2}.
\section{Proofs}\label{Proofs}
\begin{proof}[Proof of Corollary \ref{LESHHmaxThm}]
The result of \cite{refHH1} shows that
$\rho(N,e)\le e$, while taking the limit as $s\to\infty$
in Theorem \ref{skeletonThm}(b) shows $e\le \rho(N,e)$.
Alternatively, using Theorem \ref{skeletonThm}(a),
$i$ applications of Proposition \ref{coneprop}(a),
and then taking the limit for $s\to \infty$, we conclude
$\rho(N+i,N)=N$ for all $N$ and $i$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{SCthm}]
The upper bound is an immediate
consequence of Corollary \ref{PCcor} and Remark \ref{SCrem}. The lower bound is Lemma \ref{postcrit1}.
\end{proof}
\begin{proof}[Proof of Corollary \ref{SCcor}]
Since the points are generic and the number of points is the binomial coefficient
$n=\binom{s+N}{N}$, we know $\alpha(I)=\sigma(I)=s+1$.
Also, by Lemma \ref{subadd} we know $\gamma(I)=n(\varepsilon(N,Z))^{N-1}$.
Thus the result follows immediately from Theorem \ref{SCthm}.
When $N=2$ and $n$ is a square, we know in addition that $n\varepsilon(2,n)=\sqrt{n}$.
The fact that we can also write $\frac{s+1}{\sqrt{n}}=\sqrt{2}\sqrt{\frac{s+1}{s+2}}$
follows from $n=\frac{(s+1)(s+2)}{2}$.
\end{proof}
\renewcommand{\thethm}{\thesection.\arabic{thm}}
\setcounter{thm}{0}
\section{Additional Examples, Comments and Questions}\label{AECQ}
The inspiration for this paper was a question Huneke asked
the second author: if $S$ is a finite set of points in ${\bf P}^2$
with $I=I(S)$, is it true that $I^{(3)}\subseteq I^2$?
We cannot yet answer this question, but
we can show Huneke's question
has an affirmative answer in many cases.
(Theorems 3.3 and 4.3 of \cite{refTY} give additional cases in which
Huneke's question has an affirmative answer.)
\begin{thm}\label{8}
Let $I=I(S)$, where $S$ is a set of $n$ generic points of ${\bf P}^2$.
Then $I^2$ contains $I^{(3)}$ for every $n \geq 1$.
\end{thm}
\begin{proof} Since for $n=1$, $2$, or $4$, $S$ is a complete intersection
and hence $I^3 = I^{(3)}$, the theorem is true in those cases,
so assume $n$ is not $1$, $2$ or $4$.
If $2\sigma(I) \leq \alpha(I^{(3)})$, then $I^2$ contains $I^{(3)}$
by Lemma \ref{postcrit2}.
Since the points are generic and of multiplicity $1$,
they impose independent conditions in degrees $\alpha(I)$ or more, so $\sigma(I)$ is the
largest $t$ such that $\binom{t}{2} < n$.
Also, the Hilbert functions of ideals of fat point
subschemes supported at 9 or fewer generic points
are known (see, e.g., \cite{refH2}) so we can compute
$\alpha(I^{(3)})$ exactly. Here's what happens for $n \leq 9$:
\[
\begin{array}{c|c|c}
n & \sigma(I) & \alpha(I^{(3)})\\
\hline
3 & 2 & 5\\
5 & 3 & 6 \\
6 & 3 & 8\\
7 & 4 & 8\\
8 & 4 & 9\\
9 & 4 & 9
\end{array}
\]
We see that $2\sigma(I) \leq \alpha(I^{(3)})$, hence $I^2$ contains $I^{(3)}$.
Now assume $n \geq 10$. Let $t = \sigma(I)$; then $\binom{t}{2} < n$
so $t^2-t < 2n$. (Also note that since $n \geq 10$, we must have $t \geq 4$.)
It is known (see \cite{refHi} or \cite{refM}) that $n\ge 10$ points of multiplicity $3$
impose independent conditions on forms of degree at least $\alpha(I^{(3)})$,
so $\alpha(I^{(3)})$ is the least $d$ such that $\binom{d+2}{2} > 6n$.
Thus to show $2t \leq d$, it is enough to show that $\binom{2t+1}{2} \leq 6n$,
which we will do using the fact that $t^2-t < 2n$ and hence
$3t^2-3t < 6n$. In fact, all we need do now is verify that
$\binom{2t+1}{2} \leq 3t^2-3t$, which is easy, keeping in mind that
$t \geq 4$.
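(Indeed, $\binom{2t+1}{2}=2t^2+t$, and $2t^2+t\le 3t^2-3t$ is equivalent to $t^2\ge 4t$, i.e., to $t\ge4$.)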
\end{proof}
We can also give an affirmative answer to a stronger
version of Huneke's question in the case of $n$ generic points.
\begin{thm}\label{genThm} Let $I=I(S_n)$, where $S_n$ is a
set of $n$ generic points of ${\bf P}^2$.
Then $I^r$ contains $I^{(m)}$ whenever $m/r > 3/2$.
\end{thm}
\begin{proof} This amounts to showing that $\rho(S_n) \leq 3/2$.
Again we can ignore $n = 1$, $2$ and $4$. For $n= 3$, $5$, $6$, $8$ and $9$
we use the upper bounds $\sigma(I)/(n\varepsilon(2,n))$ for $\rho(S_n)$
obtained from Corollary \ref{PCcor} with $c=n\varepsilon(2,n)$;
the values of $n\varepsilon(2,n)$ can be obtained
from Nagata's list of abnormal curves \cite{refNag}, or see
\cite{refH4}. By Corollary \ref{Bndscor},
$\sigma(I)/(n\varepsilon(2,n))=\rho(S_n)$ when $n=3$ and $n=6$. (Using refined methods
that we will present in a subsequent paper, \cite{refBH2},
we can show that equality holds also for $n=8$ and 9, and that $\rho(S_5)=6/5$
and $\rho(S_7)=8/7$.)
\[
\begin{array}{c|c|c|c}
n & n\varepsilon(2,n) & \sigma(I) & \frac{\sigma(I)}{n\varepsilon(2,n)} \\
\hline
3 & \frac{3}{2} & 2 & \frac{4}{3} \\
& & & \\
5 & 2 & 3 & \frac{3}{2}\\
& & & \\
6 & \frac{12}{5} & 3 & \frac{5}{4}\\
& & & \\
7 & \frac{21}{8} & 4 & \frac{32}{21} \approx 1.52\\
& & & \\
8 & \frac{48}{17} & 4 & \frac{17}{12} \approx 1.42\\
& & & \\
9 & 3 & 4 & \frac{4}{3}
\end{array}
\]
In order to handle $n = 7$, we see we need a better bound,
which we obtain using Remark \ref{Omegarem}.
We claim $\rho(S_7)\le 6/5$. We must show that
if $m/r > 6/5$, then $\alpha(I^{(m)})\ge r\omega(I)+2({\rm reg}(I)-\omega(I))$,
where here $\omega(I)=3$ and ${\rm reg}(I)=4$
(see \cite{refH3} for the graded Betti numbers for the resolution of the ideal $I^{(m)}$
for any $m\ge 1$). From the Seshadri constant in the table above,
we know $\alpha(I^{(m)})\ge 21m/8$. (In fact, it turns out that
$\alpha(I^{(m)})=\lceil 21m/8\rceil$. Clearly $\alpha(I^{(m)})\ge \lceil 21m/8\rceil$,
and one checks the cases $m\le 8$ directly to see that equality holds.
For $m\ge 8$, write $m=8i+j$ with $0\le j<8$ and
use $\alpha(I^{(8i+j)})\le i\alpha(I^{(8)})+\alpha(I^{(j)})=\lceil 21m/8\rceil$.)
Thus $\lceil 21m/8\rceil\ge 3r+2$ (or, equivalently, $21m/8>3r+1$) implies
$\alpha(I^{(m)})\ge r\omega(I)+2({\rm reg}(I)-\omega(I))$.
But for $r>6$, $m/r\ge 6/5$ implies $21m/8>3r+1$ and hence
$\alpha(I^{(m)})\ge r\omega(I)+2({\rm reg}(I)-\omega(I))$.
We now check $r\le 6$ individually. If $r=1$, clearly for any $m\ge1$ we have
$I^{(m)}\subseteq I^r$. For $r=2$ and $m/r\ge 1.2$, we have $m\ge3$,
$\alpha(I^{(m)})\ge8$, and so $\alpha(I^{(m)})\ge8=3r+2$.
Similarly for $r=3,4,5$ and 6. Thus $\rho(S_7)\le 1.2$.
(We cannot do better than $\rho(S_7)\le 1.2$ using this argument, since $m=6$ and $r=5$
give $m/r=1.2$, yet fail to satisfy $\alpha(I^{(m)})\ge 3r+2$.)
Now consider $n > 9$. It is known that $\varepsilon(n) \ge \sqrt{n-1}/n$.
See \cite{refXb} for characteristic 0. It also follows from \cite{refH1}
in all characteristics, as follows. Let $s = \lfloor\sqrt{n}\rfloor$,
and define $0\le t \le s$ so that either $n = s^2+2t $ or $n = s^2 + 2t + 1$.
Let $d=s$ and $r=s^2 + t$.
First consider the case that $n = s^2+2t$.
Since $r/d \geq \sqrt{n}$, then $\varepsilon(n) \geq d/r$ by
\cite{refH1}, and a little arithmetic shows that $d/r \geq \sqrt{n-1}/n$.
Now let $n = s^2+2t+1$.
Since now $r/d \leq \sqrt{n}$ (keep in mind that $t < s$),
then $\varepsilon(n) \geq r/(nd)$ by \cite{refH1},
and it is easy to see that $r/(nd) \ge \sqrt{n-1}/n$.
So for $n > 9$ it is enough to check that
$\sigma(I)/(\sqrt{n-1}) \leq 3/2$.
Now, $\sigma(I) = t+1$ for the least $t$ such that $\binom{t+2}{2} \geq n$.
Since for $t=(\sqrt{8n+1}-3)/2$ we have $\binom{t+2}{2} = n$, we see
$\sigma(I) \leq (\sqrt{8n+1}-3)/2 + 2$. It is not hard to check that
$((\sqrt{8n+1}-3)/2 + 2)/(\sqrt{n-1}) < 3/2$ for all $n \ge 52$.
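(To verify this last claim: the inequality is equivalent to $\sqrt{8n+1}+1 < 3\sqrt{n-1}$, and squaring twice reduces it first to $2\sqrt{8n+1}<n-11$ and then to $n^2-54n+117>0$; the larger root of this quadratic is $27+\sqrt{612}\approx 51.7$, so the inequality holds for all integers $n\ge 52$.)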
We have left to deal with $10 \leq n \leq 51$. For these few cases we can
use the best lower bounds for $\varepsilon(n)$ given in \cite{refH1}
(or the exact value if $n$ is a square) instead of $\sqrt{n-1}/n$,
and we can use the exact value of $\sigma(I)$
instead of $(\sqrt{8n+1}-3)/2 + 2$. Doing so we find that
$\sigma(I)/(n\varepsilon(n)) < 3/2$ except for $n = 11$ (in this case
even taking the conjectural value $\varepsilon(n) = 1/\sqrt{n}$ gives only
$\sigma(I)/(n\varepsilon(n)) \leq 1.507$) and for $n = 17$, $22$ and $37$, in which cases
$\varepsilon(n)$ is at least $4/17$, $7/33$ and $6/37$ resp., so that we
obtain only $\sigma(I)/(n\varepsilon(n)) \leq 3/2$; but this suffices for the statement
of the theorem (however, see the remark that follows).
For $n=11$, we argue as for $n=7$: here $\omega(I)=4$
and ${\rm reg}(I)=5$, so $r\omega(I)+2({\rm reg}(I)-\omega(I))=4r+2$.
Now $\rho(S_{11})\le c$ if we pick $c$ such that
$m/r\ge c$ implies $\alpha(I^{(m)})\ge 4r+2$. But
$n\varepsilon(n)\ge\sqrt{10}$ so $\alpha(I^{(m)})\ge m\sqrt{10}\ge rc\sqrt{10}$,
so we just need $c$ such that $rc\sqrt{10}>4r+1$ for $r\ge2$.
We see we need $c>4/\sqrt{10} + 1/(2\sqrt{10})=
(2{\rm reg}(I)-1)/(2\sqrt{n-1})$ so $c=1.43$ suffices;
i.e., $\rho(S_{11})\le 1.43$.
\end{proof}
\begin{Rmk} In a subsequent paper, \cite{refBH2}, we will compute $\rho(S)$ for sets of
points on irreducible plane conics. Our result for the case of 5 points on a smooth conic
is $\rho(S_5)=6/5$. Also, arguing in the case of $n=17, 22$ and 37
generic points as we did for $n=11$, we find, resp., that
$\omega(I)$ and ${\rm reg}(I)$ are 5, 6 and 8, and 6, 7 and 9, and hence that
$(2{\rm reg}(I)-1)/(2\sqrt{n-1})$ is 1.375, 1.418, 1.416, resp., so
$\rho(I)$ is, for example, at most 1.38, 1.42, and 1.42, resp.
Thus in fact we can state a slightly stronger version of
the preceding theorem:
for a generic set $S_n$ of $n$ points of ${\bf P}^2$,
$I^r$ contains $I^{(m)}$ whenever $m/r \geq 3/2$ (rather than just $m/r > 3/2$).
(Alternatively, assuming characteristic 0, we can
handle the cases $n=17, 22$ and 37 simply by using a better
estimate for $\varepsilon(n)$: in characteristic 0, \cite{refB}
shows $\varepsilon(n)$ is at least $8/33$, $42/197$ and $12/73$, resp.)
\end{Rmk}
In fact, it may be that $\rho(S) \leq \sqrt{2}$
whenever $S$ is a generic finite set of points in ${\bf P}^2$.
[While this paper was under review we found that $\rho(S_8)=17/12>\sqrt{2}$ \cite{refBH2},
but we know no other cases for which $\rho(S) > \sqrt{2}$.]
In addition to Theorem \ref{SCcor}, the following result gives some evidence for this
possibility.
\begin{prop}\label{genProp} Let $S$ be a set of $n = (d+2)(d+1)/2 + i\ge10$ generic points
of ${\bf P}^2$, where $(d+4)/2 \le i \le d+2$. Then
$m/r \ge \sqrt{2}$ implies $I^{(m)} \subseteq I^r$.
\end{prop}
\begin{proof} By Lemma \ref{postcrit2} (Postulational Criterion 2),
$I^{(m)} \subseteq I^r$ if $r\sigma(I) \le \alpha(I^{(m)})$, and hence if
$m/r \ge (\sigma(I))/(nc)$, where $\varepsilon(S) \ge c$.
But here $\sigma(I)=d+2$ (since by our choice of $i$ we have
$\binom{d+2}{2}<n\le \binom{d+3}{2}$) and, as in the proof of Theorem \ref{genThm},
we can take $c=\sqrt{n-1}/n$ since $n\ge10$. A little arithmetic using $(d+4)/2 \le i$ now shows that
$\sqrt{2}\ge (\sigma(I))/(nc)$.
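(Explicitly: with $c=\sqrt{n-1}/n$ we have $nc=\sqrt{n-1}$, so $\sqrt{2}\ge(\sigma(I))/(nc)$ amounts to $2(n-1)\ge(d+2)^2$; since $2n=(d+2)(d+1)+2i$, this reads $d^2+3d+2i\ge d^2+4d+4$, i.e., $2i\ge d+4$, which is exactly the hypothesis on $i$.)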
\end{proof}
\begin{Ex} By the main theorems of \cite{refELS} and \cite{refHH1},
$I^{(4)}\subseteq I^2$ for $I=I(S)$ for any finite subset
$S\subseteq {\bf P}^2$. Thus, in addition to asking, as Huneke did, if
$I^{(3)}\subseteq I^2$, one might also ask if $I^{(4)}\subseteq I^3$ or
if $I^{(6)}\subseteq I^4$. We close by showing that the answer for the latter
two is no.
In particular, let $I=I(S)$ where $S=S_2(2,s)$ is the set of $n=\binom{s}{2}$ points of
pairwise intersection of $s$ general lines
in ${\bf P}^2$. It is easy to check that
$\alpha(I^{(3)})=2s-1$, since any form in $I^{(3)}$ of degree $2s-2$
must, by Bezout, vanish on each of the $s$ lines, giving a form
of degree $s-2$ in $I$, but $\alpha(I)=s-1$, either by Bezout again
or by Lemma \ref{alphaLemma}. (Similarly, it follows
that $\alpha(I^{(m)})=((m+1)/2)s-1$ whenever $m$ is odd.)
Now by Lemma \ref{postcrit2}, using Lemma \ref{regLem},
it follows that $I^{(3)}\subseteq I^2$ for all $s$, and
by Lemma \ref{postcrit1}, using Lemma \ref{alphaLemma},
it follows that $I^3$ does not contain $I^{(4)}$ for $s>3$
and that $I^4$ does not contain $I^{(6)}$ for $s>4$.
\end{Ex}
| {
"timestamp": "2009-06-24T16:20:24",
"yymm": "0706",
"arxiv_id": "0706.3707",
"language": "en",
"url": "https://arxiv.org/abs/0706.3707",
"abstract": "We develop tools to study the problem of containment of symbolic powers $I^{(m)}$ in powers $I^r$ for a homogeneous ideal $I$ in a polynomial ring $k[{\\bf P}^N]$ in $N+1$ variables over an algebraically closed field $k$. We obtain results on the structure of the set of pairs $(r,m)$ such that $I^{(m)}\\subseteq I^r$. As corollaries, we show that $I^2$ contains $I^{(3)}$ whenever $S$ is a finite generic set of points in ${\\bf P}^2$ (thereby giving a partial answer to a question of Huneke), and we show that the containment theorems of Ein-Lazarsfeld-Smith and Hochster-Huneke are optimal for every fixed dimension and codimension.",
"subjects": "Algebraic Geometry (math.AG); Commutative Algebra (math.AC)",
"title": "Comparing powers and symbolic powers of ideals",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517424466174,
"lm_q2_score": 0.8080672158638528,
"lm_q1q2_score": 0.7903315484896282
} |
https://arxiv.org/abs/1601.03304 | Intrinsic random walks in Riemannian and sub-Riemannian geometry via volume sampling | We relate some basic constructions of stochastic analysis to differential geometry, via random walk approximations. We consider walks on both Riemannian and sub-Riemannian manifolds in which the steps consist of travel along either geodesics or integral curves associated to orthonormal frames, and we give particular attention to walks where the choice of step is influenced by a volume on the manifold. A primary motivation is to explore how one can pass, in the parabolic scaling limit, from geodesics, orthonormal frames, and/or volumes to diffusions, and hence their infinitesimal generators, on sub-Riemannian manifolds, which is interesting in light of the fact that there is no completely canonical notion of sub-Laplacian on a general sub-Riemannian manifold. However, even in the Riemannian case, this random walk approach illuminates the geometric significance of Ito and Stratonovich stochastic differential equations as well as the role played by the volume. | \section{Volume sampling as Girsanov-type change-of-measure}\label{a:Girsanov}
In both the geodesic and flow random walks defined in Sections~\ref{s:ito-intro} and~\ref{s:strato-intro}, the probability measure used to select the vector $V=\sum\beta_iV_i$ was the uniform probability measure on the unit sphere with respect to the covariance structure of the $w^i_t$ (which gives an inner product on the vector space of such $V$). In the volume sampling scheme we have introduced for the geodesic random walk with respect to an orthonormal frame on a Riemannian manifold (that is, the volume sampling scheme for the isotropic random walk that approximates Brownian motion), the probability measure on the sphere is replaced by a different probability measure, absolutely continuous with respect to the uniform one. In terms of the random walk, the volume-sampled walk is supported on the same set of paths as the original walk, but with a different probability measure, absolutely continuous with respect to the original. In the scaling limit as $\varepsilon\rightarrow 0$, this change in measure produces a drift in the limiting diffusion, and we recognize this as a Girsanov-type phenomenon. We now take a moment to explore this interpretation in a bit more detail.
The standard finite-dimensional model for Girsanov's theorem, as given at the beginning of \cite[Section 3.5]{KaratzasShreve}, is as follows. With slightly loose notation, we let $N(0,\mathbb{I}_n)$ denote the centered (multivariate) normal distribution on $\mathbb{R}^n$ with covariance structure given by the identity matrix (that is, the $n$ Euclidean coordinates are i.i.d.\ normals with expectation 0 and variance 1). Let $Z$ be a random variable (on some probability space with probability denoted $P$) with distribution $N(0,\mathbb{I}_n)$, and let $v\in\mathbb{R}^n$. We have a new probability measure $\tilde{P}$, absolutely continuous with respect to $P$, given by
\[
\tilde{P}(d\lambda) = e^{\ip{v}{Z(\lambda)}-\frac{1}{2}\ip{v}{v}} P(d\lambda) ,
\]
where $\ip{\cdot}{\cdot}$ is the standard inner product on $\mathbb{R}^n$.
Then the random variable $Z-v$ has distribution $N(0,\mathbb{I}_n)$ under $\tilde{P}$. So adjusting the measure in this way compensates for the translation, which equivalently means that one can create a translation by adjusting the measure. The infinite-dimensional version of this (for Brownian motion on Euclidean space) is Girsanov's theorem.
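Indeed, this is just completing the square in the Gaussian density:
\[
e^{\ip{v}{z}-\frac{1}{2}\ip{v}{v}}\,(2\pi)^{-n/2}e^{-\frac{1}{2}\ip{z}{z}}
= (2\pi)^{-n/2}e^{-\frac{1}{2}\ip{z-v}{z-v}},
\]
which is the density of $N(v,\mathbb{I}_n)$, so that $Z$ has distribution $N(v,\mathbb{I}_n)$, equivalently $Z-v$ has distribution $N(0,\mathbb{I}_n)$, under $\tilde{P}$.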
Next, we rephrase this. Another way of determining $\tilde{P}$ is to say that it comes from adjusting the ``likelihood ratios'' for $P$ by
\begin{equation}\label{Eqn:Girsanov}
\frac{\tilde{P}(d\lambda_2)}{\tilde{P}(d\lambda_1)} = e^{\ip{v}{Z(\lambda_2)}-\ip{v}{Z(\lambda_1)}}
\frac{P(d\lambda_2)}{P(d\lambda_1)} .
\end{equation}
This accounts for the $ e^{\ip{v}{Z(\lambda)}}$ in the Radon-Nikodym derivative above, which is the important part; the $e^{-\frac{1}{2}\ip{v}{v}}$ is just the normalizing constant making $\tilde{P}$ a probability measure.
For the isotropic random walk, we have that $P$ is $\mu_q^0$, the uniform probability measure on the sphere of radius $\sqrt{n}$ in $T_qM$, with respect to the Riemannian inner product. (Here we choose to normalize the sphere to include the $\sqrt{n}$ factor in order to make the connection to Girsanov's theorem clearer.) Of course, $\mu_q^0$ is not a multivariate normal on $T_qM \simeq \mathbb{R}^n$. However, $\mu_q^0$ has expectation $0$ and covariance matrix $\mathbb{I}_n$, so that $\mu_q^0$ has the same first two moments as $N(0,\mathbb{I}_n)$. In light of Donsker's invariance principle, it is not surprising that ``getting the first two moments right'' is enough. Now $\mu_q^{c\varepsilon}$ is absolutely continuous with respect to $\mu_q^0$, and, as we have seen in the proof of Theorem \ref{t:limit-Riemannian}, the relationship is given by
\[\begin{split}
\frac{\mu_q^{c\varepsilon}\left( d\lambda_2\right)}{\mu_q^{c\varepsilon}\left( d\lambda_1\right)} &= \frac{\frac{1}{\mathrm{vol}(\mathbb{S}^{n-1})}\left( 1+c\varepsilon
\ip{\grad(h)}{\lambda_2}+O\left(\varepsilon^2\right) \right)}{\frac{1}{\mathrm{vol}(\mathbb{S}^{n-1})}\left( 1+c\varepsilon
\ip{\grad(h)}{\lambda_1}+O\left(\varepsilon^2\right) \right)} \cdot \frac{\mu_q^{0}\left( d\lambda_2\right)}{\mu_q^{0}\left( d\lambda_1\right)} \\
&= e^{c\varepsilon\left( \ip{\grad(h)}{\lambda_2} - \ip{\grad(h)}{\lambda_1}\right)+O\left( \varepsilon^2\right)}
\cdot \frac{\mu_q^{0}\left( d\lambda_2\right)}{\mu_q^{0}\left( d\lambda_1\right)} .
\end{split}\]
Note that, as we have developed things, the random variable that has distribution $\mu_q^{0}$, which is analogous to $Z$ above, is implicitly just the identity on the sphere. (Also, $\mu_q^{c\varepsilon}$ is a probability measure by construction, so there's no need for a normalizing factor, partially explaining our focus on the likelihood ratio.)
Comparing this to \eqref{Eqn:Girsanov}, we see that the role of $v$ is played by the quantity $c\varepsilon\grad(h)+O(\varepsilon^2)$. To take into account the parabolic scaling limit (and, at this stage, also to take into account the analysts' normalization), note that this non-centered measure on the sphere of radius $\sqrt{n}$ (namely $\mu_q^{c\varepsilon}$) is mapped onto geodesics of length $\varepsilon$, and that this step takes place in time $\delta = \varepsilon^2/(2n)$, so that the difference quotient (expected spatial displacement over elapsed time) is $2c\grad(h)+O(\varepsilon)$. Thus, in the limit as $\varepsilon\rightarrow 0$, we expect an infinitesimal translation given by the tangent vector $2c\grad(h)$, which is exactly what we see in Theorem \ref{t:limit-Riemannian} (appearing as a first-order differential operator). Namely, the random walk corresponding to $\mu_q^{0}$ has infinitesimal generator $\Delta_{\mathcal{R}}$ in the limit, while the random walk corresponding to $\mu_q^{c\varepsilon}$ has infinitesimal generator $\Delta_{\mathcal{R}}+2c\grad(h)$ in the limit. So this volume sampling gives a natural random walk version of the Girsanov change of measure.
\section{Convergence of random walks}\label{s:convergence}
We recall some preliminaries in sub-Riemannian geometry (see \cite{nostrolibro}, but also \cite{montgomerybook,riffordbook,Jea-2014}).
\begin{definition}
A \emph{(sub-)Riemannian manifold} is a triple $(M,\distr,\metr)$ where $M$ is a smooth, connected manifold, $\distr \subset TM$ is a vector distribution of constant rank $k \leq n$ and $\g$ is a smooth scalar product on $\distr$. We assume that $\distr$ satisfies \emph{H\"ormander's condition}
\begin{equation}
\spn \{ [X_{i_1},[X_{i_2},[\ldots,[X_{i_{m-1}},X_{i_m}]]]] \mid m\geq 0, \quad X_{i_\ell} \in \Gamma(\distr)\}_q = T_q M, \qquad \forall q \in M.
\end{equation}
\end{definition}
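The standard example to keep in mind is the Heisenberg group: $M=\mathbb{R}^3$ with coordinates $(x,y,z)$ and $\distr=\spn\{X_1,X_2\}$, where
\begin{equation}
X_1=\partial_x-\frac{y}{2}\,\partial_z,\qquad X_2=\partial_y+\frac{x}{2}\,\partial_z,\qquad [X_1,X_2]=\partial_z,
\end{equation}
and $X_1,X_2$ are declared orthonormal. Here H\"ormander's condition holds with a single bracket, and $k=2<n=3$, so the structure is genuinely sub-Riemannian.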
By the Chow-Rashevskii theorem, any two points in $M$ can be joined by a Lipschitz continuous curve whose velocity is a.e.\ in $\distr$. We call such curves \emph{horizontal}. Horizontal curves $\gamma : I \to M$ have a well-defined length, given by
\begin{equation}
\ell(\gamma) = \int_I \|\dot\gamma(t)\|\, dt,
\end{equation}
where $\| \cdot \|$ is the norm induced by $\g$. The \emph{sub-Riemannian distance} between $p,q \in M$ is
\begin{equation}
d(p,q) = \inf\{ \ell(\gamma) \mid \gamma \text{ horizontal curve connecting $q$ with $p$} \}.
\end{equation}
This distance turns $(M,\distr,\metr)$ into a metric space with the same topology as $M$. A sub-Riemannian manifold is \emph{complete} if $(M,d)$ is complete as a metric space. In the following, unless stated otherwise, we always assume that (sub-)Riemannian structures under consideration are complete.
(Sub-)Riemannian structures include Riemannian ones, when $k=n$. We use the term ``sub-Riemannian'' to denote structures that are not Riemannian, i.e.\ $k < n$.
\begin{definition}\label{Def:PathSpace}
If $M$ is a (sub-)Riemannian manifold, (following the basic construction of Stroock and Varadhan \cite{SAndV}) let $\Omega(M)$ be the space of continuous paths from $[0,\infty)$ to $M$. If $\gamma\in\Omega(M)$ (with $\gamma(t)$ giving the position of the path at time $t$), then the metric on $M$ induces a metric $d_{\Omega(M)}$ on $\Omega(M)$ by
\[
d_{\Omega(M)}\left( \gamma^1,\gamma^2\right)= \sum_{i=1}^{\infty}\frac{1}{2^i}
\frac{\sup_{0\leq t\leq i}d\left( \gamma^1(t),\gamma^2(t)\right)}{1+\sup_{0\leq t\leq i}d\left( \gamma^1(t),\gamma^2(t)\right)}
\]
making $\Omega(M)$ into a Polish space. We give $\Omega(M)$ its Borel $\sigma$-algebra. We are primarily interested in the weak convergence of probability measures on $\Omega(M)$.
\end{definition}
A choice of probability measure $P$ on $\Omega(M)$ determines a continuous, random process on $M$, and (in this section) we will generally denote the random position of the path at time $t$ by $q_t$. Moreover, we will use the measure $P$ and the process $q_t$ interchangeably.
We are interested in what one might call bounded-step-size, parabolically-scaled families of random walks, which, for simplicity, we will just call families of random walks in what follows. We will index our families by a ``spatial parameter'' $\varepsilon>0$ (this will be clearer below), and we let $\delta=\varepsilon^2/(2k)$ be the corresponding time step ($k$ is the rank of $\distr$).
\begin{definition}\label{Def:Walk}
A family of random walks on a (sub-)Riemannian manifold $M$, indexed by $\varepsilon>0$ and starting from $q\in M$, is a family of probability measures $P^\varepsilon_q$ on $\Omega(M)$ with $P^{\varepsilon}_q(q^{\varepsilon}_0=q)=1$ and having the following property. For every $\varepsilon$, and every $\tilde{q}\in M$, there exists a probability measure $\Pi_{\tilde{q}}^{\varepsilon}$ on continuous paths $\gamma:[0,\delta]\rightarrow M$ with $\gamma(0)=\tilde{q}$ such that for every $m=0,1,2,\ldots$, the distribution of $q^{\varepsilon}_{[m\delta,(m+1)\delta]}$ under $P^{\varepsilon}_q$ is given by $\Pi_{q^{\varepsilon}_{m\delta}}^{\varepsilon}$, independently of the position of the path $q^{\varepsilon}_t$ prior to time $m\delta$. Further, there exists some constant $\kappa$, independent of $\tilde{q}$ and $\varepsilon$, such that the length of $\gamma_{[0,\delta]}$ is almost surely less than or equal to $\kappa\varepsilon$ under $\Pi_{\tilde{q}}^{\varepsilon}$. (So the position of the path at times $m\delta$ for $m=0,1,2,\ldots$ is a Markov chain, starting from $q$, with transition probabilities $P_q^{\varepsilon}\left( q^{\varepsilon}_{(m+1)\delta}\in A\mid q^{\varepsilon}_{m\delta}=\tilde{q} \right) = \Pi_{\tilde{q}}^{\varepsilon}\left( \gamma_{\delta}\in A\right)$ for any Borel $A\subset M$.)
\end{definition}
\begin{rmk}\label{r:remfordefinition}
In what follows $\Pi_{\tilde{q}}^{\varepsilon}$ will, in most cases, be supported on paths of length exactly $\varepsilon$ (allowing us to take $\kappa=1$). For example, on a Riemannian manifold, one might choose a direction at $q^{\varepsilon}_{m\delta}$ at random and then follow a geodesic in this direction for length $\varepsilon$ (and in time $\delta$). Alternatively, on a Riemannian manifold with a global orthonormal frame, one might choose a random linear combination of the vectors in the frame, still having length 1, and then flow along this vector field for length $\varepsilon$. In both of these cases, $\Pi_{\tilde{q}}^{\varepsilon}$ is itself built on a probability measure on the unit sphere in $T_{\tilde{q}}M$ according to a kind of scaling by $\varepsilon$. These walks, and variations and sub-Riemannian versions thereof, form the bulk of what we consider, and should be sufficient to illuminate the definition.
While the introduction of the ``next step'' measure $\Pi^{\varepsilon}_{\tilde{q}}$ is suitable for the general definition and accompanying convergence result, it is overkill for the geometrically natural steps that we consider. Instead, we will describe the steps of our random walks in simpler geometric terms (as in the case of choosing a random geodesic segment of length $\varepsilon$ just mentioned), and leave the specification of $\Pi^{\varepsilon}_{\tilde{q}}$ implicit, though in a straightforward way.
\end{rmk}
\begin{rmk}
All of the random walks we consider will be horizontal, in the sense that $\Pi_{\tilde{q}}^{\varepsilon}$ is supported on horizontal curves. (In the Riemannian case, this, of course, is vacuous.) So while the diffusions we will get below as limits of such random walks will not be horizontal insofar as they are supported on paths that are not smooth enough to satisfy the definition of horizontal given above, they nonetheless are limits of horizontal processes.
\end{rmk}
We note that, for some constructions like that of solutions to a Stratonovich SDE, there need not be a metric on $M$, but instead a smooth structure is sufficient. Unfortunately, the machinery of convergence of random walks in Theorem \ref{t:convergence} below is formulated in terms of metrics, and thus we will generally proceed by choosing some (Riemannian or sub-Riemannian) metric on $M$ when desired. However, note that if $M$ is compact, any two Riemannian metrics induce Lipschitz-equivalent distances on $M$, and thus the induced distances on $\Omega(M)$ are comparable. This means that the resulting topologies on $\Omega(M)$ are the same, and thus statements about the convergence of probability measures on $\Omega(M)$ (which is how we formalize the convergence of random walks) do not depend on what metric on $M$ is chosen. This suggests that a more general framework could be developed, avoiding the need to introduce a metric on $M$ when the smooth structure should suffice, but such an approach will not be pursued here.
\begin{definition}\label{d:operator}
Let $\varepsilon>0$. To the family of random walks $q_t^\varepsilon$ (in the sense of Definition \ref{Def:Walk}, and with the above notation), we associate the family of smooth operators on $C^\infty(M)$
\begin{equation}
(L^\varepsilon\phi)(q) := \frac{1}{\delta}\mathbb{E}[\phi(q_{\delta}^\varepsilon) - \phi(q)\mid q_0^\varepsilon =q], \qquad \forall q \in M.
\end{equation}
\end{definition}
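As a sanity check on the normalization, consider the Euclidean model: take $M=\mathbb{R}^n$ with the flat metric (so $k=n$), and let each step move from $q$ to $q+\varepsilon u$ with $u$ uniform on the unit sphere, so that $\delta=\varepsilon^2/(2n)$. Since odd moments of $u$ vanish and $\mathbb{E}[u_iu_j]=\delta_{ij}/n$, a Taylor expansion gives
\begin{equation}
(L^\varepsilon\phi)(q)=\frac{2n}{\varepsilon^2}\left(\frac{\varepsilon^2}{2}\sum_{i,j}\mathbb{E}[u_iu_j]\,\partial^2_{ij}\phi(q)+O(\varepsilon^4)\right)=\Delta\phi(q)+O(\varepsilon^2),
\end{equation}
so the factor $2k$ in $\delta$ is exactly what produces the sum-of-squares operator without a factor of $1/2$.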
\begin{definition}
Let $L$ be a differential operator on $M$. We say that a family $L^\varepsilon$ of differential operators converges to $L$ if for any $\phi \in C^\infty(M)$ we have $L^\varepsilon \phi \to L \phi$ uniformly on compact sets. In this case, we write $L^\varepsilon \to L$.
\end{definition}
Let $L$ be a smooth second-order differential operator with no zeroth-order term. If, in addition, the principal symbol of $L$ is non-negative definite, then there is a unique diffusion associated to $L$ starting from any point $q\in M$, at least up until a possible explosion time. However, since our analysis is fundamentally local, we will assume that the diffusion does not explode. In that case, this diffusion is given by the measure $P_q$ on $\Omega(M)$ that solves the martingale problem for $L$, so that
\[
\phi(q_t) - \int_0^t L\phi(q_s)\, ds
\]
is a martingale under $P_q$ for any smooth, compactly supported $\phi$, and $P_q\left( q_0=q\right)=1$.
\begin{theorem}\label{t:convergence}
Let $M$ be a (sub-)Riemannian manifold, let $P^\varepsilon_q$ be the probability measures on $\Omega(M)$ corresponding to a sequence of random walks $q_t^\varepsilon$ (in the sense of Definition \ref{Def:Walk}), with $q^\varepsilon_0 =q$, and let $L^\varepsilon$ be the associated family of operators. Suppose that $L^\varepsilon \to L^0$ (in the sense of Definition \ref{d:operator}), where $L^0$ is a smooth second-order operator with non-negative definite principal symbol and without zeroth-order term. Further, suppose that the diffusion generated by $L^0$, which we call $q_t^0$, does not explode, and let $P_q^0$ be the corresponding probability measure on $\Omega(M)$ starting from $q$. Then $P^\varepsilon_q \to P_q^0$ as $\varepsilon \to 0$, in the sense of weak convergence of probability measures (see Definition~\ref{Def:PathSpace}).
\end{theorem}
\begin{proof}
The theorem is a special case of \cite[Thm. 70, Rmk. 26]{OurUrPaper}. First note that a random walk $q_t^\varepsilon$ as described here corresponds to a random walk $X_{t}^h$ in the notation of \cite{OurUrPaper}, with $h = \varepsilon^2/(2k)$, and with each step given by a continuous curve (which may or may not be a geodesic), as addressed in Remark 26. By construction, every random walk in our class has the property that, during any step, the path never goes more than distance $\kappa\varepsilon$ from the starting point of the step, for some fixed $\kappa>0$; this immediately shows that every walk in our class satisfies Eq.~(19) of \cite{OurUrPaper}.
Then all the assumptions of \cite[Thm. 70]{OurUrPaper} are satisfied and the conclusion follows, namely $P^\varepsilon_q \to P_q^0$ as $\varepsilon \to 0$.
\end{proof}
\section{Introduction}
Consider a Riemannian or a sub-Riemannian manifold $M$ and assume that $\{X_1,\ldots,X_k\}$ is a global orthonormal frame. It is well known that, under mild hypotheses, the solution $q_t$ to the stochastic differential equation in Stratonovich sense
\begin{equation}
dq_t=\sum_{i=1}^k X_i(q_t) \circ \left( \sqrt{2}\, dw^i_t\right)
\end{equation}
produces a solution to the heat-like equation
\begin{equation} \label{eq-strat}
\partial_t\varphi=\sum_{i=1}^k X_i^2 \varphi
\end{equation}
by taking $\varphi_t(q) = \mathbb{E}\left[ \varphi_0(q_t)| q_0=q\right]$, where $\varphi_0$ gives the initial condition. (Here the driving processes $w^i_t$ are independent real Brownian motions, and the $\sqrt{2}$ factor is there so that the resulting sum-of-squares operator doesn't need a $1/2$, consistent with the convention favored by analysts.) One can interpret \eqref{eq-strat} as the equation satisfied by a random walk with parabolic scaling following the integral curves of the vector fields $X_1,\ldots,X_k$, when the step of the walk tends to zero. This construction is very general (it works in both the Riemannian and the sub-Riemannian case) and does not use any notion of volume on the manifold.\footnote{In the Riemannian case avoiding the use of a volume is not crucial, since an intrinsic volume (the Riemannian one) can always be defined. But in the sub-Riemannian case, how to define an intrinsic volume is a subtle question, as discussed below.}
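To make this random walk interpretation concrete in the simplest sub-Riemannian example, the following Python sketch runs such a walk on the Heisenberg group; the frame $X_1=\partial_x-\tfrac{y}{2}\partial_z$, $X_2=\partial_y+\tfrac{x}{2}\partial_z$ and the parabolic time step $\delta=\varepsilon^2/(2k)$ are assumptions made only for this illustration.
\begin{verbatim}
import numpy as np

def flow_step(q, eps, rng):
    # One step of the horizontal walk on the Heisenberg group H^3, frame
    # X1 = d_x - (y/2) d_z, X2 = d_y + (x/2) d_z: choose u uniformly on S^1
    # and flow along u1*X1 + u2*X2 for time eps. Along this flow
    # z' = (x u2 - y u1)/2 is constant, so the step is exact.
    theta = rng.uniform(0.0, 2.0 * np.pi)
    u1, u2 = np.cos(theta), np.sin(theta)
    x, y, z = q
    return np.array([x + eps * u1, y + eps * u2,
                     z + 0.5 * eps * (x * u2 - y * u1)])

def walk(q0, eps, T, rng):
    # Parabolic scaling: each step of length eps takes time eps^2/(2k), k = 2.
    delta = eps ** 2 / 4.0
    q = np.array(q0, dtype=float)
    for _ in range(int(T / delta)):
        q = flow_step(q, eps, rng)
    return q

rng = np.random.default_rng(1)
# As eps -> 0, the law of walk(., eps, T) approximates the diffusion
# generated by X1^2 + X2^2 at time T.
print(walk([0.0, 0.0, 0.0], eps=0.05, T=1.0, rng=rng))
\end{verbatim}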
However, the operator $\sum_{i=1}^k X_i^2$ is not completely satisfactory for describing a diffusion process, for the following reasons:
\begin{itemize}
\item the construction works only if a global orthonormal frame $X_1,\ldots,X_k$ exists;
\item it is not intrinsic in the sense that it depends on the choice of orthonormal frame;
\item it is not essentially self-adjoint w.r.t.\ a natural volume, and one cannot guarantee a priori a ``good'' evolution in $L^2$ (existence and uniqueness of a contraction semigroup, etc.).
\end{itemize}
In the Riemannian context, a heat operator that is globally well defined, frame independent, and essentially self-adjoint w.r.t.\ the Riemannian volume (at least under the hypothesis of completeness) is the Laplace-Beltrami operator $\Delta=\div\circ\grad$. The heat equation
\begin{equation}
\partial_t \varphi=\Delta \varphi
\end{equation}
has an associated diffusion, namely Brownian motion (with a time change by a factor of 2), given by the solution of the stochastic differential equation
\begin{equation}
dq_t=\sum_{i=1}^k X_i(q_t) \left(\sqrt{2}\, dw^i_t\right) \qquad \text{(in this case $k=n$ is equal to the dimension of $M$)}
\end{equation}
in the Ito sense (for instance using the Bismut construction on the bundle of orthonormal frames \cite{bismut} or Emery's approach \cite{emery}).
Also, this equation can be interpreted as the equation satisfied by the limit of a random walk that follows geodesics rather than integral curves. The geodesics starting from a point are weighted with the uniform probability given by the Riemannian metric on the tangent space at that point.
The purpose of this paper is to extend this more invariant construction of random walks to the sub-Riemannian context, to obtain a definition of an intrinsic Laplacian in sub-Riemannian geometry and to compare it with the divergence of the horizontal gradient.
Determining the appropriate random walk is not obvious, for several reasons. First, in sub-Riemannian geometry geodesics starting from a given point are always parameterized by a non-compact subset of the cotangent space at the point, on which there is no canonical ``uniform'' probability measure. Second, in sub-Riemannian geometry, for every $\varepsilon$ there exist geodesics of length $\varepsilon$ that have already lost their optimality, and one has to choose between a construction involving all geodesics (including the non-optimal ones) or only those that are optimal. Third, one should decide what to do with abnormal extremals. Finally, there is the problem of defining an intrinsic volume in sub-Riemannian geometry, with which to compute the divergence.
This is not the first time that this problem has been attacked. In \cite{GordLae,OurUrPaper,Grong1,Grong2}, the authors compare the divergence of the gradient with the Laplacian corresponding to a random walk induced by a splitting of the cotangent bundle (see \cite[Section 1.4]{OurUrPaper} for a detailed summary of this literature). In this paper we take another approach, trying to induce a measure on the space of geodesics from the ambient space by ``sampling'' the volume at a point a fraction $c$ of the way along the geodesic, see Section~\ref{s:riem-RW}. In a broader context, discrete approximations to diffusions have a long history, with, for example, Wong-Zakai approximations being widespread. The present paper follows in a related tradition, going back to \cite{PinksyRiem}, of developing geometrically meaningful approximations to diffusions on Riemannian or sub-Riemannian manifolds, in part with the aim of elucidating the connection between the diffusion and more fundamental geometric features of the manifold and/or the dynamics of which the diffusion is an idealization. This direction has seen a fair amount of activity lately; besides the papers on random walks arising from splittings in sub-Riemannian geometry listed above, we mention the kinetic Brownian motion of \cite{IsmaelKinetic} (which gives a family of $C^1$ approximations to Riemannian Brownian motion with random velocity vector) and the homogenization of perturbations of the geodesic flow discussed in \cite{XueMei} (which also gives an approximation to Riemannian Brownian motion).
This idea works very well in the Riemannian case, permitting a random walk interpretation of the divergence of the gradient also when the divergence is computed w.r.t.\ an arbitrary volume. More precisely, the limiting diffusion is generated by the divergence of the gradient if and only if at least one of the following two conditions is satisfied: (i) one is using the Riemannian volume; (ii) the parameter $c$ used to realize the ``volume sampling'' is equal to $1/2$, reminiscent of the Stratonovich integral. From these results one also recognizes a particular role played by the Riemannian volume (see Section~\ref{s:riem-RW} and Corollary~\ref{c:self-adj-Riem}). (In this Riemannian case, $c=0$ corresponds to no sampling of the volume, and the limiting diffusion is just Brownian motion as above.)
In the sub-Riemannian case the picture appears richer and more complicated. Even for contact Carnot groups (see Section~\ref{s:SR-RW}) the volume sampling procedure is non-trivial, as one requires an explicit knowledge of the exponential map. For Heisenberg groups, one gets a result essentially identical to the Riemannian case, i.e.\ the limiting diffusion is generated by the divergence of the horizontal gradient if and only if at least one of the following is satisfied: (i) one is using the Popp volume; (ii) the parameter $c=1/2$. For general contact Carnot groups, the results are more surprising, since the generator of the limiting diffusion is not the expected divergence of the horizontal gradient (even the second-order terms are not the expected ones); however, the generator will be the divergence of the horizontal gradient with respect to a different metric on the same distribution, as shown in Section \ref{Sect:IntrinsicFormula}. Moreover, the result just described applies to two somewhat different notions of a geodesic random walk, one in which we walk along all geodesics, and one in which we walk only along minimizing ones. An important difference between these two approaches is that only the walk along minimizing geodesics gives a non-zero operator in the limit as the volume sampling parameter $c$ goes to 0 (see Section \ref{s:altconstr}). Moreover, this non-zero limiting operator turns out to be independent of the volume, so that it becomes a possible, if slightly unusual (the principal symbol is not the obvious one), candidate for a choice of intrinsic sub-Laplacian. This may be an interesting direction to explore.
Motivated by these unexpected results, and by the difficulty of manipulating the exponential map in more general sub-Riemannian cases, in Section \ref{s:RW-flow} we try another construction in the general contact case (which we call the flow random walk with volume sampling), inspired by classical Stratonovich integration and also including a volume sampling procedure. This construction, a priori not intrinsic (it depends on a choice of vector fields), gives rise in the limit to an intrinsic operator, showing the particular role played by the Popp volume. This construction also gives some interesting hints in the Riemannian case; unfortunately, it cannot be easily generalized to situations of higher step or corank.
On the stochastic side, in Section~\ref{s:convergence}, we introduce a general scheme for the convergence of random walks, for a class sufficiently general to include all our constructions, based on the results of \cite{OurUrPaper}. Further, in the process of developing the random walks just described, we naturally obtain an intuitively appealing description of the solution to a Stratonovich SDE on a manifold as a randomized flow along the vector fields $V_1,\ldots,V_k$ (which determine the integrand), while the solution to an Ito SDE is a randomized geodesic tangent to the vector fields $V_1,\ldots,V_k$ (as already outlined above for an orthonormal frame). This difference corresponds to the infinitesimal generator being based on second Lie derivatives versus second covariant derivatives. Of course, such an approximation procedure by random walks yields nothing about the diffusions solving these SDEs that is not contained in standard stochastic calculus, but the explicit connection to important geometric objects seems compelling and something that has not been succinctly described before, to the best of our knowledge. Further, it is then natural to round out this perspective on the basic objects of stochastic calculus on manifolds by highlighting the way in which the volume sampling procedure can be viewed as a random walk approximation of the Girsanov change of measure, at least in the Riemannian case (see Appendix~\ref{a:Girsanov}).
For the benefit of exposition, the proofs are collected in Section~\ref{s:proofs}. For the reader's convenience, we collect the results for different structures in Table~\ref{t:table1}; see the appropriate sections for more details and explanation of the notation.
\begin{landscape}
\begin{table}[H]
\centering
\setlength\extrarowheight{1pt}
\begin{tabular}{|m{23mm}||m{81mm}|m{86mm}|} \hline
\multicolumn{1}{|c||}{Structure} & \multicolumn{1}{c|}{ \textbf{Geodesic RW with volume sampling}} & \multicolumn{1}{c|}{ \textbf{Flow RW with volume sampling}} \\ \hline
Riemannian & \begin{tabular}{l} $L_{\omega,c} = \Delta_\omega + (2c-1)\grad(h) \phantom{\sum_{i=1}^n}$ \\[.2cm] $c=\tfrac{1}{2}$ or $h = \text{const}$ : $L_{\omega,c} = \Delta_\omega$ \\[.2cm] $c=0$ : $L_{\omega,0} = \displaystyle \lim_{c \to 0} L_{\omega,c} = \Delta_{\mathcal{R}}$ \\[.2cm] (see Theorem~\ref{t:limit-Riemannian})\end{tabular} & \begin{tabular}{l} $L_{\omega,c} = \Delta_\omega + c \grad(h) + (c-1) \sum_{i=1}^{n} \dive_\omega(X_i) X_i$ \\[.2cm] $c=1$ and $h = \text{const}$ : $L_{\omega,1} = \Delta_\omega$ \\[.2cm] $c= 0$ : $L_{\omega,0} = {\displaystyle \lim_{c \to 0} L_{\omega,c}} = \sum_{i=1}^{n} X_i^2$ \\[.2cm] (see Theorem~\ref{t:limit-Riemannian-fake}) \end{tabular} \\[.2cm] \hline
Heisenberg group $\mathbb{H}_{2d+1}$ & \begin{tabular}{l} $ L_{\omega,c} = \sigma(c) \left(\Delta_{\omega}+ (2c-1) \grad(h) \right) \phantom{\sum_{i=1}^n} \phantom{\sum_{i=1}^n}$ \\[.2cm] $c=\tfrac{1}{2}$ or $h =\text{const}$ : $L_{\omega,c} = \sigma(c) \Delta_\omega$ \\[.2cm] $c \to 0$ : $ \displaystyle\lim_{c \to 0} L_{\omega,c} = 0$ \quad ($\star$) \\[.2cm] (see Theorem~\ref{t:limit-contact-Heis}) \end{tabular} & \begin{center}\begin{tabular}{c} (see below) \end{tabular}\end{center} \\[.2cm] \hline
Contact Carnot group & \begin{tabular}{l} $L_{\omega,c} = \div_{\omega}\circ\grad'+ (2c-1) \grad'(h) \phantom{\sum_{i=1}^n}$ \\[.2cm] $c=\tfrac{1}{2}$ or $h =\text{const}$ : $L_{\omega,c} = \div_{\omega} \circ \grad'$ \\[.2cm] $c \to 0$ : $ \displaystyle \lim_{c \to 0} L_{\omega,c} = 0$ \quad ($\star$) \\[.2cm] (see Theorem~\ref{t:limit-contact-carnot} and Corollary~\ref{cor:intrinsicformula}) \end{tabular} & \begin{center}\begin{tabular}{c} (see below) \end{tabular}\end{center} \\[.2cm] \hline
General contact & \begin{center}\begin{tabular}{c} Open problem \end{tabular}\end{center} & \begin{tabular}{l} $ L_{\omega,c} = \Delta_\omega + c \grad(h)+ (c-1) \sum_{i=1}^{k} \dive_\omega(X_i) X_i$ \\[.2cm] $c=1$ and $h = \text{const}$ : $L_{\omega,1} = \Delta_\omega$ \\[.2cm] $c= 0$ : $L_{\omega,0} = { \displaystyle \lim_{c \to 0} L_{\omega,c}} = \sum_{i=1}^{k} X_i^2$ \\[.2cm] (see Theorem~\ref{t:limit-contact-fake}) \end{tabular} \\[.2cm] \hline
\end{tabular}
\caption{In each cell, $L_{\omega,c}$ is the generator of the limit diffusion associated with the corresponding construction. Here $c \in [0,1]$ is the ratio of the volume sampling, $n= \dim M$ and $k = \rank \distr$. (i) In the Riemannian case $\omega = e^h \mathcal{R}$, where $\mathcal{R}$ is the Riemannian volume. (ii) In the sub-Riemannian case $\omega = e^h \mathcal{P}$, where $\mathcal{P}$ is Popp volume. (iii) Recall that $\Delta_\omega = \dive_\omega \circ \grad$, and is essentially self-adjoint in $L^2(M,\omega)$ if $M$ is complete. (iv) $X_1,\ldots,X_k$ is a local orthonormal frame ($k=n$ in the Riemannian case). (v) For the definition of the constant $\sigma(c)$, see the appropriate theorem. (vi) $\grad'$ is the gradient computed w.r.t.\ a modified sub-Riemannian structure on the same distribution (see Section~\ref{Sect:IntrinsicFormula}). (vii) The case of $\mathbb{H}_{2d+1}$ is a particular case of contact Carnot groups, where $\grad' = \sigma(c) \grad$. ($\star$) See Section~\ref{s:altconstr} for an alternative construction where one walks only along minimizing geodesics and which, in the limit for $c \to 0$, gives a non-zero operator.} \label{t:table1}
\end{table}
\end{landscape}
\section{Proof of the results}\label{s:proofs}
\subsection{Proof of Theorem \ref{t:limit-Riemannian}}
Let $\varepsilon \leq \varepsilon_0$ and $q \in M$. Fix normal coordinates $(x_1,\ldots,x_n)$ on a neighborhood of $q$. In these coordinates, arc-length parametrized geodesics starting at $q$ are the straight lines $\varepsilon \mapsto \varepsilon v$, with $v \in S_q M \simeq \mathbb{S}^{n-1}$. In particular
\begin{equation}
\phi(\exp_q(\varepsilon,v)) - \phi(q) = \varepsilon \sum_{i=1}^n v_i \partial_i \phi + \frac{1}{2} \varepsilon^2 \sum_{i,j=1}^n v_i v_j \partial_{ij}^2 \phi + \varepsilon^3 O_q,
\end{equation}
where all derivatives are computed at $q$. The term $O_q$ denotes a remainder term which is uniformly bounded on any compact set $K \subset M$ by a constant $|O_q| \leq M_K$. When $\omega = \mathcal{R}$ is the Riemannian volume, well-known asymptotics (see, for instance, \cite{GallotLafontaine}) gives
\begin{equation}
\mu_q^{c\varepsilon}(v) = (1 +\varepsilon^2 O_q) d\Omega,
\end{equation}
where $d\Omega$ is the normalized euclidean measure on $\mathbb{S}^{n-1}$. When $\omega = e^h\mathcal{R}$, the above formula is multiplied by a factor $e^{h(\exp_q(c\varepsilon,v))}$, and taking into account the normalization we obtain
\begin{equation}
\mu_q^{c\varepsilon}(v) = \left(1 + \varepsilon c \sum_{i=1}^n v_i\partial_i h + \varepsilon^2 O_q\right) d\Omega.
\end{equation}
Then, for the operator $L_{\omega,c}^\varepsilon\phi$, evaluated at $q$, we obtain
\begin{align}
(L_{\omega,c}^\varepsilon \phi )|_q& = \frac{2n}{\varepsilon^2} \int_{S_q M} [\phi(\exp_q(\varepsilon,v)) - \phi(q)] \mu_q^{c\varepsilon}(v) \\
& =\frac{2n}{\varepsilon} \sum_{i=1}^n \partial_i \phi \int_{\mathbb{S}^{n-1}} v_i d \Omega + 2n \sum_{i,j=1}^n \left( c \partial_i h \partial_j\phi +\frac{1}{2} \partial_{ij}^2 \phi \right) \int_{\mathbb{S}^{n-1}} v_j v_i d\Omega + \varepsilon O_q.
\end{align}
The first integral is zero, while $\int_{\mathbb{S}^{n-1}} v_i v_j d\Omega = \delta_{ij}/n$. Thus we have
\begin{equation}
(L_{\omega,c}^\varepsilon \phi)|_q=\sum_{i=1}^n \left[\partial_{ii}^2 \phi + 2c\, (\partial_i h)( \partial_i \phi)\right] + \varepsilon O_q.
\end{equation}
The first term is the Laplace-Beltrami operator applied to $\phi$, written in normal coordinates, while the second term coincides with the action of the derivation $2c \grad(h)$ on $\phi$, evaluated at $q$. Since the l.h.s.\ is invariant under change of coordinates, we have $L_{\omega,c}^\varepsilon \to L_{\omega,c}$, where
\begin{equation}
L_{\omega,c} = \Delta_{\mathcal{R}} + 2c \grad(h),
\end{equation}
and the convergence is uniform on compact sets. The alternative forms of the statement follow from the change of volume formula $\Delta_{e^h \omega} = \Delta_{\omega} + \grad(h)$. The convergence $P^\varepsilon_{\omega,c} \to P_{\omega,c}$ follows from Theorem~\ref{t:convergence}. \hfill $\qed$
\subsection{Proof of Theorem \ref{t:limit-contact-carnot}}
We start with the case $h=0$ and $q=0$. The Hamilton equations for a contact Carnot group are readily solved, and the geodesic with initial covector $(p_x,p_z) \in T_0^*M \simeq \R^{2d} \times \R$ is
\begin{equation}
x(t) = \int_0^t e^{s p_z A} p_x ds, \qquad z(t) = -\frac{1}{2} \int_0^t \dot{x}^*(s) A x(s) ds.
\end{equation}
It is convenient to split $p_x$ as $p_x = (p_x^1,\ldots,p_x^d)$, with $p_x^i = (p_{x_{2i-1}},p_{x_{2i}}) \in \R^2$ the projection of $p_x$ on the real eigenspace of $A$ corresponding to the singular value $\alpha_i$. We get
\begin{equation}\label{eq:expcorank1}
\exp_0(t;p_x,p_z) = \begin{pmatrix} B(t;\alpha_1 p_z) p^1_x \\
\vdots \\
B(t;\alpha_d p_z) p^d_x \\
\sum_{i=1}^d b(t;\alpha_i p_z) \alpha_i \|p_x^i\|^2
\end{pmatrix},
\end{equation}
where
\begin{equation}
B(t; y ):=\frac{\sin(t y)}{y} \mathbb{I} + \frac{\cos(t y)-1}{y} J, \qquad b(t;y) := \frac{t y - \sin(ty)}{2 y^2}.
\end{equation}
If $p_z =0$, the equations above must be understood in the limit, thus $\exp_0(t;p_x,0) = (tp_x,0)$. The Jacobian determinant is computed in \cite{ABB-Hausdorff} (see also \cite{R-MCP} for the more general case of a corank 1 Carnot group with a notation closer to that of this paper):
\begin{equation}
\det(d_{p_x,p_z}\exp_0(t;\cdot)) = \frac{t^{2d+3}}{4\alpha^2}\sum_{i=1}^d g_i(t p_z) \|p_x^i\|^2,
\end{equation}
where $\alpha = \prod_{i=1}^d \alpha_i$ and
\begin{equation}
g_i(y):= \left( \prod_{j\neq i} \sin\left(\tfrac{\alpha_j y}{2}\right)\right)^2 \frac{\sin\left(\tfrac{\alpha_i y}{2}\right) \left(\tfrac{\alpha_i y}{2} \cos\left(\tfrac{\alpha_i y}{2}\right)- \sin\left(\tfrac{\alpha_i y}{2}\right)\right) }{(y/2)^{2d+2}}.
\end{equation}
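For concreteness, \eqref{eq:expcorank1} is straightforward to implement; the following Python sketch (with $J$ taken, as an assumption of this illustration, to be the standard rotation generator on each $\R^2$ eigenspace, and with the $p_z \to 0$ limits handled explicitly) evaluates the exponential map numerically.
\begin{verbatim}
import numpy as np

# Assumed convention for this sketch: J is the rotation generator on each
# R^2 eigenspace of A.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])

def B(t, y):
    # B(t;y) = sin(ty)/y I + (cos(ty)-1)/y J, with limit t*I as y -> 0.
    if abs(y) < 1e-12:
        return t * np.eye(2)
    return (np.sin(t * y) / y) * np.eye(2) + ((np.cos(t * y) - 1) / y) * J

def b(t, y):
    # b(t;y) = (ty - sin(ty))/(2 y^2), with limit 0 as y -> 0.
    if abs(y) < 1e-12:
        return 0.0
    return (t * y - np.sin(t * y)) / (2 * y ** 2)

def exp0(t, px, pz, alphas):
    # Exponential map (eq:expcorank1): px has shape (d, 2), px[i] being the
    # component on the eigenspace of the singular value alphas[i].
    x = np.concatenate([B(t, a * pz) @ pxi for a, pxi in zip(alphas, px)])
    z = sum(b(t, a * pz) * a * pxi @ pxi for a, pxi in zip(alphas, px))
    return x, z

# Heisenberg group (d = 1, alpha_1 = 1): a unit-speed geodesic at time 1.
print(exp0(1.0, np.array([[1.0, 0.0]]), np.pi, alphas=[1.0]))
\end{verbatim}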
\begin{lemma}\label{l:switch}
For any $\lambda \in T_q^*M$ and $t>0$, we have (up to the normalization)
\begin{equation}
(\exp_q(t;\cdot)^* \circ \iota_{\dot\gamma_\lambda(t)}\omega)(\lambda) = \frac{1}{t} \iota_\lambda \circ (\exp_q(t;\cdot)^* \omega)(\lambda).
\end{equation}
\end{lemma}
\begin{proof}
It follows from the homogeneity property $\exp_q(t;\alpha \lambda) = \exp_q(\alpha t; \lambda)$, for all $\alpha \in \R$:
\begin{equation}
\dot{\gamma}_\lambda(t) = \left.\frac{d}{d\tau}\right|_{\tau = 0} \exp_q(t+\tau;\lambda) = \frac{1}{t} \left.\frac{d}{d\tau}\right|_{\tau = 0} \exp_q(t;(1+\tau)\lambda) = \frac{1}{t} d_\lambda \exp_q(t;\cdot) \lambda,
\end{equation}
where we used the standard identification $T_\lambda(T_q^*M) = T_q^*M$.
\end{proof}
The cylinder is $\cyl_0 = \{(p_x,p_z)\mid \|p_x\|^2 = 1\} \subset T_0^*M$ and $\lambda \simeq p_z \partial_{p_z} + p_x \partial_{p_x}$. The Lebesgue volume is $\mathscr{L}=dx \wedge dz$. By Lemma~\ref{l:switch} and reintroduction of the normalization factor, we obtain that the restriction to $\cyl_0$ of $\mu_0^t$ is
\begin{equation}\label{eq:misuraproof}
\mu_0^t = \frac{1}{N(t)}\sum_{i=1}^d | g_i(p_z t) | \|p_x^i\|^2 | d\Omega \wedge dp_z|,
\end{equation}
where $d\Omega$ is the normalized volume of $\mathbb{S}^{2d-1}$. Observe that each $|g_i| \in L^1(\R)$. Thus
\begin{equation}
N(t) = \sum_{i=1}^d \int_{\mathbb{S}^{2d-1}} \|p_x^i\|^2 d\Omega\int_{\R} dp_z |g_i(p_z t)| = \frac{1}{dt} \sum_{i=1}^d \int_{\R} dy |g_i(y)|.
\end{equation}
To compute $\mathbb{E}[\phi(\exp_q(\varepsilon;\lambda))-\phi(q)]$, we can assume $\phi(q) = 0$. Hence
\begin{align}
\int_{\cyl_0}\phi(\exp_0(\varepsilon;\lambda)) \mu_0^{c\varepsilon}(\lambda) & = \frac{1}{N(c\varepsilon)}\sum_{i=1}^d \int_{\mathbb{S}^{2d-1}} d\Omega \int_{\R} dp_z |g_i(p_z c \varepsilon)|\|p_x^i\|^2 \phi(\exp_0(\varepsilon;p_x,p_z)) \\
& = \frac{c}{\varepsilon N(\varepsilon)}\sum_{i=1}^d \int_{\mathbb{S}^{2d-1}} \|p_x^i\|^2 d\Omega \int_{\R} d y |g_i(cy)|\phi(\exp_0(\varepsilon;p_x,y/\varepsilon)) \\
& = \frac{c d}{\sum_{i=1}^d \int_{\R} |g_i(y)| dy} \sum_{i=1}^d \int_{\mathbb{S}^{2d-1}} \|p_x^i\|^2 d\Omega \int_{\R} dp_z |g_i(c p_z)|\phi(\exp_0(1;\varepsilon p_x,p_z)), \label{eq:directcompt}
\end{align}
where we used the rescaling property of the exponential map. From~\eqref{eq:expcorank1} we get
\begin{equation}
\exp_0(1;\varepsilon p_x,p_z) = \left(\begin{pmatrix}
B(\alpha_1 p_z) & & \\
& \ddots & \\
& & B(\alpha_d p_z)
\end{pmatrix} \varepsilon p_x, \sum_{i=1}^d b(\alpha_i p_z) \alpha_i \|p_x^i\|^2 \varepsilon^2\right),
\end{equation}
where, with a slight abuse of notation
\begin{equation}
B(y)=\frac{\sin(y)}{y} \mathbb{I} + \frac{\cos(y)-1}{y} J, \qquad b(y) = \frac{y - \sin(y)}{2 y^2}.
\end{equation}
We observe here that
\begin{equation}\label{eq:observation}
B(y) B(y)^* = \frac{\sin(y/2)^2}{(y/2)^2} \mathbb{I}.
\end{equation}
It is convenient to rewrite
\begin{equation}
\exp_0(1;\varepsilon p_x,p_z) = (\mathbf{B}(p_z) \varepsilon p_x, \varepsilon^2 p_x^* \mathbf{b}(p_z) p_x),
\end{equation}
where $\mathbf{B}(p_z)$ is a block-diagonal $2d\times 2d$ matrix, whose blocks are $B(\alpha_i p_z)$, and $\mathbf{b}$ is a $2d\times 2d$ diagonal matrix. Notice that $\exp_0(1;\varepsilon p_x,p_z)$ is contained in the compact metric ball of radius $\varepsilon$. Hence, we have
\begin{equation}\label{eq:directcomptexpansion}
\begin{aligned}
\phi(\exp_0(1;\varepsilon p_x,p_z)) & = (\partial_x \phi)(\mathbf{B}(p_z) \varepsilon p_x) + (\partial_z \phi) p_x^* \mathbf{b}(p_z) p_x \varepsilon^2 \\
& \quad + \frac{1}{2} \varepsilon^2 (\mathbf{B}(p_z)p_x)^* (\partial^2_{x} \phi) (\mathbf{B}(p_z) p_x) + \varepsilon^3 R_{(p_x,p_z)}(\varepsilon).
\end{aligned}
\end{equation}
All derivatives are computed at $0$. Let $\varepsilon \leq \varepsilon_0$. A lengthy calculation using the explicit form of the remainder (and Hamilton's equations) shows that the remainder term is uniformly bounded (i.e.\ independently of $\varepsilon$) by a polynomial of order two in $p_z$, that is $|R_{(p_x,p_z)}(\varepsilon)| \leq A + B |p_z| + Cp_z^2$, where the constants depend on the derivatives of $\phi$ (up to order three) on the compact ball of radius $\varepsilon_0$ centered at the origin. Plugging~\eqref{eq:directcomptexpansion} back in~\eqref{eq:directcompt}, we observe that the term proportional to
\begin{equation}
\int_{\mathbb{S}^{2d-1}} \|p_x^i\|^2 d\Omega \int_{\R} dp_z |g_i(c p_z)| (\partial_x \phi) \mathbf{B}(p_z) \varepsilon p_x
\end{equation}
vanishes, as the integral of any odd-degree monomial in $p_x$ on the sphere is zero. Furthermore, the term proportional to
\begin{equation}
\int_{\mathbb{S}^{2d-1}} \|p_x^i\|^2 d\Omega \int_{\R} dp_z |g_i(c p_z)| (\partial_z \phi) p_x^* \mathbf{b}(p_z) p_x \varepsilon^2
\end{equation}
vanishes, as the integrand is an odd function of $p_z$. The remaining second-order (in $\varepsilon$) term is
\begin{equation}\label{eq:remainingterm}
\frac{cd}{\sum_{i=1}^d \int_{\R} |g_i(y)| dy} \sum_{i=1}^d \int_{\mathbb{S}^{2d-1}} \|p_x^i\|^2 d\Omega \int_{\R} dp_z |g_i(c p_z)| \frac{1}{2} \varepsilon^2 (\mathbf{B}(p_z)p_x)^* (\partial^2_{x} \phi) (\mathbf{B}(p_z) p_x).
\end{equation}
If all the $\alpha_i$ are equal, then all $g_i = g$, and \eqref{eq:remainingterm} simplifies, since $\sum_{i=1}^d \|p_x^i\|^2 = \|p_x\|^2 = 1$. In this case we are left with a simple average of a quadratic form on $\mathbb{S}^{2d-1}$. When the $\alpha_i$ are distinct, we need the following results.
\begin{lemma}[see~\cite{Folland}]\label{l:intpolsphere}
Let $P(x) = x_1^{a_1} \cdots x_n^{a_n}$ be a monomial in $\R^n$, with $a_1,\ldots,a_n \in \{0,1,2,\ldots\}$. Set $b_i:= \tfrac{1}{2}(a_i + 1)$. Then
\begin{equation}
\int_{\mathbb{S}^{n-1}} P(x) d\Omega = \frac{\Gamma(n/2)}{2 \pi^{n/2}} \begin{cases}
0 & \text{if some $a_j$ is odd}, \\
\frac{2 \Gamma(b_1)\Gamma(b_2)\cdots \Gamma(b_n)}{\Gamma(b_1+ b_2 + \cdots + b_n)} & \text{if all $a_j$ are even},
\end{cases}
\end{equation}
where $d\Omega$ is the normalized measure on the sphere $\mathbb{S}^{n-1} \subset \R^n$.
\end{lemma}
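Although not needed for the proofs, Lemma~\ref{l:intpolsphere} is easy to sanity-check numerically against Monte Carlo integration on the sphere; a Python sketch:
\begin{verbatim}
import numpy as np
from math import gamma, pi

def folland(a):
    # Normalized spherical integral of the monomial prod_i x_i^{a_i}.
    if any(ai % 2 for ai in a):
        return 0.0
    n = len(a)
    b = [(ai + 1) / 2 for ai in a]
    num = 2.0
    for bi in b:
        num *= gamma(bi)
    return gamma(n / 2) / (2 * pi ** (n / 2)) * num / gamma(sum(b))

rng = np.random.default_rng(0)
a = (2, 0, 4, 2)                                     # monomial in R^4
x = rng.normal(size=(1_000_000, len(a)))
x /= np.linalg.norm(x, axis=1, keepdims=True)        # uniform on S^3
mc = np.prod(x ** np.array(a), axis=1).mean()
print(folland(a), mc)                                # agree up to MC error
\end{verbatim}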
\begin{lemma}\label{l:prodquad}
Let $Q(x) = x^* Q x$ and $R(x) = x^* R x$ be two quadratic forms on $\R^n$, such that $QR = RQ$. Then
\begin{equation}
\int_{\mathbb{S}^{n-1}} Q(x)R(x) d\Omega = \frac{2\tr(QR) + \tr(Q) \tr(R)}{n(n+2)}.
\end{equation}
If $R=\mathbb{I}$, we recover the usual formula $\int_{\mathbb{S}^{n-1}} Q d \Omega = \tfrac{1}{n}\tr(Q)$.
\end{lemma}
\begin{proof}
Up to an orthogonal transformation, we can assume that $Q$ and $R$ are diagonal. Hence (for brevity we omit the domain of integration and the measure),
\begin{equation}
\int Q(x)R(x) = \sum_{i,j=1}^n Q_{ii} R_{jj} \int x_i^2 x_j^2.
\end{equation}
By Lemma~\ref{l:intpolsphere}, we have
\begin{equation}
\int x_i^2 x_j^2 = \begin{cases}
\frac{3}{n(n+2)} & i = j, \\
\frac{1}{n(n+2)} & i \neq j.
\end{cases}
\end{equation}
Thus
\begin{align}
\int Q(x)R(x) & = \sum_{i,j=1}^n Q_{ii} R_{jj} \int x_i^2 x_j^2 \left( \delta_{i j} + (1-\delta_{i j})\right) \\
& = \frac{1}{n(n+2)}\sum_{i,j}^n Q_{ii} R_{jj} (3\delta_{ij} + (1-\delta_{ij})) = \frac{2\tr(QR) + \tr(Q) \tr(R)}{n(n+2)}.\qedhere
\end{align}
\end{proof}
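Lemma~\ref{l:prodquad} admits the same kind of numerical sanity check (again purely illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 6
Q = np.diag(rng.normal(size=n))     # commuting (here simultaneously diagonal)
R = np.diag(rng.normal(size=n))     # quadratic forms on R^n

x = rng.normal(size=(1_000_000, n))
x /= np.linalg.norm(x, axis=1, keepdims=True)        # uniform on S^{n-1}
mc = np.mean(np.einsum('ki,ij,kj->k', x, Q, x)
             * np.einsum('ki,ij,kj->k', x, R, x))
exact = (2 * np.trace(Q @ R) + np.trace(Q) * np.trace(R)) / (n * (n + 2))
print(exact, mc)                    # agree up to Monte Carlo error
\end{verbatim}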
We can write~\eqref{eq:remainingterm} as a sum of integrals of products of quadratic forms over $\mathbb{S}^{2d-1}$:
\begin{equation}\label{eq:remainingterm2}
\frac{1}{2} \varepsilon^2\frac{cd}{\sum_{i=1}^d \int_{\R}| g_i(y)| dy} \sum_{i=1}^d \int_{\R} dp_z |g_i(c p_z)| \int_{\mathbb{S}^{2d-1}} Q_i(p_x) R(p_x) d\Omega,
\end{equation}
where the quadratic forms are (we omit the explicit dependence on $p_z$)
\begin{equation}
Q_i(p_x) := \|p_x^i\|^2, \qquad R(p_x) := (\mathbf{B}(p_z)p_x)^* (\partial^2_{x} \phi) (\mathbf{B}(p_z) p_x).
\end{equation}
A direct check shows that, for each $i$, the matrices of $Q_i$ and $R$ are commuting, block diagonal matrices. Thus, applying Lemma~\ref{l:prodquad} to~\eqref{eq:remainingterm2}, we obtain
\begin{multline}
\frac{1}{2} \varepsilon^2\frac{cd}{\sum_{i=1}^d \int_{\R} |g_i(y)| dy} \sum_{i=1}^d \int_{\R} dp_z |g_i(c p_z)| \int_{\mathbb{S}^{2d-1}} Q_i(p_x) R(p_x) d\Omega =\\
= \frac{1}{2} \varepsilon^2\frac{c d}{\sum_{i=1}^d \int_{\R} |g_i(y)| dy} \sum_{i=1}^d \int_{\R} dp_z |g_i(c p_z)| \frac{2\tr(Q_i R) + \tr(Q_i)\tr(R)}{2d(2d+2)}. \label{eq:remainingterm3}
\end{multline}
Observe that $\tr(Q_i) = 2$, and $\sum_{\ell=1}^d Q_\ell = \mathbb{I}$. Therefore we rewrite~\eqref{eq:remainingterm3} as
\begin{equation}\label{eq:remainingterm4}
\varepsilon^2\frac{c}{\sum_{i=1}^d \int_{\R} |g_i(y)| dy} \sum_{i,\ell=1}^d \int_{\R} dp_z |g_i(c p_z)| \frac{(1+\delta_{i\ell})\tr(Q_\ell R)}{4(d+1)}.
\end{equation}
To compute $\tr(Q_\ell R)$ denote, for $\ell=1,\ldots,d$
\begin{equation}
D^2_\ell \phi:=\begin{pmatrix}
\partial^2_{x_{2\ell-1}}\phi & \partial_{x_{2\ell-1}}\partial_{x_{2\ell}}\phi \\
\partial_{x_{2\ell}} \partial_{x_{2\ell-1}}\phi & \partial^2_{x_{2\ell}}\phi
\end{pmatrix}, \qquad B_\ell:= B(\alpha_\ell p_z).
\end{equation}
We thus obtain
\begin{equation}
\tr(Q_\ell R ) = \tr(B_\ell^* (D^2_\ell \phi) B_\ell) = \tr(B_\ell B_\ell^* (D^2_\ell \phi)) = \frac{\sin(\tfrac{\alpha_\ell p_z}{2})^2}{(\alpha_\ell p_z/2)^2} (\partial^2_{x_{2\ell-1}} \phi + \partial^2_{x_{2\ell}} \phi),
\end{equation}
where we used~\eqref{eq:observation}. Thus~\eqref{eq:remainingterm4} becomes
\begin{equation}
\frac{\varepsilon^2}{4d} \sum_{i =1}^d \sigma_{i}(c)(\partial^2_{x_{2i-1}} \phi + \partial^2_{x_{2i}} \phi),
\end{equation}
where the constants $\sigma_{i}(c)$ are as in the statement of Theorem~\ref{t:limit-contact-carnot}.
Taking into account the remainder term as well, we obtain
\begin{equation}
\frac{4d}{\varepsilon^2} \int_{\cyl_0}\phi(\exp_0(\varepsilon;p_x,p_z)) \mu_0^{c\varepsilon}(p_x,p_z) = \sum_{i=1}^d \sigma_i(c) (\partial^2_{x_{2i-1}}\phi + \partial^2_{x_{2i}}\phi )|_0 + 4d \varepsilon O_0,
\end{equation}
where $|O_0| \leq M_0$ is a remainder term that, when $\varepsilon \leq \varepsilon_0$, is bounded by a constant that depends only on the derivatives of $\phi$ in a compact metric ball of radius $\varepsilon_0$ centered at $0$. A straightforward left-invariance argument shows that, for any other $q \in M$
\begin{equation}
\frac{4d}{\varepsilon^2} \int_{\cyl_q}[\phi(\exp_q(\varepsilon;\lambda))-\phi(q)]\mu_q^{c \varepsilon}(\lambda) = \sum_{i=1}^d \sigma_i(c) (X^2_{2i-1}\phi + X^2_{2i}\phi)|_q + 4d \varepsilon O_q,
\end{equation}
where $|O_q|\leq M_q$ is a remainder term bounded by a constant that depends only on the derivatives of $\phi$ in a compact metric ball of radius $\varepsilon_0$ centered at $q$. Thus
\begin{equation}
(L_{c,\mathscr{L}}\phi)|_{q} = \lim_{\varepsilon\to 0} \frac{4d}{\varepsilon^2} \int_{\cyl_q}[\phi(\exp_q(\varepsilon;\lambda))-\phi(q)]\mu_q^{c \varepsilon}(\lambda) = \sum_{i=1}^d \sigma_i(c) (X^2_{2i-1}\phi + X^2_{2i}\phi )|_q,
\end{equation}
and the convergence is uniform on compact sets. This completes the proof for $\omega = \mathscr{L}$.
Let, instead, $\omega = e^h \mathscr{L}$ for some $h \in C^\infty(M)$. This leads to an extra factor $e^{h(\exp_q(c \varepsilon;\lambda))}$ in front of $\mu_q^{c\varepsilon}(\lambda)$ (up to re-normalization). After a moment of reflection one realizes that
\begin{equation}
(L^\varepsilon_{\omega,c} \phi)|_q = (L^\varepsilon_{\mathscr{L},c} \tilde{\phi})|_q + \varepsilon O_q, \qquad \text{ with } \qquad \tilde\phi = e^{c(h-h(q))}(\phi-\phi(q)).
\end{equation}
This observation yields the general statement, after noticing that
\begin{equation}
X_i^2(\tilde{\phi}) = X_i^2(\phi) + 2c X_i(h) X_i(\phi), \qquad \forall i =1,\ldots,2d,
\end{equation}
where everything is evaluated at the fixed point $q$. \hfill $\qed$
\subsection{Proof of Theorem \ref{t:limit-Riemannian-fake}}
We expand the function $\phi$ along the path $\gamma_u(\varepsilon) = E_{q,\varepsilon}(u)$:
\begin{equation}
\phi(E_{q,\varepsilon}(u)) -\phi(q)= \varepsilon X_u (\phi) + \frac{1}{2} \varepsilon^2 X_u( X_u(\phi)) + O(\varepsilon^3),
\end{equation}
where everything on the r.h.s.\ is computed at $q$ (as a convention, in the following when the evaluation point is not explicitly displayed, we understand it as evaluation at $q$).
\begin{lemma}\label{l:pullbacknu}
For any one-form $\nu \in T_q^*M$ and any vector $v \in T_u\mathbb{S}^{n-1}$
\begin{equation}
(E^*_{q,\varepsilon} \nu)|_u(v) = \varepsilon \nu(X_v) + \frac{1}{2}\varepsilon^2\nu([X_v,X_u]) + O(\varepsilon^3).
\end{equation}
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{l:pullbacknu}]
Since $u$ is constant, the differential of the endpoint map is
\begin{equation}
d_u E_{q,\varepsilon}(v) = e^{\varepsilon X_u}_* \int_0^\varepsilon e^{-\tau X_u}_* X_v d\tau, \qquad v \in \R^n,
\end{equation}
where $e^{\varepsilon Y}$ is the flow of the field $Y$ (see \cite{nostrolibro}). By definition of the Lie derivative $\mathcal{L}$ we get
\begin{equation}
\begin{aligned}
\frac{d}{d\varepsilon}(E^*_{q,\varepsilon} \nu)|_u(v) & = \frac{d}{d\varepsilon} (e^{\varepsilon X_u *}\nu)|_q\left(\int_0^\varepsilon e^{-\tau X_u}_* X_v d\tau\right) \\
& = (e^{\varepsilon X_u *}\mathcal{L}_{X_u} \nu)|_q\left(\int_0^\varepsilon e^{-\tau X_u}_* X_v d\tau\right) + (e^{\varepsilon X_u *}\nu)|_q\left(e^{-\varepsilon X_u}_* X_v \right).
\end{aligned}
\end{equation}
Taking another derivative, and evaluating at $\varepsilon=0$, we get
\begin{align}
\left.\frac{d^2}{d\varepsilon^2}\right|_{\varepsilon=0}(E^*_{q,\varepsilon} \nu)|_u(v) & = 2(\mathcal{L}_{X_u} \nu)|_q(X_v) + \nu|_q(\mathcal{L}_{X_u}(X_v)) = \nu([X_v,X_u]),\\
\left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}(E^*_{q,\varepsilon} \nu)|_u(v) & = \nu|_q(X_v). \qedhere
\end{align}
\end{proof}
\begin{lemma}\label{l:expmeasureriem}
We have the following Taylor expansion for the measure
\begin{equation}
\mu_q^\varepsilon(u) = \left(1+\frac{\varepsilon}{2} \dive_{\mathcal{R}}(X_u) + \varepsilon X_u(h) + O(\varepsilon^2)\right)d\Omega(u),
\end{equation}
where $d\Omega$ is the normalized Euclidean measure on $\mathbb{S}^{n-1}$.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{l:expmeasureriem}]
Let $\nu_1,\ldots,\nu_n$ be the dual frame to $X_1,\ldots,X_n$, that is $\nu_i(X_j) =\delta_{ij}$. Since $\omega = e^h \mathcal{R} = e^h \nu_1\wedge\ldots\wedge \nu_n$, we obtain (ignoring normalization factors)
\begin{align}\label{eq:measurefakeriem}
\mu_q^\varepsilon(u) \propto D_q(\varepsilon) e^{h(\gamma_u(\varepsilon))} d\Omega(u), \qquad u \in \mathbb{S}^{n-1},
\end{align}
where $D_q(\varepsilon)$ is the determinant of the matrix $(E_{q,\varepsilon}^* \nu_i)(e_j)$, for $i,j=1,\ldots,n$. Using Lemma~\ref{l:pullbacknu}, since $X_{e_j} = X_j$, we obtain
\begin{align}
(E_{q,\varepsilon}^* \nu_i)(e_j) = \varepsilon \nu_i(X_j) + \frac{\varepsilon^2}{2}\nu_i([X_j,X_u]) + O(\varepsilon^3),
\end{align}
where everything is computed at $q$. Since $\det(\mathbb{I} + \varepsilon M) = 1+\varepsilon \tr(M) +O(\varepsilon^2)$ for any matrix $M$, we get
\begin{equation}
D_q(\varepsilon) = \varepsilon^{n}\left(1+ \frac{\varepsilon}{2}\sum_{i=1}^n \nu_i([X_i, X_u]) + O(\varepsilon^2)\right) = \varepsilon^{n}\left(1+ \frac{\varepsilon}{2} \dive_{\mathcal{R}}(X_u) + O(\varepsilon^2)\right).
\end{equation}
Plugging this in~\eqref{eq:measurefakeriem}, and expanding the function $e^{h(\gamma_u(\varepsilon))}$, we get
\begin{align}
\mu_q^\varepsilon & \propto \varepsilon^{n}\left(1+\frac{\varepsilon}{2} \dive_{\mathcal{R}}(X_u)+O(\varepsilon^2)\right)e^{h(q)}\left(1+\varepsilon X_u(h) + O(\varepsilon^2)\right) d\Omega(u) \\
& \propto \varepsilon^{n} e^{h(q)} \left(1+\frac{\varepsilon}{2} \dive_{\mathcal{R}}(X_u)+\varepsilon X_u(h) + O(\varepsilon^2)\right) d\Omega(u).
\end{align}
Taking into account the normalization (recall that $\int_{\mathbb{S}^{n-1}} X_u \, d\Omega(u) = 0$), we obtain the result.
\end{proof}
We are ready to compute the expectation value
\begin{multline}
\int_{\mathbb{S}^{n-1}}[\phi(E_{q,\varepsilon}(u))-\phi(q)] \mu_q^{c\varepsilon} =
\int_{\mathbb{S}^{n-1}}\left[\varepsilon X_u (\phi) + \frac{1}{2} \varepsilon^2 X_u( X_u(\phi)) +O(\varepsilon^3)\right] \times \\
\times \left[1+\frac{c\varepsilon}{2} \dive_{\mathcal{R}}(X_u)+c\varepsilon X_u(h) + O(\varepsilon^2)\right]d\Omega(u).
\end{multline}
Since $\int_{\mathbb{S}^{n-1}} X_u =0$ and $\int_{\mathbb{S}^{n-1}} Q_{ij}u_i u_j = \tr (Q)/n$, we get
\begin{align*}
(L_{\omega,c} \phi)(q) & = \lim_{\varepsilon \to 0^+}\frac{2n}{ \varepsilon^2} \left(\frac{c\varepsilon^2}{2n}\sum_{i=1}^n\dive_\mathcal{R}(X_i)X_i(\phi) + \frac{c\varepsilon^2}{n}\sum_{i=1}^n X_i(\phi)X_i(h) + \frac{\varepsilon^2}{2n}\sum_{i=1}^n X_i^2(\phi) + O(\varepsilon^3)\right) \\
& = \sum_{i=1}^n \left[X_i^2(\phi) + c \dive_{\mathcal{R}}(X_i) X_i(\phi)+ 2c X_i(\phi)X_i(h)\right].
\end{align*}
We obtain the different forms of the statement using the change of volume formula $\dive_{\omega}(X_i) = \dive_{e^h \mathcal{R}}(X_i) = \dive_\mathcal{R}(X_i) + X_i(h)$. The convergence is uniform on compact sets, since the domain of integration $\mathbb{S}^{n-1}$ is compact and all the objects involved are smooth. \hfill $\qed$
\subsection{Proof of Theorem \ref{t:limit-contact-fake}}
The proof follows the same lines as that of Theorem~\ref{t:limit-Riemannian-fake}. The expansion of the function $\phi$ along the path $\gamma_u(\varepsilon) = E_{q,\varepsilon}(u)$ remains unchanged:
\begin{equation}
\phi(E_{q,\varepsilon}(u)) -\phi(q)= \varepsilon X_u (\phi) + \frac{1}{2} \varepsilon^2 X_u( X_u(\phi)) + O(\varepsilon^3),
\end{equation}
where, this time, $X_u = \sum_{i=1}^k u_i X_i$. Lemma~\ref{l:pullbacknu} also remains unchanged, with $n$ replaced by $k$. The following contact version of Lemma~\ref{l:expmeasureriem} also holds.
\begin{lemma}
We have the following Taylor expansion for the measure
\begin{equation}
\mu_q^\varepsilon(u) = \left(1+\frac{\varepsilon}{2} \dive_{\mathcal{P}}(X_u) + \varepsilon X_u(h) + O(\varepsilon^2)\right)d\Omega(u),
\end{equation}
where $d\Omega$ is the normalized Euclidean measure on $\mathbb{S}^{k-1}$.
\end{lemma}
\begin{proof}[Proof of the Lemma]
Since $\omega = e^{h} \mathcal{P} = e^h \nu_0 \wedge \nu_1\wedge \ldots \wedge \nu_k$, we have $\iota_{X_0} \omega = e^h \nu_1\wedge \ldots \wedge \nu_k$. Hence the proof is similar to the proof of Lemma~\ref{l:expmeasureriem}, with $n$ replaced by $k$. In fact, up to normalization
\begin{equation}
\mu_q^\varepsilon(u) \propto (E_{q,\varepsilon}^* \, \iota_{\dot\gamma_u(\varepsilon)} \iota_{X_0}\, \omega) = D_q(\varepsilon) e^{h(\gamma_u(\varepsilon))} d\Omega(u), \qquad u \in \mathbb{S}^{k-1},
\end{equation}
where $D_q(\varepsilon)$ is the determinant of the matrix $(E_{q,\varepsilon}^* \nu_i)(X_j)$, for $i,j=1,\ldots,k$. This is a $k\times k$ matrix. With a computation analogous to the one in the proof of Lemma~\ref{l:expmeasureriem}, we obtain $D_q(\varepsilon) = \varepsilon^k(1+\varepsilon\tr (M) +O(\varepsilon^2))$, with
\begin{equation}
\tr (M) = \frac{1}{2} \sum_{i=1}^{k} \nu_i([X_i, X_u]) = \frac{1}{2} \sum_{i,j=1}^{k} u_j c_{ij}^i = \frac{1}{2} \sum_{j=1}^k u_j \sum_{i=0}^{k} c_{ij}^i = \frac{1}{2} \dive_{\mathcal{P}}(X_u),
\end{equation}
where we have been able to complete the sum, including the index $0$ since, in the contact case, $c_{0j}^0 = \eta([X_0,X_j]) = -d\eta(X_0,X_j) = 0$ for all $j=1,\ldots,k$. From here, we conclude the proof as in that of Lemma~\ref{l:expmeasureriem}.
\end{proof}
The computation of the limit operator is analogous to the one in the proof of Theorem~\ref{t:limit-Riemannian-fake}, replacing the Riemannian volume $\mathcal{R}$ with the Popp one $\mathcal{P}$. \hfill $\qed$
\section{Geodesic random walks in the Riemannian setting}\label{s:riem-RW}
\subsection{Ito SDEs via geodesic random walks}\label{s:ito-intro}
Let $(M,\g)$ be a Riemannian manifold. We consider a set of smooth vector fields $V_1,\ldots,V_k$ and, since we are interested in local phenomena, we assume that the $V_i$ have bounded lengths and that $(M,\g)$ is complete. We now consider the Ito SDE \begin{equation}\label{Eqn:ItoSDE}
dq_t = \sum_{i=1}^{k} V_i\left( q_t\right) d(\sqrt{2} w_t^i) , \qquad q_0=q,
\end{equation}
for some $q\in M$, where $w_t^1,\ldots,w_t^k$ are independent, one-dimensional Brownian motions\footnote{One approach to interpreting and solving \eqref{Eqn:ItoSDE}, as well as verifying that $q_t$ will be a martingale, is via lifting it to the bundle of orthonormal frames; see the first two chapters of \cite{Hsu} for background on stochastic differential geometry, connection-martingales, and the bundle of orthonormal frames. Alternatively, \cite[Chapter 7]{emery} gives a treatment of Ito integration on manifolds.}. To construct a corresponding sequence of random walks, we choose a random vector $V=\beta_1V_1+\beta_2V_2+\cdots+\beta_kV_k$ by choosing $(\beta_1,\ldots,\beta_k)$ uniformly from the unit sphere. Then, we follow the geodesic $\gamma(s)$ determined by $\gamma(0)=q$ and $\gamma^{\prime}(0)=\frac{2k}{\varepsilon}V$ for time $\delta=\varepsilon^2/(2k)$. Equivalently, we travel a distance of $\varepsilon |V|$ in the direction of $V$ (along a geodesic). This determines the first step, $q^{\varepsilon}_t$ with $t\in[0,\delta]$, of a random walk (and thus, implicitly, the measure $\Pi^{\varepsilon}_q$). Determining each additional step in the same way produces a family of piecewise geodesic random walks $q^{\varepsilon}_t$, $t\in[0,\infty)$, which we call the \emph{geodesic random walk} at scale $\varepsilon$ associated with the SDE \eqref{Eqn:ItoSDE} (in terms of Definition \ref{Def:Walk}, $\kappa=\sup_{q,(\beta_1,\ldots,\beta_k)} |V|$).
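In flat $\R^n$, where geodesics are straight lines, one step of this walk is simply $q \mapsto q + \varepsilon V$. The following Python sketch implements the step rule just described; the two vector fields are hypothetical, chosen only for illustration.
\begin{verbatim}
import numpy as np

# Hypothetical smooth vector fields V_1, V_2 on R^2, chosen for illustration.
def V(q):
    x, y = q
    return np.array([[1.0 + 0.1 * np.sin(y), 0.0],   # V_1(q)
                     [0.2 * x, 1.0]])                 # V_2(q)  (rows = fields)

def geodesic_walk(q0, eps, T, rng):
    # One step: beta uniform on S^{k-1}, travel distance eps*|V| along the
    # geodesic in direction V = sum_i beta_i V_i (a straight line in R^2),
    # taking time delta = eps^2/(2k).
    k = 2
    delta = eps ** 2 / (2 * k)
    q = np.array(q0, dtype=float)
    for _ in range(int(T / delta)):
        beta = rng.normal(size=k)
        beta /= np.linalg.norm(beta)
        q = q + eps * beta @ V(q)
    return q

rng = np.random.default_rng(0)
print(geodesic_walk([0.0, 0.0], eps=0.05, T=1.0, rng=rng))   # one sample path
\end{verbatim}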
We now study the convergence of this family of walks as $\varepsilon\rightarrow 0$. Let $x_1,\ldots,x_n$ be Riemannian normal coordinates around $q_0^\varepsilon=q$, and write the random vector $V$ as
\begin{equation}\label{Eqn:VExpand}
V(x)= \sum_{m=1}^k \beta_m V_m(x) = \sum_{m=1}^k \beta_m \sum_{i=1}^n V_m^i \partial_{i}
+O(r) = \sum_{i=1}^n A^i \partial_{ i} + O(r),
\end{equation}
where $r = \sqrt{x_1^2+\ldots+x_n^2}$. In normal coordinates centered at $q$, Riemannian geodesics through $q$ correspond to Euclidean straight lines through the origin, and thus $\gamma_V(t)$ has $i$-th coordinate $A^i t$. In particular, for any smooth function $\phi$ we have
\begin{equation}
\phi(\gamma_V(\varepsilon)) - \phi(q) = \sum_{i=1}^n A^i (\partial_{i}\phi)(q) \varepsilon + \frac{1}{2}\sum_{i,j=1}^n A^i A^j (\partial_i \partial_j \phi)(q) \varepsilon^2 + O(\varepsilon^3).
\end{equation}
Averaging w.r.t.\ the uniform probability measure on the sphere $\sum_{i=1}^k \beta_i^2 =1$, we obtain
\begin{equation}\label{Eqn:ItoConv}
\begin{split}
(L^{\varepsilon}\phi)(q) := \frac{1}{\delta} \mathbb{E}\left[ \phi\left( q^{\varepsilon}_{\delta}\right) -\phi(q)\left| q^{\varepsilon}_0=q \right.\right] & \rightarrow \sum_{m=1}^k \sum_{i,j=1}^n V^i_m V^j_m (\partial_{i}\partial_{j}\phi)(q) \\
&= \sum_{m=1}^k (\nabla^2_{V_m,V_m}\phi)(q) \qquad \text{as } \varepsilon \to 0,
\end{split}\end{equation}
where $\nabla^2$ denotes the Hessian with respect to the Levi-Civita connection, and where we recall that $\sum_{j=1}^n V_m^j \partial_{j} = V_m(q)$ and the $x_i$ are a system of normal coordinates at $q$. The right-hand side of \eqref{Eqn:ItoConv} determines a second-order operator which is independent of the choice of normal coordinates (and thus depends only on the $V_i$). Moreover, this same construction works at any point, and thus we have a second-order operator $L = \lim_{\varepsilon \to 0} L^\varepsilon$ on all of $M$. Because the $V_i$ are smooth, so is $L$ (and the convergence is uniform on compacts).
We see that the martingale problem associated to $L$ has a unique solution (at least until explosion, but since we are interested in local questions, we can assume that there is no explosion). Further, this solution is the law of the process $q^0_t$ that solves \eqref{Eqn:ItoSDE}. If we again let $P^{\varepsilon}$ and $P^0$ be the probability measures on $\Omega(M)$ corresponding to $q^{\varepsilon}_t$ and $q^0_t$, respectively, Theorem \ref{t:convergence} implies that $P^{\varepsilon}\rightarrow P^0$ (weakly) as $\varepsilon\rightarrow 0$.
Of course, we see that our geodesic random walks, as well as the diffusion $q^0$ and thus the interpretation of the SDE \eqref{Eqn:ItoSDE}, depend on the Riemannian structure. This is closely related to the fact that neither Ito SDEs, normal coordinates, covariant derivatives, nor geodesics are preserved under diffeomorphisms, in general, and to the non-standard calculus of Ito's rule for Ito integrals, in contrast to Stratonovich integrals. Note that, in this construction, it would also be possible to allow $k>n$.
The most important special case of a geodesic random walk is when $k =n$ and the vector fields $V_1,\ldots,V_n$ are an orthonormal frame. In that case, $q^{\varepsilon}_t$ is an isotropic random walk, as described in \cite{OurUrPaper} (see also \cite{PinksyRiem} for a related family of processes) and
\begin{equation}\label{eq:I}
L^\varepsilon \to \Delta,
\end{equation}
where $\Delta = \dive \circ \grad$ is the Laplace-Beltrami operator (here the divergence is computed with respect to the Riemannian volume). In particular $q^0_t$ is Brownian motion on $M$, up to time-change by a factor of 2.
If we further specialize to Euclidean space, we see that the convergence of the random walk to Euclidean Brownian motion is just a special case of Donsker's invariance principle. The development of Brownian motion on a Riemannian manifold via approximations is also not new; one approach can be found in \cite{StroockGeo}.
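As an empirical illustration of this special case, one can check that the mean squared displacement of the isotropic walk in flat $\R^2$ at time $T$ matches that of the limiting diffusion generated by $\Delta$, for which $\mathbb{E}|q_T-q_0|^2 = 2nT$; a sketch:
\begin{verbatim}
import numpy as np

# Isotropic walk in flat R^2: steps of length eps in uniform random
# directions, each taking time delta = eps^2/(2n). The limit is the
# diffusion generated by the Laplacian, for which E|q_T - q_0|^2 = 2 n T.
rng = np.random.default_rng(0)
eps, T, n, trials = 0.1, 1.0, 2, 10_000
steps = int(T / (eps ** 2 / (2 * n)))
theta = rng.uniform(0.0, 2.0 * np.pi, size=(trials, steps))
dx = eps * np.cos(theta).sum(axis=1)
dy = eps * np.sin(theta).sum(axis=1)
print(np.mean(dx ** 2 + dy ** 2), "vs", 2 * n * T)
\end{verbatim}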
\subsection{Volume sampling through the exponential map}\label{s:volumesampling-intro}
Let $(M,\metr)$ be an $n$-dimensional Riemannian manifold equipped with a general volume form $\omega$, which might differ from the Riemannian one $\mathcal{R}$. This freedom is motivated by the forthcoming applications to sub-Riemannian geometry, where there are several choices of intrinsic volumes and, in principle, there is no preferred one \cite{ABB-Hausdorff,nostropopp}. Besides, even in the Riemannian case, one might want to study operators which are symmetric w.r.t.\ a general measure $\omega = e^h \mathcal{R}$.
We recall that the gradient $\grad(\phi)$ of a smooth function depends only on the Riemannian structure, while the divergence $\dive_\omega(X)$ of a smooth vector field depends on the choice of the volume. In this setting we introduce an intrinsic diffusion operator, symmetric in $L^2(M,\omega)$, with domain $C^\infty_c(M)$ as the divergence of the gradient:
\begin{equation}
\Delta_\omega:=\div_\omega\circ\grad = \sum_{i=1}^n \left(X_i^2 + \dive_\omega(X_i)X_i\right),
\end{equation}
where in the last equality, which holds locally, $X_1,\ldots,X_n$ is an orthonormal frame. Recall that if $\omega$ and $\omega'$ are proportional, then $\Delta_\omega=\Delta_{\omega'}$.
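In flat $\R^2$ with $\omega = e^h\,dx\wedge dy$ one has $\dive_\omega(X) = \dive(X) + X(h)$, so the local formula above reduces to $\Delta_\omega = \Delta + \grad(h)$ (as a derivation); this can be checked symbolically, as in the following sketch (a sanity check only):
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')
h = sp.Function('h')(x, y)
phi = sp.Function('phi')(x, y)

# Divergence w.r.t. omega = e^h dx dy: div_omega(X) = div(X) + X(h).
def div_omega(X):
    return (sp.diff(X[0], x) + sp.diff(X[1], y)
            + sp.diff(h, x) * X[0] + sp.diff(h, y) * X[1])

grad_phi = (sp.diff(phi, x), sp.diff(phi, y))
delta_omega = div_omega(grad_phi)            # Delta_omega phi
delta_plus_drift = (sp.diff(phi, x, 2) + sp.diff(phi, y, 2)
                    + sp.diff(h, x) * sp.diff(phi, x)
                    + sp.diff(h, y) * sp.diff(phi, y))
print(sp.simplify(delta_omega - delta_plus_drift))   # prints 0
\end{verbatim}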
To define a random walk converging to the diffusion associated with $\Delta_\omega$, one should make the construction in such a way that the choice of the volume enters the definition of the walk. One way to do this is to ``sample the volume along the walk''. For all $s\geq 0$, consider the Riemannian exponential map
\begin{equation}
\exp_q(s;\cdot): S_q M \to M, \qquad q \in M,
\end{equation}
where $S_q M \subset T_q M$ is the Riemannian tangent sphere. In particular, for $v \in S_q M$, $\gamma_v(s) = \exp_q(s;v)$ is the unit-speed geodesic starting at $q$ with initial velocity $v$. Then $|\iota_{\dot\gamma_v(s)}\omega|$ is a density\footnote{If $\eta$ is an $m$-form on an $m$-dimensional manifold, the symbol $|\eta|$ denotes the associated density, in the sense of tensors.} on the Riemannian sphere of radius $s$. By pulling this back through the exponential map, we obtain a probability measure on $S_qM$ that ``gives more weight to geodesics arriving where there is more volume''.
\begin{definition}\label{d:Riemmes}
For any $q \in M$, and $\varepsilon >0$, we define the family of densities $\mu_q^\varepsilon$ on $S_qM$
\begin{equation}
\mu^{\varepsilon}_q(v) := \frac{1}{N(q,\varepsilon)}\left\lvert(\exp_q(\varepsilon;\cdot)^* \iota_{\dot\gamma_v(\varepsilon)}\omega)(v)\right\rvert, \qquad \forall v \in S_qM,
\end{equation}
where $N(q,\varepsilon)$ is such that $\int_{S_qM} \mu_{q}^\varepsilon = 1$. For $\varepsilon =0$, we set $\mu^0_q$ to be the standard normalized Riemannian density form on $S_qM$.
\end{definition}
\begin{rmk}\label{r:absolutevalue}
For any fixed $q \in M$ and sufficiently small $\varepsilon>0$, the Jacobian determinant of $\exp_q(\varepsilon;\cdot)$ does not change sign, hence the absolute value in Definition~\ref{d:Riemmes} is not strictly necessary to obtain a well-defined probability measure on $S_qM$. By assuming that the sectional curvature is bounded from above, $\mathrm{Sec} \leq K$, one can get rid of the need for the absolute value globally, as conjugate lengths are then uniformly separated from zero.
\end{rmk}
Now we define a random walk $b^\varepsilon_{t}$ as follows:
\begin{equation}\label{eq:backwards-Ito}
b_{(i+1) \delta}^\varepsilon := \exp_{b_{i \delta}^\varepsilon}(\varepsilon; v), \qquad v \in S_{b_{i\delta}^\varepsilon}M \text{ chosen with probability } \mu_{b_{i\delta}^\varepsilon}^\varepsilon.
\end{equation}
(see Definition~\ref{Def:Walk} and Remark~\ref{r:remfordefinition}). Let $P^\varepsilon_{\omega}$ (we drop the $q$ from the notation as the starting point is fixed) be the probability measure on the space of continuous paths on $M$ starting at $q$ associated with $b_t^\varepsilon$, and consider the associated family of operators\begin{align}
(L_{\omega}^\varepsilon\phi)(q) & := \frac{1}{\delta} \mathbb{E}[\phi(b_{\delta}^\varepsilon) - \phi(q) \mid b^\varepsilon_0 =q] \\
& := \frac{1}{\delta}\int_{S_qM} [\phi(\exp_q(\varepsilon;v))-\phi(q)] \mu_q^{\varepsilon}(v), \qquad \forall q \in M,
\end{align}
(see Definition~\ref{d:operator}), for any $\phi \in C^\infty(M)$. A special case of Theorem~\ref{t:limit-Riemannian} gives
\begin{align}\label{eq:BI}
\lim_{\varepsilon \to 0} L_\omega^\varepsilon = \underbrace{\Delta_{\mathcal{R}} + \grad(h)}_{\Delta_\omega} + \grad (h),
\end{align}
where $\grad(h)$ is understood as a derivation. By Theorem~\ref{t:convergence}, $P^\varepsilon_\omega$ converges to a well-defined diffusion generated by the r.h.s.\ of \eqref{eq:BI}. This result is not satisfactory, as one would prefer $L_\omega^\varepsilon \to \Delta_\omega$. Indeed, in \eqref{eq:BI}, we observe that the correction $2\grad(h)$ provided by the volume sampling construction is twice the desired one (except when $\omega$ is proportional to $\mathcal{R}$).
To address this problem we introduce a parameter $c \in [0,1]$ and consider, instead, the family $\mu^{c\varepsilon}_q$. This corresponds to sampling the volume not at the final point of the geodesic segment, but at an intermediate point. We define a random walk as follows:
\begin{equation}
b_{(i+1)\delta ,c}^\varepsilon := \exp_{b_{i \delta ,c}^\varepsilon}(\varepsilon; v), \qquad v \in S_{b_{i\delta,c}^\varepsilon}M \text{ chosen with probability } \mu_{b_{i\delta,c}^\varepsilon}^{c\varepsilon},
\end{equation}
that we call the \emph{geodesic random walk with volume sampling} (with volume $\omega$ and sampling ratio $c$).
\begin{rmk}
The case $c = 0$ does not depend on the choice of $\omega$ and reduces to the construction of Section~\ref{s:ito-intro}. The case $c=1$ corresponds to the process of Equation \eqref{eq:backwards-Ito}.
\end{rmk}
For $\varepsilon>0$, let $P^\varepsilon_{\omega,c}$ be the probability measure on the space of continuous paths on $M$ associated with the process $b_{t,c}^\varepsilon$, and consider the family of operators
\begin{equation}
\begin{aligned}
(L_{\omega,c}^\varepsilon\phi)(q) & := \frac{1}{\delta} \mathbb{E}[\phi(b_{\delta,c}^\varepsilon) - \phi(q) \mid b^\varepsilon_{0,c} =q] \\
& := \frac{1}{\delta}\int_{S_qM} [\phi(\exp_q(\varepsilon;v))-\phi(q)] \mu_q^{c\varepsilon}(v), \qquad \forall q \in M,
\end{aligned}
\end{equation}
for any $\phi \in C^\infty(M)$. The family of Riemannian geodesic random walks with volume sampling converges to a well-defined diffusion, as follows.
\begin{theorem}\label{t:limit-Riemannian}
Let $(M,\g)$ be a complete Riemannian manifold with volume $\omega = e^h\mathcal{R}$, where $\mathcal{R}$ is the Riemannian one, and $h \in C^\infty(M)$. Let $c \in [0,1]$. Then $L_{\omega,c}^\varepsilon \to L_{\omega,c}$, where
\begin{equation}\label{eq:ito-c}
L_{\omega,c} = \Delta_\omega + (2c-1)\grad(h).
\end{equation}
Moreover $P^\varepsilon_{\omega,c} \to P_{\omega,c}$ weakly, where $P_{\omega,c}$ is the law of the diffusion associated with $L_{\omega,c}$ (which we assume does not explode).
\end{theorem}
\begin{rmk}
We have these alternative forms of \eqref{eq:ito-c}, obtained by unraveling the definitions:
\begin{align}
L_{\omega,c} &= \Delta_{e^{(2c-1)h }\omega} = \Delta_{e^{2c h}\mathcal{R}} =\Delta_{\mathcal{R}} + 2c\grad(h) \\
& = \sum_{i=1}^n X_i^2+ \left( 2c \dive_\omega(X_i) +(1- 2c)\dive_{\mathcal{R}}(X_i)\right)X_i ,
\end{align}
where, in the last line, $X_1,\ldots,X_n$ is a local orthonormal frame.
\end{rmk}
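In flat $\R^n$ with $\omega = e^h\,dx$ the exponential map is $\exp_q(\varepsilon;v)=q+\varepsilon v$ and its Jacobian is trivial, so $\mu_q^{c\varepsilon}(v) \propto e^{h(q+c\varepsilon v)}\,d\Omega(v)$, and the alternative form $\Delta_{\mathcal{R}} + 2c\,\grad(h)$ above reads $L_{\omega,c} = \Delta + 2c\,\grad(h)$. The following Monte Carlo sketch (with illustrative choices of $h$, $\phi$ and $q$) checks this numerically:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, c, eps, N = 2, 0.3, 0.05, 2_000_000

h   = lambda p: 0.5 * p[..., 0] + 0.2 * p[..., 1] ** 2   # omega = e^h dx
phi = lambda p: np.sin(p[..., 0]) * p[..., 1] ** 2       # test function
q = np.array([0.4, -0.2])

v = rng.normal(size=(N, n))
v /= np.linalg.norm(v, axis=1, keepdims=True)

# Volume-sampled density on the sphere: mu_q^{c eps}(v) ~ e^{h(q + c eps v)}.
w = np.exp(h(q + c * eps * v)); w /= w.sum()
L_est = (2 * n / eps ** 2) * np.sum(w * (phi(q + eps * v) - phi(q)))

# Limit (eq:ito-c) in flat space: Delta phi + 2c <grad h, grad phi> at q.
x0, y0 = q
lap  = -np.sin(x0) * y0 ** 2 + 2 * np.sin(x0)
gh   = np.array([0.5, 0.4 * y0])
gphi = np.array([np.cos(x0) * y0 ** 2, 2 * np.sin(x0) * y0])
print(L_est, "vs", lap + 2 * c * gh @ gphi)   # agree up to MC error + O(eps)
\end{verbatim}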
\begin{figure}
\includegraphics[scale=1.4]{fig1.pdf}
\caption{Geodesic random walk with sampling of the volume $\omega$ and ratio $c$. For each $\varepsilon$, the paths of the walk are piecewise-smooth geodesics.}\label{Fig:VolSampling}
\end{figure}
As a simple consequence of \eqref{eq:ito-c} or its alternative formulations, we have the following statement, which appears to be new even in the Riemannian case.
\begin{cor}\label{c:self-adj-Riem}
Let $(M,\g)$ be a complete Riemannian manifold. The operator $L_{\omega,c}$ with domain $C^\infty_c(M)$ is essentially self-adjoint in $L^2(M,\omega)$ if and only if at least one of the following two conditions holds:
\begin{itemize}
\item[(i)] $c=1/2$;
\item[(ii)] $\omega$ is proportional to the Riemannian volume (i.e.\ $h$ is constant).
\end{itemize}
\end{cor}
The previous discussion stresses the particular role played by the Riemannian volume. Not only does it coincide with the Hausdorff measure, but according to the above construction, it is the only volume (up to constant rescaling) that gives the correct self-adjoint operator for \emph{any} choice of the parameter $c$.
\begin{rmk}
If we want the volume-sampling scheme to produce the Laplacian w.r.t. the volume $\omega$ being sampled, we should take $c=1/2$. With hindsight, this might not be surprising. By linearity, we see that sampling with $c=1/2$ is equivalent to sampling the volume along the entire step, uniformly w.r.t time (recall that the geodesics are traversed with constant speed), rather than privileging any particular point along the path.
\end{rmk}
\begin{rmk}
One can prove that the limiting operator corresponding to the geodesic random walk with volume sampling ratio $c=1$ is equal, up to a constant (given by the ratio between the area of the Euclidean unit sphere and the volume of the unit ball in dimension $n$), to the limiting operator corresponding to a more general class of random walks in which we step to points of the metric ball $B_{q}(\varepsilon)$ of radius $\varepsilon$, chosen uniformly w.r.t.\ its normalized volume $\omega/\omega(B_{q}(\varepsilon))$. This kind of random walk for the Riemannian volume measure has also been considered in \cite{LebeauRiem}, in relation to the study of its spectral properties.
\end{rmk}
\section{Geodesic random walks in the sub-Riemannian setting}\label{s:SR-RW}
We want to define a sub-Riemannian version of the geodesic random walk with volume sampling, extending the Riemannian construction of the previous section. Recall the definition of (sub-)Riemannian manifold in Section~\ref{s:convergence}.
\subsection{Geodesics and exponential map} As in Riemannian geometry, \emph{geode\-sics} are horizontal curves that have constant speed and locally minimize the length between their endpoints. Define the \emph{sub-Riemannian Hamiltonian} $H: T^*M \to \R$ as
\begin{equation}
H(\lambda) := \frac{1}{2} \sum_{i=1}^k \langle\lambda,X_i\rangle^2,
\end{equation}
for any local orthonormal frame $X_1,\ldots,X_k \in \Gamma(\distr)$. Let $\sigma$ be the natural symplectic structure on $T^*M$, and let $\pi: T^*M \to M$ be the bundle projection. The \emph{Hamiltonian vector field} $\vec{H}$ is the unique vector field on $T^*M$ such that $dH = \sigma(\cdot,\vec{H})$. Then the Hamilton equations are
\begin{equation}\label{eq:hamilton}
\dot{\lambda}(t) = \vec{H}(\lambda(t)).
\end{equation}
Solutions of \eqref{eq:hamilton} are smooth curves on $T^*M$, and their projections $\gamma(t):=\pi(\lambda(t))$ on $M$ will be geodesics. In the Riemannian setting, all geodesics can be recovered uniquely in this way. In the sub-Riemannian one, this is no longer true, as \emph{abnormal geodesics} can appear, which are geodesics that might not come from projections of solutions to \eqref{eq:hamilton}.
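To fix ideas, consider the three-dimensional Heisenberg group (anticipating the notation of Section~\ref{s:contactcarnot}, with $d=1$ and $\alpha_1=1$), with frame $X_1 = \partial_x + \frac{y}{2}\partial_z$ and $X_2 = \partial_y - \frac{x}{2}\partial_z$. Writing covectors as $\lambda = p_x\, dx + p_y\, dy + p_z\, dz$, a standard computation gives
\begin{equation*}
H(\lambda) = \frac{1}{2}\left[\left(p_x + \tfrac{y}{2} p_z\right)^2 + \left(p_y - \tfrac{x}{2} p_z\right)^2\right],
\end{equation*}
and since $H$ does not depend on $z$, the momentum $p_z$ is a constant of the motion along the corresponding geodesics.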
For any $\lambda \in T^*M$ we consider the geodesic $\gamma_\lambda(t)$, obtained as the projection of the solution of \eqref{eq:hamilton} with initial condition $\lambda(0) = \lambda$. Observe that the Hamiltonian function, which is constant on $\lambda(t)$, measures the speed of the associated geodesic:
\begin{equation}
2H(\lambda) = \|\dot\gamma_\lambda(t)\|^2,\qquad \lambda \in T^*M.
\end{equation}
Since $H$ is fiber-wise homogeneous of degree $2$, we have the following rescaling property:
\begin{equation}
\gamma_{\alpha \lambda}(t) = \gamma_{\lambda}(\alpha t),\qquad \alpha > 0.
\end{equation}
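Indeed, since $H$ is fiber-wise quadratic, if $t \mapsto \lambda(t)$ is a solution of \eqref{eq:hamilton} with initial covector $\lambda$, then $t \mapsto \alpha\lambda(\alpha t)$ is a solution with initial covector $\alpha\lambda$; projecting to $M$ yields the rescaling property.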
This justifies the restriction to the subset of initial covectors lying in the level set $2H=1$.
\begin{definition}
The \emph{unit cotangent bundle} is the set of initial covectors such that the associated geodesic has unit speed, namely
\begin{equation}
\cyl := \{\lambda \in T^*M \mid 2H(\lambda) = 1\} \subset T^*M.
\end{equation}
\end{definition}
For any $\lambda \in \cyl$, the geodesic $\gamma_\lambda(t)$ is \emph{parametrized by arc-length}, namely $\ell(\gamma|_{[0,T]}) = T$.
\begin{rmk}
We stress that, in the genuinely sub-Riemannian case, $H|_{T_q^*M}$ is a degenerate quadratic form. It follows that the fibers $\cyl_q$ are non-compact cylinders, in sharp contrast with the Riemannian case (where the fibers $\cyl_q$ are spheres).
\end{rmk}
For any $\lambda \in \cyl$, the \emph{cut time} $t_c(\lambda)$ is defined as the time at which $\gamma_\lambda(t)$ loses optimality
\begin{equation}
t_c(\lambda) := \sup \{t>0 \mid d(\gamma_\lambda(0),\gamma_\lambda(t)) = t\}.
\end{equation}
In particular, for a fixed $\varepsilon>0$ we define
\begin{equation}\label{eq:cotingj}
\cyl_q^\varepsilon := \{\lambda \in \cyl_q \mid t_c(\lambda) \geq \varepsilon \} \subset \cyl_q,
\end{equation}
as the set of unit covectors such that the associated geodesic is optimal up to time $\varepsilon$.
\begin{definition}
Let $D_q \subseteq [0,\infty) \times \cyl_q$ be the set of pairs $(t,\lambda)$ such that $\gamma_\lambda$ is well defined up to time $t$. The \emph{exponential map} at $q \in M$ is the map $\exp_q: D_q \to M$ that associates with $(t,\lambda)$ the point $\gamma_\lambda(t)$.
\end{definition}
Under the assumption that $(M,d)$ is complete, by the (sub-)Riemannian Hopf-Rinow Theorem (see, for instance, \cite{nostrolibro,riffordbook}), we have that any closed metric ball is compact and normal geodesics can be extended for all times, that is $D_q = [0,\infty) \times \cyl_q$, for all $q \in M$.
\subsection{Sub-Laplacians}
For any function $\phi \in C^\infty(M)$, the \emph{horizontal gradient} $\grad(\phi) \in \Gamma(\distr)$ is, at each point, the horizontal direction of steepest slope of $\phi$, that is
\begin{equation}\label{eq:grad}
\g(\grad(\phi),X) = \langle d\phi, X\rangle, \qquad \forall X \in \Gamma(\distr).
\end{equation}
Since in the Riemannian case this coincides with the usual gradient, this notation will cause no confusion. If $X_1,\ldots,X_k$ is a local orthonormal frame, we have
\begin{equation}
\grad(\phi) = \sum_{i=1}^k X_i(\phi) X_i.
\end{equation}
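Indeed, writing $\grad(\phi) = \sum_{i=1}^k a_i X_i$ and testing \eqref{eq:grad} with $X = X_j$ gives $a_j = \langle d\phi, X_j\rangle = X_j(\phi)$.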
For any fixed volume form $\omega \in \Lambda^n M$ (or density if $M$ is not orientable), the \emph{divergence} of a smooth vector field $X$ is defined by the relation $\mathcal{L}_X \omega = \dive_{\omega}(X)$, where $\mathcal{L}$ denotes the Lie derivative. Notice that the sub-Riemannian structure does not play any role in the definition of $\dive_\omega$. Following \cite{montgomerybook,laplacian}, the \emph{sub-Laplacian} on $(M,\distr,\g)$ associated with $\omega$ is
\begin{equation}\label{eq:sublap}
\Delta_\omega := \dive_{\omega}\circ \grad = \sum_{i=1}^k X_i^2 + \dive_\omega(X_i)X_i,
\end{equation}
where in the last equality, which holds locally, $X_1,\ldots,X_k$ is an orthonormal frame. Again, if $\omega$ and $\omega'$ are proportional, then $\Delta_\omega=\Delta_{\omega'}$.
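The local expression in \eqref{eq:sublap} follows from the Leibniz rule $\dive_\omega(\phi X) = \phi\, \dive_\omega(X) + X(\phi)$, valid for any $\phi \in C^\infty(M)$: indeed,
\begin{equation*}
\Delta_\omega \phi = \dive_\omega\left(\sum_{i=1}^k X_i(\phi) X_i\right) = \sum_{i=1}^k X_i(X_i(\phi)) + \dive_\omega(X_i)\, X_i(\phi).
\end{equation*}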
The sub-Laplacian is symmetric on the space $C^\infty_c(M)$ of smooth functions with compact support with respect to the $L^2(M,\omega)$ product.
If $(M,d)$ is complete and there are no non-trivial abnormal minimizers, then $\Delta_\omega$ is essentially self-adjoint on $C^\infty_c(M)$ and has a smooth positive heat kernel \cite{strichartz,strichartzerrata}.
The sub-Laplacian will be intrinsic if we choose an intrinsic volume. See \cite[Sec. 3]{OurUrPaper} for a discussion of intrinsic volumes in sub-Riemannian geometry. A natural choice, at least in the equiregular setting, is Popp volume \cite{nostropopp,montgomerybook}, which is smooth. Other choices are possible, for example the Hausdorff or the spherical Hausdorff volume which, however, are not always smooth \cite{ABB-Hausdorff}. For the moment we let $\omega$ be a general smooth volume.
\subsection{The sub-Riemannian geodesic random walk with volume sampling}
In contrast with the Riemannian case, where $S_qM$ has a well defined probability measure induced by the Riemannian structure, we have no such construction on $\cyl_q$. Thus, it is not clear how to define a geodesic random walk in the sub-Riemannian setting.
For $\varepsilon>0$, consider the sub-Riemannian exponential map
\begin{equation}
\exp_q(\varepsilon;\cdot): \cyl_q \to M, \qquad q \in M.
\end{equation}
If $\lambda \in \cyl_q$, then $\gamma_\lambda(\varepsilon) = \exp_q(\varepsilon;\lambda)$ is the associated unit speed geodesic starting at $q$.
One wishes to repeat Definition~\ref{d:Riemmes}, using the exponential map to induce a density on $\cyl_q$, through the formula $\mu_q^\varepsilon(\lambda) \propto |(\exp_q(\varepsilon;\cdot)^* \iota_{\dot\gamma_\lambda(\varepsilon)}\omega)(\lambda)|$.
However, there are non-trivial difficulties arising in the genuine sub-Riemannian setting.
\begin{itemize}
\item The exponential map is not a local diffeomorphism at $\varepsilon =0$, and Riemannian normal coordinates are not available. This tool is used for proving the convergence of walks in the Riemannian setting;
\item Due to the presence of zeroes in the Jacobian determinant of $\exp_q(\varepsilon;\cdot)$ for arbitrarily small $\varepsilon$, the absolute value in the definition of $\mu_q^\varepsilon$ is strictly necessary (in contrast with the Riemannian case, see Remark~\ref{r:absolutevalue});
\item Since $\cyl_q$ is not compact, there is no guarantee that $\int_{\cylsmall_q} \mu_q^\varepsilon < +\infty$.
\end{itemize}
Assuming that $\int_{\cylsmall_q} \mu_q^\varepsilon < + \infty$, we generalize Definition~\ref{d:Riemmes} as follows.
\begin{definition}\label{d:mesIto}
For any $q \in M$, and $\varepsilon >0$, we define the family of densities $\mu_q^\varepsilon$ on $\cyl_q$
\begin{equation}
\mu^{\varepsilon}_q(\lambda) := \frac{1}{N(q,\varepsilon)}\left\lvert(\exp_q(\varepsilon;\cdot)^* \iota_{\dot\gamma_\lambda(\varepsilon)}\omega)(\lambda)\right\rvert, \qquad \forall \lambda \in \cyl_q,
\end{equation}
where $N(q,\varepsilon)$ is fixed by the condition $\int_{\cylsmall_q} \mu_{q}^{\varepsilon} = 1$.
\end{definition}
As in Section~\ref{s:volumesampling-intro}, for $c \in (0,1]$, we build a random walk
\begin{equation}
b_{(i+1)\delta,c}^{\varepsilon}:= \exp_{b_{i \delta,c}^{\varepsilon}}(\varepsilon;\lambda), \qquad \lambda \in \cyl_q \text{ chosen with probability } \mu_q^{c\varepsilon}.
\end{equation}
Let $P^\varepsilon_{\omega,c}$ be the associated probability measure on the space of continuous paths on $M$ starting from $q$, and consider the corresponding family of operators, which in this case is
\begin{equation}\label{eq:operatoreps}
\begin{aligned}
(L_{\omega,c}^\varepsilon\phi)(q) & =\frac{1}{\delta}\mathbb{E}[\phi(b_{\delta,c}^\varepsilon)-\phi(q)\mid b_{0,c}^\varepsilon = q]\\
&= \frac{1}{\delta}\int_{\cyl_q} [\phi(\exp_q(\varepsilon;\lambda))-\phi(q)] \mu_q^{c\varepsilon}(\lambda), \qquad \forall q \in M,
\end{aligned}
\end{equation}
for any $\phi \in C^\infty(M)$. Clearly when $k=n$, \eqref{eq:operatoreps} is the same family of operators associated with a Riemannian geodesic random walk with volume sampling discussed in Section~\ref{s:volumesampling-intro}, which is why, without risk of confusion, we used the same symbol.
\begin{rmk}
As mentioned, in sub-Riemannian geometry abnormal geodesics may appear. More precisely, one may have \emph{strictly abnormal geodesics}, which do not arise as projections of solutions of \eqref{eq:hamilton}. The class of random walks that we have defined never walks along these trajectories, but can walk along abnormal segments that are not strictly abnormal.
The (minimizing) Sard conjecture states that the set of endpoints of strictly abnormal (minimizing) geodesics starting from a given point has measure zero in $M$. However, this remains a hard open problem in sub-Riemannian geometry \cite{AAA-openproblems}. See also \cite{Sard-prop,Sard-Rif-Trel,agrasmooth} for recent progress on the subject.
\end{rmk}
Checking the convergence of \eqref{eq:operatoreps} is difficult in the general sub-Riemannian setting ($k< n$), in part due to the difficulties outlined above. We treat in detail the case of contact Carnot groups, where we find some surprising results. These structures are particularly important as they arise as Gromov-Hausdorff tangent cones of contact sub-Riemannian structures \cite{bellaiche,mitchell}, and play the same role in sub-Riemannian geometry as Euclidean space plays in Riemannian geometry.
\subsection{Contact Carnot groups}\label{s:contactcarnot}
Let $M = \R^{2d+1}$, with coordinates $(x,z) \in \R^{2d}\times \R$. Consider the following global vector fields
\begin{equation}
X_i = \partial_{x_i} - \frac{1}{2} (A x)_i \partial_z, \qquad i=1,\ldots,2d,
\end{equation}
where
\begin{equation}
A = \begin{pmatrix} \alpha_1 J & & \\
& \ddots & \\
& & \alpha_d J
\end{pmatrix}, \qquad J = \begin{pmatrix}
0 & -1 \\
1 & 0
\end{pmatrix},
\end{equation}
is a skew-symmetric, non-degenerate matrix with singular values $0 < \alpha_1 \leq \ldots \leq \alpha_d$. A \emph{contact Carnot group} is the sub-Rieman\-nian structure on $M = \R^{2d+1}$ such that $\distr_q = \spn\{X_1,\ldots,X_{2d}\}_q$ for all $q \in M$, and $\g(X_i,X_j) = \delta_{ij}$. Notice that
\begin{equation}
[X_i,X_j]= A_{ij} \partial_z.
\end{equation}
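This follows from a direct computation: using $\partial_{x_j}\big((Ax)_i\big) = A_{ij}$ and the skew-symmetry of $A$,
\begin{equation*}
[X_i,X_j] = -\tfrac{1}{2}\,\partial_{x_i}\big((Ax)_j\big)\,\partial_z + \tfrac{1}{2}\,\partial_{x_j}\big((Ax)_i\big)\,\partial_z = \tfrac{1}{2}\left(A_{ij} - A_{ji}\right)\partial_z = A_{ij}\,\partial_z.
\end{equation*}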
Set $\mathfrak{g}_1 := \spn\{X_1,\ldots,X_{2d}\}$ and $\mathfrak{g}_2 := \spn \{\partial_z\}$. The algebra $\mathfrak{g}$ generated by the $X_i$'s and $\partial_z$ admits a nilpotent stratification of step $2$, that is
\begin{equation}
\mathfrak{g} = \mathfrak{g}_1 \oplus \mathfrak{g}_2, \qquad \mathfrak{g}_1,\mathfrak{g}_2 \neq \{0\},
\end{equation}
with
\begin{equation}
[\mathfrak{g}_1,\mathfrak{g}_1] = \mathfrak{g}_{2}, \qquad \text{and} \qquad [\mathfrak{g}_1,\mathfrak{g}_2] = [\mathfrak{g}_2,\mathfrak{g}_2]= \{0\}.
\end{equation}
There is a unique connected, simply connected Lie group $G$ such that $\mathfrak{g}$ is its Lie algebra of left-invariant vector fields. The group exponential map,
\begin{equation}
\mathrm{exp}_{G} : \mathfrak{g} \to G,
\end{equation}
associates with $v \in \mathfrak{g}$ the element $\gamma(1)$, where $\gamma: [0,1] \to G$ is the unique integral curve of the vector field $v$ such that $\gamma(0) = 0$. Since $G$ is simply connected and $\mathfrak{g}$ is nilpotent, $\mathrm{exp}_G$ is a smooth diffeomorphism. Thus we can identify $G \simeq \R^{2d+1}$ equipped with a polynomial product law $\star$ given by
\begin{equation}
(x,z) \star (x',z') = \left(x+x',z+z' + \tfrac{1}{2} x^* A x'\right).
\end{equation}
Denote by $L_q$ the left-translation $L_q (p) := q \star p$. The fields $X_i$ are left-invariant, and as a consequence the sub-Riemannian distance is left-invariant as well, in the sense that $d(L_q(p_1),L_q(p_2)) = d(p_1,p_2)$.
\begin{rmk}
As a consequence of left-invariance, contact Carnot groups are complete as metric spaces. Moreover, all abnormal minimizers are trivial. Hence, for each volume $\omega$, the operator $\Delta_\omega$ with domain $C_c^\infty(M)$ is essentially self-adjoint in $L^2(M,\omega)$.
\end{rmk}
\begin{example}\label{ex:heis}
The $2d+1$ dimensional \emph{Heisenberg group} $\mathbb{H}_{2d+1}$, for $d \geq 1$, is the contact Carnot group with $\alpha_1=\ldots=\alpha_d = 1$.
\end{example}
\begin{example}\label{ex:biheis}
The \emph{bi-Heisenberg group} is the $5$-dimensional contact Carnot group with $0<\alpha_1 < \alpha_2$. That is, $A$ has two distinct singular values.
\end{example}
A natural volume is the Popp volume $\mathcal{P}$. By the results of \cite{nostropopp}, we have the formula
\begin{equation}
\mathcal{P} = \frac{1}{2\sum_{i=1}^{d} \alpha_i^2}dx_1 \wedge \ldots \wedge dx_{2d} \wedge dz.
\end{equation}
In particular $\mathcal{P}$ is left-invariant and, up to constant scaling, coincides with the Lebesgue volume of $\mathbb{R}^{2d+1}$. One can check that $\dive_{\mathcal{P}}(X_i) = 0$, hence the sub-Laplacian w.r.t.\ $\mathcal{P}$ is the sum of squares\footnote{This is the case for any sub-Riemannian left-invariant structure on a unimodular Lie group \cite{laplacian}.}:
\begin{equation}
\Delta_{\mathcal{P}} = \sum_{i=1}^{2d} X_{i}^2.
\end{equation}
In this setting, we are able to prove the convergence of the sub-Riemannian random walk with volume sampling.
\begin{theorem}\label{t:limit-contact-Heis}
Let $\mathbb{H}_{2d+1}$ be the Heisenberg group, equipped with a general volume $\omega = e^h\mathcal{P}$. Then $L_{\omega,c}^\varepsilon \to L_{\omega,c}$, where
\begin{equation}
L_{\omega,c} = \sigma(c) \left(\sum_{i=1}^{2d} X_i^2 + 2c X_i(h)\right) = \sigma(c)\left(\dive_{\omega}\circ \grad + (2c-1) \grad(h)\right),
\end{equation}
and $\sigma(c)$ is a constant (see Remark~\ref{r:constantHeis}).
\end{theorem}
In particular $L_{\omega,c}$ is essentially self-adjoint in $L^2(M,\omega)$ if and only if $c=1/2$ or $\omega = \mathcal{P}$ (i.e. $h$ is constant). The proof of the above theorem is omitted, as it is a consequence of the next, more general, result. In the general case, the picture is different, and quite surprising, since not even the principal symbol is the expected one.
\begin{theorem}\label{t:limit-contact-carnot}
Let $(\R^{2d+1},\distr,\g)$ be a contact Carnot group, equipped with a general volume $\omega = e^h\mathcal{P}$ and let $c \in (0,1]$. Then $L_{\omega,c}^\varepsilon \to L_{\omega,c}$, where
\begin{equation}
L_{\omega,c} = \sum_{i=1}^d \sigma_{i}(c) \left( X_{2i-1}^2 + X_{2i}^2\right) + 2c \sum_{i=1}^d \sigma_i(c) \left(X_{2i-1}(h)X_{2i-1} + X_{2i}(h)X_{2i}\right),
\end{equation}
where $\sigma_1(c),\ldots,\sigma_d(c) \in \R$ are
\begin{equation}
\sigma_{i}(c) := \frac{c d}{(d+1)\sum_{j=1}^d \int_{-\infty}^{+\infty} |g_j(y)|\, dy} \sum_{\ell=1}^d (1+\delta_{\ell i}) \int_{-\infty}^{+\infty} |g_\ell(c p_z)|\frac{\sin(\tfrac{\alpha_i p_z}{2})^2}{(\alpha_i p_z/2)^2}\, dp_z,
\end{equation}
and, for $i=1,\ldots,d$
\begin{equation}
g_i(y)= \left( \prod_{j\neq i} \sin\left(\tfrac{\alpha_j y}{2}\right)\right)^2 \frac{\sin\left(\tfrac{\alpha_i y}{2}\right) \left(\tfrac{\alpha_i y}{2} \cos\left(\tfrac{\alpha_i y}{2}\right)- \sin\left(\tfrac{\alpha_i y}{2}\right)\right) }{(y/2)^{2d+2}}.
\end{equation}
Moreover, $P^\varepsilon_{\omega,c} \to P_{\omega,c}$ weakly, where $P_{\omega,c}$ is the law of the process associated with $L_{\omega,c}$.
\end{theorem}
\begin{rmk}[Heisenberg]\label{r:constantHeis}
If $\alpha_1=\ldots=\alpha_d = 1$, the functions $g_i = g$ are equal and
\begin{equation}
\sigma(c) := \sigma_i(c) = \frac{c}{\int_{\R} |g(y)|\, dy} \int_{\R} |g(c y)| \frac{\sin(y/2)^2}{(y/2)^2}\, dy,
\end{equation}
does not depend on $i$. In general, however, $\sigma_i \neq \sigma_j$ (see Figure~\ref{f:coeff}).
\end{rmk}
\begin{figure}[h]
\includegraphics[scale=0.4]{fig2.pdf}
\caption{Plots of $\sigma_i(c)$ for $d=3$ and $\alpha_1=1$, $\alpha_2 = 2$ and $\alpha_3 =3$.}\label{f:coeff}
\end{figure}
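For the reader who wishes to reproduce Figure~\ref{f:coeff}, the constants $\sigma_i(c)$ can be evaluated by numerical quadrature. The following is a minimal sketch in Python (assuming NumPy and SciPy are available; the function names are ours, not from any standard package). Since the integrands are even, we integrate on $(\epsilon,\infty)$ and double, starting slightly away from the origin, where the integrand has a removable singularity that is numerically delicate.
\begin{verbatim}
# Numerical evaluation of sigma_i(c) by quadrature (sketch only;
# 'g' and 'sigma' are our own helper names).
import numpy as np
from scipy.integrate import quad

def g(y, i, alpha):
    # g_i(y) of the theorem; here i is 0-based, alpha lists the singular values
    d = len(alpha)
    prod = 1.0
    for j in range(d):
        if j != i:
            prod *= np.sin(alpha[j] * y / 2.0) ** 2
    a = alpha[i] * y / 2.0
    return prod * np.sin(a) * (a * np.cos(a) - np.sin(a)) / (y / 2.0) ** (2 * d + 2)

def sigma(i, c, alpha, eps=1e-6):
    d = len(alpha)
    # normalization: sum over j of the L^1 norms of g_j (even integrands)
    denom = sum(2 * quad(lambda y, j=j: abs(g(y, j, alpha)),
                         eps, np.inf, limit=500)[0] for j in range(d))
    total = 0.0
    for l in range(d):
        w = 2.0 if l == i else 1.0  # the (1 + delta_{l i}) factor
        f = lambda p, l=l: (abs(g(c * p, l, alpha))
                            * (np.sin(alpha[i] * p / 2) / (alpha[i] * p / 2)) ** 2)
        total += w * 2 * quad(f, eps, np.inf, limit=500)[0]
    return c * d / ((d + 1) * denom) * total

# data of Figure 2: d = 3, alpha = (1, 2, 3)
print([sigma(i, 0.5, [1.0, 2.0, 3.0]) for i in range(3)])
\end{verbatim}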
\subsubsection{An intrinsic formula}\label{Sect:IntrinsicFormula}
We rewrite the operator of Theorem \ref{t:limit-contact-carnot} in an intrinsic form. Define a new contact Carnot structure ($\R^{2d+1},\distr,\g'$) on the same distribution, by defining
\begin{equation}\label{eq:primedfields}
X'_{2i-1}:= \sqrt{\sigma_i(c)} X_{2i-1}, \qquad X'_{2i}:=\sqrt{\sigma_i(c)} X_{2i}, \qquad i=1,\ldots,d,
\end{equation}
to be a new orthonormal frame. Observe that this construction does not depend on the choice of $\omega$. Let $\grad$ and $\grad'$ denote the horizontal gradients w.r.t.\ the sub-Riemannian metrics $\g$ and $\g'$, respectively. Then the following is a direct consequence of Theorem \ref{t:limit-contact-carnot} and the definition of this ``primed'' structure.
\begin{cor}\label{cor:intrinsicformula}
The limit operator $L_{\omega,c}$ of Theorem \ref{t:limit-contact-carnot} is
\begin{equation}
L_{\omega,c} = \dive_\omega\circ \grad' + (2c-1) \grad'(h),
\end{equation}
where $\grad'(h) =\sum_{i=1}^{2d} X'_i(h)X'_i$ is understood as a derivation.
\end{cor}
Again $L_{\omega,c}$ is essentially self-adjoint in $L^2(M,\omega)$ if and only if $c=1/2$ or $\omega = \mathcal{P}$ (i.e.\ $h$ is constant). In both cases it is a ``divergence of the gradient'', i.e.\ a well-defined, intrinsic and symmetric operator but, surprisingly, not the expected one. In particular, the behavior of the associated heat kernel (e.g.\ its asymptotics) depends not on the original sub-Riemannian metric $\g$, but on the new one $\g'$.
\subsubsection{On the symbol}\label{Sect:Symbol}
We recall that the (principal) symbol of a smooth differential operator $D$ on a smooth manifold $M$ can be seen as a function $\Sigma(D) : T^*M \to \R$. The symbol associated with the sub-Riemannian geodesic random walk with volume sampling is
\begin{equation}
\Sigma(L_{\omega,c})(\lambda) = \sum_{i=1}^d \sigma_i(c) (\langle \lambda, X_{2i-1}\rangle^2 + \langle \lambda, X_{2i}\rangle^2), \qquad \lambda \in T^*M,
\end{equation}
and does not depend on $\omega$. On the other hand, the principal symbol of $\Delta_\omega$ is
\begin{equation}
\Sigma(\Delta_{\omega})(\lambda) = \sum_{i=1}^{2d} \langle \lambda,X_i\rangle^2 = 2H(\lambda), \qquad \lambda \in T^*M.
\end{equation}
In general, the two symbols are different, for any value of the sampling ratio $c>0$. The reason behind this discrepancy is that the family of operators $L_{\omega,c}^\varepsilon$ keeps track of the different eigenspaces associated with the generically distinct singular values $\alpha_i \neq \alpha_j$, through the Jacobian determinant of the exponential map.
\subsection{Alternative construction for the sub-Riemannian random walk}\label{s:altconstr}
An alternative construction of the sub-Riemannian random walk of Section~\ref{s:SR-RW} is the following. For any fixed step length $\varepsilon >0$, one follows only minimizing geodesic segments, that is $\lambda \in \cyl^\varepsilon_q$, as defined in \eqref{eq:cotingj}. In other words, for $\varepsilon >0$, and $c \in (0,1]$, we consider the restriction of $\mu_q^{c\varepsilon}$ to $\cyl_q^\varepsilon$ (which we normalize in such a way that $\int_{\cyl_q^\varepsilon} \mu_q^{c\varepsilon} = 1$).
\begin{rmk}
In the original construction the endpoints of the first step of the walk lie on the \emph{front of radius $\varepsilon$ centered at $q$}, that is the set $F_q(\varepsilon)=\exp_q(\varepsilon;\cyl_q)$. With this alternative construction, the endpoints lie on the \emph{metric sphere of radius $\varepsilon$ centered at $q$}, that is the set $S_q(\varepsilon)=\exp_q(\varepsilon;\cyl_q^\varepsilon)$.
\end{rmk}
\begin{rmk}
In the Riemannian setting, locally, for $\varepsilon>0$ sufficiently small, all geodesics starting from $q$ are optimal at least up to length $\varepsilon$, and the two constructions coincide.
\end{rmk}
This construction requires the explicit knowledge of $\cyl_q^\varepsilon$, which is known for contact Carnot groups \cite{ABB-Hausdorff}. We obtain the following convergence result, whose proof is similar to that of Theorem~\ref{t:limit-contact-carnot}, and thus omitted.
\begin{theorem}\label{t:limit-contact-carnot-alternative}
Consider the geodesic sub-Riemannian random walk with volume sampling, with volume $\omega$ and ratio $c$, defined according to the alternative construction. Then the statement of Theorem~\ref{t:limit-contact-carnot} holds, replacing the constants $\sigma_i(c)\in \R$ with
\begin{equation}
\sigma_{i}^{alt}(c) := \frac{c d}{(d+1)\sum_{j=1}^d \int_{-2\pi c/\alpha_d}^{2\pi c/\alpha_d} |g_j(y)|\, dy} \sum_{\ell=1}^d (1+\delta_{\ell i}) \int_{-2\pi/\alpha_d}^{2\pi/\alpha_d} |g_\ell(c p_z)|\frac{\sin(\tfrac{\alpha_i p_z}{2})^2}{(\alpha_i p_z/2)^2}\, dp_z,
\end{equation}
for $i=1,\ldots,d$. We call $L_{\omega,c}^{alt}$ the corresponding operator.
\end{theorem}
\begin{rmk}[The case $c = 0$]
In the Riemannian setting the case $c=0$ represents the geodesic random walk with no volume sampling of Section~\ref{s:ito-intro}. In fact, by Theorem~\ref{t:limit-Riemannian},
\begin{equation}
L_{\omega,0} = \lim_{c \to 0^+} L_{\omega,c} = \dive_{\mathcal{R}} \circ \grad, \qquad \text{(Riemannian geodesic RW)},
\end{equation}
is the Laplace-Beltrami operator, for any choice of $\omega$. In the sub-Riemannian setting the case $c=0$ is not defined, but we can still consider the limit for $c \to 0^+$ of the operator. In the original construction, $\lim_{c\to 0^+} \sigma_i(c) = 0$ and by Theorem~\ref{t:limit-contact-carnot} we have:
\begin{equation}
\lim_{c \to 0^+} L_{\omega,c} = 0, \qquad \text{(sub-Riemannian geodesic RW)}.
\end{equation}
For the alternative sub-Riemannian geodesic random walk discussed above, we have:
\begin{equation}
\lim_{c\to 0^+} \sigma_i^{alt}(c) = \frac{d}{4\pi(d+1)} \left(1 + \frac{\alpha_i^2}{\sum_{\ell=1}^d \alpha_\ell^2}\right) \int_{-2\pi}^{2\pi} \sinc\left(\frac{\alpha_i x}{2\alpha_d}\right)^2 dx, \qquad \forall i = 1,\ldots,d.
\end{equation}
As in Section~\ref{Sect:IntrinsicFormula}, we can define a new metric $\g''$, on the same distribution, such that
\begin{equation}
X_{2i-1}'':= \sqrt{\sigma_i^{alt}(0)} X_{2i-1}, \qquad X_{2i}'':= \sqrt{\sigma_i^{alt}(0)} X_{2i}, \qquad i=1,\ldots,d
\end{equation}
are a global orthonormal frame, where $\sigma_i^{alt}(0):=\lim_{c \to 0^+} \sigma_i^{alt}(c) > 0$. Then, by Theorem~\ref{t:limit-contact-carnot-alternative} we obtain a formula similar to the one of Corollary~\ref{cor:intrinsicformula}:
\begin{equation}
L_{\omega,0}^{alt}:=\lim_{c \to 0^+} L_{\omega,c}^{alt} = \dive_{\mathcal{P}} \circ \grad'', \qquad \text{(alternative sub-Riemannian geodesic RW)},
\end{equation}
where $\grad''$ is the horizontal gradient computed w.r.t. $\g''$. Unless all the $\alpha_i$ are equal, in general $\sigma_i^{alt}(0) \neq \sigma_j^{alt}(0)$ and $\grad''$ is not proportional to $\grad$.
Notice that $L_{\omega,0}^{alt}$ is a non-zero operator, symmetric w.r.t.\ Popp volume, and it does not depend on the choice of the initial volume $\omega$. This makes $L_{\omega,0}^{alt}$ (and the corresponding diffusion) an intriguing candidate for an intrinsic sub-Laplacian (and an intrinsic Brownian motion) for contact Carnot groups.
For the Heisenberg group $\mathbb{H}_{2d+1}$, where $\alpha_i = 1$ for all $i$, by Theorem~\ref{t:limit-contact-Heis} we have:
\begin{equation}
L_{\omega,0}^{alt} = \sigma^{alt}(0) \dive_{\mathcal{P}} \circ \grad, \quad \text{where} \quad \sigma^{alt}(0) = \frac{1}{4\pi}\int_{-2\pi}^{2\pi} \sinc(x)^2 dx.
\end{equation}
\end{rmk}
\begin{rmk}[Signed measures]
A further alternative construction is one in which we remove the absolute value in Definition~\ref{d:mesIto} of $\mu_q^\varepsilon$ on $\cyl_q$. In this case we lose the probabilistic interpretation, and we deal with a signed measure; still, we have an analogue of Theorem~\ref{t:limit-contact-carnot} for the operators themselves, replacing the constants $\sigma_1(c),\ldots,\sigma_d(c)$ with
\begin{equation}
\widetilde{\sigma}_i(c)= \frac{c d}{(d+1)\sum_{j=1}^d \int_{-\infty}^{+\infty} g_j(y) dy} \sum_{\ell=1}^d (1+\delta_{\ell i}) \int_{-\infty}^{+\infty} g_\ell(c p_z)\frac{\sin(\tfrac{\alpha_i p_z}{2})^2}{(\alpha_i p_z/2)^2} dp_z.
\end{equation}
We observe the same qualitative behavior as for the initial construction, highlighted in Sections~\ref{Sect:IntrinsicFormula} and~\ref{Sect:Symbol}.
\end{rmk}
\subsection{The 3D Heisenberg group}
We give more details for the sub-Riemannian geodesic random walk in the 3D Heisenberg group. This is a contact Carnot group with $d=1$ and $\alpha_1=1$. The identity of the group is $(x,z) = 0$. In coordinates $(p_x,p_z) \in T_0^*M$ we have
\begin{align}
\cyl_0 & = \{(p_x,p_z) \in \R^2 \times \R \mid \|p_x\|^2 = 1\},\\
\cyl_0^\varepsilon & = \{(p_x,p_z) \in \R^2 \times \R \mid \|p_x\|^2 = 1, \quad |p_z|\leq 2\pi/\varepsilon \},
\end{align}
see \cite{ABB-Hausdorff}. Here we set $\omega$ equal to the Lebesgue volume. From the proof of Theorem~\ref{t:limit-contact-carnot}, we obtain, in cylindrical coordinates $(\theta,p_z) \in \mathbb{S}^1 \times \R \simeq T_0^*M$,
\begin{equation}
\mu_0^{c\varepsilon} = \begin{dcases}
\frac{c\varepsilon |g(c \varepsilon p_z)|}{2\pi\int_{-\infty}^{\infty} |g(y)|dy} d\theta \wedge dp_z & \text{original construction}, \\
\frac{c\varepsilon |g(c \varepsilon p_z)|}{2\pi\int_{-2\pi c}^{2\pi c} |g(y)| dy} d\theta \wedge dp_z & \text{alternative construction}, \\
\end{dcases}
\end{equation}
where
\begin{equation}
g(y)= \frac{\sin\left(\tfrac{y}{2}\right) \left(\tfrac{y}{2} \cos\left(\tfrac{y}{2}\right)- \sin\left(\tfrac{y}{2}\right)\right) }{(y/2)^{4}}.
\end{equation}
The normalization is determined by the conditions
\begin{equation}
\begin{cases}
\int_{\cylsmall_0} |\mu_0^{c\varepsilon}| = 1 & \text{original construction}, \\
\int_{\cylsmall_0^\varepsilon} |\mu_0^{c\varepsilon}| = 1 & \text{alternative construction}.
\end{cases}
\end{equation}
The density corresponding to $\mu_0^{c\varepsilon}$, in coordinates $(p_x,p_z)$, depends only on $p_z$. For any fixed $c>0$, the density (for either construction) spreads out as $\varepsilon \to 0$, and thus the probability to follow a geodesic with large $p_z$ increases (see Fig.~\ref{f:spread}).
\begin{figure}
\includegraphics[scale=1]{fig3.pdf}
\caption[meas]{Measures on $\cyl$ for $c=1$ in the Heisenberg group $\mathbb{H}_3$ for the original construction. Each zero corresponds to a conjugate point.}\label{f:spread}
\end{figure}
\section{Flow random walks}\label{s:RW-flow}
The main difficulties in proving the convergence of the sub-Riemannian geodesic random walk with volume sampling were related to the non-compactness of $\cyl_q$, and the lack of a general asymptotics for $\mu_q^\varepsilon$. To overcome these difficulties, we discuss a different class of walks. This approach is inspired by the classical integration of a Stratonovich SDE, and can be implemented on Riemannian and sub-Riemannian structures alike (the only requirement being a set of vector fields $V_1,\ldots,V_k$ on a smooth manifold $M$, and a volume $\omega$ for volume sampling).
\subsection{Stratonovich SDEs via flow random walks} \label{s:strato-intro}
Let $M$ be a smooth $n$-dimensional manifold, and let $V_1,\ldots,V_k$ be smooth vector fields on $M$. Since SDEs are fundamentally local objects (at least in the case of smooth vector fields, where the SDE has a unique, and thus Markov, solution), we do not worry about the global behavior of the $V_i$, and thus we assume, without loss of generality, that the flow along any vector field $V=\beta_1V_1+\beta_2V_2+\cdots+\beta_kV_k$ for any constants $\beta_i$ exists for all time. Further, we can assume that there exists a Riemannian metric $\g$ on $M$ such that the $V_i$ all have bounded norm.
We consider the Stratonovich SDE
\begin{equation}\label{Eqn:StratSDE}
dq_t = \sum_{i=1}^k V_i\left( q_t\right) \circ d(\sqrt{2} w_t^i) , \qquad q_0=q,
\end{equation}
for some $q\in M$, where $w_t^1,\ldots,w_t^k$ are independent, one-dimensional Brownian motions. We recall that solving this SDE is essentially equivalent to solving the martingale problem for the operator $\sum_{i=1}^k V_i^2$. (See \cite[Chapter 5]{KaratzasShreve} for the precise relationship between solutions to SDEs and solutions to martingale problems, although in this case, because of strong uniqueness of the solution to \eqref{Eqn:StratSDE}, the situation is relatively simple.) We also assume that the solution to \eqref{Eqn:StratSDE}, which we denote $q^0_t$, does not explode.
The sequence of random walks which we associate to \eqref{Eqn:StratSDE} is as follows. We take $\varepsilon>0$. Consider the $k$-dimensional vector space of all linear combinations $\beta_1V_1+\beta_2V_2+\cdots+\beta_kV_k$. Then we can naturally identify $\mathbb{S}^{k-1}$ with the set $\sum_{i=1}^k \beta_i^2=1$, and thus choose a $k$-tuple $(\beta_1,\ldots,\beta_k)$ from the sphere according to the uniform probability measure. This gives a random linear combination $V=\beta_1V_1+\beta_2V_2+\cdots+\beta_kV_k$. Now, starting from $q$, we flow along the vector field $\frac{2k}{\varepsilon} V$ for time $\delta= \varepsilon^2/(2k)$, traveling a curve of length $\varepsilon\|V\|_{\g}$. This determines the first step of a random walk (and the measure $\Pi^{\varepsilon}_q$). Determining each additional step in the same way produces a family of random walks $q^{\varepsilon}_t$, that we call the \emph{flow random walk} at scale $\varepsilon$ associated with the SDE \eqref{Eqn:StratSDE}.
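As a concrete illustration, here is a minimal simulation sketch in Python of one path of the flow random walk, using the standard frame of the Heisenberg group as an illustrative (not canonical) choice of fields; the flow along the sampled combination is integrated with a few Runge--Kutta substeps.
\begin{verbatim}
# Minimal sketch of the flow random walk at scale eps (our illustration).
import numpy as np

def heisenberg_fields(q):
    # orthonormal frame X1, X2 on R^3 (illustrative choice)
    x, y, z = q
    return [np.array([1.0, 0.0,  y / 2.0]),
            np.array([0.0, 1.0, -x / 2.0])]

def flow_step(q, eps, fields=heisenberg_fields, substeps=20):
    k = len(fields(q))
    beta = np.random.randn(k)
    beta /= np.linalg.norm(beta)        # uniform point on S^{k-1}
    delta = eps ** 2 / (2 * k)          # duration of one step
    V = lambda p: (2 * k / eps) * sum(b * X for b, X in zip(beta, fields(p)))
    h = delta / substeps                # RK4 integration of the flow of (2k/eps) V
    for _ in range(substeps):
        k1 = V(q); k2 = V(q + h / 2 * k1)
        k3 = V(q + h / 2 * k2); k4 = V(q + h * k3)
        q = q + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return q

q, eps = np.zeros(3), 0.1
path = [q]
for _ in range(1000):                   # 1000 steps of duration eps^2/(2k)
    q = flow_step(q, eps)
    path.append(q)
\end{verbatim}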
We associate to each process $q^{\varepsilon}_t$ and $q^0_t$ the corresponding probability measures $P^{\varepsilon}$ and $P^0$ on $\Omega(M)$. The operator induced by the walks converges to the sum-of-squares operator $\sum_{i=1}^k V_i^2$, uniformly on compact sets, by smoothness. Then, by Theorem \ref{t:convergence}, the measures $P^{\varepsilon} \to P^0$ weakly as $\varepsilon\rightarrow 0$. Note that since this holds for any metric $\g$ as described above, this is really a statement about processes on $M$ as a smooth manifold, and the occurrence of $\g$ is just an artifact of the formalism of Theorem~\ref{t:convergence}. Also, we again note that in this construction, it would be possible to allow $k>n$.
The relationship of Stratonovich integration to ODEs, and thus flows of vector fields, is not new. Approximating the solution to a Stratonovich SDE by an ODE driven by an approximation to Brownian motion is considered in \cite{Doss} and \cite{Sussmann}. Here, we have tried to give a simple, random walk approach emphasizing the geometry of the situation. Nonetheless, because $M$ is locally diffeomorphic to $\mathbb{R}^n$ (or a ball around the origin in $\mathbb{R}^n$, depending on one's preferences) and the entire construction is preserved by diffeomorphism, there is nothing particularly geometric about the above, except perhaps the observation that the construction is coordinate independent.
\subsection{Volume sampling through the flow}
The random walk defined in the previous section, which depends only on the choice of $k$ smooth vector fields $V_1,\ldots,V_k$, fits in the general class of walks of Section~\ref{s:convergence}. Moreover, the construction can be generalized to include a volume sampling technique, as we now describe.
Here $V_1,\ldots,V_k$ are a fixed set of global orthonormal fields of a complete (sub-)Rieman\-nian structure, and for this reason we rename them $X_1,\ldots,X_k$. We will discuss in which cases the limit diffusion does not depend on this choice. Notice that, as a consequence of our assumption on completeness of the (sub-)Riemannian structure, any linear combination of the $X_i$'s with constant coefficients is complete.
\begin{rmk}
If $TM$ is not trivial, clearly such a global frame does not exist. To overcome this difficulty, one can consider a locally finite cover $\{U_i\}_{i \in I}$, each element equipped with a preferred local orthonormal frame. For each $q \in M$, there exists a finite set of indices $I_q$ such that $q \in U_i$ if and only if $i \in I_q$. Hence, one can easily generalize the forthcoming construction by choosing, with uniform probability, one of the finitely many local orthonormal frames available at $q$. Another possibility is to consider an overdetermined set $X_1,\ldots,X_N$ of global vector fields generating the same (sub-)Rieman\-nian structure, as explained in \cite[Sec. 3.1.4]{nostrolibro}. Either choice leads to equivalent random walks; hence, for simplicity, we restrict in the following to the case of trivial $TM$.
\end{rmk}
\begin{definition}
For any $q \in M$, and $\varepsilon>0$, the \emph{endpoint map} $E_{q,\varepsilon} : \R^k \to M$ gives the point $E_{q,\varepsilon}(u)$ at time $\varepsilon$ of the integral curve of the vector field $X_u:=\sum_{i=1}^k u_i X_i$ starting from $q \in M$. Moreover, let $S_{q,\varepsilon}:=E_{q,\varepsilon}(\mathbb{S}^{k-1})$.
\end{definition}
\begin{rmk}
For sufficiently small $\varepsilon>0$, $E_{q,\varepsilon} : \mathbb{S}^{k-1} \to S_{q,\varepsilon}$ is a diffeomorphism, and for any unit $u \in \mathbb{S}^{k-1}$, note that $\gamma_u(\varepsilon + \tau):=E_{q,\varepsilon+\tau}(u)$ is a segment of the flow line transverse to $S_{q,\varepsilon}$.
\end{rmk}
The next step is to induce a probability measure $\mu_{q}^\varepsilon$ on $\mathbb{S}^{k-1}$ via volume sampling through the endpoint map. We start with the Riemannian case.
\subsection{Flow random walks with volume sampling in the Riemannian setting}
In this case $k=n$, and the specification of the volume sampling scheme is quite natural.
\begin{definition}\label{d:fakeRiem}
Let $(M,\g)$ be a Riemannian manifold. For any $q \in M$ and $\varepsilon >0$, we define the family of densities $\mu_q^\varepsilon$ on $\mathbb{S}^{n-1}$
\begin{equation}
\mu_q^\varepsilon(u):= \frac{1}{N(q,\varepsilon)} \left\lvert (E_{q,\varepsilon}^* \circ \iota_{\dot\gamma_u(\varepsilon)} \omega)(u)\right\rvert, \qquad \forall u \in \mathbb{S}^{n-1},
\end{equation}
where $N(q,\varepsilon)$ is fixed by the condition $\int_{\mathbb{S}^{n-1}}\mu_q^\varepsilon = 1$. For $\varepsilon =0$, we set $\mu_q^0$ to be the standard normalized density on $\mathbb{S}^{n-1}$.
\end{definition}
Then, we define a random walk by choosing $u \in \mathbb{S}^{n-1}$ according to $\mu_q^\varepsilon$, and following the corresponding integral curve. That is, for $\varepsilon > 0$
\begin{equation}\label{eq:process-fakeriem}
r_{(i+1)\delta,c}^\varepsilon:=E_{r_{i\delta,c}^\varepsilon,\varepsilon}(u), \qquad u \in \mathbb{S}^{n-1} \text{ chosen with probability } \mu_q^{c\varepsilon},
\end{equation}
where we have also introduced the parameter $c \in [0,1]$ for the volume sampling. This class of walks includes the one described in the previous section (by setting $c=0$).
Let $P_{\omega,c}^\varepsilon$ be the probability measure on the space of continuous paths on $M$ associated with $r_{t,c}^\varepsilon$ and consider the associated family of operators that, in this case, is
\begin{equation}\label{eq:operator-fakeriem}
(L_{\omega,c}^\varepsilon \phi)(q) := \frac{1}{ \delta}\int_{\mathbb{S}^{n-1}} [\phi(E_{q,\varepsilon}(u))-\phi(q)] \mu_q^{c\varepsilon}(u), \qquad \forall q \in M,
\end{equation}
for any $\phi \in C^\infty(M)$.
\begin{theorem}\label{t:limit-Riemannian-fake}
Let $(M,\g)$ be a complete Riemannian manifold and $X_1,\ldots,X_n$ be a global set of orthonormal vector fields. Let $c \in [0,1]$ and $\omega = e^h \mathcal{R}$ be a fixed volume on $M$, for some $h \in C^\infty(M)$. Then $L_{\omega,c}^\varepsilon \to L_{\omega,c}$, where
\begin{align}
L_{\omega,c} = \Delta_{\omega} + c \grad(h) + (c-1) \sum_{i=1}^n \dive_\omega(X_i)X_i.
\end{align}
Moreover $P_{\omega,c}^\varepsilon \to P_{\omega,c}$ weakly, where $P_{\omega,c}$ is the law of the process associated with $L_{\omega,c}$ (which we assume does not explode).
\end{theorem}
The limiting operator is not intrinsic in general, as it clearly depends on the choice of the orthonormal frame. However, thanks to this explicit formula, we have the following.
\begin{cor}\label{c:corflow1}
The operator $L_{\omega,c}$ does not depend on the choice of the orthonormal frame if and only if $c=1$. In this case
\begin{align}
L_{\omega,1} = \Delta_{\omega} + \grad(h) = \Delta_{e^h \omega} = \Delta_{e^{2h} \mathcal{R}} .
\end{align}
\end{cor}
Even though $L_{\omega,1}$ is intrinsic and depends only on the Riemannian structure and the volume $\omega$, it is not symmetric in $L^2(M,\omega)$ unless we choose $h$ to be constant. This selects a preferred volume $\omega = \mathcal{R}$, up to a proportionality constant.
\begin{cor}\label{c:corflow2}
The operator $L_{\omega,c}$ with domain $C^\infty_c(M)$ is essentially self-adjoint in $L^2(M, \omega)$ if and only if $c=1$ and $\omega$ is proportional to the Riemannian volume.
\end{cor}
On the other hand, by setting $c=0$, we recover the ``sum of squares'' generator of the solution of the Stratonovich SDE \eqref{Eqn:StratSDE}.
\begin{cor}\label{c:corflow3}
The operator $L_{\omega,0}$ depends on the choice of the vector fields $X_1,\ldots,X_n$, but not on the choice of the volume $\omega$, in particular
\begin{align}
L_{\omega,0} = \sum_{i=1}^n X_i^2.
\end{align}
\end{cor}
\subsection{Flow random walks with volume sampling in the sub-Riemannian setting}
To extend the flow random walk construction to the sub-Riemannian setting we need vector fields $Z_1,\ldots,Z_{n-k}$ on $M$, transverse to $\distr$, in such a way that $\iota_{Z_1,\ldots,Z_{n-k}} \omega$ is a well-defined $k$-form that we can use to induce a measure on $\mathbb{S}^{k-1}$ as in Definition~\ref{d:fakeRiem}.
In general there is no natural choice of these $Z_1,\ldots,Z_{n-k}$. We explain the construction in detail for contact sub-Riemannian structures, where such a natural choice exists. Indeed, this class contains contact Carnot groups.
\subsubsection{Contact sub-Riemannian structures}
A sub-Riemannian structure $(M,\distr,\g)$ is \emph{contact} if there exists a global one-form $\eta$ such that $\distr = \ker \eta$. This forces $\dim(M) = 2d+1$ and $\rank \distr = 2d$, for some $d \geq 1$. Consider the skew-symmetric \emph{contact endomorphism} $J : \Gamma(\distr) \to \Gamma(\distr)$, defined by the relation
\begin{equation}
g(X,JY) = d\eta(X,Y), \qquad \forall X, Y \in \Gamma(\distr).
\end{equation}
We assume that $J$ is non-degenerate. Multiplying $\eta$ by a non-zero smooth function $f$ gives the same contact structure, with contact endomorphism $f J$. We fix $\eta$ up to sign by taking
\begin{equation}
\tr(JJ^*) = 1.
\end{equation}
The Reeb vector field is defined as the unique vector $X_0$ such that
\begin{equation}
\eta(X_0) = 1, \qquad \iota_{X_0} d\eta = 0.
\end{equation}
In this case the Popp density is the unique density such that $\mathcal{P}(X_0,X_1,\ldots,X_{2d}) =1$ for any orthonormal frame $X_1,\ldots,X_{2d}$ of $\distr$ (see \cite{nostropopp}).
The flow random walk with volume sampling, with volume $\omega$ and sampling ratio $c$, can be implemented as follows.
\begin{definition}
Let $(M,\distr,\g)$ be a contact sub-Riemannian structure with Reeb vector field $X_0$. For any $q \in M$ and $\varepsilon >0$ we define the family of densities $\mu_q^\varepsilon$ on $\mathbb{S}^{k-1}$
\begin{equation}
\mu_q^\varepsilon(u):= \frac{1}{N(q,\varepsilon)} \left\lvert(E_{q,\varepsilon}^* \circ \iota_{X_0,\dot\gamma_u(\varepsilon)} \omega)(u)\right\rvert, \qquad \forall u \in \mathbb{S}^{k-1},
\end{equation}
where $N(q,\varepsilon)$ is fixed by the condition $\int_{\mathbb{S}^{k-1}} \mu_q^\varepsilon = 1$. For $\varepsilon = 0$, we set $\mu_q^0$ to be the standard normalized density on $\mathbb{S}^{k-1}$.
\end{definition}
We define a random walk $r_{t,c}^\varepsilon$ as in \eqref{eq:process-fakeriem}, with sampling ratio $c \in [0,1]$, and we call the associated family of operators $L_{\omega,c}^\varepsilon$ as in \eqref{eq:operator-fakeriem}, with no risk of confusion.
\begin{theorem}\label{t:limit-contact-fake}
Let $(M,\distr,\g)$ be a complete contact sub-Riemannian manifold and $X_1,\ldots,X_{2d}$ be a global set of orthonormal vector fields. Let $c \in [0,1]$ and $\omega = e^h \mathcal{P}$ be a fixed volume on $M$, for some $h \in C^\infty(M)$. Then $L_{\omega,c}^\varepsilon \to L_{\omega,c}$, where
\begin{align}
L_{\omega,c} = \Delta_{\omega} + c \grad(h) + (c-1) \sum_{i=1}^k \dive_\omega(X_i)X_i.
\end{align}
Moreover $P_{\omega,c}^\varepsilon \to P_{\omega,c}$ weakly, where $P_{\omega,c}$ is the law of the process associated with $L_{\omega,c}$ (which we assume does not explode).
\end{theorem}
This construction, in the contact sub-Riemannian case, has the same properties as the Riemannian one, with the Riemannian volume replaced by the Popp volume. In particular we have the following analogues of Corollaries~\ref{c:corflow1},~\ref{c:corflow2}, and \ref{c:corflow3}.
\begin{cor}
The operator $L_{\omega,c}$ does not depend on the choice of the orthonormal frame if and only if $c=1$. In this case
\begin{align}
L_{\omega,1} = \Delta_{\omega} + \grad(h) = \Delta_{e^h \omega} = \Delta_{e^{2h} \mathcal{P}} .
\end{align}
\end{cor}
\begin{cor}
The operator $L_{\omega,c}$ with domain $C^\infty_c(M)$ is essentially self-adjoint in $L^2(M,\omega)$ if and only if $c=1$ and $\omega$ is proportional to the Popp volume.
\end{cor}
\begin{cor}
The operator $L_{\omega,0}$ depends on the choice of the vector fields $X_1,\ldots,X_k$, but not on the choice of the volume $\omega$, in particular
\begin{align}
L_{\omega,0} = \sum_{i=1}^k X_i^2.
\end{align}
\end{cor}
\subsection*{Acknowledgments}
This research has been partially supported by the European Research Council, ERC StG 2009 ``GeCoMethods'', contract n.\ 239748, by the ERC POC project ARTIV1 contract number 727283, by the ANR project ``SRGI'' ANR-15-CE40-0018, by a public grant as part of the Investissement d'avenir project, reference ANR-11-LABX-0056-LMH, LabEx LMH (in a joint call with Programme Gaspard Monge en Optimisation et Recherche Op\'erationnelle), by the iCODE institute, research project of the Idex Paris-Saclay, and by the SMAI project ``BOUM''. Project sponsored by the National Security Agency under Grant Number H98230-15-1-0171. The United States Government is authorized to reproduce and distribute reprints notwithstanding any copyright notation herein.
\medskip
The authors wish to thank J.-M. Bismut for helpful discussions.
\bibliographystyle{abbrv}
| {
"timestamp": "2017-05-03T02:03:43",
"yymm": "1601",
"arxiv_id": "1601.03304",
"language": "en",
"url": "https://arxiv.org/abs/1601.03304",
"abstract": "We relate some basic constructions of stochastic analysis to differential geometry, via random walk approximations. We consider walks on both Riemannian and sub-Riemannian manifolds in which the steps consist of travel along either geodesics or integral curves associated to orthonormal frames, and we give particular attention to walks where the choice of step is influenced by a volume on the manifold. A primary motivation is to explore how one can pass, in the parabolic scaling limit, from geodesics, orthonormal frames, and/or volumes to diffusions, and hence their infinitesimal generators, on sub-Riemannian manifolds, which is interesting in light of the fact that there is no completely canonical notion of sub-Laplacian on a general sub-Riemannian manifold. However, even in the Riemannian case, this random walk approach illuminates the geometric significance of Ito and Stratonovich stochastic differential equations as well as the role played by the volume.",
"subjects": "Differential Geometry (math.DG); Probability (math.PR)",
"title": "Intrinsic random walks in Riemannian and sub-Riemannian geometry via volume sampling",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517456453798,
"lm_q2_score": 0.808067204308405,
"lm_q1q2_score": 0.7903315397726172
} |
https://arxiv.org/abs/1705.10079 | A Gronwall inequality for a general Caputo fractional operator | In this paper we present a new type of fractional operator, which is a generalization of the Caputo and Caputo--Hadamard fractional derivative operators. We study some properties of the operator, namely we prove that it is the inverse operation of a generalized fractional integral. A relation between this operator and a Riemann--Liouville type is established. We end with a fractional Gronwall inequality type, which is useful to compare solutions of fractional differential equations. | \section{Introduction}
Fractional calculus is an important subject with numerous applications to fields outside mathematics, like physics \cite{Carpintery,Mainardi,West}, chemistry \cite{Bagley,Douglas,Kaplan}, biology \cite{Arafa,Magin,Sebaa,Xu}, engineering \cite{Duarte,Feliu,Ortiguera,Silva}, etc. It allows us to define derivatives and integrals of non-integer order, which may be more suitable to model real world phenomena, and nowadays the subject is important not only within mathematics, but also for its numerous applications in the applied sciences. We find in the literature several definitions of fractional operators, the most important ones being the Riemann--Liouville, the Caputo and the Gr\"unwald--Letnikov fractional derivatives \cite{Kilbas,Samko}. The choice of the best operator depends on the system under analysis, and because of this we find a vast literature dealing with different operators for similar problems. To overcome this situation, one solution is to consider general definitions of fractional operators, from which we can recover the classical ones as particular cases. For example, using general kernels we can obtain some of the most important fractional operators \cite{Odzijewicz1,Odzijewicz2}. In Section \ref{sec:FC} we review some of the most important notions concerning fractional derivatives and integrals.
In \cite{Katugampola1,Katugampola2,Katugampola3}, U. Katugampola presents a general form of fractional operator, obtained by introducing a new parameter $\rho>0$, from which we recover the Riemann--Liouville fractional operators when $\rho=1$, and the Hadamard fractional operators as $\rho\to0^+$. Later, in \cite{Almeida}, the authors present a Caputo type fractional derivative of order $\alpha\in(0,1)$, and some properties are proven. In this work, we start by defining a Caputo--Katugampola fractional derivative of arbitrary real order $\alpha>0$, and as we shall see it is the inverse operator of the Katugampola fractional integral. Several properties of the new fractional derivative operator are studied in Section \ref{sec:CK}. To end, in Section \ref{sec:Gronwall}, we present and prove a fractional inequality of Gronwall type, generalizing the ones presented in \cite{Lin,Qian,Ye}.
\section{Preliminaries on fractional calculus}\label{sec:FC}
Let $x:[a,b]\to\mathbb R$ be an integrable function.
Starting with Cauchy's formula for an $n$-fold integral
$$\int_a^t d\tau_1\,\int_a^{\tau_1}d\tau_2\,\ldots \int_a^{\tau_{n-1}}x(\tau_n) \, d\tau_n=\frac{1}{(n-1)!}\int_a^t (t-\tau)^{n-1}x(\tau) d\tau,$$
we find a direct generalization for integrals of arbitrary real order $\alpha>0$. The Riemann--Liouville fractional integral of order $\alpha$ of $x$ is defined as
$${I_{a+}^\alpha}x(t)=\frac{1}{\Gamma(\alpha)}\int_a^t (t-\tau)^{\alpha-1}x(\tau) d\tau,$$
where $\Gamma(\cdot)$ denotes the Gamma function.
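For example, a standard computation with the change of variables $u=(\tau-a)/(t-a)$ and the Beta function gives, for $\nu>-1$,
$${I_{a+}^\alpha}(t-a)^\nu=\frac{\Gamma(\nu+1)}{\Gamma(\nu+\alpha+1)}(t-a)^{\nu+\alpha}.$$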
Later, by considering the formula
$$\int_a^t \frac{1}{\tau_1}\, d\tau_1\,\int_a^{\tau_1} \frac{1}{\tau_2}\, d\tau_2\,\ldots \int_a^{\tau_{n-1}} \frac{1}{\tau_n}x(\tau_n)\, d\tau_n
=\frac{1}{(n-1)!}\int_a^t \left(\ln\frac{t}{\tau}\right)^{n-1}\frac{x(\tau)}{\tau} d\tau,$$
Hadamard defined a new type of fractional operator, known nowadays as Hadamard fractional integral \cite{Hadamdard}:
$${^HI_{a+}^\alpha}x(t)=\frac{1}{\Gamma(\alpha)}\int_a^t \left(\ln\frac{t}{\tau}\right)^{\alpha-1}x(\tau) \frac{d\tau}{\tau}.$$
Fractional derivatives are defined using the fractional integral operators. Beginning with the Riemann--Liouville fractional integral, we find the most important definitions for fractional derivatives. Let $\alpha>0$ and $n\in\mathbb N$ be such that $\alpha\in(n-1,n)$. The Riemann--Liouville fractional derivative of order $\alpha$ of a function $x$ is defined as
$${D_{a+}^\alpha}x(t)=\left(\frac{d}{dt}\right)^n{I_{a+}^{n-\alpha}}x(t)=\frac{1}{\Gamma(n-\alpha)}\left(\frac{d}{dt}\right)^n\int_a^t (t-\tau)^{n-\alpha-1}x(\tau) d\tau,$$
while the Caputo fractional derivative is defined as
$${^CD_{a+}^\alpha}x(t)={I_{a+}^{n-\alpha}}\left(\frac{d}{dt}\right)^n x(t)=\frac{1}{\Gamma(n-\alpha)}\int_a^t (t-\tau)^{n-\alpha-1}\left(\frac{d}{d\tau}\right)^nx(\tau) d\tau.$$
As for the Hadamard fractional derivative, we have
$${^H D_{a+}^\alpha}x(t)=\left(t\frac{d}{dt}\right)^n{^HI_{a+}^{n-\alpha}}x(t)=\frac{1}{\Gamma(n-\alpha)}\left(t\frac{d}{dt}\right)^n\int_a^t
\left(\ln\frac{t}{\tau}\right)^{n-\alpha-1}x(\tau) \frac{d\tau}{\tau}.$$
For more on the subject, we refer the reader to \cite{Kilbas,Samko}.
Finally, in \cite{Baleanu1,Baleanu2}, the Caputo--Hadamard fractional derivative is presented and some of its properties are studied; the definition is
$${^{CH}D_{a+}^\alpha}x(t)={^HI_{a+}^{n-\alpha}} \left(t\frac{d}{dt}\right)^nx(t)=\frac{1}{\Gamma(n-\alpha)}\int_a^t
\left(\ln\frac{t}{\tau}\right)^{n-\alpha-1}\left(\tau\frac{d}{d\tau}\right)^nx(\tau) \frac{d\tau}{\tau}.$$
The previous notions can be generalized by introducing a new parameter in the definitions, and for some particular cases we recover the classical ones.
In \cite{Katugampola1}, starting with the formula
$$\int_a^t \tau_1^{\rho-1}\, d\tau_1\,\int_a^{\tau_1} \tau_2^{\rho-1}\, d\tau_2\,\ldots \int_a^{\tau_{n-1}} \tau_n^{\rho-1}x(\tau_n)\, d\tau_n
=\frac{\rho^{1-n}}{(n-1)!}\int_a^t \tau^{\rho-1}(t^\rho-\tau^\rho)^{n-1}x(\tau) d\tau,$$
Katugampola suggests a new type of fractional integral, which includes the Riemann--Liouville type when $\rho=1$ and the Hadamard integral as $\rho\to0^+$.
\begin{definition} Let $a,b>0$ be two reals, and $x:[a,b]\rightarrow\mathbb{R}$ be an integrable function. The left-sided and right-sided Katugampola fractional integrals of order $\alpha>0$ and parameter $\rho>0$ are defined respectively by
$${\mathcal{I}_{a+}^{\a,\r}} x(t)=\frac{\rho^{1-\alpha}}{\Gamma(\alpha)}\int_a^t \tau^{\rho-1}(t^\rho-\tau^\rho)^{\alpha-1}x(\tau) d\tau$$
and
$${\mathcal{I}_{b-}^{\a,\r}} x(t)=\frac{\rho^{1-\alpha}}{\Gamma(\alpha)}\int_t^b \tau^{\rho-1}(\tau^\rho-t^\rho)^{\alpha-1}x(\tau) d\tau.$$
\end{definition}
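Indeed, for $\rho=1$ the kernel reduces at once to the Riemann--Liouville one, while, writing
$$\rho^{1-\alpha}\tau^{\rho-1}(t^\rho-\tau^\rho)^{\alpha-1}=\frac{\tau^{\rho}}{\tau}\left(\frac{t^\rho-\tau^\rho}{\rho}\right)^{\alpha-1}$$
and using $\lim_{\rho\to0^+}(t^\rho-\tau^\rho)/\rho=\ln(t/\tau)$ and $\lim_{\rho\to0^+}\tau^\rho=1$, we recover the Hadamard fractional integral as $\rho\to0^+$.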
Also, in \cite{Katugampola2}, a differential operator of order $\alpha>0$ with dependence on a parameter $\rho>0$ is defined as
$${\mathcal{D}_{a+}^{\a,\r}} x(t)=\left(t^{1-\rho}\frac{d}{dt}\right)^n{\mathcal{I}_{a+}^{n-\alpha,\rho}}x(t)=
\frac{\rho^{1-n+\alpha}}{\Gamma(n-\alpha)}\left(t^{1-\rho}\frac{d}{dt}\right)^n\int_a^t \tau^{\rho-1}(t^\rho-\tau^\rho)^{n-\alpha-1}x(\tau) d\tau,$$
for the left-sided fractional derivative, and for the right-sided fractional derivative we have
$${\mathcal{D}_{b-}^{\a,\r}} x(t)=\left(-t^{1-\rho}\frac{d}{dt}\right)^n{\mathcal{I}_{b-}^{n-\alpha,\rho}}x(t)=
\frac{\rho^{1-n+\alpha}}{\Gamma(n-\alpha)}\left(-t^{1-\rho}\frac{d}{dt}\right)^n\int_t^b \tau^{\rho-1}(\tau^\rho-t^\rho)^{n-\alpha-1}x(\tau) d\tau.$$
\section{Caputo--Katugampola fractional derivative}\label{sec:CK}
Having in mind the different definitions for fractional operators, a notion of Caputo--Katugampola fractional derivative is immediate.
\begin{definition}\label{CKFD}
Let $0< a<b<\infty$ be two reals, $\rho$ be a positive real number, $\alpha\in\mathbb R^+$ and $n\in\mathbb N$ be such that $\alpha\in(n-1,n)$, and $x:[a,b]\rightarrow\mathbb{R}$ a function of class $C^n$. The left-sided and right-sided Caputo--Katugampola fractional derivatives of order $\alpha$ and parameter $\rho$ are defined respectively by
$${^C\mathcal{D}_{a+}^{\a,\r}} x(t)= {\mathcal{I}_{a+}^{n-\alpha,\rho}}\left(t^{1-\rho}\frac{d}{dt}\right)^nx(t)
=\frac{\rho^{1-n+\alpha}}{\Gamma(n-\alpha)}\int_a^t \tau^{\rho-1}(t^\rho-\tau^\rho)^{n-\alpha-1}\left(\tau^{1-\rho}\frac{d}{d\tau}\right)^nx(\tau) d\tau$$
and
$${^C\mathcal{D}_{b-}^{\a,\r}} x(t)= {\mathcal{I}_{b-}^{n-\alpha,\rho}}\left(-t^{1-\rho}\frac{d}{dt}\right)^nx(t)
=\frac{\rho^{1-n+\alpha}}{\Gamma(n-\alpha)}\int_t^b \tau^{\rho-1}(\tau^\rho-t^\rho)^{n-\alpha-1}\left(-\tau^{1-\rho}\frac{d}{d\tau}\right)^nx(\tau) d\tau.$$
\end{definition}
We refer to \cite{Almeida} for a detailed study when $\alpha\in(0,1)$. Also, since the left-sided and right-sided Katugampola fractional integrals are bounded linear operators, it is clear that the left-sided and right-sided Caputo--Katugampola fractional derivatives are continuous operators on the closed interval $[a,b]$.
From the definition, it is obvious that the fractional derivative of a constant is zero.
In order to simplify the writing, we introduce the notation
$$x_{(n)}(t):=\left(t^{1-\rho}\frac{d}{dt}\right)^nx(t).$$
Let $C^n[a,b]$ be the set of functions $x$ such that $x^{(n)}$ exists and is continuous on $[a,b]$. We define on $C^n[a,b]$ the norms
$$\|x\|^\rho_{C^n}=\sum_{k=0}^n\max_{t\in[a,b]}|x_{(k)}(t)| \quad \mbox{and} \quad \|x\|_{C}=\max_{t\in[a,b]}|x(t)|.$$
\begin{theorem} The following relations hold:
$$\lim_{\alpha\to n^-}{^C\mathcal{D}_{a+}^{\a,\r}} x(t)=x_{(n)}(t), \quad \lim_{\alpha\to (n-1)^+}{^C\mathcal{D}_{a+}^{\a,\r}} x(t)=x_{(n-1)}(t)-x_{(n-1)}(a),$$
$$\lim_{\alpha\to n^-}{^C\mathcal{D}_{b-}^{\a,\r}} x(t)=(-1)^nx_{(n)}(t), \quad \lim_{\alpha\to (n-1)^+}{^C\mathcal{D}_{b-}^{\a,\r}} x(t)=(-1)^n(x_{(n-1)}(b)-x_{(n-1)}(t)).$$
\end{theorem}
\begin{proof} Integrating by parts, we deduce
$$\begin{array}{ll}
\displaystyle{^C\mathcal{D}_{a+}^{\a,\r}} x(t)&=\displaystyle\frac{\rho^{1-n+\alpha}}{\Gamma(n-\alpha)}\int_a^t \tau^{\rho-1}(t^\rho-\tau^\rho)^{n-\alpha-1}x_{(n)}(\tau) d\tau\\
&=\displaystyle\frac{\rho^{-n+\alpha}}{\Gamma(n+1-\alpha)}(t^\rho-a^\rho)^{n-\alpha}x_{(n)}(a)+\frac{\rho^{-n+\alpha}}{\Gamma(n+1-\alpha)}\int_a^t (t^\rho-\tau^\rho)^{n-\alpha}
\frac{d}{d\tau}x_{(n)}(\tau) d\tau.\\
\end{array}$$
Thus,
$$\lim_{\alpha\to n^-}{^C\mathcal{D}_{a+}^{\a,\r}} x(t)=x_{(n)}(a)+[x_{(n)}(t)-x_{(n)}(a)]=x_{(n)}(t).$$
For the second formula, starting with the definition, we obtain directly that
$$\begin{array}{ll}
\displaystyle \lim_{\alpha\to (n-1)^+}{^C\mathcal{D}_{a+}^{\a,\r}} x(t)&=\displaystyle \int_a^t \frac{d}{d\tau}x_{(n-1)}(\tau) d\tau\\
&=\displaystyle x_{(n-1)}(t)-x_{(n-1)}(a).
\end{array}$$
The other two formulas are proven in a similar way.
\end{proof}
The following result is easily proven, and we omit the proof here.
\begin{theorem}\label{teo:bound} Given a function $x\in C^n[a,b]$ and $t\in[a,b]$, we have
$$|{^C\mathcal{D}_{a+}^{\a,\r}} x(t)|\leq \frac{\rho^{\alpha-n}}{\Gamma(n+1-\alpha)}\max_{\tau\in[a,t]}|x_{(n)}(\tau)|(t^\rho-a^\rho)^{n-\alpha}$$
and
$$|{^C\mathcal{D}_{b-}^{\a,\r}} x(t)|\leq \frac{\rho^{\alpha-n}}{\Gamma(n+1-\alpha)}\max_{\tau\in[t,b]}|x_{(n)}(\tau)|(b^\rho-t^\rho)^{n-\alpha}.$$
In particular, ${^C\mathcal{D}_{a+}^{\a,\r}} x(a)=0$ and ${^C\mathcal{D}_{b-}^{\a,\r}} x(b)=0$.
\end{theorem}
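In fact, it suffices to bound $|x_{(n)}|$ by its maximum and to compute
$$\int_a^t \tau^{\rho-1}(t^\rho-\tau^\rho)^{n-\alpha-1}\,d\tau=\frac{(t^\rho-a^\rho)^{n-\alpha}}{\rho(n-\alpha)},$$
which, combined with the factor $\rho^{1-n+\alpha}/\Gamma(n-\alpha)$ in Definition \ref{CKFD}, gives the constants above.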
\begin{theorem} The fractional derivatives ${^C\mathcal{D}_{a+}^{\a,\r}}$ and ${^C\mathcal{D}_{b-}^{\a,\r}}$ are bounded operators from $C^n[a,b]$ to $C[a,b]$, with
$$\|{^C\mathcal{D}_{a+}^{\a,\r}} x\|_C\leq K \|x\|^\rho_{C^n} \quad \mbox{and} \quad \|{^C\mathcal{D}_{b-}^{\a,\r}} x\|_C\leq K \|x\|^\rho_{C^n},$$
where
$$K= \frac{\rho^{\alpha-n}}{\Gamma(n+1-\alpha)}(b^\rho-a^\rho)^{n-\alpha}.$$
\end{theorem}
\begin{proof} Given $t\in[a,b]$ and $x\in C^n[a,b]$, using the fact that $|x_{(n)}(t)|\leq \|x\|^\rho_{C^n}$ and Theorem \ref{teo:bound}, the result follows.
\end{proof}
\begin{lemma} Consider the functions $x,y:[a,b]\to\mathbb R$ given by
$$x(t)=(t^\rho-a^\rho)^v, \quad y(t)=(b^\rho-t^\rho)^v, \quad \mbox{with} \, v>n-1.$$
Then
$${^C\mathcal{D}_{a+}^{\a,\r}} x(t)=\frac{\rho^{\alpha}\Gamma(v+1)}{\Gamma(v-\alpha+1)}(t^\rho-a^\rho)^{v-\alpha}\quad \mbox{and} \quad {^C\mathcal{D}_{b-}^{\a,\r}} y(t)=\frac{\rho^{\alpha}\Gamma(v+1)}{\Gamma(v-\alpha+1)}(b^\rho-t^\rho)^{v-\alpha}.$$
\end{lemma}
\begin{proof} We prove only the first one. It is easy to conclude that
$$x_{(n)}(t)=\frac{\rho^n\Gamma(v+1)}{\Gamma(v-n+1)}(t^\rho-a^\rho)^{v-n}.$$
Then,
$${^C\mathcal{D}_{a+}^{\a,\r}} x(t)=\frac{\rho^{1+\alpha}\Gamma(v+1)}{\Gamma(n-\alpha)\Gamma(v-n+1)}(t^\rho-a^\rho)^{n-1-\alpha}
\int_a^t\tau^{\rho-1}\left(1-\frac{\tau^\rho-a^\rho}{t^\rho-a^\rho}\right)^{n-1-\alpha}(\tau^\rho-a^\rho)^{v-n}d\tau.$$
With the change of variables
$u=(\tau^\rho-a^\rho)/(t^\rho-a^\rho)$ and with the help of the Beta function
$$B(x,y)=\int_0^1u^{x-1}(1-u)^{y-1}du, \quad x,y>0,$$
we obtain
$${^C\mathcal{D}_{a+}^{\a,\r}} x(t)=\frac{\rho^{\alpha}\Gamma(v+1)}{\Gamma(n-\alpha)\Gamma(v-n+1)}(t^\rho-a^\rho)^{v-\alpha}B(n-\alpha,v-n+1).$$
Using the useful property
$$B(x,y)=\frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)},$$
we prove the formula
$${^C\mathcal{D}_{a+}^{\a,\r}} (t^\rho-a^\rho)^v=\frac{\rho^{\alpha}\Gamma(v+1)}{\Gamma(v-\alpha+1)}(t^\rho-a^\rho)^{v-\alpha}.$$
\end{proof}
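For $n=1$, that is, $\alpha\in(0,1)$, this power rule is easy to verify numerically. The following Python sketch is ours, not part of the original development; after the substitution $s=\tau^\rho$, the defining integral becomes a Beta-type integral with algebraic endpoint singularities, which SciPy's \texttt{quad} handles through its \texttt{weight='alg'} option.
\begin{verbatim}
# Numerical check of the power rule for n = 1, i.e. 0 < alpha < 1.
# With x(t) = (t^rho - a^rho)^v and s = tau^rho, the definition of the
# left Caputo-Katugampola derivative reduces to
#   rho^alpha * v / Gamma(1-alpha) *
#       int_{a^rho}^{t^rho} (s - a^rho)^(v-1) * (t^rho - s)^(-alpha) ds.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

a, t, rho, alpha, v = 1.0, 2.0, 0.7, 0.4, 1.3   # arbitrary test values

# quad with weight='alg' integrates f(s)*(s-lo)^p*(hi-s)^q over [lo, hi]
integral, _ = quad(lambda s: 1.0, a**rho, t**rho,
                   weight='alg', wvar=(v - 1.0, -alpha))
numeric = rho**alpha * v / gamma(1.0 - alpha) * integral

closed_form = (rho**alpha * gamma(v + 1.0) / gamma(v - alpha + 1.0)
               * (t**rho - a**rho)**(v - alpha))
print(numeric, closed_form)  # the two values agree to quadrature accuracy
\end{verbatim}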
Using these relations, we deduce the fractional derivative of the Mittag--Leffler function
$$E_\alpha(t)=\sum_{k=0}^\infty\frac{t^k}{\Gamma(\alpha k+1)}, \, t\in\mathbb R.$$
For all $\lambda \in\mathbb R$, we have
$$\begin{array}{ll}
\displaystyle{^C\mathcal{D}_{a+}^{\a,\r}} E_\alpha(\lambda(t^\rho-a^\rho)^\alpha)&\displaystyle=\sum_{k=0}^\infty\frac{\lambda^k}{\Gamma(\alpha k+1)}{^C\mathcal{D}_{a+}^{\a,\r}} (t^\rho-a^\rho)^{\alpha k}
=\sum_{k=1}^\infty\frac{\lambda^k}{\Gamma(\alpha k+1)}{^C\mathcal{D}_{a+}^{\a,\r}} (t^\rho-a^\rho)^{\alpha k}\\
&\displaystyle=\sum_{k=1}^\infty\frac{\lambda^k}{\Gamma(\alpha k+1)}\frac{\rho^\alpha\Gamma(\alpha k+1)}{\Gamma(\alpha k+1-\alpha)} (t^\rho-a^\rho)^{\alpha k-\alpha}=\lambda \rho^\alpha E_\alpha(\lambda(t^\rho-a^\rho)^\alpha)\end{array}$$
and
$${^C\mathcal{D}_{b-}^{\a,\r}} E_\alpha(\lambda(b^\rho-t^\rho)^\alpha)=\lambda \rho^\alpha E_\alpha(\lambda(b^\rho-t^\rho)^\alpha).$$
The next two results justify our Definition \ref{CKFD}, since the Caputo--Katugampola fractional derivative is an inverse operation of the Katugampola fractional integral.
\begin{theorem}\label{thm:DerInt} Given a function $x\in C^n[a,b]$, we have
\begin{equation}\label{DerInt}{\mathcal{I}_{a+}^{\a,\r}}{^C\mathcal{D}_{a+}^{\a,\r}} x(t)=x(t)-\sum_{k=0}^{n-1}\frac{\rho^{-k}}{k!}(t^\rho-a^\rho)^kx_{(k)}(a)\end{equation}
and
$${\mathcal{I}_{b-}^{\a,\r}}{^C\mathcal{D}_{b-}^{\a,\r}} x(t)=x(t)-\sum_{k=0}^{n-1}\frac{\rho^{-k}(-1)^k}{k!}(b^\rho-t^\rho)^kx_{(k)}(b).$$
\end{theorem}
\begin{proof} Using Theorem 4.1 in \cite{Katugampola1}, we have
$$
\begin{array}{ll}
\displaystyle{\mathcal{I}_{a+}^{\a,\r}}{^C\mathcal{D}_{a+}^{\a,\r}} x(t)&={\mathcal{I}_{a+}^{\a,\r}} {\mathcal{I}_{a+}^{n-\alpha,\rho}}x_{(n)}(t)={\mathcal{I}_{a+}^{n,\rho}}x_{(n)}(t)\\
&\\
&\displaystyle=\frac{\rho^{1-n}}{(n-1)!}\int_a^t(t^\rho-\tau^\rho)^{n-1}\frac{d}{d\tau}x_{(n-1)}(\tau)d\tau.
\end{array}$$
Using integration by parts, we deduce
$${\mathcal{I}_{a+}^{\a,\r}}{^C\mathcal{D}_{a+}^{\a,\r}} x(t)=\frac{\rho^{2-n}}{(n-2)!}\int_a^t(t^\rho-\tau^\rho)^{n-2}\frac{d}{d\tau}x_{(n-2)}(\tau)d\tau-\frac{\rho^{1-n}}{(n-1)!}(t^\rho-a^\rho)^{n-1}x_{(n-1)}(a).$$
Integrating again by parts, we have
$${\mathcal{I}_{a+}^{\a,\r}}{^C\mathcal{D}_{a+}^{\a,\r}} x(t)=\frac{\rho^{3-n}}{(n-3)!}\int_a^t(t^\rho-\tau^\rho)^{n-3}\frac{d}{d\tau}x_{(n-3)}(\tau)d\tau-\sum_{k=n-2}^{n-1}\frac{\rho^{-k}}{k!}(t^\rho-a^\rho)^{k}x_{(k)}(a).$$
Repeating this procedure $n-3$ times, we arrive at
$$ \begin{array}{ll}
\displaystyle{\mathcal{I}_{a+}^{\a,\r}}{^C\mathcal{D}_{a+}^{\a,\r}} x(t)&=\displaystyle\int_a^t\frac{d}{d\tau}x(\tau)d\tau-\sum_{k=1}^{n-1}\frac{\rho^{-k}}{k!}(t^\rho-a^\rho)^{k}x_{(k)}(a)\\
&\\
&\displaystyle=x(t)-\sum_{k=0}^{n-1}\frac{\rho^{-k}}{k!}(t^\rho-a^\rho)^kx_{(k)}(a).
\end{array}$$
The second formula is proven in a similar way.
\end{proof}
Taking $\rho=1$, formula \eqref{DerInt} reduces to the Caputo case (see e.g. Lemma 2.22 in \cite{Kilbas}):
$${I_{a+}^\alpha}{^CD_{a+}^\alpha} x(t)=x(t)-\sum_{k=0}^{n-1}\frac{1}{k!}(t-a)^kx^{(k)}(a),$$
and as $\rho\to0^+$, having in mind that $\lim_{\rho\to0^+}(t^\rho-a^\rho)/\rho=\ln(t/a)$,
we obtain Lemma 2.5 in \cite{Baleanu2}:
$${^H I_{a+}^\alpha}{^{CH}D_{a+}^\alpha} x(t)=x(t)-\sum_{k=0}^{n-1}\frac{1}{k!}\left(\ln\frac{t}{a}\right)^k \left[\left(t\frac{d}{dt}\right)^kx(t)\right]_{t=a}.$$
\begin{theorem}\label{thm:IntDer} Given a function $x\in C^1[a,b]$, we have
$${^C\mathcal{D}_{a+}^{\a,\r}}{\mathcal{I}_{a+}^{\a,\r}} x(t)=x(t) \quad \mbox{and} \quad {^C\mathcal{D}_{b-}^{\a,\r}}{\mathcal{I}_{b-}^{\a,\r}} x(t)=x(t).$$
\end{theorem}
\begin{proof} We prove the formula for the left-sided fractional operators only. By definition,
\begin{equation}\label{aux1}{^C\mathcal{D}_{a+}^{\a,\r}}{\mathcal{I}_{a+}^{\a,\r}} x(t)={\mathcal{I}_{a+}^{n-\alpha,\rho}}y_{(n)}(t), \quad \mbox{with} \quad y_{(n)}(t)=\left(t^{1-\rho}\frac{d}{dt}\right)^n{\mathcal{I}_{a+}^{\a,\r}} x(t).
\end{equation}
Computing directly, and since $\alpha\in(n-1,n)$, we get
$$ \begin{array}{ll}
\displaystyle y_{(1)}(t)&=\displaystyle t^{1-\rho}\frac{d}{dt}\frac{\rho^{1-\alpha}}{\Gamma(\alpha)}\int_a^t \tau^{\rho-1}(t^\rho-\tau^\rho)^{\alpha-1}x(\tau) d\tau\\
&\\
&=\displaystyle\frac{\rho^{2-\alpha}}{\Gamma(\alpha-1)}\int_a^t \tau^{\rho-1}(t^\rho-\tau^\rho)^{\alpha-2}x(\tau) d\tau.\\
\end{array}$$
Repeating the process, we arrive at the expression
$$ \begin{array}{ll}
\displaystyle y_{(n-1)}(t)&=\displaystyle\frac{\rho^{n-\alpha-1}}{\Gamma(\alpha-n+1)}\int_a^t \rho \tau^{\rho-1}(t^\rho-\tau^\rho)^{\alpha-n}x(\tau) d\tau\\
&\\
&=\displaystyle\frac{\rho^{n-\alpha-1}}{\Gamma(\alpha-n+2)}\left[(t^\rho-a^\rho)^{\alpha-n+1}x(a)+\int_a^t (t^\rho-\tau^\rho)^{\alpha-n+1}\frac{d}{d\tau}x(\tau) d\tau\right],\\
\end{array}$$
where the last equality follows from integration by parts. We finally arrive at
$$y_{(n)}(t)=t^{1-\rho}\frac{d}{dt}y_{(n-1)}(t)=\frac{\rho^{n-\alpha}}{\Gamma(\alpha-n+1)}\left[(t^\rho-a^\rho)^{\alpha-n}x(a)+\int_a^t (t^\rho-\tau^\rho)^{\alpha-n}\frac{d}{d\tau}x(\tau) d\tau\right].$$
Then, substituting this last expression into equation \eqref{aux1}, we obtain
$${^C\mathcal{D}_{a+}^{\a,\r}}{\mathcal{I}_{a+}^{\a,\r}} x(t)=\frac{\rho}{\Gamma(n-\alpha)\Gamma(\alpha-n+1)}\left[x(a)\int_a^t\tau^{\rho-1}(t^\rho-\tau^\rho)^{n-\alpha-1}(\tau^\rho-a^\rho)^{\alpha-n}d\tau\right.$$
$$\left.+\int_a^t \int_a^\tau \tau^{\rho-1}(t^\rho-\tau^\rho)^{n-\alpha-1}(\tau^\rho-s^\rho)^{\alpha-n}\frac{d}{ds}x(s) ds\,d\tau\right].$$
With the change of variables $u=(\tau^\rho-a^\rho)/(t^\rho-a^\rho)$, we get
$$\int_a^t\tau^{\rho-1}(t^\rho-\tau^\rho)^{n-\alpha-1}(\tau^\rho-a^\rho)^{\alpha-n}d\tau
=\int_a^t\tau^{\rho-1}(t^\rho-a^\rho)^{n-\alpha-1}\left(1-\frac{\tau^\rho-a^\rho}{t^\rho-a^\rho}\right)^{n-\alpha-1}(\tau^\rho-a^\rho)^{\alpha-n}d\tau$$
$$=\frac1\rho\int_0^1(1-u)^{n-\alpha-1}u^{\alpha-n}du=\frac1\rho B(n-\alpha,\alpha-n+1)=\frac{\Gamma(n-\alpha)\Gamma(\alpha-n+1)}{\rho}.$$
In a similar way, using Dirichlet's formula, we get
$$\int_a^t \int_a^\tau \tau^{\rho-1}(t^\rho-\tau^\rho)^{n-\alpha-1}(\tau^\rho-s^\rho)^{\alpha-n}\frac{d}{ds}x(s) ds\,d\tau
=\int_a^t \int_\tau^t s^{\rho-1}(t^\rho-s^\rho)^{n-\alpha-1}(s^\rho-\tau^\rho)^{\alpha-n}\frac{d}{d\tau}x(\tau) ds\,d\tau$$
$$=\int_a^t\frac{d}{d\tau}x(\tau)\frac{\Gamma(n-\alpha)\Gamma(\alpha-n+1)}{\rho}d\tau=\frac{\Gamma(n-\alpha)\Gamma(\alpha-n+1)}{\rho}(x(t)-x(a)).$$
Then,
$${^C\mathcal{D}_{a+}^{\a,\r}}{\mathcal{I}_{a+}^{\a,\r}} x(t)=\frac{\rho}{\Gamma(n-\alpha)\Gamma(\alpha-n+1)}\left[x(a)\frac{\Gamma(n-\alpha)\Gamma(\alpha-n+1)}{\rho}\right.$$
$$\left.+\frac{\Gamma(n-\alpha)\Gamma(\alpha-n+1)}{\rho}(x(t)-x(a))\right]=x(t).$$
\end{proof}
Again, for $\rho=1$ and $\rho\to0^+$, we recover the classical formulas as in Lemma 2.21 of \cite{Kilbas} and in Lemma 2.4 of \cite{Baleanu2}, respectively:
$${^CD_{a+}^\alpha}{I_{a+}^\alpha} x(t)={^{CH}D_{a+}^\alpha}{^H I_{a+}^\alpha} x(t)=x(t).$$
We now establish a relation between the Katugampola and the Caputo--Katugampola fractional derivatives.
\begin{theorem}\label{thm:DerR-C} Let $x\in C^n[a,b]$ be a function. Then
$$ {^C\mathcal{D}_{a+}^{\a,\r}} x(t)={\mathcal{D}_{a+}^{\a,\r}} \left[x(t)-\sum_{k=0}^{n-1}\frac{\rho^{-k}}{k!}(t^\rho-a^\rho)^{k}x_{(k)}(a)\right]$$
and
$$ {^C\mathcal{D}_{b-}^{\a,\r}} x(t)={\mathcal{D}_{b-}^{\a,\r}} \left[x(t)-\sum_{k=0}^{n-1}\frac{\rho^{-k}(-1)^k}{k!}(b^\rho-t^\rho)^{k}x_{(k)}(b)\right].$$
\end{theorem}
\begin{proof} Starting with the definition of the Katugampola fractional derivative, and integrating by parts, one deduces
$$\left(t^{1-\rho}\frac{d}{dt}\right)^n\int_a^t \tau^{\rho-1}(t^\rho-\tau^\rho)^{n-\alpha-1}
\left[x(\tau)-\sum_{k=0}^{n-1}\frac{\rho^{-k}}{k!}(\tau^\rho-a^\rho)^{k}x_{(k)}(a)\right] d\tau$$
$$=\left(t^{1-\rho}\frac{d}{dt}\right)^n \int_a^t \frac{\tau^{\rho-1}(t^\rho-\tau^\rho)^{n-\alpha}}{\rho(n-\alpha)}
\left[x_{(1)}(\tau)-\sum_{k=1}^{n-1}\frac{\rho^{1-k}}{(k-1)!}(\tau^\rho-a^\rho)^{k-1}x_{(k)}(a)\right] d\tau$$
$$=\left(t^{1-\rho}\frac{d}{dt}\right)^{n-1} \int_a^t\tau^{\rho-1}(t^\rho-\tau^\rho)^{n-\alpha-1}
\left[x_{(1)}(\tau)-\sum_{k=1}^{n-1}\frac{\rho^{1-k}}{(k-1)!}(\tau^\rho-a^\rho)^{k-1}x_{(k)}(a)\right] d\tau.$$
Repeating the process $n-2$ more times, we arrive at the equivalent expression
$$t^{1-\rho}\frac{d}{dt}\int_a^t \frac{\tau^{\rho-1}(t^\rho-\tau^\rho)^{n-\alpha}}{\rho(n-\alpha)}x_{(n)}(\tau) d\tau
=\int_a^t \tau^{\rho-1}(t^\rho-\tau^\rho)^{n-\alpha-1}x_{(n)}(\tau) d\tau,$$
proving the first formula. The second one is obtained in a similar way.
\end{proof}
The formula obtained in Theorem \ref{thm:DerR-C} allows us to deduce a direct relation between the two types of fractional derivative operators. In fact, similarly to what was done before, we have
$${\mathcal{D}_{a+}^{\a,\r}} (t^\rho-a^\rho)^{k}=\frac{\rho^{-n+\alpha}k!}{\Gamma(n-\alpha+k+1)}\left(t^{1-\rho}\frac{d}{dt}\right)^n(t^\rho-a^\rho)^{n-\alpha+k},$$
and since
$$\left(t^{1-\rho}\frac{d}{dt}\right)^n(t^\rho-a^\rho)^{n-\alpha+k}=\rho^n\frac{\Gamma(n-\alpha+k+1)}{\Gamma(k+1-\alpha)}(t^\rho-a^\rho)^{k-\alpha},$$
we get the following relation
$$ {^C\mathcal{D}_{a+}^{\a,\r}} x(t)={\mathcal{D}_{a+}^{\a,\r}} x(t)-\sum_{k=0}^{n-1}\frac{\rho^{\alpha-k}}{\Gamma(k+1-\alpha)}(t^\rho-a^\rho)^{k-\alpha}x_{(k)}(a).$$
Analogously, we obtain
$$ {^C\mathcal{D}_{b-}^{\a,\r}} x(t)={\mathcal{D}_{b-}^{\a,\r}} x(t)-\sum_{k=0}^{n-1}\frac{\rho^{\alpha-k}(-1)^k}{\Gamma(k+1-\alpha)}(b^\rho-t^\rho)^{k-\alpha}x_{(k)}(b).$$
The following result establishes an integration by parts formula, generalizing the formula proven in \cite{Almeida2} to arbitrary real $\alpha>0$.
\begin{theorem} Let $x\in C[a,b]$ and $y\in C^n[a,b]$ be two functions. Then,
$$\int_a^b x(t) \, {^C\mathcal{D}_{a+}^{\a,\r}} y(t) \, dt=\int_a^b {\mathcal{D}_{b-}^{\a,\r}}(t^{1-\rho}x(t))\, t^{\rho-1}y(t) \, dt$$
$$+\left[\sum_{k=0}^{n-1}\left(-t^{1-\rho}\frac{d}{dt}\right)^k {\mathcal{I}_{b-}^{n-\alpha,\rho}}(t^{1-\rho}x(t))\, y_{(n-k-1)}(t) \right]_{t=a}^{t=b},$$
and
$$\int_a^b x(t) \, {^C\mathcal{D}_{b-}^{\a,\r}} y(t) \, dt=\int_a^b {\mathcal{D}_{a+}^{\a,\r}}(t^{1-\rho}x(t))\, t^{\rho-1}y(t) \, dt
$$
$$+\left[\sum_{k=0}^{n-1}(-1)^{n-k}\left(t^{1-\rho}\frac{d}{dt}\right)^k {\mathcal{I}_{a+}^{n-\alpha,\rho}}(t^{1-\rho}x(t))\, y_{(n-k-1)}(t) \right]_{t=a}^{t=b}.$$
\end{theorem}
\begin{proof}
We prove only the first formula; the second one is similar. Applying Dirichlet's formula and integrating by parts, we get
$$\begin{array}{ll}
\displaystyle \int_a^b x(t) \, {^C\mathcal{D}_{a+}^{\a,\r}} y(t) \, dt & =\displaystyle
\frac{\rho^{1-n+\alpha}}{\Gamma(n-\alpha)}\int_a^b\int_a^t x(t)(t^\rho-\tau^\rho)^{n-\alpha-1}\frac{d}{d\tau}y_{(n-1)}(\tau)\, d\tau\,dt\\
&\displaystyle = \frac{\rho^{1-n+\alpha}}{\Gamma(n-\alpha)}\int_a^b\int_t^b x(\tau)(\tau^\rho-t^\rho)^{n-\alpha-1}\, d\tau\, \frac{d}{dt}y_{(n-1)}(t)\,dt\\
&\displaystyle = \frac{\rho^{1-n+\alpha}}{\Gamma(n-\alpha)}\left[\int_t^b x(\tau)(\tau^\rho-t^\rho)^{n-\alpha-1}\, d\tau\, y_{(n-1)}(t)\right]_{t=a}^{t=b}\\
&\displaystyle - \frac{\rho^{1-n+\alpha}}{\Gamma(n-\alpha)}\int_a^b\frac{d}{dt}\left(\int_t^b x(\tau)(\tau^\rho-t^\rho)^{n-\alpha-1}\, d\tau\right)\, t^{1-\rho}\frac{d}{dt}y_{(n-2)}(t)\,dt\\
&\displaystyle = \left[ {\mathcal{I}_{b-}^{n-\alpha,\rho}}(t^{1-\rho}x(t)) \, y_{(n-1)}(t)\right]_{t=a}^{t=b}\\
&\displaystyle +\int_a^b -t^{1-\rho}\frac{d}{dt} {\mathcal{I}_{b-}^{n-\alpha,\rho}}(t^{1-\rho}x(t)) \, \frac{d}{dt}y_{(n-2)}(t)\,dt.\\
\end{array}$$
Integrating once more by parts, we obtain
$$\begin{array}{ll}
\displaystyle \int_a^b x(t) \, {^C\mathcal{D}_{a+}^{\a,\r}} y(t) \, dt
&\displaystyle = \left[ \sum_{k=0}^1 \left(-t^{1-\rho}\frac{d}{dt}\right)^k{\mathcal{I}_{b-}^{n-\alpha,\rho}}(t^{1-\rho}x(t)) \, y_{(n-k-1)}(t)\right]_{t=a}^{t=b}\\
&\displaystyle + \int_a^b\left(-t^{1-\rho}\frac{d}{dt}\right)^2 {\mathcal{I}_{b-}^{n-\alpha,\rho}}(t^{1-\rho}x(t)) \, \frac{d}{dt}y_{(n-3)}(t)\,dt.
\end{array}$$
If we integrate by parts $n-3$ more times, we get
$$\begin{array}{ll}
\displaystyle \int_a^b x(t) \, {^C\mathcal{D}_{a+}^{\a,\r}} y(t) \, dt
&\displaystyle = \left[ \sum_{k=0}^{n-2} \left(-t^{1-\rho}\frac{d}{dt}\right)^k{\mathcal{I}_{b-}^{n-\alpha,\rho}}(t^{1-\rho}x(t)) \, y_{(n-k-1)}(t)\right]_{t=a}^{t=b}\\
&\displaystyle + \int_a^b\left(-t^{1-\rho}\frac{d}{dt}\right)^{n-1} {\mathcal{I}_{b-}^{n-\alpha,\rho}}(t^{1-\rho}x(t)) \, \frac{d}{dt}y(t)\,dt.
\end{array}$$
The formula follows by integrating the last integral by parts once more.
\end{proof}
\section{The Gronwall inequality}\label{sec:Gronwall}
The Gronwall inequality plays a central role in the theory of differential equations, since it allows one to estimate the difference between the solutions of two differential equations $\dot{x}(t)=f(t,x(t))$ and $\dot{x}(t)=g(t,x(t))$ in terms of the difference between the corresponding initial conditions and the difference between the dynamics $f$ and $g$ (see e.g. \cite{Dragomir}).
Recently, the Gronwall inequality has been generalized to the fractional setting, both for the Riemann--Liouville fractional derivative \cite{Ye} and for the Hadamard fractional derivative \cite{Gong}. Here we present a more general form, valid for the Katugampola fractional derivative.
\begin{theorem} Let $u,v$ be two integrable functions and $g$ a continuous function, with domain $[a,b]$. Assume that
\begin{enumerate}
\item $u$ and $v$ are nonnegative;
\item $g$ is nonnegative and nondecreasing.
\end{enumerate}
If
$$u(t)\leq v(t)+g(t)\rho^{1-\alpha}\int_a^t\tau^{\rho-1}(t^\rho-\tau^\rho)^{\alpha-1}u(\tau)\,d\tau, \quad \forall t\in[a,b],$$
then
$$u(t)\leq v(t)+\int_a^t\sum_{k=1}^\infty\frac{\rho^{1-k\alpha}(g(t)\Gamma(\alpha))^k}{\Gamma(k\alpha)}\tau^{\rho-1}(t^\rho-\tau^\rho)^{k\alpha-1}v(\tau)\,d\tau, \quad \forall t\in[a,b].$$
In addition, if $v$ is nondecreasing, then
$$u(t)\leq v(t)E_\alpha\left[g(t)\Gamma(\alpha)\left(\frac{t^\rho-a^\rho}{\rho}\right)^\alpha\right], \quad \forall t\in[a,b].$$
\end{theorem}
\begin{proof} Define the operator
$$\Psi x = g(t)\rho^{1-\alpha}\int_a^t \tau^{\rho-1}(t^\rho-\tau^\rho)^{\alpha-1}x(\tau)\,d\tau.$$
Then $u(t)\leq v(t)+\Psi u(t)$. Iterating, we obtain, for every $n\in\mathbb N$,
$$u(t)\leq \sum_{k=0}^{n-1}\Psi^k v(t)+\Psi^n u(t).$$
Let us prove, by mathematical induction, that if $x$ is a nonnegative function, then
$$\Psi^k x(t) \leq \rho^{1-k\alpha}\int_a^t \frac{(g(t)\Gamma(\alpha))^k}{\Gamma(k\alpha)}\tau^{\rho-1}(t^\rho-\tau^\rho)^{k\alpha-1}x(\tau)\,d\tau.$$
For $k=1$ it is obvious. Suppose that the formula is valid for some $k\in\mathbb N$. Then,
$$\Psi^{k+1} x(t)=\Psi\Psi^k x(t) \leq g(t)\rho^{1-\alpha}\int_a^t \tau^{\rho-1}(t^\rho-\tau^\rho)^{\alpha-1}
\rho^{1-k\alpha}\int_a^\tau \frac{(g(\tau)\Gamma(\alpha))^k}{\Gamma(k\alpha)}s^{\rho-1}(\tau^\rho-s^\rho)^{k\alpha-1}x(s)\,ds\,d\tau.$$
Since $g$ is nondecreasing, $g(\tau)\leq g(t)$, for all $\tau\leq t$, and so
$$\Psi^{k+1} x(t)\leq (g(t))^{k+1}\rho^{2-(k+1)\alpha} \frac{(\Gamma(\alpha))^k}{\Gamma(k\alpha)} \int_a^t \int_a^\tau \tau^{\rho-1}(t^\rho-\tau^\rho)^{\alpha-1}
s^{\rho-1}(\tau^\rho-s^\rho)^{k\alpha-1}x(s)\,ds\,d\tau.$$
Using the Dirichlet's formula, we get
$$\Psi^{k+1} x(t)\leq (g(t))^{k+1}\rho^{2-(k+1)\alpha} \frac{(\Gamma(\alpha))^k}{\Gamma(k\alpha)} \int_a^t \tau^{\rho-1}x(\tau) \int_\tau^t s^{\rho-1}(t^\rho-s^\rho)^{\alpha-1}
(s^\rho-\tau^\rho)^{k\alpha-1}\,ds\,d\tau.$$
Evaluating the inner integral as in the proof of Theorem \ref{thm:IntDer}, we obtain
$$\int_\tau^t s^{\rho-1}(t^\rho-s^\rho)^{\alpha-1}(s^\rho-\tau^\rho)^{k\alpha-1}\,ds=\frac{\Gamma(\alpha)\Gamma(k\alpha)}{\rho\Gamma(k\alpha+\alpha)}(t^\rho-\tau^\rho)^{(k+1)\alpha-1}.$$
Then,
$$\Psi^{k+1} x(t) \leq \rho^{1-(k+1)\alpha}\int_a^t \frac{(g(t)\Gamma(\alpha))^{k+1}}{\Gamma((k+1)\alpha)}\tau^{\rho-1}(t^\rho-\tau^\rho)^{(k+1)\alpha-1}x(\tau)\,d\tau,$$
proving the desired inequality. Let us now prove that $\Psi^n u(t)\to0$ as $n\to\infty$. First, since $g$ is continuous on the interval $[a,b]$, there exists a constant $M>0$ such that $g(t)\leq M$, for all $t\in[a,b]$. Then
$$0\leq \Psi^n u(t) \leq \rho^{1-n\alpha}\int_a^t \frac{(M\Gamma(\alpha))^n}{\Gamma(n\alpha)}\tau^{\rho-1}(t^\rho-\tau^\rho)^{n\alpha-1}u(\tau)\,d\tau.$$
Consider the series
$$\sum_{n=1}^\infty \frac{(M\Gamma(\alpha))^n}{\Gamma(n\alpha)}.$$
Applying the ratio test to this series, together with the asymptotic relation
$$\lim_{n\to\infty}\frac{\Gamma(n\alpha)(n\alpha)^\alpha}{\Gamma(n\alpha+\alpha)}=1,$$
we see that the ratio between consecutive terms satisfies
$$ \lim_{n\to\infty}M\Gamma(\alpha)\frac{\Gamma(n\alpha)}{\Gamma(n\alpha+\alpha)}=0.$$
Thus, the series converges; in particular, its general term tends to zero, and therefore $\Psi^n u(t)\to0$ as $n\to\infty$.
In conclusion, we have
$$u(t)\leq \sum_{k=0}^\infty\Psi^k v(t)\leq v(t)+\int_a^t\sum_{k=1}^\infty\frac{\rho^{1-k\alpha}(g(t)\Gamma(\alpha))^k}{\Gamma(k\alpha)}\tau^{\rho-1}(t^\rho-\tau^\rho)^{k\alpha-1}v(\tau)\,d\tau.$$
For the second case, suppose now that $v$ is nondecreasing. Then, for all $\tau\in[a,t]$, we have $v(\tau)\leq v(t)$, and so
$$\begin{array}{ll}
u(t)& \leq \displaystyle v(t)\left[1+\sum_{k=1}^\infty\frac{\rho^{1-k\alpha}(g(t)\Gamma(\alpha))^k}{\Gamma(k\alpha)}\int_a^t\tau^{\rho-1}(t^\rho-\tau^\rho)^{k\alpha-1}\,d\tau\right]\\
& = \displaystyle v(t)\left[1+\sum_{k=1}^\infty\frac{\rho^{-k\alpha}(g(t)\Gamma(\alpha)(t^\rho-a^\rho)^\alpha)^k}{\Gamma(k\alpha+1)}\right]\\
&=\displaystyle v(t)E_\alpha\left[g(t)\Gamma(\alpha)\left(\frac{t^\rho-a^\rho}{\rho}\right)^\alpha\right].
\end{array}$$
\end{proof}
For the right fractional operator, the following result is proven in a similar way.
\begin{theorem} Let $u,v$ be two integrable functions and $g$ a continuous function, with domain $[a,b]$. Assume that
\begin{enumerate}
\item $u$ and $v$ are nonnegative;
\item $g$ is nonnegative and nonincreasing.
\end{enumerate}
If
$$u(t)\leq v(t)+g(t)\rho^{1-\alpha}\int_t^b\tau^{\rho-1}(\tau^\rho-t^\rho)^{\alpha-1}u(\tau)\,d\tau, \quad \forall t\in[a,b],$$
then
$$u(t)\leq v(t)+\int_t^b\sum_{k=1}^\infty\frac{\rho^{1-k\alpha}(g(t)\Gamma(\alpha))^k}{\Gamma(k\alpha)}\tau^{\rho-1}(\tau^\rho-t^\rho)^{k\alpha-1}v(\tau)\,d\tau, \quad \forall t\in[a,b].$$
In addition, if $v$ is nonincreasing, then
$$u(t)\leq v(t)E_\alpha\left[g(t)\Gamma(\alpha)\left(\frac{b^\rho-t^\rho}{\rho}\right)^\alpha\right], \quad \forall t\in[a,b].$$
\end{theorem}
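As a numerical illustration, consider the case of constant $g$ and $v$, where every estimate in the proof of the left-sided inequality holds with equality, so the Mittag--Leffler bound should be attained up to discretization error. The following sketch is our own (all parameter values are arbitrary); it takes $\alpha>1$ so that the kernel is bounded and a plain trapezoidal rule suffices.
\begin{verbatim}
# Fixed-point iteration u <- v + Psi(u) for constant g and v, compared
# with the Mittag-Leffler bound (attained in this special case).
import numpy as np
from scipy.special import gamma

a, b, rho, alpha = 0.5, 2.0, 0.8, 1.5   # alpha > 1: bounded kernel
c, v0 = 0.6, 1.0                        # g(t) = c, v(t) = v0

N = 800
t = np.linspace(a, b, N)

def ml(alpha, z, terms=80):
    # truncated Mittag-Leffler series
    k = np.arange(terms)
    return np.sum(z**k / gamma(alpha * k + 1.0))

u = np.full(N, v0)
for _ in range(50):
    new = np.empty(N)
    for i in range(N):
        tau = t[:i + 1]
        f = tau**(rho - 1) * (t[i]**rho - tau**rho)**(alpha - 1) * u[:i + 1]
        # trapezoidal rule on [a, t_i]
        new[i] = v0 + c * rho**(1 - alpha) * np.sum(
            (f[1:] + f[:-1]) * np.diff(tau)) / 2
    u = new

bound = v0 * ml(alpha, c * gamma(alpha) * ((b**rho - a**rho) / rho)**alpha)
print(u[-1], bound)  # nearly equal, as expected for constant g and v
\end{verbatim}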
Using the Gronwall inequality, we can relate solutions of two fractional differential equations. Consider the following fractional differential equation
\begin{equation}\label{FDE}\left\{
\begin{array}{l}
{^C\mathcal{D}_{a+}^{\a,\r}} x(t)=f(t,x(t))\\
x_{(i)}(a)=x_a^i, \quad i=0,\ldots,n-1,
\end{array} \right.\end{equation}
where $f:[a,b]\times\mathbb R\to\mathbb R$ is a continuous function, $\alpha\in(n-1,n)$ and $x_a^i$ are fixed reals, for $i=0,\ldots,n-1$. Applying the fractional integral operator ${\mathcal{I}_{a+}^{\a,\r}}$ to both sides of the fractional differential equation in system \eqref{FDE} and using Theorem \ref{thm:DerInt}, we get
\begin{equation}\label{FDE:Volterra}\begin{array}{ll}
x(t)&=\displaystyle\sum_{k=0}^{n-1}\frac{\rho^{-k}}{k!}(t^\rho-a^\rho)^kx_{(k)}(a)+{\mathcal{I}_{a+}^{\a,\r}} f(t,x(t))\\
&=\displaystyle\sum_{k=0}^{n-1}\frac{\rho^{-k}}{k!}(t^\rho-a^\rho)^kx_{(k)}(a)+\frac{\rho^{1-\alpha}}{\Gamma(\alpha)}\int_a^t \tau^{\rho-1}(t^\rho-\tau^\rho)^{\alpha-1}f(\tau,x(\tau)) d\tau.
\end{array}\end{equation}
Conversely, if $x$ satisfies Equation \eqref{FDE:Volterra}, then $x$ satisfies system \eqref{FDE}. This is proven by applying the fractional derivative operator ${^C\mathcal{D}_{a+}^{\a,\r}}$ to both sides of Equation \eqref{FDE:Volterra}, using Theorem \ref{thm:IntDer} and the formula
$${^C\mathcal{D}_{a+}^{\a,\r}} (t^\rho-a^\rho)^k=0, \quad \forall k\in\{0,1,\ldots,n-1\}.$$
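The Volterra form \eqref{FDE:Volterra} also suggests a simple numerical scheme. The sketch below is our own illustration for $n=1$, that is, $0<\alpha<1$: it uses product integration, integrating the kernel exactly on each subinterval so that its singularity causes no difficulty, and it is tested on $f(t,x)=\lambda\rho^\alpha x$, whose exact solution, by the Mittag--Leffler relation deduced earlier, is $x(t)=E_\alpha(\lambda(t^\rho-a^\rho)^\alpha)$.
\begin{verbatim}
# Product-rectangle scheme for the Volterra form, n = 1 (0 < alpha < 1).
import numpy as np
from scipy.special import gamma

a, b, rho, alpha, lam, xa = 1.0, 2.0, 0.6, 0.7, -0.8, 1.0
f = lambda t, x: lam * rho**alpha * x   # test equation with known solution

N = 400
t = np.linspace(a, b, N + 1)
x = np.empty(N + 1)
x[0] = xa
for i in range(1, N + 1):
    # exact value of int_{t_j}^{t_{j+1}} tau^(rho-1)(t_i^rho-tau^rho)^(alpha-1) dtau
    w = ((t[i]**rho - t[:i]**rho)**alpha
         - (t[i]**rho - t[1:i + 1]**rho)**alpha) / (rho * alpha)
    x[i] = xa + rho**(1 - alpha) / gamma(alpha) * np.sum(w * f(t[:i], x[:i]))

def ml(alpha, z, terms=100):
    # truncated Mittag-Leffler series
    k = np.arange(terms)
    return np.sum(z**k / gamma(alpha * k + 1.0))

print(x[-1], ml(alpha, lam * (b**rho - a**rho)**alpha))  # close agreement
\end{verbatim}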
\begin{theorem} Let $f,g:[a,b]\times\mathbb R\to\mathbb R$ be two continuous functions, and let $x,y$ be solutions of the following two systems
$$\left\{
\begin{array}{l}
{^C\mathcal{D}_{a+}^{\a,\r}} x(t)=f(t,x(t))\\
x_{(i)}(a)=x_a^i, \quad i=0,\ldots,n-1,
\end{array} \right.$$
and
$$\left\{
\begin{array}{l}
{^C\mathcal{D}_{a+}^{\a,\r}} y(t)=g(t,y(t))\\
y_{(i)}(a)=y_a^i, \quad i=0,\ldots,n-1.
\end{array} \right.$$
Suppose that there exist
\begin{enumerate}
\item a positive constant $C$ such that
$$|g(t,y_1)-g(t,y_2)|\leq C |y_1-y_2|, \quad \forall t\in[a,b], \, \forall y_1,y_2\in\mathbb R;$$
\item a continuous function $\psi:[a,b]\to\mathbb R_0^+$ such that
$$|f(t,x(t))-g(t,x(t))|\leq \psi(t), \quad \forall t\in[a,b].$$
\end{enumerate}
Define the function $v:[a,b]\to\mathbb R$ by
$$v(t)=\sum_{k=0}^{n-1}\frac{\rho^{-k}}{k!}(t^\rho-a^\rho)^k\left|x_{(k)}(a)-y_{(k)}(a)\right|+
\frac{\rho^{1-\alpha}}{\Gamma(\alpha)}\int_a^t \tau^{\rho-1}(t^\rho-\tau^\rho)^{\alpha-1}\psi(\tau) d\tau.$$
Then, for all $t\in[a,b]$,
$$|x(t)-y(t)|\leq v(t)+\int_a^t\sum_{k=1}^\infty\frac{\rho^{1-k\alpha}C^k}{\Gamma(k\alpha)}\tau^{\rho-1}(t^\rho-\tau^\rho)^{k\alpha-1}v(\tau)\,d\tau.$$
\end{theorem}
\begin{proof} Define $u(t)=|x(t)-y(t)|$. Then
$$\begin{array}{ll}
u(t)&\displaystyle\leq \sum_{k=0}^{n-1}\frac{\rho^{-k}}{k!}(t^\rho-a^\rho)^k\left|x_{(k)}(a)-y_{(k)}(a)\right|\\
& \quad \displaystyle+\frac{\rho^{1-\alpha}}{\Gamma(\alpha)}\int_a^t \tau^{\rho-1}(t^\rho-\tau^\rho)^{\alpha-1}|f(\tau,x(\tau))-g(\tau,y(\tau))| d\tau\\
&\displaystyle\leq \sum_{k=0}^{n-1}\frac{\rho^{-k}}{k!}(t^\rho-a^\rho)^k\left|x_{(k)}(a)-y_{(k)}(a)\right|\\
& \quad \displaystyle+\frac{\rho^{1-\alpha}}{\Gamma(\alpha)}\int_a^t \tau^{\rho-1}(t^\rho-\tau^\rho)^{\alpha-1}\left(|f(\tau,x(\tau))-g(\tau,x(\tau))|+|g(\tau,x(\tau))-g(\tau,y(\tau))| \right)d\tau.
\end{array}$$
Using the relations
$$|f(\tau,x(\tau))-g(\tau,x(\tau))|\leq \psi(\tau)\quad \mbox{and} \quad |g(\tau,x(\tau))-g(\tau,y(\tau))|\leq C |x(\tau)-y(\tau)|,$$
and the Gronwall inequality, we prove the result.
\end{proof}
In particular, when $g=f$, we obtain a simpler formula:
\begin{equation}\label{compsolu}\begin{array}{ll}
|x(t)-y(t)|&\displaystyle\leq \sum_{k=0}^{n-1}\frac{\rho^{-k}}{k!}(t^\rho-a^\rho)^k\left|x_{(k)}(a)-y_{(k)}(a)\right|\\
& \quad \displaystyle+\int_a^t\sum_{k=1}^\infty\frac{\rho^{1-k\alpha}C^k}{\Gamma(k\alpha)}\tau^{\rho-1}(t^\rho-\tau^\rho)^{k\alpha-1}
\sum_{j=0}^{n-1}\frac{\rho^{-j}}{j!}(\tau^\rho-a^\rho)^j\left|x_{(j)}(a)-y_{(j)}(a)\right|\,d\tau.
\end{array}\end{equation}
Also, from Equation \eqref{compsolu}, we see that the solution of system \eqref{FDE} is unique.
\section*{Acknowledgments}
The author is very grateful to an anonymous referee, for valuable remarks and comments that improved this paper.
Work supported by Portuguese funds through the CIDMA - Center for Research and Development in Mathematics and Applications, and the Portuguese Foundation for Science and Technology (FCT-Funda\c{c}\~ao para a Ci\^encia e a Tecnologia), within project UID/MAT/04106/2013.
| {
"timestamp": "2017-05-30T02:09:10",
"yymm": "1705",
"arxiv_id": "1705.10079",
"language": "en",
"url": "https://arxiv.org/abs/1705.10079",
"abstract": "In this paper we present a new type of fractional operator, which is a generalization of the Caputo and Caputo--Hadamard fractional derivative operators. We study some properties of the operator, namely we prove that it is the inverse operation of a generalized fractional integral. A relation between this operator and a Riemann--Liouville type is established. We end with a fractional Gronwall inequality type, which is useful to compare solutions of fractional differential equations.",
"subjects": "Classical Analysis and ODEs (math.CA)",
"title": "A Gronwall inequality for a general Caputo fractional operator",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9901401429507652,
"lm_q2_score": 0.7981867753392728,
"lm_q1q2_score": 0.7903167678358379
} |
https://arxiv.org/abs/1909.02417 | The phaseless rank of a matrix | We consider the problem of finding the smallest rank of a complex matrix whose absolute values of the entries are given. We call this minimum the phaseless rank of the matrix of the entrywise absolute values. In this paper we study this quantity, extending a classic result of Camion and Hoffman and connecting it to the study of amoebas of determinantal varieties and of semidefinite representations of convex sets. As a consequence, we prove that the set of maximal minors of a matrix of indeterminates form an amoeba basis for the ideal they define, and we attain a new upper bound on the complex semidefinite extension complexity of polytopes, dependent only on their number of vertices and facets. We also highlight the connections between the notion of phaseless rank and the problem of finding large sets of complex equiangular lines or mutually unbiased bases. | \section{Introduction}
In this paper we study a basic optimization problem: given the absolute values of the entries of a complex matrix, what is the smallest rank that it can have.
In other words, we want the solution to the rank minimization problem for a matrix under complete phase uncertainty. This defines a natural quantity that we will associate to
the matrix of absolute values and call the \emph{phaseless rank} of the matrix.
\begin{definition}
Given $A \in \mathbb{R}^{n\times m}_+$, the set of matrices equimodular with $A$ is denoted by $$\Omega(A) = \{B \in \mathbb{C}^{n\times m}: |B|=A \text{ i.e., } |B_{ij}|=A_{ij}, \forall i,j \}$$
and its phaseless rank is defined as
$$\textup{rank}_\theta \,(A) = \min \{\textup{rank}\,(B): B \in \Omega(A) \}.$$
\end{definition}
Equivalently, the phaseless rank of $A \in \mathbb{R}^{n\times m}_+$ can be written as $$\textup{rank}_\theta \,(A) = \min \{\textup{rank}\,(A \circ B): B \in \mathbb{C}^{n\times m},\ |B_{ij}|=1, \forall i,j \},$$ where $\circ$ represents the Hadamard product of matrices. It is obvious that $\textup{rank}_\theta \,(A) \leq \textup{rank}\,(A)$, and it is not hard to see that we can have a strict inequality.
\begin{example} \label{ex:gapphaseless}
Consider the $4\times 4$ derangement matrix,
$$D_4=\begin{bmatrix}
0 & 1 & 1 & 1\\
1 & 0 & 1 & 1\\
1 & 1 & 0 & 1\\
1 & 1 & 1 & 0
\end{bmatrix}.$$
We have $\textup{rank}\,(D_4)=4$ and, for any real $\theta$, the matrix
$$\begin{bmatrix}
0 & 1 & 1 & 1\\
1 & 0 & e^{i(\theta + \pi)} & e^{i(\theta + \frac{2\pi}{3})}\\
1 & e^{i\theta} & 0 & e^{i(\theta + \frac{\pi}{3})}\\
1 & e^{i(\theta - \frac{\pi}{3})} & e^{i(\theta - \frac{2\pi}{3})} & 0
\end{bmatrix}$$
has rank $2$. Since the entrywise absolute values of this matrix are the entries of $D_4$, we get $\textup{rank}_\theta \,(D_4) \leq 2$, and in fact we have equality.
With some extra effort one can show that, up to row and column multiplication by complex scalars of absolute value one, and conjugation, this is the
only element in the equimodular class of $D_4$ with rank less than or equal to two.
\end{example}
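The example is easy to check numerically; the following small sketch (the value of $\theta$ is arbitrary) verifies both the equimodularity and the rank drop.
\begin{verbatim}
import numpy as np

theta, pi = 0.7, np.pi                 # theta is arbitrary
e = lambda phi: np.exp(1j * phi)
B = np.array([
    [0, 1, 1, 1],
    [1, 0, e(theta + pi), e(theta + 2 * pi / 3)],
    [1, e(theta), 0, e(theta + pi / 3)],
    [1, e(theta - pi / 3), e(theta - 2 * pi / 3), 0]])
D4 = np.ones((4, 4)) - np.eye(4)

assert np.allclose(np.abs(B), D4)          # B is equimodular with D_4
print(np.linalg.matrix_rank(B, tol=1e-9))  # prints 2
\end{verbatim}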
The study of this quantity can be traced back to \cite{camion1966nonsingularity}, where the problem of characterizing $A \in \mathbb{R}^{n\times n}_+$ for which we have $\textup{rank}_\theta \,(A)=n$ is solved. In that paper, the question is seen as finding a converse for diagonal dominance, a sufficient condition for the nonsingularity of a matrix. This result was further generalized in \cite{levinger1972generalization}, where a lower bound on $\textup{rank}_\theta \,(A)$ is derived for general $A$, and some special cases are studied, although the rank itself is never formally introduced. While the result of Camion and Hoffman is well known, there were few, if any, further developments in minimizing the rank over an equimodular class. This problem has, however, resurfaced in recent years under different guises in both the theory of semidefinite lifts of polytopes and amoebas of algebraic varieties. In this work we build on these foundational papers, deriving some new results and highlighting the consequences they have in those related areas.
The paper is organized as follows. In the next section we introduce formally the notions of phaseless and signless ranks and show some relations between them and other rank notions found in the literature. In Section 3, we relate the notion of phaseless rank with questions in amoeba theory and semidefinite representability of sets, providing motivation and intuition to what follows. In Section 4 we revisit a result of Camion and Hoffman, reproving it in a language well-suited to our needs, and drawing some simple consequences. Section 5 covers our extensions and complements to this classic result. Finally, in Section 6, we draw implications from those results to those of the connecting areas. Those include proving that the maximal minors form an amoeba basis for the variety they generate and giving an explicit semialgebraic description for those amoebas, as well as deriving a new upper bound for the complex semidefinite rank of polytopes in terms of their number of facets and vertices, and connecting the notion of phaseless rank to the problem of finding large sets of complex equiangular lines or mutually unbiased bases.
\section{Notation, definitions and basic properties} \label{sec:definitions}
Throughout these notes we will use $\mathbb{R}^{n\times m}_+$ and $\mathbb{R}^{n\times m}_{++}$ to denote the sets of $n\times m$ real matrices with nonnegative and positive entries, respectively. We will also use $\S^n$, $\S_+^n$, $\S^n(\mathbb C)$ and $\S^n_+(\mathbb C)$ to denote, in this order, the sets of $n \times n$ real symmetric matrices, $n \times n$ real positive semidefinite matrices, $n \times n$ complex hermitian matrices and $n \times n$ complex positive semidefinite matrices. Given a matrix in $\mathbb{R}^{n\times m}_+$, we defined its phaseless rank as the smallest rank of a complex matrix equimodular with it. If we restrict ourselves to the real case, we still obtain
a sensible definition, and we will call that quantity the \emph{signless rank}.
\begin{definition}
Let $A\in \mathbb{R}^{n\times m}_+$.
$$\textup{rank}_{\pm} \,(A) = \min \{\textup{rank}\,(B): B \in \Omega(A) \cap \mathbb{R}^{n\times m} \}.$$
\end{definition}
Equivalently, this amounts to minimizing the rank over all possible sign attributions to the entries of $A$.
By construction, it is clear that $\textup{rank}_\theta \,(A) \leq \textup{rank}_{\pm} \,(A) \leq \textup{rank}\,(A)$ for any nonnegative matrix $A$ and all inequalities can be strict.
\begin{example}\label{ex:signlessgap}
Let us revisit Example \ref{ex:gapphaseless}, and note that the signless rank of $D_4$ is $4$. Indeed, if we expand the determinant of that matrix, we get an odd number of nonzero terms, all $1$ or $-1$, so no possible sign attribution can ever make them sum to zero. Thus, $\textup{rank}_\theta \,(D_4)<\textup{rank}_{\pm} \,(D_4)=\textup{rank}\,(D_4)$. On the other hand, if we consider the matrix
$$B=\begin{bmatrix}
2 & 1 & 1\\ 1 & 2 & 1 \\ 1 & 1 & 2
\end{bmatrix}$$
it is easy to see that $\textup{rank}\,(B)=3$ but that flipping the signs of all the $1$'s to $-1$'s drops the rank to $2$, as the matrix rows will then sum to zero, so we have $\textup{rank}_\theta \,(B)=\textup{rank}_{\pm} \,(B)<\textup{rank}\,(B)$. If we want all inequalities to be strict simultaneously, it is enough to make a new matrix with $D_4$ and $B$ as its diagonal blocks.
\end{example}
A short remark at the end of \cite{camion1966nonsingularity} points to the fact that the problem seems much harder over the reals, due to the combinatorial nature it assumes in that context. In fact, the signless rank is essentially equivalent to a different quantity, introduced in \cite{gouveia2013polytopes}, called the \emph{square root rank} of a nonnegative matrix: by definition, $\textup{rank}_{\pm} \,(A)=\textup{rank}_{\! \! {\sqrt{\ }}}\,(A \circ A)$ or, equivalently, $\textup{rank}_{\! \! {\sqrt{\ }}}\,(A)=\textup{rank}_{\pm} \,(\sqrt[\circ]{A})$, where $\circ$ is the Hadamard product and $\sqrt[\circ]{A}$ is the Hadamard square root of $A$. As such, the complexity results proved in \cite{fawzi2015positive} for the square root rank still apply to the signless rank, implying the NP-hardness of deciding whether an $n \times n$ nonnegative matrix has signless rank equal to $n$. The proof of that complexity result relies on the combinatorial nature of the signless rank and fails for the more continuous notion of phaseless rank (in fact, we will see that the analogous result is false for the phaseless rank), offering some hope that this latter quantity will prove easier to work with. We will focus most of our attention on this latter notion.
The connection to the square root rank can actually be used to derive some lower bounds for both $\textup{rank}_{\pm} \,$ and $\textup{rank}_\theta \,$.
\begin{lemma} \label{lem:inequality}
Let $A \in \mathbb{R}^{n\times m}_+$ and $r=\textup{rank}\,(A \circ A)$. Then, $\textup{rank}_{\pm} \,(A) \geq \frac{\sqrt{1+8r}-1}{2}$ and $\textup{rank}_\theta \,(A) \geq \sqrt{r}$.
\end{lemma}
\begin{proof}
The basic idea is that if we take a matrix $B$ equimodular with $A$ and a minimal factorization $B=UV^t$, and let $u_i$ and $v_j$ be the $i$-th and $j$-th rows of $U$ and $V$, respectively, we have
$$\langle u_i u_i^* , v_j v_j^* \rangle = |\langle u_i, v_j \rangle|^2 = |b_{ij}|^2 = a_{ij}^2 .$$
Now all the $u_i u_i^*$ and $v_j v_j^*$ come from the space of real symmetric matrices of size $\textup{rank}_{\pm} \,(A)$, if we are taking real matrices $B$, and complex hermitian matrices of size $\textup{rank}_\theta \,(A)$, if we are taking complex matrices $B$.
Since the real dimensions of these spaces are, respectively, $\binom{\textup{rank}_{\pm} \,(A)+1}{2}$ and $\textup{rank}_\theta \,(A)^2$, and they give real factorizations of $A \circ A$, we get the inequalities
$$\textup{rank}\,(A \circ A) \leq \binom{\textup{rank}_{\pm} \,(A)+1}{2} \textrm{ \ \ \ and \ \ \ } \textup{rank}\,(A \circ A) \leq \textup{rank}_\theta \,(A)^2,$$
which, when inverted, give us the intended inequalities.
\end{proof}
This result is known in the context of semidefinite rank, and is included here only for the purpose of a unified treatment. An additional very simple property that is worth noting is that a nonnegative matrix has rank one if and only if it has signless rank one, if and only if it has phaseless rank one. This simple fact immediately tells us that the matrices $D_4$ and $B$ in Example \ref{ex:signlessgap} have phaseless rank $2$, since we have proved it is at most $2$ and those matrices have rank greater than one.
Besides the problem of computing or bounding the phaseless rank, we will be interested in the geometry of the set of rank constrained matrices. In order to refer to them we will introduce some notation.
\begin{definition}
Given positive integers $k,n$ and $m$ we define the following subsets of $\mathbb{R}^{n\times m}_+$:
$$P^{n\times m}_{k} = \{ A\in \mathbb{R}^{n\times m}_+: \textup{rank}_\theta \,(A)\leq k \},$$
$$S^{n\times m}_{k} = \{ A\in \mathbb{R}^{n\times m}_+: \textup{rank}_{\pm} \,(A)\leq k \},$$
and
$$R^{n\times m}_{k} = \{ A\in \mathbb{R}^{n\times m}_+: \textup{rank}\,(A)\leq k \}.$$
\end{definition}
It is easy to see that these are all semialgebraic sets. Moreover, the set $R^{n\times m}_{k}$ is well understood, since it is simply the variety of matrices of rank at most $k$, defined by the $(k+1)$-minors, intersected with the nonnegative orthant. It is also not too hard to get a grasp on the set $S^{n\times m}_{k}$, as this is the union of the variety of matrices of rank at most $k$ with all its $2^{nm}$ possible reflections attained by flipping the signs of a subset of variables, intersected with the nonnegative orthant. In particular, we have a somewhat simple algebraic description of both these sets, and they have the same dimension, $k(m+n-k)$.
For $P^{n\times m}_{k}$, all these questions are much more difficult. Clearly we have $R^{n\times m}_{k} \subseteq S^{n\times m}_{k} \subseteq P^{n\times m}_{k}$, which gives us some lower bound on the dimension of the space, but not much else can be immediately derived.
The relations between all these sets are illustrated in Figure \ref{fig:sets}, where we can see a random $2$-dimensional slice of the cone of nonnegative $3\times 3$ matrices (in pink) with the corresponding slice of the region of phaseless rank at most $2$, highlighted in yellow, while the slices of the algebraic closures of the regions of signless rank at most $2$ and usual rank at most $2$ are marked in dashed and solid lines, respectively. Note that Figure \ref{fig:sets} suggests $P^{3\times 3}_{2}$ is full-dimensional. In fact, $P^{n\times n}_{k}$ is full-dimensional in $\mathbb{R}^{n\times n}_+$ for any $k \geq \frac{n+1}{2} $. This observation follows from Corollary \ref{cor:dimension}.
\begin{figure}[H]
\centering
\centerline{\includegraphics[width=0.5\textwidth]{phaselesssignless.png}}
\caption{Slice of the cone of nonnegative $3\times 3$ matrices with $P^{3\times 3}_{2}$, $S^{3\times 3}_{2}$ and $R^{3\times 3}_{2}$ highlighted}
\label{fig:sets}
\end{figure}
\section{Motivation and connections}
As mentioned in the introduction, the concept of phaseless rank is intimately connected to the concept of semidefinite rank of a matrix, used, for instance, to study semidefinite representations of polytopes and amoebas of algebraic varieties. In this section we will briefly introduce each of those areas and establish the connections, as those were the motivating reasons for our study of the subject.
\subsection{Semidefinite extension complexity of a polytope}
The semidefinite rank of a matrix was introduced in \cite{gouveia2013lifts} to study the semidefinite extension complexity of a polytope. Recall that given a $d$-polytope $P$, its \emph{semidefinite extension complexity}
is the smallest $k$ for which one can find $A_0, A_1, \dots, A_m \in \S^k$ such that
$$P=\left\{ (x_1,\dots,x_d) \in \mathbb R^d: \exists \, x_{d+1}, \dots, x_m \in \mathbb R \textrm{ s.t. }A_0 + \sum_{i=1}^m x_i A_i \succeq 0 \right\}.$$
In other words, it is the smallest $k$ for which one can write $P$ as the projection of a slice of the cone of $k\times k$ real positive semidefinite matrices. In order to study this concept one has to introduce the notion of \emph{slack matrix} of a polytope. If $P$ is a polytope with vertices $p_1$,..., $p_v$ and facets cut out by the inequalities $\langle a_1, x\rangle \leq b_1$, ..., $\langle a_f, x\rangle \leq b_f$, then we define its slack matrix to be the nonnegative $v \times f$ matrix $S_P$ with entry $(i,j)$ given by $b_j-\langle a_j, p_i\rangle$.
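For instance, for the unit square $[0,1]^2$ the slack matrix can be assembled in a couple of lines; the following toy computation is our own illustration, not taken from the references.
\begin{verbatim}
import numpy as np

# unit square: vertices p_i and facet inequalities <a_j, x> <= b_j
vertices = np.array([[0, 0], [1, 0], [1, 1], [0, 1]])
A_facets = np.array([[-1, 0], [1, 0], [0, -1], [0, 1]])
b = np.array([0, 1, 0, 1])

S = b[None, :] - vertices @ A_facets.T   # S[i, j] = b_j - <a_j, p_i>
print(S)                                 # a nonnegative 4 x 4 matrix
\end{verbatim}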
Additionally, the \emph{semidefinite rank} of a nonnegative matrix $A \in \mathbb R^{n \times m}_+$, $\textup{rank}_{\text{psd}}\,(A)$, is the smallest $k$ for which one can find $U_1, \dots, U_n, V_1, \dots, V_m \in \S_+^k$ such that $A_{ij}=\langle U_i, V_j \rangle$. By the main result in \cite{gouveia2013lifts} one can characterize the semidefinite extension complexity of a $d$-polytope $P$ in terms of the semidefinite rank of its slack matrix.
\begin{proposition}
The extension complexity of a polytope $P$ is the same as the semidefinite rank of its slack matrix, $\textup{rank}_{\text{psd}}\,(S_P)$.
\end{proposition}
For a thorough treatment of the positive semidefinite rank, see \cite{fawzi2015positive}. As noted in \cite{goucha2017ranks,lee2017some}, one can replace real positive semidefinite matrices with complex positive semidefinite matrices and everything still follows through. More precisely, if one defines the \emph{complex semidefinite extension complexity} of $P$ as the smallest $k$ for which one can find $B_0, B_1, \dots, B_m \in \S^k(\mathbb C)$ such that
$$P=\left\{ (x_1,\dots,x_d) \in \mathbb R^d: \exists \, x_{d+1}, \dots, x_m \in \mathbb R \textrm{ s.t. } B_0 + \sum_{i=1}^m x_i B_i \succeq 0 \right\},$$
and the \emph{complex semidefinite rank} of a matrix $A \in \mathbb R^{n \times m}_+$, $\textup{rank}^{\mathbb{C}}_{\text{psd}}\,(A)$, as the smallest $k$ for which one can find $U_1, \dots, U_n, V_1, \dots, V_m \in \S_+^k(\mathbb C)$ such that $A_{ij}=\langle U_i, V_j \rangle$, the analogue of the previous proposition still holds.
\begin{proposition}
The complex extension complexity of a polytope $P$ is the same as the complex semidefinite rank of its slack matrix, $\textup{rank}^{\mathbb{C}}_{\text{psd}}\,(S_P)$.
\end{proposition}
The study of the semidefinite extension complexity of polytopes has seen several important recent breakthroughs, and has brought light to this notion of semidefinite rank. It turns out that the notions of signless and phaseless rank give a natural upper bound for these quantities.
\begin{proposition}[\cite{fawzi2015positive,lee2017some}]\label{prop:ineqs}
Given a nonnegative matrix $A$, we have $\textup{rank}^{\mathbb{C}}_{\text{psd}}\,(A) \leq \textup{rank}_\theta \,(\sqrt[\circ]{A})$ and $\textup{rank}_{\text{psd}}\,(A) \leq \textup{rank}_{\pm} \,(\sqrt[\circ]{A})$.
\end{proposition}
The proof of this result is essentially the one we used in Lemma \ref{lem:inequality}, as factorizations of a matrix equimodular with $\sqrt[\circ]{A}$ give rise to semidefinite factorizations of $A$ by taking outer products of the rows of the factors. This bound is particularly important in the study of polytopes, since it fully characterizes polytopes with minimal extension complexity.
\begin{proposition}[\cite{gouveia2013polytopes,goucha2017ranks}]
Given a $d$-polytope $P$, we have that its complex and real semidefinite complexities are at least $d+1$. Moreover, they are $d+1$ if and only if $\textup{rank}_\theta \,(\sqrt[\circ]{S_P})=d+1$ or $\textup{rank}_{\pm} \,(\sqrt[\circ]{S_P})=d+1$, respectively.
\end{proposition}
This fact made it possible to characterize minimally sdp-representable polytopes in $\mathbb R^3$ and $\mathbb R^4$ in the real case (see \cite{gouveia2013polytopes,gouveia2017fourdim}) and has had some interesting consequences in the complex case (see \cite{goucha2017ranks}). One of the main motivations for us to study the phaseless rank comes precisely from this connection.
\subsection{Amoebas of determinantal varieties}
Another way of looking at phaseless rank is through amoeba theory. Amoebas are geometric objects that were introduced by Gelfand, Kapranov and Zelevinsky in \cite{gelfand2008discriminants} to study algebraic varieties. These complex analysis objects have applications in algebraic geometry, both complex and tropical, but are notoriously hard to work with. They are the image of a variety under the entrywise logarithm of the absolute values of the coordinates.
\begin{definition}
Given a complex variety $V \subseteq \mathbb{C}^{n}$, its \textit{amoeba} is defined as
$$\mathcal{A}(V)=\{ \text{Log}|z|=(\log|z_1|,\ldots,\log|z_n|):z\in V \cap {(\mathbb{C}^{*})}^{n}\}.$$
\end{definition}
Deciding if a point is on the amoeba of a given variety, the so-called \emph{amoeba membership problem}, is notoriously hard, making even the simple act of drawing an amoeba a decidedly nontrivial task. Other questions, like computing volumes or even dimensions of amoebas, are also hard. A slightly more algebraic version of this object can be defined by simply taking the entrywise absolute values and omitting the logarithm.
\begin{definition}
Given a complex variety $V \subseteq \mathbb{C}^{n}$, its \textit{algebraic} or \textit{unlog amoeba} is defined as
$$\mathcal{A}_{\textup{alg}}(V)=\{|z|=(|z_1|,\ldots,|z_n|):z\in V \}.$$
\end{definition}
Considering this definition, it is clear how it relates to the notion of phaseless rank by way of \emph{determinantal varieties}. These and their corresponding ideals are a central object in both commutative algebra and algebraic geometry, and a great volume of research has been focused on studying them. Given positive integers $n,m$ and $k$, with $k \leq \min\{n,m\}$, we define the determinantal variety $Y_{k}^{n,m}$ as the set
of all $n\times m$ complex matrices of rank at most $k$. It is clear that this is simply the variety associated to $I^{n,m}_{k+1}$, the ideal of the $k+1$ minors of an $n\times m$ matrix with distinct variables as entries.
\begin{example} \label{ex:amoeba}
In Figure \ref{fig:amb} we consider the amoeba of the variety $V$ defined by the following $3 \times 3$ determinant:
$$\det \begin{bmatrix} 1 & x & y \\ x & 1 & z \\ y & 0 & 1 \end{bmatrix}=1-x^2+xyz-y^2=0.$$
\begin{figure}[H]
\centering
\centerline{\includegraphics[width=0.4\textwidth]{amoebab.png} \hspace{2cm} \includegraphics[width=0.4\textwidth]{amoebaalgb.png}}
\caption{$\mathcal{A}(V)$ and $\mathcal{A}_{\textup{alg}}(V)$ of a determinantal variety.}
\label{fig:amb}
\end{figure}
\end{example}
Note that directly from the definition of amoeba, we have that the locus of $n\times m$ matrices of phaseless rank at most $k$ is an algebraic amoeba of a determinantal variety, more precisely,
$$P^{n\times m}_{k} = \mathcal{A}_{\textup{alg}}(Y_{k}^{n,m}).$$
\begin{example}
The blue region in Example \ref{ex:amoeba} is exactly the region of the values of $x, y$ and $z$ for which
$$\textup{rank}_\theta \, \begin{bmatrix} 1 & x & y \\ x & 1 & z \\ y & 0 & 1 \end{bmatrix} \leq 2.$$
This is not totally immediate, since in the phaseless rank definition we are allowed to choose a phase freely and independently for each entry of the matrix, including the $1$'s, and possibly different phases for different copies of the same variable, which is not allowed in the amoeba definition. However, since multiplying rows and columns by unitary complex numbers changes neither absolute values nor rank, we can turn any phase attribution into one of the right type, and the regions do coincide.
\end{example}
More generally, computing the phaseless rank of a matrix corresponds essentially to solving the membership problem in the determinantal amoeba, so any result on the phaseless rank can immediately be interpreted as a result about this fundamental object in amoeba theory. Also on the interconnectedness between amoebas and phaseless rank, see Proposition 5.2 from \cite{forsberg2000laurent}, which, in our language, states that the intersection of a fixed number of compactified hyperplane amoebas is empty if and only if the phaseless rank of a specific nonnegative matrix is maximal.
\section{Camion-Hoffman's Theorem}
In this section we set out to revisit Camion-Hoffman's Theorem, originally proved in \cite{camion1966nonsingularity}. The main purpose of this section is to recast the ideas behind this result in a language and generality that will be convenient for our goals, highlighting the facts that will be most useful, and introducing the necessary notation. For the sake of completeness, a proof of the theorem is included. The main idea behind the proof is the simple observation that checking for nonmaximal phaseless rank is just a linear programming feasibility problem, i.e., checking if a nonnegative matrix has nonmaximal phaseless rank amounts to checking if a specific polytope is nonempty. Here, by nonmaximal phaseless rank we mean that the phaseless rank is less than the minimum of the matrix dimensions.
Inspired by the language of amoeba theory (\cite{purbhoo2008nullstellensatz}) we introduce the notion of \emph{lopsidedness}. Simply put, a list of nonnegative numbers is lopsided if one is greater than the sum of all others.
It is easy to see geometrically that a nonlopsided list of numbers can always be realized as the lengths of the sides of a polygon in $\mathbb R^2$. Interpreting it in terms of complex numbers, we get that a list of nonnegative real numbers $\{a_1,\dots,a_n\}$ is nonlopsided if and only if there are $\theta_k \in [0,2\pi]$ for which $\sum_{k=1}^n a_k e^{i\theta_k} = 0$. This is enough to give us a first characterization of nonmaximal phaseless rank.
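Computationally, the lopsidedness test is a one-liner. In the sketch below (ours), lopsided means that one entry strictly exceeds the sum of the others, so nonlopsided lists are exactly those satisfying the nonstrict triangle-type inequalities used in what follows.
\begin{verbatim}
def is_lopsided(a):
    # one entry strictly exceeds the sum of all the others
    s = sum(a)
    return any(2 * x > s for x in a)

print(is_lopsided([3, 1, 1]))  # True: no closed polygon with these side lengths
print(is_lopsided([2, 1, 1]))  # False: realizable as a degenerate triangle
\end{verbatim}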
\begin{lemma}\label{lem:lop}
Let $A \in \mathbb{R}^{n\times m}_+$, with $n\leq m$. Then, $\textup{rank}_\theta \,(A)<n$ if and only if there is $\lambda \in \mathbb{R}^n_+$ with $\sum_{i=1}^{n} \lambda_i = 1$ such that, for $l=1,\ldots,m$, $\{A_{1l}\lambda_1,\ldots,A_{nl}\lambda_n\}$ is not lopsided.
\end{lemma}
\begin{proof}
First note that $\textup{rank}_\theta \,(A)<n$ if and only if there exists a matrix $B$ with $B_{kl}=A_{kl} e^{i\theta_{kl}}$ for all $k,l$, such that $\textup{rank}\,(B)<n$. This is the same as saying that the rows of $B$ are linearly dependent, and so there exists a nonzero complex vector $z=(z_1,\ldots,z_n)$ such that $\sum |z_j|=1$ and $\sum_{k=1}^n A_{kl} z_k e^{i\theta_{kl}}=0$, for $l=1,\ldots,m$.
By the observation above, this is equivalent to saying that, for $l=1,\ldots,m$, $\{A_{1l}|z_1|,\ldots,A_{nl}|z_n|\}$ is not lopsided.
\end{proof}
The previous result tells us essentially that $\textup{rank}_\theta \,(A)<n$ if and only if we can scale rows of $A$ by nonnegative numbers in such a way that the entries on each of the columns verify the generalized triangular inequalities. The conditions for a matrix $A \in \mathbb{R}^{n\times m}_+$, with $n\leq m$, to verify $\textup{rank}_\theta \,(A)<n$ can now be simply stated as checking if there exists $\lambda \in \mathbb{R}^n$ such that
$$\begin{cases}
A_{ij} \lambda_i \leq \sum_{k\neq i} A_{kj} \lambda_k, \, j=1,\ldots,m, \, i=1,\ldots,n \\ \\
\lambda_i\geq 0, \, i=1,\ldots,n \\ \\
\sum_{i=1}^{n} \lambda_i = 1.
\end{cases}$$
We have just observed the following result.
\begin{corollary}\label{cor:lp}
Given $A \in \mathbb{R}^{n\times m}_+$, with $n\leq m$, deciding if $\textup{rank}_\theta \,(A)<n$ is a linear programming feasibility problem.
\end{corollary}
Note that this gives us a polynomial time algorithm (in the encoding length) for checking nonmaximality of the phaseless rank. Equivalently, this gives us a polynomial time algorithm to solve the amoeba membership problem for the determinantal variety of maximal minors.
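A sketch of this feasibility test, using \texttt{scipy.optimize.linprog} (our choice of solver; any LP solver would do), could read as follows.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def has_nonmaximal_phaseless_rank(A):
    # decides whether rank_theta(A) < n, for A in R_+^{n x m} with n <= m
    n, m = A.shape
    # one constraint per entry (i, j): A_ij l_i - sum_{k != i} A_kj l_k <= 0
    G = np.zeros((n * m, n))
    for j in range(m):
        for i in range(n):
            G[j * n + i] = -A[:, j]
            G[j * n + i, i] = A[i, j]
    res = linprog(c=np.zeros(n), A_ub=G, b_ub=np.zeros(n * m),
                  A_eq=np.ones((1, n)), b_eq=[1.0], bounds=(0, None))
    return res.status == 0  # status 0 means a feasible lambda was found

D4 = np.ones((4, 4)) - np.eye(4)
print(has_nonmaximal_phaseless_rank(D4))                 # True
print(has_nonmaximal_phaseless_rank(2 * np.eye(3) + 1))  # False: diag. dominant
\end{verbatim}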
We are now almost ready to state and prove a version of the result of Camion-Hoffman. We need only to briefly introduce some facts about $M$-matrices.
\begin{definition}
An $n\times n$ real matrix $A$ is an M-matrix if it has nonpositive off-diagonal entries and all its eigenvalues have nonnegative real part.
\end{definition}
The class of $M$-matrices is well studied, and there are numerous equivalent characterizations for them. Of particular interest to us will be the following characterizations.
\begin{proposition} \label{prop:mmatrix}
Let $A \in \mathbb R^{n\times n}$ have nonpositive off-diagonal entries. Then the following are equivalent.
\begin{enumerate}[label=(\roman*)]
\item $A$ is a nonsingular $M$-matrix;
\item There exists $x \geq 0$ such that $Ax > 0$;
\item The diagonal entries of $A$ are positive and there exists a positive diagonal matrix $D$ such that $AD$ is strictly diagonally dominant;
\item All leading principal minors are positive;
\item The diagonal entries of $A$ are positive and all leading principal minors of size at least $3$ are positive;
\item Every real eigenvalue of $A$ is positive.
\end{enumerate}
\end{proposition}
\begin{remark}
Characterizations ii, iii, iv and vi can be found in Theorem 2.3 of \cite{doi:10.1137/1.9781611971262} and v in Corollary 2.3 of \cite{poole1974survey}.
\end{remark}
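Characterizations (ii) and (iv) are both easy to test numerically, which gives a convenient cross-check. Below is a small sketch of our own, for a matrix that already has nonpositive off-diagonal entries; condition (ii) is verified through an auxiliary linear program.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def check_ii(A):
    # (ii): exists x >= 0 with Ax > 0; maximize t s.t. Ax >= t*1, 0 <= x, t <= 1
    n = A.shape[0]
    c = np.zeros(n + 1); c[-1] = -1.0            # minimize -t
    A_ub = np.hstack([-A, np.ones((n, 1))])      # t*1 - Ax <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), bounds=[(0, 1)] * (n + 1))
    return res.fun < -1e-9                       # optimal t is positive

def check_iv(A):
    # (iv): all leading principal minors are positive
    return all(np.linalg.det(A[:k, :k]) > 0 for k in range(1, A.shape[0] + 1))

A = np.array([[3.0, -1, -1], [-1, 3, -1], [-1, -1, 3]])
print(check_ii(A), check_iv(A))  # True True: a nonsingular M-matrix
\end{verbatim}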
Finally, recall that given $A \in \mathbb{C}^{n\times n}$, its \emph{comparison matrix}, $\mathcal{M}(A)$, is defined by $\mathcal{M}(A)_{ij} = |A_{ij}|$, if $i=j$, and $\mathcal{M}(A)_{ij} = -|A_{ij}|$, otherwise.
\begin{theorem}[Camion-Hoffman's Theorem] \label{thm:cam_hof}
Given $A \in \mathbb{R}^{n\times n}_{+}$, $\textup{rank}_\theta \,(A)=n$ if and only if there exists a permutation matrix $P$ such that $\mathcal{M}(AP)$ is a nonsingular M-matrix.
\end{theorem}
\begin{proof}
Let the entries of $A$ be denoted by $a_{ij}$, $1 \leq i,j \leq n$. By Corollary \ref{cor:lp}, $\textup{rank}_\theta \,(A)=n$ if and only if the linear problem
$$M \lambda \leq 0, \quad \lambda\geq 0, \quad \sum_{i=1}^{n} \lambda_i = 1$$
is not feasible, where
$$M=\begin{bmatrix} M_1 \\ M_2 \\ \vdots \\ M_n\end{bmatrix}, \ \ \textrm{ with } M_i=\begin{bmatrix}
a_{1i} & -a_{2i} & \ldots & -a_{ni}\\
-a_{1i} & a_{2i} & \ldots & -a_{ni}\\
\vdots & \vdots & \ddots & \vdots\\
-a_{1i} & -a_{2i} & \ldots & a_{ni}\end{bmatrix} \textrm{ for } i=1,\dots,n.
$$
By Ville's Theorem, a simple variant of Farkas' Lemma, this is equivalent to the existence of $y\geq 0$ such that $y^T M > 0$. Furthermore, since $y^T M$ is in the convex cone generated by the rows of $M$, then, by Carath\'{e}odory's Theorem, $y^T M$ can be written as a nonnegative combination of $n$ rows of $M$. Let us write $y'^T M'$ for this representation of $y^T M$, where $M'$ is a submatrix of $M$ containing exactly $n$ rows of $M$ and $y'\geq 0$.
We first observe that each column of $M'$ has exactly one nonnegative entry and all components of $y'$ must be positive.
Furthermore, if two rows of $M'$ come from the same $M_i$, the components of $y'^T M'$ will not be all positive. So, there are $n!$ possibilities for $M'$, given by ${M'}^T=\mathcal{M}(AP)$, for some permutation matrix $P$. But then, the existence of $y'\geq 0$ such that $\mathcal{M}(AP)y'>0$ is equivalent to $\mathcal{M}(AP)$ being a nonsingular $M$-matrix by Proposition \ref{prop:mmatrix}, concluding the proof.
\end{proof}
Note that, while equivalent, this is not the original statement of Camion-Hoffman's result. This precise version can be found, for example, in \cite{BRADLEY1969105}, as a corollary of a stronger result. The way it is originally stated, Camion-Hoffman's Theorem says that, if $A$ is an $n\times n$ matrix with nonnegative entries, every complex matrix in the equimodular class of $A$, $\Omega(A)$, is nonsingular if and only if there exists a permutation matrix $P$ and a positive diagonal matrix $D$ such that $PAD$ is strictly diagonally dominant. Proposition \ref{prop:mmatrix} immediately gives us the equivalence of both statements. We also highlight Proposition 5.3 from \cite{forsberg2000laurent}, where the authors rediscover Camion-Hoffman's Theorem in an amoeba theory context.
\begin{example} \label{ex:3x3characterization}
Let us see how Camion-Hoffman's Theorem applies to a $3\times 3$ matrix. Let $X \in \mathbb R_+^{3\times 3}$ have entries $[x_{ij}]$. We want to characterize $P_2^{3 \times 3}$, that is, to determine when $\textup{rank}_\theta \,(X)\leq 2$.
By Camion-Hoffman's Theorem, this happens if and only if for every permutation matrix $P \in S_3$, we have that $\mathcal{M}(XP)$ is not a nonsingular $M$-matrix. By Proposition \ref{prop:mmatrix}, checking
if $\mathcal{M}(XP)$ is a nonsingular $M$-matrix amounts to checking if its determinant is positive (since it is a $3\times 3$ matrix).
Hence, $\textup{rank}_\theta \,(X) \leq 2$ if and only if $\det(\mathcal{M}(XP)) \leq 0$ for all $P \in S_3$. There are $6$ possible matrices $P$ giving rise to $6$ inequalities. For $P$ equal to the identity, for example, we get
$$\det \begin{bmatrix} x_{11} &-x_{12} & -x_{13} \\ -x_{21} &x_{22} & -x_{23} \\ -x_{31} &-x_{32} & x_{33} \\ \end{bmatrix} \leq 0,$$
which means
$$x_{11}x_{22}x_{33}- x_{11}x_{23}x_{32} - x_{12}x_{21}x_{33}-x_{12}x_{23}x_{31} - x_{13}x_{21}x_{32} - x_{13}x_{22}x_{31} \leq 0.$$
It is not hard to check that any other $P$ will result in a similar inequality, where one monomial of the expansion of the determinant of $X$ appears with a positive sign, and all others with a negative sign.
\end{example}
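The criterion of this example is immediate to implement; the following sketch (ours) simply enumerates the six permutations.
\begin{verbatim}
import itertools
import numpy as np

def comparison_matrix(A):
    M = -np.abs(A)
    np.fill_diagonal(M, np.abs(np.diag(A)))
    return M

def phaseless_rank_at_most_2(X, tol=1e-12):
    # rank_theta(X) <= 2 iff det(M(XP)) <= 0 for all permutation matrices P
    for perm in itertools.permutations(range(3)):
        P = np.eye(3)[:, list(perm)]
        if np.linalg.det(comparison_matrix(X @ P)) > tol:
            return False
    return True

print(phaseless_rank_at_most_2(np.ones((3, 3))))   # True: rank one already
print(phaseless_rank_at_most_2(np.eye(3) + 1e-3))  # False: near the identity
\end{verbatim}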
This can be very useful to understand the geometry of the phaseless rank, as seen in a slightly more concrete example.
\begin{example}
Building from Example \ref{ex:3x3characterization}, let us characterize the nonnegative values of $x$ and $y$ for which the circulant matrix
$$\begin{bmatrix}
1 & x & y \\ y & 1 & x \\ x & y & 1
\end{bmatrix}$$
has phaseless rank less than $3$. Computing the six polynomials determined in that example, we find that they collapse to just four distinct ones:
$$1-x^3-y^3-3 y x, \ \ -1+x^3-y^3-3 y x, \ \ -1-x^3+y^3-3 y x, \ \ -1-x^3-y^3- y x .$$
For nonnegative $x$ and $y$, the last one is always negative, so it can be ignored. Furthermore, each of the other three factors into a linear term and a quadratic term that is positive for nonnegative $x$ and $y$, which can also be ignored, so we are left only with the three linear inequalities
$$1-x-y \leq 0, \ \ x-y-1 \leq 0, \ \ y-x-1 \leq 0.$$
\begin{figure}[H]
\centering
\centerline{\includegraphics[width=0.4\textwidth]{circulant.png}}
\caption{Region where the $3 \times 3$ nonnegative circulant matrices have nonmaximal $\textup{rank}_\theta \,$}
\label{fig:circ}
\end{figure}
In Figure \ref{fig:circ} we can observe the region. Note that the only singular matrix in that region is the one for which $x=y=1$, highlighted in the figure; every other matrix in the region has usual rank equal to three. It is not hard to check that the signless rank additionally drops to two precisely on the boundary of the region.
\end{example}
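As a sanity check (again a sketch of our own), one can sample random nonnegative points $(x,y)$ and confirm that the six determinant inequalities and the three linear ones cut out the same region.
\begin{verbatim}
import itertools
import numpy as np

def comparison_matrix(A):
    M = -np.abs(A)
    np.fill_diagonal(M, np.abs(np.diag(A)))
    return M

def by_determinants(x, y):
    X = np.array([[1, x, y], [y, 1, x], [x, y, 1]], dtype=float)
    return all(np.linalg.det(comparison_matrix(X @ np.eye(3)[:, list(p)])) <= 1e-9
               for p in itertools.permutations(range(3)))

def by_linear_inequalities(x, y):
    return x + y >= 1 and x - y <= 1 and y - x <= 1

rng = np.random.default_rng(0)
assert all(by_determinants(x, y) == by_linear_inequalities(x, y)
           for x, y in rng.uniform(0, 3, size=(1000, 2)))
print("the two criteria agree on all sampled points")
\end{verbatim}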
\section{Consequences and extensions}
In this section, we derive some new results and strengthen some old ones, based on both Camion-Hoffman's result and, more generally, the underlying idea of using linear programming theory to study the phaseless rank.
\subsection{The rectangular case}
While we now have a full characterization for square matrices with nonmaximal phaseless rank, we are interested in extending it to more general settings. In this section we will study the case of rectangular matrices.
Note that since transposition preserves the rank, we may always restrict ourselves to the case of $A \in \mathbb R^{n \times m}$ with $n \leq m$, for ease of notation. The simplest question one can ask is when such a matrix has nonmaximal phaseless rank, i.e., when $\textup{rank}_\theta \,(A)<n$.
Denote by $A_I$, where $I$ is a set of $n$ distinct numbers between $1$ and $m$, the $n\times n$ submatrix of $A$ formed by the columns indexed by the elements of $I$. It is clear that if $A$ has phaseless rank less than $n$ then so does $A_I$,
since the submatrices $B_I$ of a complex matrix $B$ that is equimodular with $A$ and has rank less than $n$ will be, themselves, equimodular to the matrices $A_I$ and have rank less than $n$. The converse is much less clear: the existence of singular matrices equimodular with each of the $A_I$ does not seem to imply the existence of a singular matrix globally equimodular with $A$, since patching together the phase assignments of the different submatrices is not trivial. Surprisingly, the result does hold.
\begin{proposition}\label{prop:rect}
Let $A \in \mathbb{R}^{n\times m}_+$, with $n\leq m$. Then, $\textup{rank}_\theta \,(A)<n$ if and only if $\textup{rank}_\theta \,(A_I)<n$ for all $I \subseteq \{1,\dots,m\}$ with $|I|=n$.
\end{proposition}
\begin{proof}
By the above discussion, the only thing that needs proof is the sufficiency of the condition $\textup{rank}_\theta \,(A_I)<n$ for all $I$, since it is clearly implied by $\textup{rank}_\theta \,(A)<n$. Assume that the condition holds. Then, by Lemma \ref{lem:lop}, for each $A_I$ there exists $\lambda^I \in \mathbb{R}^n_+$ with coordinate sum one, such that for each column $l \in I$, $\{A_{1l}\lambda^{I}_1,\ldots,A_{nl}\lambda^{I}_n\}$ is not lopsided.
Given any $x \in \mathbb{R}^n_+$, denote by $\textup{Lop}(x)$ the set of $y \in \mathbb R^n_+$ with coordinate sum one such that $\{x_1 y_1,\ldots,x_n y_n\}$ is not lopsided. This is simply the polyhedral set
\begin{align*}
\text{Lop}(x)= \Big\{y \in \mathbb{R}^n_+, \, \sum_{i=1}^{n} y_i = 1: \, x_i y_i \leq \sum_{k\neq i} x_k y_k, \, i=1,\ldots,n \Big\}
\end{align*}
and, in particular, is convex.
Let $a_j$ denote the $j$th column of $A$. The convex sets $\text{Lop}(a_j)$, for $j=1,\ldots,m$, are contained in the hyperplane of coordinate sum one, an $(n-1)$-dimensional space. Furthermore, by assumption, any $n$ of them intersect, since for any $I=\{i_1,\dots,i_n\}$, we have $\lambda^I \in \bigcap_{j\in I} \text{Lop}(a_j)$. By Helly's Theorem, we must have
$$\bigcap^m_{j=1} \text{Lop}(a_j) \neq \emptyset,$$
so we can take $\lambda$ in the intersection; such a $\lambda$ verifies the conditions of Lemma \ref{lem:lop}, proving that $\textup{rank}_\theta \,(A)<n$.
\end{proof}
This shows that we can reduce the $n\times m$ case to multiple $n \times n$ cases, so we can still apply Camion-Hoffman's result to study this case.
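Combined with Lemma \ref{lem:lop}, the proof above is effectively an algorithm: nonmaximal phaseless rank of a rectangular nonnegative matrix is equivalent to feasibility of a single linear program describing $\bigcap_{j}\textup{Lop}(a_j)$. The following sketch (Python with scipy; the function name and structure are ours) implements this test.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def has_nonmaximal_phaseless_rank(A):
    # Decide rank_theta(A) < n for nonnegative A (n x m, n <= m):
    # find lambda >= 0 with sum 1 such that, for every column l and
    # every row i,  A[i,l]*lam[i] <= sum_{k != i} A[k,l]*lam[k],
    # i.e.  2*A[i,l]*lam[i] - sum_k A[k,l]*lam[k] <= 0.
    A = np.asarray(A, dtype=float)
    n, m = A.shape
    rows = []
    for l in range(m):
        for i in range(n):
            row = -A[:, l].copy()
            row[i] += 2.0 * A[i, l]
            rows.append(row)
    res = linprog(c=np.zeros(n),
                  A_ub=np.array(rows), b_ub=np.zeros(n * m),
                  A_eq=np.ones((1, n)), b_eq=[1.0],
                  bounds=[(0, None)] * n)
    return res.status == 0   # feasible iff rank_theta(A) < n
\end{verbatim}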
\begin{example}
Consider the family of $3\times 4$ matrices parametrized by
$$\left[
\begin{array}{cccc}
x-y+1 & x-y+1 & x+1 & 1 \\
1-x & -x+y+1 & 1-y & x+y+1 \\
1-y & 1-x & 1 & x-y+1 \\
\end{array}
\right].$$
If we want to study the region where the phaseless rank is at most two, it is enough to look at the four $3\times 3$ submatrices and use the result of Example \ref{ex:3x3characterization} to compute the region for each of them; these regions are shown in Figure \ref{fig:submatrices}. The red pentagonal region is the region where the matrix is nonnegative, while the colored region inside is the region of nonmaximal phaseless rank for each of the submatrices.
\begin{figure}[H]
\centering
\centerline{\includegraphics[width=0.22\textwidth]{submatrix1.png} \hfill
\includegraphics[width=0.22\textwidth]{submatrix2.png}\hfill
\includegraphics[width=0.22\textwidth]{submatrix3.png}\hfill
\includegraphics[width=0.22\textwidth]{submatrix4.png}
}
\caption{Region of nonmaximal phaseless rank for each $3 \times 3$ submatrix}
\label{fig:submatrices}
\end{figure}
By Proposition \ref{prop:rect} we can then simply intersect the four regions to obtain the region where the phaseless rank of the full matrix is at most $2$. The result is shown in Figure \ref{fig:fullmatrix}.
\begin{figure}[H]
\centering
\centerline{\includegraphics[width=0.3\textwidth]{fullmatrix.png}
}
\caption{Region of nonmaximal phaseless rank for the full matrix}
\label{fig:fullmatrix}
\end{figure}
\end{example}
\subsection{Geometric implications} \label{ssec:geometric}
From Camion-Hoffman's Theorem and Proposition \ref{prop:rect} one can also derive results on the geometry of the sets $P_{n-1}^{n \times m}$, of the $n\times m$ matrices of nonmaximal phaseless rank. More precisely, we are interested in the semialgebraic descriptions of such sets, and their boundaries.
Recall that $P_k^{n \times m}$ is always semialgebraic by the Tarski-Seidenberg principle, since it is the projection of a semialgebraic set. However, the description can in principle be very complicated. For this special case, Theorem \ref{thm:cam_hof} together with Proposition \ref{prop:mmatrix} give a concrete semialgebraic description of $P_{n-1}^{n \times n}$.
Recall that Theorem \ref{thm:cam_hof} states that
$$P^{n\times n}_{n-1}=\bigcap_{P \in S_n} \{ A \in \mathbb R_+^{n\times n} : \mathcal{M}({AP}) \textrm{ is not a nonsingular } M\textrm{-matrix}\}.$$
Let $\det_i(X)$ denote the $i$-th leading principal minor of matrix $X$. The characterizations of $M$-matrices given in Proposition \ref{prop:mmatrix} then allow us to write this more concretely as
$$P^{n\times n}_{n-1}=\bigcap_{P \in S_n} \bigcup_{i=3}^n \{ A \in \mathbb R_+^{n\times n} : \textup{det}_i(\mathcal{M}({AP})) \leq 0 \},$$
which is a closed semialgebraic set, but not necessarily basic. For the $n \times m$ case, we just have to intersect the sets corresponding to each of the $n\times n$ submatrices, so we can still write $P^{n\times m}_{n-1}$ explicitly as an intersection of unions of sets described by a single polynomial inequality.
Note that when $n=3$ the unions have a single element, which trivially gives us the following corollary.
\begin{corollary}\label{cor:3by3}
The set $P^{3 \times m}_2$ is a basic closed semialgebraic set, for $m\geq 3$.
\end{corollary}
It is generally not true that we can ignore the size $3$ leading principal minors when testing a matrix for the property of being a nonsingular $M$-matrix. However, in our particular application we can get a little more in this direction.
\begin{corollary}\label{cor:34}
For any $A \in \mathbb{R}^{4\times 4}_{+}$, we have $\textup{rank}_\theta \,(A)<4$ if and only if $\det(\mathcal{M}(AP))\leq 0$ for all permutation matrices $P \in S_4$. In particular, $P^{4 \times m}_3$ is a basic closed semialgebraic set for all $m \geq 4$.
\end{corollary}
\begin{proof}
By Theorem \ref{thm:cam_hof}, $\textup{rank}_\theta \,(A)=4$ if and only if, for some $P$, $\mathcal{M}(AP)$ is a nonsingular $M$-matrix, which implies, by Proposition \ref{prop:mmatrix}, that all its leading principal minors are positive, including its determinant.
This shows that if $\det(\mathcal{M}(AP)) \leq 0$ for all permutation matrices $P$ then $\textup{rank}_\theta \,(A)<4$.
Suppose now that $\det(\mathcal{M}(AP))>0$ for some $P$. We have to show that this implies $\textup{rank}_\theta \,(A)=4$. There exist three different permutation matrices $P_1$, $P_2$ and $P_3$, distinct from $P$, such that $$\det(\mathcal{M}(AP_1))=\det(\mathcal{M}(AP_2))=\det(\mathcal{M}(AP_3))=\det(\mathcal{M}(AP))>0.$$
Namely, $P_1$, $P_2$ and $P_3$ are obtained from $P$ by partitioning its columns into two pairs and swapping the columns within each pair. If we denote the entries of $AP$ by $b_{ij}$, $i,j \in \{1,2,3,4\}$, we get the four matrices
$\mathcal{M}(AP), \mathcal{M}(AP_1), \mathcal{M}(AP_2)$ and $\mathcal{M}(AP_3)$
as presented below in order:
$$\left[
\begin{array}{cccc}
b_{11} & -b_{12} & -b_{13} & -b_{14} \\
-b_{21} & b_{22} & -b_{23} & -b_{24} \\
-b_{31} & -b_{32} & b_{33} & -b_{34} \\
-b_{41} & -b_{42} & -b_{43} & b_{44} \\
\end{array}
\right], \ \ \ \left[
\begin{array}{cccc}
b_{12} & -b_{11} & -b_{14} & -b_{13} \\
-b_{22} & b_{21} & -b_{24} & -b_{23} \\
-b_{32} & -b_{31} & b_{34} & -b_{33} \\
-b_{42} & -b_{41} & -b_{44} & b_{43} \\
\end{array}
\right],$$
$$
\left[
\begin{array}{cccc}
b_{13} & -b_{14} & -b_{11} & -b_{12} \\
-b_{23} & b_{24} & -b_{21} & -b_{22} \\
-b_{33} & -b_{34} & b_{31} & -b_{32} \\
-b_{43} & -b_{44} & -b_{41} & b_{42} \\
\end{array}
\right], \ \ \
\left[
\begin{array}{cccc}
b_{14} & -b_{13} & -b_{12} & -b_{11} \\
-b_{24} & b_{23} & -b_{22} & -b_{21} \\
-b_{34} & -b_{33} & b_{32} & -b_{31} \\
-b_{44} & -b_{43} & -b_{42} & b_{41} \\
\end{array}
\right].$$
One can now easily check that $\det(\mathcal{M}(AP))$ can be written as
$$b_{41}\textup{det}_3(\mathcal{M}(AP_3))+b_{42}\textup{det}_3(\mathcal{M}(AP_2))+b_{43}\textup{det}_3(\mathcal{M}(AP_1))+b_{44}\textup{det}_3(\mathcal{M}(AP)),$$
which, since all $b_{ij}$ are nonnegative, means that at least one of the size
$3$ leading principal minors must be positive. By Proposition \ref{prop:mmatrix}, the corresponding matrix must be a nonsingular $M$-matrix, since it has both the $3\times 3$ and the $4 \times 4$ leading principal minors positive.
This shows that if $\det(\mathcal{M}(AP))>0$ for some permutation matrix, then Camion-Hoffman's Theorem guarantees that $\textup{rank}_\theta \,(A) = 4$, completing the proof.
\end{proof}
\begin{remark}
One can extract a little more information from the proof of Corollary \ref{cor:34}.
To check whether a $4 \times 4$ nonnegative matrix $A$ has phaseless rank less than four, we just need to check that $\det(\mathcal{M}(AP)) \leq 0$ for all permutation matrices $P$. In addition, we know that each determinant is obtained from four different permutation matrices, leaving only six polynomial inequalities to check.
More concretely, if $A$ has entries $a_{ij}$, and $\textup{perm}(A)$ denotes the permanent of $A$, we just have to consider the inequalities:
$$2 \left(a_{12} a_{23} a_{34} a_{41}+a_{11} a_{24} a_{33} a_{42}+a_{14} a_{21} a_{32} a_{43}+a_{13} a_{22} a_{31} a_{44}\right)-\textup{perm}(A) \leq 0,$$
$$2 \left(a_{13} a_{22} a_{34} a_{41}+a_{14} a_{21} a_{33} a_{42}+a_{11} a_{24} a_{32} a_{43}+a_{12} a_{23} a_{31} a_{44}\right)-\textup{perm}(A) \leq 0,$$
$$2 \left(a_{12} a_{24} a_{33} a_{41}+a_{11} a_{23} a_{34} a_{42}+a_{14} a_{22} a_{31} a_{43}+a_{13} a_{21} a_{32} a_{44}\right)-\textup{perm}(A) \leq 0,$$
$$2 \left(a_{14} a_{22} a_{33} a_{41}+a_{13} a_{21} a_{34} a_{42}+a_{12} a_{24} a_{31} a_{43}+a_{11} a_{23} a_{32} a_{44}\right)-\textup{perm}(A) \leq 0,$$
$$2 \left(a_{13} a_{24} a_{32} a_{41}+a_{14} a_{23} a_{31} a_{42}+a_{11} a_{22} a_{34} a_{43}+a_{12} a_{21} a_{33} a_{44}\right)-\textup{perm}(A) \leq 0,$$
$$2 \left(a_{14} a_{23} a_{32} a_{41}+a_{13} a_{24}a_{31} a_{42}+a_{12} a_{21} a_{34} a_{43}+a_{11} a_{22} a_{33} a_{44}\right)-\textup{perm}(A) \leq 0.$$
\end{remark}
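For moderate $n$, the full criterion of Camion-Hoffman's Theorem can also be tested directly by brute force over all $n!$ column permutations. A possible sketch follows (Python with numpy; for simplicity we test positivity of all leading principal minors, the standard characterization of nonsingular $M$-matrices among $Z$-matrices, even though, as noted above, sizes below $3$ could be skipped in this application).
\begin{verbatim}
import numpy as np
from itertools import permutations

def comparison_matrix(B):
    M = -np.abs(np.asarray(B, dtype=float))
    np.fill_diagonal(M, np.abs(np.diag(B)))
    return M

def is_nonsingular_M_matrix(M):
    # all leading principal minors positive
    n = M.shape[0]
    return all(np.linalg.det(M[:k, :k]) > 0 for k in range(1, n + 1))

def phaseless_rank_is_maximal(A):
    # Camion-Hoffman: rank_theta(A) = n iff M(AP) is a nonsingular
    # M-matrix for some permutation matrix P (A square, nonnegative).
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    return any(is_nonsingular_M_matrix(comparison_matrix(A[:, list(p)]))
               for p in permutations(range(n)))
\end{verbatim}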
Unfortunately, Corollary \ref{cor:34} does not extend beyond $n=4$. From $n=5$ onwards, the condition that $\det(\mathcal{M}(AP))\leq 0$ for all permutation matrices is stronger than having phaseless rank less than $n$, as shown in the next example.
\begin{example}\label{ex:5counterexample}
Consider the matrices $$A=\begin{bmatrix}
7 & 4 & 9 & 10 & 0\\
9 & 2 & 3 & 0 & 3\\
3 & 10 & 6 & 4 & 8\\
0 & 4 & 1 & 6 & 4\\
0 & 3 & 3 & 10 & 2
\end{bmatrix} \text{ and } P=\begin{bmatrix}
1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 0\\
0 & 1 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 1
\end{bmatrix}.$$
We have that $\textup{rank}_\theta \,(A)<5$, by Lemma \ref{lem:lop}, since no column is lopsided. However, $\det(\mathcal{M}(AP)) = 3732 > 0$, so it does not verify the determinant inequalities for all permutation matrices.
\end{example}
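Both claims in this example are immediate to check numerically, for instance with the following snippet (numpy; our own code, with the expected outputs in comments).
\begin{verbatim}
import numpy as np

A = np.array([[7, 4, 9, 10, 0],
              [9, 2, 3,  0, 3],
              [3, 10, 6, 4, 8],
              [0, 4, 1,  6, 4],
              [0, 3, 3, 10, 2]], dtype=float)
P = np.eye(5)[:, [0, 2, 3, 1, 4]]       # the permutation matrix above

# no column of A is lopsided (largest entry <= sum of the others):
print(all(2 * A[:, j].max() <= A[:, j].sum() for j in range(5)))  # True

M = -(A @ P)
np.fill_diagonal(M, np.diag(A @ P))     # comparison matrix M(AP)
print(round(np.linalg.det(M)))          # 3732
\end{verbatim}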
We now turn our attention to the boundary of the set $P^{n \times n}_{n-1}$, which we will denote by $\partial P^{n \times n}_{n-1}$. For $n \leq 4$, the explicit descriptions obtained in Corollary \ref{cor:3by3} and Corollary \ref{cor:34} immediately guarantee that the positive part of the boundary is contained in the set of matrices $A$ such that $\det(\mathcal{M}(AP))=0$ for some permutation matrix $P$. In particular, this tells us that $\partial P^{n \times n}_{n-1} \cap \mathbb R_{++}^{n \times n} \subseteq S^{n \times n}_{n-1}$ for $n \leq 4$, where $S^{n \times n}_{n-1}$ denotes the set of signless rank deficient matrices, since $\det(\mathcal{M}(AP))=0$ implies
$\det(\mathcal{M}(AP)P^{-1})=0$ and $\mathcal{M}(AP)P^{-1}$ is simply $A$ with the signs of some entries switched. What is less clear is that exactly the same is still true for all $n$.
\begin{proposition}\label{prop:boundary}
If $A \in \partial P^{n\times n}_{n-1} \cap \mathbb{R}^{n\times n}_{++}$, then $\det(\mathcal{M}(AP))=0$ for some permutation matrix $P$.
\end{proposition}
\begin{proof}
Suppose $A \in \partial P^{n\times n}_{n-1} \cap \mathbb{R}^{n\times n}_{++}$. Since $P^{n\times n}_{n-1}$ is closed, $\textup{rank}_\theta \,(A)<n$ and there must exist a sequence $A_k$ of matrices such that $A_k \rightarrow A$ and every $A_k$ is nonnegative and has phaseless rank $n$.
By Camion-Hoffman's result this implies that for every $k$ we can find a permutation matrix $P_k \in S_n$ such that $\mathcal{M}(A_kP_k)$ is a nonsingular $M$-matrix or, equivalently, such that all eigenvalues of $\mathcal{M}(A_kP_k)$ have positive real part. Note that since there is a finite number of permutations, there exists a permutation matrix $P$ such that $P_{k_i}=P$ for an infinite subsequence $A_{k_i}$, so that every $\mathcal{M}(A_{k_i}P)$ has all eigenvalues with positive real part.
Since eigenvalues vary continuously, and $\mathcal{M}(A_{k_i}P) \rightarrow \mathcal{M}(AP)$, we must have that all eigenvalues of $\mathcal{M}(AP)$ have nonnegative real part, so $\mathcal{M}(AP)$ is an $M$-matrix. It cannot be a nonsingular $M$-matrix, as that would imply that $\textup{rank}_\theta \,(A)=n$. Therefore, $\mathcal{M}(AP)$ must be singular, i.e., $\det(\mathcal{M}(AP))=0$, as intended.
\end{proof}
So, in spite of needing the smaller leading principal minors to fully describe the region, the boundary of $P^{n \times n}_{n-1}$ will still be contained in the set cut out by the determinants of the comparison matrices of the column permutations of the matrix, even for $n>4$. The next example illustrates what is happening.
\begin{example}
Consider the slice of the nonnegative matrices in $\mathbb R_+^{5 \times 5}$ that contains the identity, the all-ones matrix and the matrix in Example \ref{ex:5counterexample}, all scaled to have row sums $1$. By what we saw in Example \ref{ex:5counterexample}, we know that in this slice the set of nonnegative matrices, the set of matrices of phaseless rank less than $5$ and the set of matrices $A$ verifying $\det(\mathcal{M}(AP)) \leq 0$ for all $P$ are all distinct. This can be seen in the first image of Figure \ref{fig:5slice}, where we see the sets in light blue, green and yellow, respectively, and the three special matrices mentioned as black dots.
\begin{figure}[H]
\centering
\centerline{\includegraphics[width=0.47\textwidth]{5plot1.png} \hfill \includegraphics[width=0.47\textwidth]{5plot2.png}}
\caption{A slice of the cone of $5 \times 5$ nonnegative matrices, with the nonmaximal phaseless rank region and its basic closed semialgebraic inner approximation highlighted}
\label{fig:5slice}
\end{figure}
\end{example}
In the second image of the same figure we can see the zero sets of the $120$ different determinants of the form $\det(\mathcal{M}(AP))$ and check that the extra positive boundary points of $P^{5\times 5}_4$ do indeed come from one of them.
\subsection{Upper bounds}
In Proposition \ref{prop:rect} we have shown that for an $n \times m$ matrix, with $n \leq m$, to have phaseless rank less than $n$ it is enough to check all its $n \times n$ submatrices. A natural question is whether a matrix has phaseless rank less than $k$ if and only if the same is true for all its $k \times k$ submatrices, for any positive integer $k$. This is false, as was shown by Levinger \cite{levinger1972generalization}.
\begin{theorem}[\cite{levinger1972generalization}]
Let $A = mI_n + J_n$, where $m$ is an integer with $1 \leq m < n-2$, and $I_n$ and $J_n$ are, respectively, the $n \times n$ identity and all-ones matrices. Then,
$\textup{rank}_\theta \,(A) \geq m+2$.
\end{theorem}
Note that it is not hard to see that all $(m+2) \times (m+2)$ submatrices of the matrix $A$ constructed above have phaseless rank at most $m+1$, so this is indeed a counterexample.
So a perfect generalization of Proposition \ref{prop:rect} is impossible, but we can try to settle for a weaker goal: discovering what having all $k \times k$ submatrices with phaseless rank less than $k$ allows us to conclude about the phaseless rank of the full matrix. This program was carried out in the same paper \cite{levinger1972generalization}, where the following result was derived.
\begin{proposition}[\cite{levinger1972generalization}]\label{prop:lev}
Let $A \in \mathbb{R}^{n\times m}_+$, with $n\leq m$. If all $k\times k$ submatrices of $A$ have nonmaximal phaseless rank, for some $k\leq n$, then $$\textup{rank}_\theta \,(A)\leq m- \left\lfloor \frac{m-1}{k-1} \right\rfloor.$$
\end{proposition}
In this section we use Proposition \ref{prop:rect} to improve on this result. The result we prove is virtually the same, except that we can replace the $m$ in the bound with the smaller $n$, obtaining a much better bound for rectangular matrices.
\begin{proposition}\label{prop:prank_bd}
Let $A \in \mathbb{R}^{n\times m}_+$, with $n\leq m$. If all $k\times k$ submatrices of $A$ have nonmaximal phaseless rank, for some $k\leq n$, then $$\textup{rank}_\theta \,(A)\leq n- \left\lfloor \frac{n-1}{k-1} \right\rfloor.$$
\end{proposition}
\begin{proof}
Let $M$ be a $k\times m$ submatrix of $A$. By Proposition \ref{prop:rect}, the matrix $M$ has nonmaximal phaseless rank. Hence, for every $k\times m$ submatrix $M$, we can find $B_M \in \Omega(M)$ with rank less than $k$. Moreover, we are free to pick the first row of $B_M$ to be real, since scaling an entire column of $B_M$ by $e^{i\theta}$ does not change the rank or the equimodular class.
Consider then $k \times m$ submatrices $M_i$ of $A$, $i=1,\dots,\left\lfloor \frac{n-1}{k-1} \right\rfloor$, all containing the first row, which we assume to be non-zero, but otherwise pairwise disjoint. We can then construct a matrix $B$ by piecing together the $B_{M_i}$'s, since they coincide in the only row they share, and filling out the remaining rows, always fewer than $k-1$ of them, with the corresponding entries of $A$.
By construction, among the rows of $B$ corresponding to each $B_{M_i}$ there is always a row, different from the first, that is a linear combination of the others, and it can be erased without dropping the rank of $B$. Doing this for all $i$, we get that the rank of $B$ suffers at least one deficiency per $B_{M_i}$, so its rank is at most
$$n- \left\lfloor \frac{n-1}{k-1} \right\rfloor,$$
and since $B$ is equimodular with $A$, $\textup{rank}_\theta \,(A)$ verifies the intended inequality.
\end{proof}
Note that by setting $k=n$ we recover Proposition \ref{prop:rect}, so we have a strict extension of that result. Setting $k=2$, we get that if all $2\times 2$ submatrices have phaseless rank $1$ then so does the matrix, which is an obvious consequence of the observation already made in Section \ref{sec:definitions} that $\textup{rank}_\theta \,(A)=1$ if and only if $\textup{rank}\,(A)=1$. For every $k$ in between we get new results, although not necessarily very strong ones. They are, however, enough to get some further geometric insight. We say that $\textup{rank}_\theta \,(A)=k$ is \emph{typical} in $\mathbb R_+^{n\times m} $ if there exists an open set in $\mathbb R_+^{n\times m}$ on which all matrices have phaseless rank $k$.
An interesting question is the study of minimal typical ranks, which in our case corresponds to asking for the minimal $k$ for which $P^{n\times m}_k$ has full dimension.
We claim that if $k$ is typical, then we must have $k \geq \left\lceil \frac{n+m-\sqrt{(n-1)^2+(m-1)^2}}{2} \right\rceil$.
Take the map which sends each matrix in $(\mathbb{C}^{*})^{n\times m}$ to its entrywise absolute value, in $\mathbb R_{++}^{n\times m}$. The image under this map of the variety of complex matrices with no zero entries and of rank at most $k$ is $P^{n\times m}_{k}\cap \mathbb R_{++}^{n\times m}$, which is full-dimensional if and only if $k$ is typical. Note that we can assume that every matrix in the domain has real entries in the first row and column, since row and column scalings by complex numbers of absolute value one preserve both the rank and the entrywise absolute value matrix. The real dimension of the variety of complex matrices of rank at most $k$ with real first row and column is $2(n+m-k)k-(m+n-1)$: twice the number of complex degrees of freedom, minus the number of entries forced to be real. This quantity must be at least $nm$, the dimension of $P^{n\times m}_{k}\cap \mathbb R_{++}^{n\times m}$, since the map is differentiable. Thus, we must have
$$2(n+m-k)k-n-m+1 \geq nm,$$
which boils down to $$k\geq \left\lceil \frac{n+m-\sqrt{(n-1)^2+(m-1)^2}}{2} \right\rceil,$$ because $k$ is a positive integer.
\begin{corollary}\label{cor:dimension}
For $\mathbb R_+^{n \times m}$, with $3 \leq n \leq m$, the minimal typical phaseless rank $k$ must verify
$$ \left\lceil \frac{n+m-\sqrt{(n-1)^2+(m-1)^2}}{2} \right\rceil \leq k \leq \left\lceil \frac{n+1}{2} \right\rceil.$$
\end{corollary}
\begin{proof}
The lower bound comes from the above dimension count. To prove the upper bound, note that the $3 \times 3$ all-ones matrix has phaseless rank $1$ (less than three), and any small enough entrywise perturbation of it also has phaseless rank less than $3$, since it will still have nonlopsided columns. This means that the $n \times m$ all-ones matrix, and any sufficiently small perturbation of it, have all $3 \times 3$ submatrices with nonmaximal phaseless rank, which implies, by Proposition \ref{prop:prank_bd}, that their phaseless rank is at most $\left\lceil \frac{n+1}{2} \right\rceil$. Hence, there exists an open set of $\mathbb R_+^{n \times m}$ in which every matrix has phaseless rank less than or equal to that number, which implies that the smallest typical rank is at most that value, giving us the upper bound.
\end{proof}
For $m$ much larger than $n$ the bound is almost tight, since the lower bound converges to $(n+1)/2$. In fact, for odd $n$ and sufficiently large $m$ we will have that the minimal typical rank is actually $\frac{n+1}{2}$, since that will be the only integer satisfying both bounds.
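For concrete sizes, both bounds of Corollary \ref{cor:dimension} are trivial to evaluate; a small sketch (Python; the helper is ours):
\begin{verbatim}
import math

def typical_rank_bounds(n, m):
    # lower and upper bounds of the corollary above, for 3 <= n <= m
    low = math.ceil((n + m - math.hypot(n - 1, m - 1)) / 2)
    up = math.ceil((n + 1) / 2)
    return low, up

print(typical_rank_bounds(5, 100))   # (3, 3): minimal typical rank is 3
\end{verbatim}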
\section{Applications and outlook}
\subsection{The amoeba point of view}\label{subsec:amb_pov}
Many of the results developed in the previous sections have nice interpretations from the viewpoint of amoeba theory. Here, we will introduce some concepts and problems coming from this area of research and show the implications of the work previously developed.
As mentioned before, checking for amoeba membership is a hard problem. Even certifying that a point is not in an amoeba is generally difficult. To that end, several necessary conditions for amoeba membership have been developed. One such condition is the non-lopsidedness criterion. In its most basic form, this gives a necessary condition for a point to be in the amoeba of the principal ideal generated by some polynomial $f$, $\mathcal{A}(f)$.
Let $f \in \mathbb{C}[z_1,\ldots,z_n]$ and $\mathbf{a} \in \mathbb{R}_+^n$. Writing $f$ as a sum of monomials, $f(\mathbf{z})=m_1(\mathbf{z})+\ldots+m_d(\mathbf{z})$, define
$$f\{\mathbf{a}\}:=\{|m_1(\mathbf{a})|,\ldots,|m_d(\mathbf{a})|\}.$$
It is clear that in order for $\mathbf{a}$ to be the vector of absolute values of some complex root of $f$, the vector $f\{\mathbf{a}\}$ cannot be lopsided, as it must cancel after the phases are added in. We then define
$$\textup{Nlop}(f)=\{\mathbf{a} \in \mathbb R_+^n \ : \ f\{\mathbf{a}\} \text{ is not lopsided}\}.$$
It is clear that $\mathcal{A}(f) \subseteq \textup{Log}(\textup{Nlop}(f))$, but the inclusion is generally strict. One immediate consequence of Example \ref{ex:3x3characterization} is the following.
\begin{proposition}
Let $f=\det(X)$ be the cubic polynomial in variables $x_{ij}$, $i,j=1,2,3$. Then
$$\mathcal{A}(f) = \textup{Log}(\textup{Nlop}(f)).$$
\end{proposition}
So, the above proposition gives us an example where nonlopsidedness is a necessary and sufficient condition. In fact, this is just a special case of a more general result from amoeba theory: for any polynomial whose support forms the set of vertices of a simplex (which is the case for the $3\times3$ determinant), it holds that $\mathcal{A}(f) = \textup{Log}(\textup{Nlop}(f))$. This follows from \cite{forsberg2000laurent} (see, for instance, Theorem 3.1 of \cite{theobald2013amoebas} for details).
Another interesting example that we can extract from our results concerns amoeba bases. Purbhoo shows, in \cite{purbhoo2008nullstellensatz}, that the amoebas of general ideals can, in a sense, be reduced to the case of principal ideals, since $\mathcal{A}(V(I))=\bigcap_{f \in I} \mathcal{A}(f)$. The problem is that this is an infinite intersection, which immediately raises the question of whether a finite intersection may suffice. This suggests the notion of an \emph{amoeba basis}, introduced in \cite{schroeter2013boundary}.
\begin{definition}
Given an ideal $I \subseteq \mathbb{C}[z_1,\ldots,z_n]$, we call a finite set $B \subset I$ an amoeba basis for $I$ if it generates $I$ and it verifies the property
$$\mathcal{A}(V(I))=\bigcap_{f \in B} \mathcal{A}(f)$$
while any proper subset of $B$ does not.
\end{definition}
Unfortunately, amoeba bases may fail to exist and in fact very few examples of them are known. In \cite{nisse2018describing} it is proved that varieties of a particular kind, those that are \emph{independent complete intersections}, have amoeba bases, and it is conjectured that only unions of those can have them (see \cite[Conjecture 5.3]{nisse2018describing}). Proposition \ref{prop:rect} gives us a new example of such behavior, disproving the conjecture, since the variety of $n \times m$ rectangular matrices, with $n < m$, of rank less than $n$ is irreducible and not even a set-theoretic complete intersection \cite{Bruns90}.
\begin{corollary}
Let $X$ be an $n \times m$ matrix of indeterminates. The set of maximal minors of $X$ is an amoeba basis for the determinantal ideal they generate.
\end{corollary}
Note that this is just another result in a long line of results on the special properties of the maximal minors of a matrix of indeterminates, notably including the fact that they form a universal Groebner basis, as proved in \cite{Bernstein1993}.
For $3 \times n$ matrices we actually have that nonlopsidedness for the generators is enough to guarantee amoeba membership, an even stronger statement.
All other results automatically translate to amoeba theory, and some have interesting translations. We provide explicit semialgebraic descriptions for the amoeba of the maximal minors, adding one example to the short list of amoebas for which such a description is available, as pointed out in \cite[Question 3.7]{nisse2018describing}. Moreover, Proposition \ref{prop:boundary} implies that the boundary of the amoeba of the determinant of a square matrix of indeterminates is contained in the image under the entrywise absolute value map of the set of its real zeros, while Corollary \ref{cor:dimension} states some conditions for full dimensionality of the amoeba of the variety of bounded rank matrices.
\subsection{Implications on semidefinite rank}
As we saw before, upper bounds on the phaseless rank will immediately give us upper bounds on the complex semidefinite rank. One can use that to improve on some results in the literature, and hopefully to construct examples.
For a simple illustration, recall the following result proved in \cite{lee2017some}, which gives a sufficient condition for nonmaximality of the complex semidefinite rank of a matrix.
\begin{proposition}[\cite{lee2017some}]\label{prop:leewei}
Let $A\in \mathbb{R}^{n\times m}_+$. If every column of $\sqrt[\circ]{A}$ has no dominant entry (i.e., if no column of $\sqrt[\circ]{A}$ is lopsided), then $\textup{rank}^{\mathbb{C}}_{\text{psd}}\,(A)<n$.
\end{proposition}
We remark that the assumption in the previous result is just a sufficient condition for $\textup{rank}_\theta \,(\sqrt[\circ]{A}) < n$, which implies $\textup{rank}^{\mathbb{C}}_{\text{psd}}\,(A)<n$, by Proposition \ref{prop:ineqs}. This observation easily follows from applying Lemma \ref{lem:lop} to $\sqrt[\circ]{A}$. This means that Proposition \ref{prop:leewei} is just a specialization of the following more general statement.
\begin{proposition}
Let $A\in \mathbb{R}^{n\times m}_+$. If $\textup{rank}_\theta \,(\sqrt[\circ]{A}) < n$, then $\textup{rank}^{\mathbb{C}}_{\text{psd}}\,(A)<n$.
\end{proposition}
\label{eqn:ranks}
One can check whether $\textup{rank}_\theta \,(\sqrt[\circ]{A}) < n$ by using both Proposition \ref{prop:rect}, if the matrix is not square, and Theorem \ref{thm:cam_hof}. More generally, Proposition \ref{prop:ineqs} dictates that every upper bound for $\textup{rank}_\theta \,(\sqrt[\circ]{A})$ is an upper bound for $\textup{rank}^{\mathbb{C}}_{\text{psd}}\,(A)$. Thus, we have the following corollary of Proposition \ref{prop:prank_bd}.
\begin{corollary}\label{cor:sqrank}
Let $A \in \mathbb{R}^{n\times m}_+$, with $n \leq m$. If all $k\times k$ submatrices of $\sqrt[\circ]{A}$ have nonmaximal phaseless rank, $$\textup{rank}^{\mathbb{C}}_{\text{psd}}\,(A)\leq n-\left\lfloor \frac{n-1}{k-1}\right\rfloor.$$
\end{corollary}
One can actually improve on both these results by removing the need to consider the Hadamard square root. To do that, we need an auxiliary lemma concerning entrywise powers of matrices:
\begin{lemma}
Let $A \in \mathbb{R}^{n\times n}_+$ and $\alpha \geq 1$. If $\textup{rank}_\theta \,(A)=n$, then $\textup{rank}_\theta \,(A^{\circ \alpha})=n$, where $A^{\circ \alpha}$ is obtained from $A$ by raising each entry to the power $\alpha$.
\end{lemma}
\begin{proof}
By Theorem \ref{thm:cam_hof}, $\textup{rank}_\theta \,(A)=n$ if and only if there exists a permutation matrix $P$ such that $\mathcal{M}(AP)$ is a nonsingular $M$-matrix, which is equivalent to saying that the minimum real eigenvalue of $\mathcal{M}(AP)$ is positive, according to Proposition \ref{prop:mmatrix}, i.e., $\sigma(AP)>0$.
But then, Theorem $4$ from \cite{elsner1988perron} guarantees precisely that we must have $$\sigma(A^{\circ \alpha}P) = \sigma((AP)^{\circ \alpha}) \geq \sigma(AP)^{\alpha} > 0,$$
proving that $\textup{rank}_\theta \,(A^{\circ \alpha})=n$.
\end{proof}
By specializing $\alpha=2$ and applying the previous Lemma to the Hadamard square root of $A$ we get the following immediate Corollary.
\begin{corollary}
Let $A \in \mathbb{R}^{n\times n}_+$. If $\textup{rank}_\theta \,(A)<n$, then $\textup{rank}_\theta \,(\sqrt[\circ]{A})<n.$
\end{corollary}
This can be used to get a simpler upper bound on the complex semidefinite rank, testing submatrices of $A$ instead of its square root.
\begin{corollary}\label{cor:sqrank_psd}
Let $A \in \mathbb{R}^{n\times m}_+$, with $n \leq m$. If all $k\times k$ submatrices of $A$ have nonmaximal phaseless rank, $$\textup{rank}^{\mathbb{C}}_{\text{psd}}\,(A)\leq n-\left\lfloor \frac{n-1}{k-1}\right\rfloor.$$
\end{corollary}
This can be used to derive simple upper bounds on the extension complexity of polytopes. Recall that for a $d$-dimensional polytope, $P$,
its slack matrix, $S_P$, has rank $d+1$ and its complex semidefinite rank is the complex semidefinite extension complexity of $P$. Since every $(d+2)\times (d+2)$ submatrix of $S_P$ has rank at most $d+1$, it also has phaseless rank at most $d+1$. Thus, by applying the previous corollary we obtain the following result.
\begin{corollary}\label{cor:sqrank_pol}
Let $P$ be a $d$-dimensional polytope with $v$ vertices and $f$ facets, and let $m=\min\{v,f\}$. Then
$$\textup{rank}^{\mathbb{C}}_{\text{psd}}\,(S_P) \leq m-\left\lfloor \frac{m-1}{d+1}\right\rfloor.$$
\end{corollary}
For $d=2$, for example, this gives us an upper bound of $\left \lceil \frac{2n+1}{3} \right \rceil$ for the complex extension complexity of an $n$-gon, which is similar asymptotically to the $4 \left \lceil \frac{n}{6} \right \rceil$ bound derived in \cite{gouveia2015worst} and slightly
better for small $n$ (note that that bound is valid for the real semidefinite extension complexity, and so automatically for the complex case too). Of course it is just linear, so it does not reach the sublinear complexity proved by Shitov in \cite{shitov2014sublinear} even for the linear extension complexity, but it is applicable in general and can be useful for small polytopes in small dimensions. Moreover, it is, as far as we know, the only non-trivial bound that works for polytopes of arbitrary dimension. As a last remark, we note that such a lift can be explicitly constructed. This can easily be done from an actual rank
$m-\left\lfloor \frac{m-1}{d+1}\right\rfloor$ matrix that is equimodular to the Hadamard square root of the slack matrix, and such a matrix can, with a small amount of work, be explicitly constructed from our results.
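For small polygons the bound is easy to tabulate and compare with the one from \cite{gouveia2015worst}; a quick sketch (Python; our own code):
\begin{verbatim}
import math

def psd_bound(v, f, d):
    # bound of the corollary above on the complex psd rank of S_P
    m = min(v, f)
    return m - (m - 1) // (d + 1)

for n in range(5, 13):                 # n-gons, d = 2
    print(n, psd_bound(n, n, 2), 4 * math.ceil(n / 6))
\end{verbatim}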
\subsection{Equiangular lines}
A set of $n$ lines in the vector space $\mathbb{R}^d$ or $\mathbb{C}^d$ is called equiangular if all the lines intersect at a single point and every pair of lines makes the same angle. Bounding the maximum number of real equiangular lines for a given dimension has long been a popular research problem. Classically, we want bounds on the absolute maximum number of such lines (denoted by $N(d)$) or on the maximum number for a given common angle $\arccos(\alpha)$ (denoted by $N_{\alpha}(d)$). A somewhat thorough survey on this type of results can be found in \cite{de2018k}, while further reading on the real case can be seen in \cite{greaves2016equiangular}, \cite{jiang2019equiangular}, and \cite{lemmens1991equiangular}.
The complex case has seen a flurry of recent developments due to its connection to quantum physics (see for instance \cite{MR2142983},\cite{MR2301093},\cite{MR2059685},\cite{MR2662471}). In fact, it is well known that the maximum number of complex equiangular lines in $\mathbb{C}^d$, denoted by $N^{\mathbb{C}}(d)$, is bounded from above by $d^2$ and it is conjectured that $N^{\mathbb{C}}(d)=d^2$ for all $d\geq 2$ (\cite{zauner1999grundzuge}). When such a maximum set of $d^2$ lines exists, one can construct a symmetric, informationally complete, positive operator-valued measure (SIC-POVM), an object that plays an important role in quantum information theory. Recent developments in the construction of large sets of complex equiangular lines can be found in \cite{jedwab2015large} and \cite{jedwab2015simple}.
To see how these notions relate to the object of our study, consider a set of $n$ lines through a point in a $d$-dimensional Euclidean space, which we take to be either $\mathbb{R}^d$ or $\mathbb{C}^d$. Let $v_i$, $i=1,...,n$, be unit vectors for each of the lines, and let $V$ be the matrix whose columns correspond to these vectors. Note that the lines having pairwise angle $\arccos(\alpha)$ is the same as having $|v_i^*v_j|= \alpha$ for all $i \not = j$. More precisely, $|V^* V| = A^{\alpha}_n$, where $A^{\alpha}_n$ denotes the $n\times n$ matrix with ones on the diagonal and $\alpha$'s everywhere else, which means $A^{\alpha}_n$ is equimodular to a positive semidefinite matrix of rank at most $d$. Conversely, if $A^{\alpha}_n$ is equimodular to a positive semidefinite matrix of rank at most $d$, one can use an eigendecomposition to obtain a set of $n$ equiangular lines in the $d$-dimensional Euclidean space with common angle $\arccos(\alpha)$. This immediately suggests a semidefinite variant of the phaseless rank.
\begin{definition}
For a symmetric matrix $A \in \mathbb{R}^{n\times n}_+$, its psd-phaseless rank is defined as
$$\textup{rank}_\theta \,^{\text{psd}}(A) = \min\{\textup{rank}\,(B): B\in \Omega(A) \text{ and }B \succeq 0\}.$$
\end{definition}
We can then use this notion to highlight that the problem of finding equiangular lines with a fixed angle is equivalent to that of computing a matrix rank.
\begin{proposition}
For $0\leq \alpha\leq 1$, $\textup{rank}_\theta \,^{\text{psd}}(A^{\alpha}_n)$ is the smallest dimension $d$ for which there exists an equiangular set of $n$ lines in $\mathbb{C}^d$ with common angle $\arccos{\alpha}$.
\end{proposition}
Note that, in particular, $\textup{rank}_\theta \,^{\text{psd}}(A) \geq \textup{rank}_\theta \,(A)$, so lower bounds on the usual phaseless rank give us upper bounds on the number of equiangular lines. In the real case, we can introduce the analogous notion of psd-signless rank and, in that case, the trivial signless rank inequality from Lemma \ref{lem:inequality} recovers the traditional Gerzon upper bound for the number of equiangular lines. In the complex case, the inequality $N^{\mathbb{C}}(d)\leq d^2$ can be rewritten as $\textup{rank}_\theta \,^{\text{psd}}(A^{\alpha}_n)\geq \sqrt{n}$ for all $\alpha$, which, once again, follows directly from $\textup{rank}_\theta \,(A^{\alpha}_n)$ being a lower bound for $\textup{rank}_\theta \,^{\text{psd}}(A^{\alpha}_n)$ and Lemma \ref{lem:inequality}. To illustrate this strategy of turning lower bounds on phaseless rank into upper bounds on the number of equiangular lines, we present a simple result derived from our basic bounds on phaseless rank.
\begin{proposition}
For $\alpha<\frac{1}{d}$, $N_{\alpha}^{\mathbb{C}}(d)=d$.
\end{proposition}
\begin{proof}
Fix $d$ and let $\alpha<\frac{1}{d}$. Observe that one can write $N_{\alpha}^{\mathbb{C}}(d)$ as
$$\max\{n:\textup{rank}_\theta \,^{\text{psd}}(A^{\alpha}_n)\leq d\}.$$
Since $A^{\alpha}_{d+1}$ has lopsided columns, and is a submatrix of any $A^{\alpha}_n$ for $n>d$, we have $$\textup{rank}_\theta \,^{\text{psd}}(A^{\alpha}_{n}) \geq \textup{rank}_\theta \,(A^{\alpha}_n) \geq d+1$$
for any $n > d$. Since $A^{\alpha}_d$ is positive semidefinite and has rank $d$, the result follows.
\end{proof}
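The lopsidedness step in the proof is simple to observe numerically; a minimal sketch (numpy; the helper names are ours):
\begin{verbatim}
import numpy as np

def A_alpha(n, alpha):
    # ones on the diagonal, alpha everywhere else
    return np.full((n, n), alpha) + (1.0 - alpha) * np.eye(n)

def has_lopsided_column(B):
    B = np.abs(B)
    return any(2 * B[:, j].max() > B[:, j].sum() for j in range(B.shape[1]))

d, alpha = 4, 0.2                     # alpha < 1/d = 0.25
print(has_lopsided_column(A_alpha(d + 1, alpha)))  # True: 1 > d*alpha,
                                                   # every column is lopsided
\end{verbatim}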
While fairly simple, this result highlights the usefulness of deriving effective lower bounds on the phaseless rank, as a means to obtain upper bounds on $N_{\alpha}^{\mathbb{C}}(d)$. A related classical concept that can be studied in terms of psd-phaseless rank is that of mutually unbiased bases in $\mathbb{C}^d$ (MUB's). Two orthonormal bases $\{u_1,...,u_d\}$ and $\{v_1,...,v_d\}$ of $\mathbb{C}^d$ are said to be unbiased if $|u_i^*v_j|=\frac{1}{\sqrt{d}}$ for all $i$ and $j$. A set of orthonormal bases is a set of mutually unbiased bases if all pairs of distinct bases are unbiased. It is known that there cannot exist sets of more than $d+1$ MUB's in $\mathbb{C}^d$, and such sets exist for $d$ a prime power, but the precise maximum number is unknown even for $d=6$, where it is believed to be three (see \cite{durt2010mutually}, \cite{bandyopadhyay2002new} and \cite{brierley2009constructing} for more information and a survey of this rich research area). To translate this in terms of phaseless rank, consider the matrix $B_d^k$, defined as the $k \times k$ block matrix whose diagonal blocks are $d \times d$ identity matrices and whose off-diagonal blocks have all entries equal to $\frac{1}{\sqrt{d}}$. The following simple fact is then clear.
\begin{proposition}
There exists a set of $k$ mutually unbiased bases in $\mathbb{C}^d$ if and only if $\textup{rank}_\theta \,^{\text{psd}}(B_d^k) = d$.
\end{proposition}
As with equiangular lines, lower bounds on the phaseless rank have the potential to give upper bounds on the maximum number of MUB's.
\subsection{Conclusion and some open questions}
Throughout this paper we established a connection between the classical results of Camion and Hoffman on equimodular classes of matrices and the modern developments in the theories of amoebas and semidefinite extension complexity.
This provided a rich field of motivation and applications, and allowed for interesting new developments. However, many questions remain completely open and are ripe for further exploration.
\begin{enumerate}
\item Is it possible to characterize other cases besides the nonmaximal phaseless rank? The simplest outstanding case would be to characterize $4\times 4$ matrices of phaseless rank at most $2$.
\item Since the phaseless rank has strong conceptual connections to both the rank minimization and the phase retrieval problems, can one use the body of work on approximations to those problems to develop approximations to these quantities?
\item What can we say about the complexity of computing the phaseless rank?
\item While some work was already carried out here on the dimension of these semialgebraic sets, it should be possible to state more precise results on which values of the phaseless rank are typical.
\end{enumerate}
\section*{Acknowledgments}
The authors would like to thank Ant\'{o}nio Leal Duarte for pointing us towards the literature on Camion-Hoffman's Theorem, and Timo de Wolff for the encouragement and constructive feedback on the amoeba applications.
\bibliographystyle{plain}
| {
"timestamp": "2020-10-09T02:17:34",
"yymm": "1909",
"arxiv_id": "1909.02417",
"language": "en",
"url": "https://arxiv.org/abs/1909.02417",
"abstract": "We consider the problem of finding the smallest rank of a complex matrix whose absolute values of the entries are given. We call this minimum the phaseless rank of the matrix of the entrywise absolute values. In this paper we study this quantity, extending a classic result of Camion and Hoffman and connecting it to the study of amoebas of determinantal varieties and of semidefinite representations of convex sets. As a consequence, we prove that the set of maximal minors of a matrix of indeterminates form an amoeba basis for the ideal they define, and we attain a new upper bound on the complex semidefinite extension complexity of polytopes, dependent only on their number of vertices and facets. We also highlight the connections between the notion of phaseless rank and the problem of finding large sets of complex equiangular lines or mutually unbiased bases.",
"subjects": "Algebraic Geometry (math.AG); Optimization and Control (math.OC)",
"title": "The phaseless rank of a matrix",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9901401441145626,
"lm_q2_score": 0.7981867729389246,
"lm_q1q2_score": 0.7903167663880845
} |
https://arxiv.org/abs/2206.03125 | Monte Carlo integration of $C^r$ functions with adaptive variance reduction: an asymptotic analysis | The theme of the present paper is numerical integration of $C^r$ functions using randomized methods. We consider variance reduction methods that consist in two steps. First the initial interval is partitioned into subintervals and the integrand is approximated by a piecewise polynomial interpolant that is based on the obtained partition. Then a randomized approximation is applied on the difference of the integrand and its interpolant. The final approximation of the integral is the sum of both. The optimal convergence rate is already achieved by uniform (nonadaptive) partition plus the crude Monte Carlo; however, special adaptive techniques can substantially lower the asymptotic factor depending on the integrand. The improvement can be huge in comparison to the nonadaptive method, especially for functions with rapidly varying $r$th derivatives, which has serious implications for practical computations. In addition, the proposed adaptive methods are easily implementable and can be well used for automatic integration. | \section{Introduction}
Adaption is a useful tool for improving the performance of algorithms. Numerical
integration and the related problem of $L^1$ approximation are no exceptions.
If an underlying function possesses some singularities and is otherwise smooth,
then using adaption is necessary to localise the singular points and restore
the convergence rate typical for smooth functions, see, e.g., \cite{KP1}, \cite{PlaskotaWasilkowski2005, PlaskotaWasilkowskiZhao2008}, \cite{PP1}.
For functions that are smooth in the whole domain, adaptive algorithms do not offer
a better convergence rate than nonadaptive algorithms; however, they can essentially
lower asymptotic constants. This is why adaptive quadratures, see, e.g.,
\cite{Lyness1972, DavisRabinowitz1984},
are widely used for numerical integration. Their superiority over nonadaptive
quadratures is rather obvious, but precise answers to the question of
"how much adaption helps" are usually missing. This gap was partially filled by
recent results of
\cite{Gocwin2021, Plaskota2015, PlaskotaSamoraj2022},
where best asymptotic constants of deterministic algorithms that use piecewise
polynomial interpolation were determined for $r$-times continuously differentiable
functions $f:[a,b]\to\mathbb R.$ In this case, adaption relies on adjusting
the partition of the interval $[a,b]$ to the underlying function.
While the convergence rate is of order $N^{-r},$ it turns out that
the asymptotic constant depends on $f$ via the factor of
$(b-a)^r\big\|f^{(r)}\big\|_{L^1}$ for uniform (nonadaptive) partition,
and $\big\|f^{(r)}\big\|_{L^{1/(r+1)}}$ for best adaptive partition.
In the current paper, we follow a line of thinking similar to that of
the aforementioned papers. The difference is that now we want to carry out
the analysis and obtain asymptotic constants for randomized algorithms.
Our goal is the numerical approximation of the integral
\begin{equation}\label{theproblem}
Sf=\int_a^bf(x)\,\mathrm dx.
\end{equation}
It is well known that for $f\in L^2(a,b)$ the \emph{crude Monte Carlo},
\begin{equation}\label{MCstandard}
M_Nf=\frac{b-a}N\sum_{i=1}^N f(t_i),\quad\mbox{where}\quad t_i\stackrel{iid}\sim U(a,b),
\end{equation}
returns an approximation with expectation $\mathbb E(M_Nf)=Sf$ and error (standard deviation)
\begin{equation}\label{MCerr}
\sqrt{\mathbb E\big(Sf-M_Nf\big)^2}=\frac{\sigma(f)}{\sqrt N},\quad\mbox{where}\quad
\sigma(f)^2=(b-a)S(f^2)-(Sf)^2.
\end{equation}
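For reference, the crude Monte Carlo method \eqref{MCstandard} takes only a few lines of code; the following minimal sketch (Python with numpy; the function name is ours) is the baseline on which the variance reduction methods below build.
\begin{verbatim}
import numpy as np

def crude_mc(f, a, b, N, rng=np.random.default_rng()):
    # M_N f = (b-a)/N * sum_i f(t_i),  t_i iid uniform on (a,b);
    # f is assumed to accept numpy arrays
    t = rng.uniform(a, b, N)
    return (b - a) * np.mean(f(t))
\end{verbatim}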
If the function enjoys more smoothness, $f\in C^r([a,b]),$ then a much higher convergence
rate $N^{-(r+1/2)}$ can be achieved using various techniques of \emph{variance reduction},
see, e.g., \cite{Heinrich1993}. One way is to apply a randomized approximation of the form
\begin{equation}\label{MCvr}
\overline M_{N,r}(f)=S(L_{m,r}f)+M_n(f-L_{m,r}f),
\end{equation}
where $L_{m,r}$ is the piecewise polynomial interpolation of $f$ of degree $r-1$ using
a partition of the interval $[a,b]$ into $m$ subintervals, $M_n$ is a Monte Carlo type
algorithm using $n$ samples of $f,$ and $N$ is the total number of function evaluations
used. The optimal rate is already achieved for uniform (nonadaptive) partition and
crude Monte Carlo. Then, see Theorem~\ref{thm:equ} of Section~\ref{sec:equispaced} with $\beta=0,$
the error asymptotically equals
$$c\,(b-a)^{r+1/2}\big\|f^{(r)}\big\|_{L^2(a,b)}\,N^{-(r+1/2)},$$
where $c$ depends only on the choice of interpolation points within subintervals.
The main result of this paper relies on showing that with the help of adaption
the asymptotic error of the methods \eqref{MCvr} can be reduced to
\begin{equation}\label{bestada}
c\,\big\|f^{(r)}\big\|_{L^{1/(r+1)}(a,b)}\,N^{-(r+1/2)},
\end{equation}
see Theorem \ref{thm:strata} of Section \ref{sec:strata} and Theorem \ref{thm:importstrata} of Section \ref{sec:second}.
Observe that the gain can be significant, especially when the derivative $f^{(r)}$
drastically changes. For instance, for $[a,b]=[0,1],$ $f(x)=1/(x+d),$ and $r=4,$
adaption is asymptotically better than nonadaption roughly
$5.7\cdot10^{12}$ times if $d=10^{-4},$ and $1.8\cdot10^{29}$ times if $d=10^{-8}.$
We construct two randomized algorithms, denoted $\overline M_{N,r}^{\,*}$ and
$\overline M_{N,r}^{\,**},$ that achieve the error \eqref{bestada}. Although they
use different initial approaches, namely stratification versus importance sampling,
in the limit they both reach essentially the same partition, such that
the $L^1$ errors of Lagrange interpolation in all subintervals are equalized.
However, numerical tests of Section \ref{sec:impl} show that the algorithm $\overline M_{N,r}^{\,*}$ achieves
the error \eqref{bestada} with some delay, which makes $\overline M_{N,r}^{\,**},$
rather than $\overline M_{N,r}^{\,*},$ the method to recommend in practical computations.
Other advantages of $\overline M_{N,r}^{\,**}$ are that it is easily implementable and,
as shown in Section \ref{sec:automatic}, it can be successfully used for automatic Monte Carlo integration.
Our analysis is restricted to one-dimensional integrals, but we believe that it can be extended, and efficient adaptive Monte Carlo algorithms constructed, also for multivariate integration, where randomization finds its major application.
In the sequel, we use the following notation. For two functions of $N$ we write
$g_1(N)\lessapprox g_2(N)$ iff $\limsup_{N\to\infty}g_1(N)/g_2(N)\le 1,$ and we write
$g_1(N)\approx g_2(N)$ iff $\lim_{N\to\infty}g_1(N)/g_2(N)=1.$ Similarly, for functions
of $\varepsilon$ we write $h_1(\varepsilon)\lessapprox h_2(\varepsilon)$ iff
$\limsup_{\varepsilon\to 0^+}h_1(\varepsilon)/h_2(\varepsilon)\le 1,$ and
$h_1(\varepsilon)\approx h_2(\varepsilon)$ iff $\lim_{\varepsilon\to 0^+}h_1(\varepsilon)/h_2(\varepsilon)=1.$
\section{Variance reduction using Lagrange interpolation}\label{sec:equispaced}
We first derive some general error estimates for the variance reduction algorithms in which the standard
Monte Carlo is applied to the error of piecewise Lagrange interpolation. Specifically, we divide the interval
$[a,b]$ into $m$ subintervals using a partition $a=x_0<x_1<\cdots<x_m=b,$ and on each subinterval
$[x_{j-1},x_j]$ we approximate $f$ using Lagrange interpolation of degree $r-1$ with the interpolation points
\begin{equation}\label{fixedpts}
x_{j,s}=x_{j-1}+z_s(x_j-x_{j-1}),\qquad 1\le s\le r,
\end{equation} where
\begin{equation}\label{zi-points}
0\le z_1<z_2<\cdots<z_r\le 1
\end{equation}
are fixed (independent of the partition). Denote such an approximation by $L_{m,r}f.$
Then $f=L_{m,r}f+R_{m,r}f$ with $R_{m,r}f=f-L_{m,r}f.$
The integral $Sf$ is finally approximated by
$$ \overline M_{m,n,r}f\,=\,S(L_{m,r}f)+M_n(R_{m,r}f), $$
where $M_n$ is the standard Monte Carlo \eqref{MCstandard}. We obviously have
$\mathbb{E}(\overline M_{m,n,r}f)=Sf.$ Since
$$ Sf-\overline M_{m,n,r}f\,=\,Sf-S(L_{m,r}f)-M_n(R_{m,r}f)\,=\,S(R_{m,r}f)-M_n(R_{m,r}f),$$
by \eqref{MCerr} we have
\begin{equation}\label{errform}
\mathbb E\big(Sf-\overline M_{m,n,r}f\big)^2\,=\,
\frac1n\left((b-a)S\big((R_{m,r}f)^2\big)-\big(S(R_{m,r}f)\big)^2\right).
\end{equation}
\smallskip
Note that
$$ S\big((R_{m,r}f)^2\big)\,=\,\int_a^b(f-L_{m,r}f)^2(x)\,\mathrm dx
\,=\,\|f-L_{m,r}f\|_{L^2(a,b)}^2 $$
is the squared $L^2$-error of the applied (piecewise) polynomial interpolation, while
$$ S(R_{m,r}f)\,=\,\int_a^b(f-L_{m,r}f)(x)\,\mathrm dx\,=\,S(f)-S(L_{m,r}f) $$
is the error of the quadrature $\overline Q_{m,r}f=S(L_{m,r}f).$
\smallskip
From now on we assume that $f$ is not a polynomial of degree smaller than or equal to
$r-1,$ since otherwise $\overline M_{m,n,r}f=Sf.$ Define the polynomial
\begin{equation}\label{mainpoly}
P(z)=(z-z_1)(z-z_2)\cdots(z-z_r).
\end{equation}
\medskip
We first consider the interpolation error $\|f-L_{m,r}f\|_{L^2(a,b)}.$ Let
\begin{equation}\label{alpha}
\alpha\,=\,\|P\|_{L^2(0,1)}=\bigg(\int_0^1|P(z)|^2\mathrm dz\bigg)^{1/2}.
\end{equation}
For each $j,$ the local interpolation error equals
\begin{eqnarray*} \big\|f-L_{m,r}f\big\|_{L^2(x_{j-1},x_j)} &=&
\bigg(\int_{x_{j-1}}^{x_j}\bigl|\,(x-x_{j,1})\cdots(x-x_{j,r})f[x_{j,1},\ldots,x_{j,r},x]\,\bigr|^2\mathrm dx\bigg)^{1/2} \\
&=& \alpha\,h_j^{r+1/2}\,\frac{|f^{(r)}(\xi_j)|}{r!},\qquad\qquad\xi_j\in[x_{j-1},x_j].
\end{eqnarray*}
Hence
$$ \|f-L_{m,r}f\|_{L^2(a,b)}\,=\,\frac{\alpha}{r!}\bigg(\sum_{j=1}^m h_j^{2r+1}\big|f^{(r)}(\xi_j)\big|^2\bigg)^{1/2}.$$
In particular, for the equispaced partition, in which case $h_j=(b-a)/m,$ we have
\begin{eqnarray}
\big\|f-L_{m,r}f\big\|_{L^2(a,b)} &=& \frac{\alpha}{r!}\,\bigg(\frac{b-a}{m}\bigg)^r \label{apper}
\bigg(\frac{b-a}{m}\sum_{j=1}^m|f^{(r)}(\xi_j)|^2\bigg)^{1/2} \\
&\approx& \frac{\alpha}{r!}\,\bigg(\frac{b-a}{m}\bigg)^r\,\big\|f^{(r)}\big\|_{L^2(a,b)}
\qquad\mbox{as}\quad m\to+\infty. \nonumber
\end{eqnarray}
\medskip
Now, we consider the quadrature error $Sf-\overline Q_{m,r}f.$ Let
\begin{equation}\label{beta}
\beta\,=\,\int_0^1 P(z)\,\mathrm dz.
\end{equation}
The local integration errors equal
\begin{eqnarray*}
\lefteqn{\int_{x_{j-1}}^{x_j}(f-L_{m,r}f)(x)\,\mathrm dx \;=\;
\int_{x_{j-1}}^{x_j} (x-x_{j,1})\cdots(x-x_{j,r})f[x_{j,1},\ldots,x_{j,r},x]\,\mathrm dx} \\
&&=\;\frac 1{r!}\int_{x_{j-1}}^{x_j} (x-x_{j,1})\cdots(x-x_{j,r})f^{(r)}(\xi_j(x))\,\mathrm dx,
\qquad\xi_j(x)\in[x_{j-1},x_j].
\end{eqnarray*}
Choose arbitrary $\zeta_j\in[x_{j-1},x_j]$ for $1\le j\le m.$ Then
\begin{eqnarray*}
\lefteqn{\bigg|\frac 1{r!}\,\int_{x_{j-1}}^{x_j} (x-x_{j,1})\cdots(x-x_{j,r})f^{(r)}(\xi_j(x))\,\mathrm dx\,-\,
\frac{f^{(r)}(\zeta_j)}{r!}\int_{x_{j-1}}^{x_j} (x-x_{j,1})\cdots(x-x_{j,r})\,\mathrm dx\bigg| } \\
&=& \frac1{r!}\,\bigg|\int_{x_{j-1}}^{x_j}(x-x_{j,1})\cdots(x-x_{j,r})\left(f^{(r)}(\xi_j(x))-f^{(r)}(\zeta_j)\right)\,
\mathrm dx\bigg|\;\le\;\omega(h_j)\,\frac{h_j^{r+1}}{r!}\,\|P\|_{L^1(0,1)},
\end{eqnarray*}
where $\omega$ is the modulus of continuity of $f^{(r)}.$ We also have
$$ \frac{f^{(r)}(\zeta_j)}{r!}\int_{x_{j-1}}^{x_j}(x-x_{j,1})\cdots(x-x_{j,r})\,\mathrm dx\,=\,
\frac{\beta}{r!}\,h_j^{r+1}f^{(r)}(\zeta_j). $$
Hence $Sf-\overline Q_{m,r}f\,=\,X_m\,+\,Y_m,$ where
$$ X_m \,=\, \frac{\beta}{r!}\,\sum_{j=1}^mh_j^{r+1}f^{(r)}(\zeta_j)\qquad\mbox{and}\qquad
|Y_m| \,\le\, \frac{\|P\|_{L^1(0,1)}}{r!}\sum_{j=1}^m\omega(h_j)h_j^{r+1}. $$
In particular, for the equispaced partition,
\begin{eqnarray*}
X_m &=& \frac{\beta}{r!}\,(b-a)^r\bigg(\sum_{j=1}^m\frac{b-a}{m}f^{(r)}(\zeta_j)\bigg)\,m^{-r}, \\
|Y_m| &\le& \frac{\|P\|_{L^1(0,1)}}{r!}\;\omega\bigg(\frac{b-a}{m}\bigg)(b-a)^{r+1}m^{-r}.
\end{eqnarray*}
Suppose that $\beta\ne 0$ and $\int_a^bf^{(r)}(x)\,\mathrm dx\ne 0.$ Then
$X_m\approx\frac{\beta}{r!}(b-a)^r\left(\int_a^bf^{(r)}(x)\,\mathrm dx\right)m^{-r}.$
Since $\omega(h)$ goes to zero as $h\to 0^+,$ the component $X_m$ dominates $Y_m$
as $m\to+\infty.$ Hence
\begin{equation}\label{err-int}
Sf-\overline Q_{m,r}f \,\approx\,
\frac{\beta}{r!}\,\bigg(\frac{b-a}{m}\bigg)^r\,\bigg(\int_a^bf^{(r)}(x)\,\mathrm dx\bigg)
\qquad\mbox{as}\quad m\to+\infty.
\end{equation}
On the other hand, if $\beta=0$ or $\int_a^bf^{(r)}(x)\,\mathrm dx=0$ then the quadrature
error converges to zero faster than $m^{-r},$ i.e.
$$\lim_{m\to+\infty}\big(Sf-\overline Q_{m,r}f\big)\,m^r\,=\,0.$$
Note that $\beta=0$ if and only if the quadrature $\overline Q_{m,r}$ has the degree of exactness
at least $r,$ i.e., it is exact for all polynomials of degree $r$ or less. Obviously, the maximal degree
of exactness equals $2r-1.$
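The constants $\alpha$ and $\beta$ from \eqref{alpha} and \eqref{beta} are easily computed symbolically for any choice of the points $z_1,\ldots,z_r$; a small sketch using sympy (our own code):
\begin{verbatim}
import sympy as sp

def alpha_beta(z_nodes):
    # alpha = ||P||_{L^2(0,1)},  beta = int_0^1 P(z) dz,
    # for P(z) = (z - z_1)...(z - z_r)
    z = sp.symbols('z')
    P = sp.prod([z - zi for zi in z_nodes])
    alpha = sp.sqrt(sp.integrate(P**2, (z, 0, 1)))
    beta = sp.integrate(P, (z, 0, 1))
    return sp.simplify(alpha), sp.simplify(beta)

# endpoints, r = 2:  P(z) = z(z-1),  alpha = 1/sqrt(30),  beta = -1/6
print(alpha_beta([0, 1]))
# midpoint, r = 1:  beta = 0, so the quadrature gains a degree of exactness
print(alpha_beta([sp.Rational(1, 2)]))
\end{verbatim}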
\medskip
We see that for the equidistant partition of the interval $[a,b]$ the error
$\big(\mathbb E(Sf-\overline M_{m,n,r}f)^2\big)^{1/2}$ is asymptotically proportional
to $$\phi(m,n)=n^{-1/2}m^{-r},$$
regardless of the choice of points $z_i$ in \eqref{zi-points}.
Let us minimize $\phi(m,n)$ assuming the total number of points used is at most $N.$
We have two cases depending on whether both endpoints of each subinterval are used
in interpolation. If so, i.e., if $z_1=0$ and $z_r=1$ (in this case $r\ge 2$) then
$N=(r-1)m+1+n.$ The optimal values are
\begin{equation}\label{mn1}
m^*=\frac{2r(N-1)}{(r-1)(2r+1)},\qquad n^*=\frac{N-1}{2r+1},
\end{equation}
for which $$\phi(m^*,n^*)\,=\,
\sqrt{2}\,\bigg(1-\frac1r\bigg)^r\bigg(\frac{r+1/2}{N}\bigg)^{r+1/2}.$$
Otherwise we have $N=rm+n.$ The optimal values are
\begin{equation}\label{mn2}
m^*=\frac{2N}{2r+1},\qquad n^*=\frac{N}{2r+1},
\end{equation} for which
$$\phi(m^*,n^*)\,=\,\sqrt{2}\,\bigg(\frac{r+1/2}{N}\bigg)^{r+1/2}.$$
\smallskip
Denote by $\overline M_{N,r}$ the corresponding algorithm with
the equidistant partition, where for given $N$ the values
of $n$ and $m$ equal, respectively, $\lfloor n^*\rfloor$ and $\lfloor m^*\rfloor.$
Our analysis is summarized in the following theorem.
\begin{thm}\label{thm:equ} We have as $N\to+\infty$ that
$$ \sqrt{\mathbb E\big(Sf-\overline M_{N,r}f\big)^2}\;\approx\;
c_r\,(b-a)^r\,C(P,f)\,N^{-(r+1/2)},$$ where
$$ C(P,f)=\sqrt{\alpha^2\,(b-a)\bigg(\int_a^b\big|f^{(r)}(x)\big|^2\mathrm dx\bigg)\,-\,
\beta^2\bigg(\int_a^bf^{(r)}(x)\,\mathrm dx\bigg)^2},$$
$\alpha$ and $\beta$ are given by \eqref{alpha} and \eqref{beta}, and
\begin{equation}\label{ciar}
c_r=\left\{\begin{array}{ll}\sqrt{2}\,\big(1-\frac1r\big)^r\frac{(r+1/2)^{r+1/2}}{r!},&
\quad\mbox{if}\quad r\ge 2,\,z_1=0,\,z_r=1,\\ \ \sqrt{2}\,\frac{(r+1/2)^{r+1/2}}{r!},&
\quad\mbox{otherwise}.\end{array}\right.
\end{equation}
\end{thm}
We add that the algorithm $\overline M_{N,r}$ is fully implementable since we assume that
we have access to function evaluations at points from $[a,b].$
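For concreteness, the following is a minimal Python sketch of $\overline M_{N,r}$ under the assumptions $z_i=(i-1)/(r-1)$ (so $r\ge 2$ and both endpoints are interpolation nodes); it reuses the hypothetical helper \texttt{choose\_mn} from the previous sketch, integrates the piecewise interpolant exactly, and applies crude Monte Carlo to the residual. It is an illustration only, not the implementation of the Appendix.
\begin{lstlisting}[language=Python]
import numpy as np

rng = np.random.default_rng(0)

def lagrange_eval(xs, ys, t):
    """Evaluate the polynomial interpolating (xs, ys) at the point t."""
    p = 0.0
    for i in range(len(xs)):
        li = 1.0
        for j in range(len(xs)):
            if j != i:
                li *= (t - xs[j]) / (xs[i] - xs[j])
        p += ys[i] * li
    return p

def nonadaptive_mc(f, a, b, N, r):
    """Sketch of M_{N,r}: equispaced partition, piecewise Lagrange
    interpolation at z_i = (i-1)/(r-1), exact integration of the
    interpolant, and crude Monte Carlo on the residual f - L_{m,r}f."""
    m, n = choose_mn(N, r, endpoints_used=True)
    edges = np.linspace(a, b, m + 1)
    z = np.linspace(0.0, 1.0, r)
    S_L, pieces = 0.0, []
    for i in range(m):
        xs = edges[i] + z * (edges[i + 1] - edges[i])
        ys = np.array([f(xx) for xx in xs])
        P = np.polyint(np.polyfit(xs, ys, r - 1))   # antiderivative on I_i
        S_L += np.polyval(P, edges[i + 1]) - np.polyval(P, edges[i])
        pieces.append((xs, ys))
    t = rng.uniform(a, b, size=n)                   # crude MC on the residual
    idx = np.minimum(((t - a) / (b - a) * m).astype(int), m - 1)
    resid = [f(ti) - lagrange_eval(*pieces[k], ti) for ti, k in zip(t, idx)]
    return S_L + (b - a) * np.mean(resid)

# e.g. nonadaptive_mc(lambda x: 1.0 / (x + 1e-4), 0.0, 1.0, N=2000, r=2)
\end{lstlisting}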
\section{First adaptive algorithm}\label{sec:strata}
Now we add a stratification strategy to our algorithm of
Theorem \ref{thm:equ} to obtain an adaptive algorithm with
a much better asymptotic constant. That is,
we divide the initial interval $[a,b]$ into $k$ equal length subintervals $I_i, \ 1\leq i \leq k,$ and on each
subinterval we apply the approximation of Theorem \ref{thm:equ} with some $N_i,$ where
\begin{equation}\label{Nsum}\sum_{i=1}^k N_i\le N.\end{equation}
Denote such an approximation by $\overline M_{N,k,r}.$
(Note that $\overline M_{N,r}=\overline M_{N,1,r}$.) Then, by Theorem \ref{thm:equ},
for fixed $k$ we have as all $N_i\to+\infty$ that
$$ \sqrt{\mathbb E\big(Sf-\overline M_{N,k,r}f\big)^2}\;\approx\;c_r h^r
\bigg(\sum_{i=1}^k\frac{C_i^2}{N_i^{2r+1}}\bigg)^{1/2},$$ where
\begin{equation}\label{Ci}
C_i=C_i(P,f)=\sqrt{\alpha^2\,h\,\int_{I_i}\big|f^{(r)}(x)\big|^2\mathrm dx-\beta^2\,
\left(\int_{I_i}f^{(r)}(x)\,\mathrm dx\right)^2},\qquad h=\frac{b-a}{k}.
\end{equation}
Minimizing $\psi(N_1,\ldots,N_k)=\left(\sum_{i=1}^kC_i^2N_i^{-(2r+1)}\right)^{1/2}$ subject
to \eqref{Nsum} gives
$$ N_i^*\,=\,\frac{C_i^{1/(r+1)}}{\sum_{j=1}^k C_j^{1/(r+1)}}\,N,\qquad 1\le i\le k, $$
and then
$$\psi(N_1^*,\ldots,N_k^*)= \bigg(\sum_{i=1}^k C_i^{1/(r+1)}\bigg)^{r+1}N^{-(r+1/2)}. $$
Let $\xi_i,\eta_i\in I_i$ be such that
$\int_{I_i}\big|f^{(r)}(x)\big|^2\mathrm dx=h\big|f^{(r)}(\xi_i)\big|^2$ and
$\int_{I_i}f^{(r)}(x)\,\mathrm dx=hf^{(r)}(\eta_i).$ Then
$$C_i=h\sqrt{\alpha^2|f^{(r)}(\xi_i)|^2-\beta^2|f^{(r)}(\eta_i)|^2}$$
and we have as $k\to+\infty$ that
\begin{eqnarray}
\bigg(\sum_{i=1}^kC_i^{1/(r+1)}\bigg)^{r+1} &=& h\,\bigg(\sum_{i=1}^k
\big(\alpha^2|f^{(r)}(\xi_i)|^2-\beta^2|f^{(r)}(\eta_i)|^2\big)^\frac{1}{2(r+1)}\bigg)^{r+1}\nonumber \\
&\approx&h\,(\alpha^2-\beta^2)^{1/2}\bigg(\sum_{i=1}^k\big|f^{(r)}(\xi_i)\big|^{1/(r+1)}\bigg)^{r+1}\nonumber \\
&\approx&h^{-r}(\alpha^2-\beta^2)^{1/2}\bigg(\sum_{i=1}^kh\big|f^{(r)}(\xi_i)\big|^{1/(r+1)}\bigg)^{r+1}\nonumber \\
&\approx&h^{-r}(\alpha^2-\beta^2)^{1/2}\bigg(\int_a^b\big|f^{(r)}(x)\big|^{1/(r+1)}\mathrm dx\bigg)^{r+1}.\label{asseq}
\end{eqnarray}
It is clear that we have to take $N_i$ to be an integer and at least $r,$ for instance
$$ N_i=\left\lfloor N_i^*\left(1-\frac{kr}N\right)+r\right\rfloor,\qquad 1\le i\le k. $$
Then the corresponding number $m_i$ of subintervals and number $n_i$ of random points in $I_i$
can be chosen as
$$ m_i=\max\left(\lfloor m_i^*\rfloor,1\right),\qquad n_i=\lfloor n_i^*\rfloor,$$
where $m_i^*$ and $n_i^*$ are given correspondingly by \eqref{mn1} or \eqref{mn2} with $N$ replaced by $N_i.$
Denote by $\overline M^{\,*}_{N,r}$ the above constructed approximation $\overline M_{N,k_N,r}$ with
$k_N$ such that $k_N\to+\infty$ and $k_N/N\to 0$ as $N\to+\infty.$ For instance, $k_N=N^\kappa$ with
$0<\kappa<1.$ Our analysis gives the following result.
\begin{thm}\label{thm:strata}
We have as $N\to+\infty$ that
$$ \sqrt{\mathbb E\big(Sf-\overline M^{\,*}_{N,r}f\big)^2}\,\approx\,
c_r\,\sqrt{\alpha^2-\beta^2}\,
\bigg(\int_a^b\big|f^{(r)}(x)\big|^{1/(r+1)}\mathrm dx\bigg)^{r+1}N^{-(r+1/2)}.$$
\end{thm}
The asymptotic constant of the approximation $\overline M^{\,*}_{N,r}$ of Theorem \ref{thm:strata}
is never worse than that of $\overline M_{N,r}$ of Theorem \ref{thm:equ}.
Indeed, comparing both constants we have
\begin{eqnarray*}
&&c_r(b-a)^r\sqrt{\alpha^2\,(b-a)\bigg(\int_a^b\big|f^{(r)}(x)\big|^2\mathrm dx\bigg)\,-\,
\beta^2\bigg(\int_a^bf^{(r)}(x)\,\mathrm dx\bigg)^2}\\
&&\qquad\ge\;c_r\sqrt{\alpha^2-\beta^2}\,(b-a)^{r+1/2}
\bigg(\int_a^b\big|f^{(r)}(x)\big|^2\,\mathrm dx\bigg)^{1/2}\\
&&\qquad\ge\,c_r\sqrt{\alpha^2-\beta^2}\,
\bigg(\int_a^b\big|f^{(r)}(x)\big|^{1/(r+1)}\mathrm dx\bigg)^{r+1},
\end{eqnarray*}
where the first inequality follows from the Schwarz inequality and the second one from H\"older's
inequality for integrals. As shown in the Introduction, the gain can be significant, especially
when the derivative $f^{(r)}$ changes drastically.
\medskip
The approximation $\overline M^{\,*}_{N,r}$ possesses good asymptotic properties, but is not feasible
since we do not have direct access to the $C_i$'s. In a feasible implementation one can approximate
$C_i$ using divided differences, i.e.,
\begin{equation}\label{Citilda}
\widetilde C_i=h\sqrt{\alpha^2-\beta^2}\,|d_i|\,r!\,\qquad\mbox{where}\qquad
d_i=f[x_{i,0},x_{i,1},\ldots,x_{i,r}]\end{equation}
and $x_{i,j}$ are arbitrary points from $I_i.$ Then
$$ N_i^*=\frac{|d_i|^{1/(r+1)}}{\sum_{j=1}^{k_N}|d_j|^{1/(r+1)}}\,N.$$
This works well for functions $f$ whose $r$th derivative does not vanish at any point of $[a,b].$
Indeed, then $f^{(r)}$ does not change its sign and, moreover, it is bounded away from zero.
This means that
$$\lim_{N\to\infty}\,\max_{1\le i\le k_N}\,{C_i}/{\widetilde C_i}=1,$$
which is enough for the asymptotic equality of Theorem \ref{thm:strata} to hold true.
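For illustration, this feasible allocation can be sketched as follows (the helper names are ours); the divided differences $d_i$ are computed by the standard recurrence over $r+1$ points per subinterval:
\begin{lstlisting}[language=Python]
import numpy as np

def divided_difference(f, xs):
    """Divided difference f[x_0, ..., x_r] over r+1 distinct points,
    computed by the standard in-place recurrence."""
    d = np.array([f(x) for x in xs], dtype=float)
    k = len(xs)
    for j in range(1, k):
        d[:k - j] = (d[1:k - j + 1] - d[:k - j]) / (xs[j:] - xs[:k - j])
    return d[0]

def allocate(f, edges, N, r):
    """N_i proportional to |d_i|^{1/(r+1)}, cf. the display above."""
    dd = [divided_difference(f, np.linspace(lo, hi, r + 1))
          for lo, hi in zip(edges[:-1], edges[1:])]
    w = np.abs(dd) ** (1.0 / (r + 1))
    return w / w.sum() * N
\end{lstlisting}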
If $f^{(r)}$ is not bounded away from zero then we may have problems with properly approximating
$C_i$ on those intervals $I_i$ where $|f^{(r)}|$ takes extremely small values or even vanishes. A possible and
simple remedy is to choose a `small' $\Delta>0$ and modify $\widetilde C_i$ as follows:
\begin{equation}\label{citil}\widetilde C_i=\left\{\begin{array}{rl}
h\,\sqrt{\alpha^2-\beta^2}\,|d_i|\,r! &\;\,\mbox{for}\;|d_i|r!\ge\Delta,\\
h\,\alpha\,\Delta\,r! &\;\,\mbox{for}\;|d_i|r!<\Delta.\end{array}\right.\end{equation}
Then, letting $A_1=\big\{a\le x\le b:\,|f^{(r)}(x)|\ge\Delta\big\}$ and $A_2=[a,b]\setminus A_1,$
we have as $k\to+\infty$ that
\begin{eqnarray*}
\bigg(\sum_{i=1}^kC_i^{1/(r+1)}\bigg)^{r+1} &\lessapprox&
\bigg(\sum_{i=1}^k\widetilde C_i^{1/(r+1)}\bigg)^{r+1} \\ &\approx&
h^{-r}\,(\alpha^2-\beta^2)^{1/2}\bigg(\int_{A_1}\big|f^{(r)}(x)\big|^{\frac{1}{r+1}}\mathrm dx+
|A_2|\Big(\sqrt{\tfrac{\alpha^2}{\alpha^2-\beta^2}}\,\Delta\Big)^{\frac{1}{r+1}}\bigg)^{r+1}.
\end{eqnarray*}
Hence, the approximation of $C_i$ by \eqref{citil} results in an algorithm whose error is approximately
upper bounded by
$$ c_r\,\sqrt{\alpha^2-\beta^2}\,
\bigg(\int_a^b\big|f_\Delta^{(r)}(x)\big|^{1/(r+1)}\mathrm dx\bigg)^{r+1}N^{-(r+1/2)},$$ where
\begin{equation}\label{efdelta}
\big|f_\Delta^{(r)}(x)\big|=\max\bigg(\big|f^{(r)}(x)\big|,
\sqrt{\tfrac{\alpha^2}{\alpha^2-\beta^2}}\,\Delta\bigg).
\end{equation}
We obviously have $\lim_{\Delta\to 0^+}\int_a^b\big|f_\Delta^{(r)}(x)\big|^{1/(r+1)}\mathrm dx
=\int_a^b\big|f^{(r)}(x)\big|^{1/(r+1)}\mathrm dx.$
\medskip
A closer look at the deterministic part of $\overline M^{\,*}_{N,r}$ shows that the final
partition of the interval $[a,b]$ tends to equalize the $L^1$ errors in all of the $m$ subintervals.
As shown in \cite{PlaskotaSamoraj2022}, such a partition is optimal in the sense that it minimizes
the asymptotic constant in the error $\|f-L_{m,r}f\|_{L^1(a,b)}$ among all possible piecewise
Lagrange interpolations $L_{m,r}$. A disadvantage of the optimal partition is that it is not nested.
This makes it necessary to start the computations from scratch when $N$ is updated to a higher value.
Also, a proper choice of the sequence $k_N=N^\kappa$ is problematic, especially when $N$ is still
relatively small. On the one hand, the larger $\kappa$, the better the approximation of $C_i$ by
$\widetilde C_i,$ but also the farther the partition is from the optimal one. On the other hand,
the smaller $\kappa$, the closer the partition is to the optimal one, but also the worse the approximation
of $C_i.$ This trade-off significantly affects the actual behavior of the algorithm, as can be seen
in the numerical experiments of Section \ref{sec:impl}.
In the following section, we propose another adaptive approach leading to an easily implementable algorithm
that produces nested partitions close to optimal and possesses asymptotic properties similar to that
of $\overline M^{\,*}_{N,r}.$ As we shall see in Section \ref{sec:automatic}, nested partitions are vital for
automatic Monte Carlo integration.
\section{Second adaptive algorithm}\label{sec:second}
Consider a $\varrho$-weighted integration problem
$$S_\varrho f=\int_a^bf(x)\varrho(x)\,\mathrm dx,$$
where the function $\varrho:[a,b]\to\mathbb{R}$ is integrable and positive
a.e. and $\int_a^b\varrho(x)\,\mathrm dx=1.$ The corresponding Monte Carlo algorithm is
$$M_{n,\varrho}f=\frac1n\sum_{i=1}^nf(t_i),\qquad t_i\stackrel{iid}\sim\mu_\varrho,$$
where $\mu_\varrho$ is the probability distribution on $[a,b]$ with density $\varrho.$ Then
$$\mathbb E(S_\varrho f-M_{n,\varrho}f)^2=\frac1n\big(S_\varrho(f^2)-(S_\varrho f)^2\big).$$
Now, the non-weighted integral \eqref{theproblem} can be written as
$$S(f)=\int_a^b h(x)\varrho(x)\,\mathrm dx=S_\varrho(h),
\quad\mbox{where}\quad h(x)=\frac{f(x)}{\varrho(x)}.$$
Then
$$\mathbb E(Sf-M_{n,\varrho}h)^2=\mathbb E(S_\varrho h-M_{n,\varrho}h)^2
=\frac1n\big(S_\varrho(h^2)-(S_\varrho h)^2\big)=\frac1n\big(S_\varrho(f/\varrho)^2-(Sf)^2\big).$$
Let us go further and apply variance reduction,
\begin{equation}\label{VRrho}
\overline M_{n,\varrho}f=S(Lf)+M_{n,\varrho}\bigg(\frac{f-Lf}{\varrho}\bigg),
\end{equation}
where $Lf$ is an approximation to $f.$ Then
$$\mathbb E\big(Sf-\overline M_{n,\varrho}f\big)^2=
\frac1n\left(\int_a^b\frac{(f-Lf)^2(x)}{\varrho(x)}\,\mathrm dx-
\bigg(\int_a^b (f-Lf)(x)\,\mathrm dx\bigg)^2\right).$$
The question is how to choose $L$ and $\varrho$ to make the quantity
$$\int_a^b\frac{(f-Lf)^2(x)}{\varrho(x)}\,\mathrm dx-
\bigg(\int_a^b (f-Lf)(x)\,\mathrm dx\bigg)^2$$
as small as possible.
Observe that if $$\varrho(x)=\frac{|(f-Lf)(x)|}{\|f-Lf\|_{L^1(a,b)}}$$ then
$$\mathbb E\big(Sf-\overline M_{n,\varrho}f\big)^2=
\frac1n\left(\|f-Lf\|_{L^1(a,b)}^2-\bigg(\int_a^b(f-Lf)(x)\mathrm dx\bigg)^2\right)$$
and this error is even zero if $(f-Lf)(x)$ does not change its sign. This suggests the following algorithm.
Suppose that $Lf=L_{m,r}f$ is based on a partition of $[a,b]$ such that the $L^1$ errors
in all $m$ subintervals $I_i$ have the same value, i.e.,
\begin{equation}\label{optpartition}
\|f-L_{m,r}f\|_{L^1(I_i)}=\frac1m\,\|f-L_{m,r}f\|_{L^1(a,b)},\quad 1\le i\le m.
\end{equation}
Then we apply the variance reduction \eqref{VRrho} with density
\begin{equation}\label{eq:rho}
\varrho(x)=\frac1{mh_i},\qquad x\in I_i,\quad1\le i\le m,
\end{equation}
where $h_i$ is the length of $I_i.$ That is, for the corresponding probability measure $\mu_\varrho$
we have $\mu_\varrho(I_i)=\tfrac1m$ and the conditional distribution $\mu_\varrho(\cdot|I_i)$ is uniform
on $I_i.$
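In an implementation, sampling from \eqref{eq:rho} amounts to picking a subinterval uniformly at random and then a uniform point inside it; a minimal sketch (names ours, \texttt{edges} a NumPy array of the partition points):
\begin{lstlisting}[language=Python]
import numpy as np

rng = np.random.default_rng()

def sample_rho(edges, n):
    """n draws from (eq:rho) for the partition edges[0..m]:
    mu_rho(I_i) = 1/m and mu_rho(.|I_i) is uniform on I_i."""
    m = len(edges) - 1
    idx = rng.integers(0, m, size=n)          # subinterval indices
    u = rng.uniform(size=n)                   # relative positions in I_i
    return edges[idx] + u * (edges[idx + 1] - edges[idx])
\end{lstlisting}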
We now derive an error formula for such an approximation.
Let $\gamma=\|P\|_{L^1(0,1)}=\int_0^1|P(z)|\,\mathrm dz.$
(Recall that $P$ is given by \eqref{mainpoly}.) We have
$$\|f-L_{m,r}f\|_{L^1(I_i)}=\frac{\gamma}{r!}\,h_i^{r+1}\big|f^{(r)}(\xi_i)\big|\quad\mbox{and}\quad
\|f-L_{m,r}f\|_{L^2(I_i)}=\frac{\alpha}{r!}\,h_i^{r+1/2}\big|f^{(r)}(\zeta_i)\big|$$
for some $\xi_i,\zeta_i\in I_i.$ Denoting
\begin{equation}\label{xidef} A=h_i^{r+1}\big|f^{(r)}(\xi_i)\big|\end{equation}
(which is independent of $i$) we have as $m\to+\infty$ that
\begin{eqnarray*}
\lefteqn{\bigg(\int_a^b\frac{(f-L_{m,r}f)^2(x)}{\varrho(x)}\,\mathrm dx\bigg)^{1/2}\;=\;
\bigg(m\sum_{i=1}^mh_i\int_{I_i}(f-L_{m,r}f)^2(x)\,\mathrm dx\bigg)^{1/2}} \\
&&\;=\;\frac{\alpha}{r!}\,\bigg(m\sum_{i=1}^m h_i^{2r+2}\big|f^{(r)}(\zeta_i)\big|^{2}\bigg)^{1/2}
\,\approx\,\frac{\alpha}{r!}\,\bigg(m\sum_{i=1}^m h_i^{2r+2}\big|f^{(r)}(\xi_i)\big|^2\bigg)^{1/2} \\
&&\;=\;\frac{\alpha}{r!}\,\bigg(m\sum_{i=1}^mA^2\bigg)^{1/2}\,=\,\frac{\alpha}{r!}\,mA\,=\,
\frac{\alpha}{r!}\,\big(mA^{1/(r+1)}\big)^{r+1}m^{-r} \\
&&\;=\;\frac{\alpha}{r!}\,\bigg(\sum_{i=1}^m h_i\big|f^{(r)}(\xi_i)\big|^{1/(r+1)}\bigg)^{r+1}m^{-r}
\;\approx\;\frac{\alpha}{r!}\,\bigg(\int_a^b\big|f^{(r)}(x)\big|^{1/(r+1)}\mathrm dx\bigg)^{r+1}m^{-r}.
\end{eqnarray*}
To get an asymptotic formula for $\int_a^b(f-L_{m,r}f)(x)\,\mathrm dx$ we use the analysis done
in Section \ref{sec:equispaced}. If $\beta=0$ then the integral decreases faster than $m^{-r}.$
Let $\beta\ne 0.$ Then
$$\int_a^b(f-L_{m,r}f)(x)\,\mathrm dx\;\approx\;
\frac{\beta}{r!}\,\sum_{i=1}^m h_i^{r+1}f^{(r)}(\xi_i)=\frac{\beta}{r!}(m_+-m_-)A,$$
where $\xi_i$s are as in \eqref{xidef}, and $m_+$ and $m_-$ are the numbers of indexes $i$ for which
$f^{(r)}(\xi_i)\ge 0$ and $f^{(r)}(\xi_i)<0,$ respectively. Let
$$D_+=\{x\in[a,b]:\,f^{(r)}(x)\ge 0\},\qquad D_-=\{x\in[a,b]:\,f^{(r)}(x)<0\}.$$
Since $A\approx\|f^{(r)}\|_{L^{1/(r+1)}(a,b)}m^{-(r+1)},$ while
$m_+A^{1/(r+1)}\approx\int_{D_+}|f^{(r)}(x)|^{1/(r+1)}\mathrm dx$ and
$mA^{1/(r+1)}\approx\int_a^b|f^{(r)}(x)|^{1/(r+1)}\mathrm dx,$ we have
$$\frac{m_+}{m}\,\approx\,\frac{\int_{D_+}|f^{(r)}(x)|^{1/(r+1)}\mathrm dx}
{\int_{D_+\cup D_-}|f^{(r)}(x)|^{1/(r+1)}\mathrm dx},\qquad
\frac{m_-}{m}\,\approx\,\frac{\int_{D_-}|f^{(r)}(x)|^{1/(r+1)}\mathrm dx}
{\int_{D_+\cup D_-}|f^{(r)}(x)|^{1/(r+1)}\mathrm dx}.$$
Thus
$$\int_a^b(f-L_{m,r}f)(x)\,\mathrm dx\approx\frac{\beta}{r!}
\bigg(\frac{\int_a^b|f^{(r)}(x)|^{1/(r+1)}\mathrm{sgn} f^{(r)}(x)\,\mathrm dx}
{\int_a^b|f^{(r)}(x)|^{1/(r+1)}\mathrm dx}\bigg)\|f^{(r)}\|_{L^{1/(r+1)}(a,b)}\,m^{-r}$$
provided $\int_a^b|f^{(r)}(x)|^{1/(r+1)}\mathrm{sgn} f^{(r)}(x)\,\mathrm dx\ne 0,$
and otherwise the convergence is faster than $m^{-r}.$
\smallskip
Our analysis above shows that if $m$ and $n$ are chosen as in
\eqref{mn1} and \eqref{mn2} then the error of the described algorithm asymptotically
equals (as $N\to+\infty$)
\begin{equation}\label{annoy}
c_r\,\sqrt{\alpha^2-\beta^2
\bigg(\frac{\int_a^b|f^{(r)}(x)|^{1/(r+1)}\mathrm{sgn}f^{(r)}(x)\,\mathrm dx}
{\int_a^b|f^{(r)}(x)|^{1/(r+1)}\mathrm dx}\bigg)^2}
\bigg(\int_a^b\big|f^{(r)}(x)\big|^{1/(r+1)}\mathrm dx\bigg)^{r+1}N^{-(r+1/2)}.
\end{equation}
\smallskip
The factor multiplying $\beta^2$ in \eqref{annoy} can easily be replaced by $1$ with the help
of stratified sampling. Indeed, instead of randomly sampling $n$ times with density
\eqref{eq:rho} on the whole interval $[a,b],$ one can apply the same sampling strategy
independently on $k$ groups $G_j$ of subintervals. Each group consists of $s=m/k$
subintervals,
$$G_j=\bigcup_{\ell=1}^s I_{(j-1)s+\ell},\qquad 1\le j\le k,$$
and the number of samples for each $G_j$ equals $n/k.$ As in the algorithm
$\overline M^{\,*}_{N,r},$ we combine $k=k_N$ and $N$ in such a way that
$k_N\to+\infty$ and $k_N/N\to 0$ as $N\to\infty.$
Then the total number of points used in each $G_j$ is $N_j=N/k.$ Denoting
$$C_j=\bigg(\int_{G_j}\big|f^{(r)}(x)\big|^{1/(r+1)}\mathrm dx\bigg)^{r+1}
=\bigg(\frac 1k\int_a^b\big|f^{(r)}(x)\big|^{1/(r+1)}\mathrm dx\bigg)^{r+1}$$
and using the fact that the factor multiplying $\beta^2$ equals $1$ if $f^{(r)}$ does not
change its sign, the error of such an approximation asymptotically equals
$$c_r\,\sqrt{\alpha^2-\beta^2}\,\bigg(\sum_{j=1}^k\frac{C_j^2}{N_j^{2r+1}}\bigg)^{1/2}
=c_r\,\sqrt{\alpha^2-\beta^2}\,
\bigg(\int_a^b\big|f^{(r)}(x)\big|^{1/(r+1)}\mathrm dx\bigg)^{r+1}N^{-(r+1/2)},$$
as claimed.
(Note that the choice $N_j=N/k$ minimizes the sum $\sum_{j=1}^kC_j^2N_j^{-(2r+1)}$
subject to $\sum_{j=1}^kN_j=N;$ compare with the analysis
in Section \ref{sec:strata}.)
Thus we obtained exactly the same error formula as in
Theorem \ref{thm:strata} for $\overline M_{N,r}^{\,*}.$
\medskip
It remains to show a feasible construction of a nested partition that is close to the one
satisfying \eqref{optpartition}. To that end, we utilize the iterative method presented in
\cite{PlaskotaSamoraj2022}, where the $L^p$ error of piecewise Lagrange interpolation is examined.
\smallskip
We first consider the case when
\begin{equation}\label{derpos}
f^{(r)}>0\quad\mbox{or}\quad f^{(r)}<0.
\end{equation}
In the following construction, we use a priority queue $\mathcal S$ whose elements are subintervals.
For each subinterval $I_i$ of length $h_i,$ its priority is given as
\begin{equation}\label{priority}
p_f(I_i)\,=\,h_i^{r+1}|d_i|,
\end{equation}
where $d_i$ is the divided difference \eqref{Citilda}.
In the following pseudocode, $\mathrm{insert}(\mathcal S,I)$ and
$I:=\mathrm{extract\_max}(\mathcal S)$ implement correspondingly
the actions of inserting an interval to $\mathcal S,$ and extracting
from $\mathcal S$ an interval with the highest priority.
\medskip
$\mathbf{algorithm}\;\mathrm{PARTITION}$
$\mathcal S=\emptyset;\;\mathrm{insert}(\mathcal S,[a,b]);$
$\mathbf{for}\; k=2:m$
$\quad [l,r]=\mathrm{extract\_max}(\mathcal S);$
$\quad c=(l+r)/2;$
$\quad \mathrm{insert}(\mathcal S,[l,c]); \mathrm{insert}(\mathcal S,[c,r])$
$\mathbf{endfor}$
\medskip\noindent
After execution, the elements of $\mathcal S$ form a partition of $[a,b]$ into $m$ subintervals $I_i.$
Note that if the priority queue is implemented through a \emph{heap} then the running time
of $\mathrm{PARTITION}$ is proportional to $m\log m.$
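For illustration, a heap-based Python sketch of $\mathrm{PARTITION}$ (reusing the \texttt{divided\_difference} helper sketched in Section \ref{sec:strata}; Python's \texttt{heapq} is a min-heap, so priorities are stored negated):
\begin{lstlisting}[language=Python]
import heapq
import numpy as np

def partition(f, a, b, m, r):
    """Bisect m-1 times, always splitting the interval with the largest
    priority p_f(I) = h^{r+1} |d_I|, cf. (priority)."""
    def prio(lo, hi):
        xs = np.linspace(lo, hi, r + 1)
        return (hi - lo) ** (r + 1) * abs(divided_difference(f, xs))
    heap = [(-prio(a, b), a, b)]
    for _ in range(m - 1):
        _, lo, hi = heapq.heappop(heap)       # extract_max
        c = 0.5 * (lo + hi)
        heapq.heappush(heap, (-prio(lo, c), lo, c))
        heapq.heappush(heap, (-prio(c, hi), c, hi))
    return sorted((lo, hi) for _, lo, hi in heap)
\end{lstlisting}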
\smallskip
Denote by $\overline M^{\,**}_{N,r}$ the corresponding algorithm that
uses the above nested partition and density \eqref{eq:rho}, where $N$ is related to the number $m$ of subintervals and the number $n$ of random
samples as in \eqref{mn1} and \eqref{mn2}. We want to see how much
worse this algorithm is than the one using the (non-nested) partition
\eqref{optpartition}.
Let $A=(A_1,A_2,\ldots,A_m)$ with
$$A_i=p_f(I_i)\,r!=h_i^{r+1}\big|f^{(r)}(\omega_i)\big|,\qquad\omega_i\in I_i,$$
and $\|A\|_p=\big(\sum_{i=1}^mA_i^p\big)^{1/p}.$ For the corresponding piecewise Lagrange
approximation $L_{m,r}f$ and density $\varrho$ given by \eqref{eq:rho} we have
\begin{eqnarray*}
\lefteqn{\int_a^b\frac{(f-L_{m,r}f)^2(x)}{\varrho(x)}\,\mathrm dx
\,-\,\bigg(\int_a^b(f-L_{m,r}f)(x)\,\mathrm dx\bigg)^2}\\
&&\approx\,\frac{1}{(r!)^2}\bigg(\alpha^2 m\sum_{i=1}^m A_i^2
-\beta^2\bigg(\sum_{i=1}^mA_i\bigg)^2\,\bigg)\,=\,\frac{1}{(r!)^2}
\left(\,\alpha^2m\|A\|_2^2-\beta^2\|A\|_1^2\,\right).
\end{eqnarray*}
We also have
$\bigg(\int_a^b\big|f^{(r)}(x)\big|^{1/(r+1)}\mathrm dx\bigg)^{r+1}\approx
\big(\sum_{i=1}^mA_i^{1/(r+1)}\big)^{r+1}=\|A\|_{\frac{1}{r+1}}.$ Hence
\begin{eqnarray}\nonumber
\sqrt{\mathbb E(Sf-\overline M^{**}_{N,r}f)^2}&\approx&
\frac1{r!}\left(\,\alpha^2m\|A\|_2^2-\beta^2\|A\|_1^2\,\right)^{1/2}n^{-1/2}\\
&\approx&K_{m,r}(A)\,c_r\,\sqrt{\alpha^2-\beta^2}\,
\bigg(\int_a^b\big|f^{(r)}(x)\big|^{1/(r+1)}\mathrm dx\bigg)^{r+1}N^{-(r+1/2)},\label{efde}
\end{eqnarray}
where $$K_{m,r}(A)=\frac{\sqrt{\kappa_\alpha^2\,m\,\|A\|_2^2
-\kappa_\beta^2\,\|A\|_1^2}}{\|A\|_{\frac{1}{r+1}}}\,m^r,
\qquad\kappa_\alpha=\frac{\alpha}{\sqrt{\alpha^2-\beta^2}},
\quad\kappa_\beta=\frac{\beta}{\sqrt{\alpha^2-\beta^2}}.$$
Observe that halving an interval results in two subintervals whose priorities are asymptotically
(as $m\to+\infty$) $2^{r+1}$ times smaller than the priority of the original interval.
This means that $K_{m,r}(A)$ is asymptotically not larger than
\begin{equation}\label{KA}
K^*(r)\;=\;\limsup_{m\to\infty}\;\max\,
\left\{K_{m,r}(A):\;A=(A_1,\ldots,A_m),\,\max_{1\le i,j\le m}\frac{A_i}{A_j}\le 2^{r+1}\,\right\}.
\end{equation}
Thus we obtained the following result.
\begin{thm}\label{thm:importstrata}
If the function $f$ satisfies \eqref{derpos} then we have as $N\to+\infty$ that
$$\sqrt{\mathbb E(Sf-\overline M^{\,**}_{N,r}f)^2}\,\lessapprox\,K^*(r)\,c_r\,\sqrt{\alpha^2-\beta^2}\,
\bigg(\int_a^b\big|f^{(r)}(x)\big|^{1/(r+1)}\mathrm dx\bigg)^{r+1}N^{-(r+1/2)},$$
where $K^*(r)$ is given by \eqref{KA}.
\end{thm}
We numerically calculated $K^*(r)$ in some special cases.
For instance, if the points $z_i$ in \eqref{zi-points} are equispaced, $z_i=(i-1)/(r-1),$ $1\le i\le r,$
then for $r=2,3,4,5,6$ we correspondingly have
$$K^*(r)\,=\,4.250,\;3.587,\;7.077,\;11.463,\;23.130,$$
while for any $z_i$s satisfying $\beta=\int_0^1(z-z_1)\cdots(z-z_r)\,\mathrm dz=0$ we have
$$K^*(r)\,=\,2.138,\;3.587,\;6.323,\;11.463,\;21.140.$$
\medskip
If $f$ does not satisfy \eqref{derpos} then the algorithm $\overline M^{**}_{N,r}$ may fail.
Indeed, it may happen that $p_f(I_i)=0$ while $f^{(r)}\not=0$ in $I_i.$ Then this subinterval may
never be further subdivided. In this case, we can repeat the same construction,
but with the modified priority
$$p_f(I_i)\,=\,h_i^{r+1}\max\big(\,|d_i|,\,\Delta/r!\big),$$
where $\Delta>0.$ Then the error is asymptotically
upper bounded by
$$K^*(r)\,c_r\,\sqrt{\alpha^2-\beta^2}\,
\bigg(\int_a^b\big|f_\Delta^{(r)}(x)\big|^{1/(r+1)}\mathrm dx\bigg)^{r+1}N^{-(r+1/2)},$$
where $\big|f_\Delta^{(r)}(x)\big|$ is given by \eqref{efdelta}.
\section{Numerical experiments}\label{sec:impl}
In this section, we present results of two numerical experiments that illustrate the performance of the nonadaptive Monte Carlo algorithm $\overline M_{N,r}$ and the adaptive algorithms $\overline M^{\,*}_{N,r}$ and $\overline M^{**}_{N,r}.$ Our test integral is
$$\int_0^1\frac{1}{x+10^{-4}}\,\mathrm dx.$$
Since for $r\in\mathbb{N}$ we have $(-1)^r f^{(r)}>0,$ the parameter $\Delta$ is set to zero.
The three algorithms are verified for $r=2$ and $r=4$. In both cases, the interpolation nodes
are equispaced, i.e., in \eqref{zi-points} we take $$z_i =\frac{i-1}{r-1},\qquad 1\leq i \leq r.$$ In addition, for the first adaptive algorithm $\overline M_{N,r}^{\,*}$ we take $k_N = N^\kappa$ with $\kappa = 0.8.$ This exponent was chosen, guided by empirical results, to balance the trade-off discussed in Section \ref{sec:strata}. Also, for a fixed $N$ we plot a single output instead of an estimate of the expected value, so the error fluctuations remain visible.
For completeness, we also show the asymptotes corresponding to the theoretical errors from Theorems \ref{thm:equ} and \ref{thm:strata}, and the upper bound from Theorem \ref{thm:importstrata}. The scale is logarithmic, $-\log_{10}(\mathrm{error})$ versus $\log_{10}N.$
The results for $r=2$ are presented in Figure~\ref{impl:MC_test_r2}.
\begin{figure}[!htp]
\centering
\includegraphics[width=0.6\textwidth]{MC_v10_r=2_d=4.PNG}
\caption{Comparison of nonadaptive and adaptive Monte Carlo algorithms together with related asymptotic constants (AC) for $r=2.$}
\label{impl:MC_test_r2}
\end{figure}
\FloatBarrier
As can be observed, both adaptive algorithms significantly outperform the nonadaptive MC; however, the correct asymptotic behaviour of the first adaptive algorithm is visible only for large $N.$
Similar conclusions can be drawn from the validation performed for $r=4,$ with all other parameters unchanged. We add that the results for $N$ larger than $10^{4.8}$ are not illustrative due to unavoidable rounding errors.
\begin{figure}[!htp]
\centering
\includegraphics[width=0.61\textwidth]{MC_v10_r=4_d=4.PNG}
\caption{Comparison of nonadaptive and adaptive Monte Carlo algorithms together with related asymptotic constants (AC) for $r=4.$}
\label{impl:MC_test_r4}
\end{figure}
\FloatBarrier
Notably, both adaptive algorithms attain their asymptotic errors, but this is not the case for the nonadaptive MC, whose output is not stable. Initially, the first adaptive algorithm does not benefit from additional sampling, since for all intervals $I_i$ we have $N_i/(2r+1) < 1.$ The Monte Carlo adjustments are visible only for $N \geq 10^3,$ where the error tends to the theoretical asymptote.
In conclusion, the numerical experiments confirm our theoretical findings and, in particular, the superiority of the second adaptive algorithm $\overline M^{\,**}_{N,r}.$
\section{Automatic Monte Carlo integration}\label{sec:automatic}
We now use the results of Section \ref{sec:second} for automatic Monte Carlo integration.
The goal is to construct an algorithm that for given $\varepsilon>0$ and $0<\delta<1$ returns
an $\varepsilon$-approximation to the integral $Sf$ with probability at least $1-\delta,$
asymptotically as $\varepsilon\to 0^+.$ To that end, we shall use the approximation
$\overline M_{N,r}^{\,**}f$ with $N$ determined automatically depending on $\varepsilon$ and $\delta.$
\smallskip
Let $X_i$ for $1\le i\le n$ be independent copies of the random variable
$$X=S(f-L_{m,r}f)-\frac{(f-L_{m,r}f)(t)}{\varrho(t)},\qquad t\sim\mu_{\varrho},$$
where $L_{m,r},$ $n$ and $\varrho$ are as in $\overline M_{N,r}^{\,**}f.$
Then $\mathbb E(X)=0$ and
$$Sf-\overline M_{N,r}^{\,**}f=\frac{X_1+X_2+\cdots+X_n}{n}.$$
By Hoeffding's inequality \cite{Hoeffding1963} we have
$$\mathrm{Prob}\left(\big|Sf-\overline M_{N,r}^{\,**}f\big|>\varepsilon\right)\,\le\,
2\,\exp\left(\frac{-\varepsilon^2n}{2\,B_m^2}\right),$$
where $B_m=\max_{a\le t\le b}|X(t)|.$ Hence we fail with probability at most $\delta$ if
\begin{equation}\label{Hoeff}
\frac{\varepsilon^2n}{2B_m^2}\,\ge\,\ln\frac{2}{\delta}.
\end{equation}
Now we estimate $B_m.$ Let $\lambda=\|P\|_{L^\infty(0,1)}=\max_{0\le t\le 1}|P(t)|,$ and
$$\mathcal L_r(f)=\bigg(\int_a^b\big|f^{(r)}_\Delta(x)\big|^{1/(r+1)}\mathrm dx\bigg)^{r+1},$$
where $\Delta=0$ if $f^{(r)}>0$ or $f^{(r)}<0,$ and $\Delta>0$ otherwise. Let
$A=(A_1,A_2,\ldots,A_m)$ with
$$A_i=h_i^{r+1}\max_{x\in I_i}|f^{(r)}(x)|,\quad 1\le i\le m,$$
where, as before, $\{I_i\}_{i=1}^m,$ is the partition used by $\overline M_{N,r}^{\,**}f$
and $h_i$ is the length of $I_i.$ Since
$\|A\|_\frac{1}{r+1}=\big(\sum_{i=1}^mA_i^{1/(r+1)}\big)^{r+1}\lessapprox\mathcal L_r(f),\;$
for $x\in I_i$ we have
$$\frac{\big|f(x)-L_{m,r}f(x)\big|}{\varrho(x)}\,\le\,\frac{\lambda}{r!}\,m\,A_i\,\lessapprox\,
\frac{\lambda}{r!}\,\bigg(\frac{m\,\|A\|_\infty}{\|A\|_\frac{1}{r+1}}\bigg)\mathcal L_r(f)
\,\lessapprox\,2^{r+1}\frac{\lambda}{r!}\,\mathcal L_r(f)\,m^{-r}.$$
We have the same upper bound for $S(f-L_{m,r}f)$ since by the mean value theorem
$$S(f-L_{m,r}f)=\int_a^b\frac{(f-L_{m,r}f)(x)}{\varrho(x)}\,\varrho(x)\,\mathrm dx
=\frac{(f-L_{m,r}f)(\xi)}{\varrho(\xi)},\qquad\xi\in[a,b].$$
Hence
$$B_m\,\lessapprox\,2^{r+2}\frac{\lambda}{r!}\,\mathcal L_r(f)\,m^{-r}.$$
Using the above inequality and the fact that $\sqrt{n}\,m^r\approx N^{r+1/2}/(c_rr!)$
with $c_r$ given by \eqref{ciar}, we get
$$\frac{\varepsilon^2n}{2B_m^2}\gtrapprox\left(\frac{\varepsilon\,N^{r+1/2}}
{\hat c_r\,\mathcal L_r(f)}\right)^{\!\!2},\quad\mbox{where}\quad
\hat c_r=2^{r+5/2}\lambda c_r.$$
The last inequality and \eqref{Hoeff} imply that the error exceeds $\varepsilon$ with probability at most $\delta$ provided
\begin{equation}\label{eNform}
N\,\gtrapprox\,\left(\hat c_r\,\mathcal L_r(f)\,\frac{\sqrt{\ln(2/\delta)}}{\varepsilon}\right)^{\frac1{r+1/2}},
\qquad\mbox{as}\quad\varepsilon\to 0^+.
\end{equation}
\smallskip
Now the question is how to obtain the random approximation $\overline M_{N,r}^{\,**}f$ for $N$ satisfying \eqref{eNform}.
\medskip
One possibility is as follows. We first execute the iteration $\mathbf{for}$ in the algorithm $\mathrm{PARTITION}$
of Section \ref{sec:second} for $k=2:m,$ where $m$ satisfies
$\lim_{\varepsilon\to 0^+}m\,\varepsilon^{\frac{1}{r+1/2}}=0,$ e.g.,
$$m=\left\lfloor\bigg(\frac{\sqrt{\ln(2/\delta)}}{\varepsilon}\bigg)^\frac{1}{r+1}\right\rfloor.$$
Let $\{I_i\}_{i=1}^m$ be the obtained partition. Then we replace $\mathcal L_r(f)$ in \eqref{eNform}
by its asymptotic equivalent
\begin{equation}\label{ElT}
\widetilde{\mathcal L}_r(f)=\bigg(\sum_{i=1}^{m}p_f(I_i)^\frac{1}{r+1}\bigg)^{r+1},
\end{equation} set
\begin{equation}\label{eNeps}
N_\varepsilon=\left\lfloor\left(\hat c_r\,\widetilde{\mathcal L}_r(f)\,\frac{\sqrt{\ln(2/\delta)}}
{\varepsilon}\right)^{\frac1{r+1/2}}\right\rfloor,
\end{equation}
and continue the iteration for $k=m+1:m_\varepsilon,$
where $m_\varepsilon$ is the number of subintervals corresponding to $N_\varepsilon.$
Finally, we complete the algorithm by $n_\varepsilon$ random samples.
Denote the final randomized approximation by $\mathcal A_{\varepsilon,\delta}f.$
Then we have $\mathcal A_{\varepsilon,\delta}f=\overline M_{N_\varepsilon,r}^{\,**}f$ and
$$\mathrm{Prob}\big(\,\big|Sf-\mathcal A_{\varepsilon,\delta}f|>\varepsilon\big)\,\lessapprox\,\delta,
\qquad\mbox{as}\quad\varepsilon\to 0^+.$$
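In code, \eqref{ElT} and \eqref{eNeps} amount to a few lines. In the sketch below (names ours) the priorities $p_f(I_i)$ come from the current partition, and $\lambda=\|P\|_{L^\infty(0,1)}$ and $c_r$ must be supplied for the chosen nodes:
\begin{lstlisting}[language=Python]
import numpy as np

def N_epsilon(priorities, eps, delta, r, lam, c_r):
    """Budget (eNeps) from the surrogate in (ElT), with
    hat c_r = 2^(r+5/2) * lambda * c_r."""
    L_tilde = np.sum(np.asarray(priorities) ** (1.0 / (r + 1))) ** (r + 1)
    c_hat = 2.0 ** (r + 2.5) * lam * c_r
    return int((c_hat * L_tilde * np.sqrt(np.log(2.0 / delta)) / eps)
               ** (1.0 / (r + 0.5)))
\end{lstlisting}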
\medskip
A disadvantage of the above algorithm is that it uses a priority queue and therefore its total running time is proportional to $N\log N.$ It turns out that by using recursion the running time can be reduced to $N.$
A crucial component of the algorithm with the running time
proportional to $N$ is the following recursive procedure, in which $\mathcal S$ is a set of intervals.
\medskip
$\mathbf{procedure}\;\mathrm{AUTO}\,(f,a,b,e)$
$\mathbf{if}\;p_f([a,b])\le e$
$\quad \mathrm{insert}(\mathcal S,[a,b])$
$\mathbf{else}$
$\quad c:=(a+b)/2;$
$\quad\mathrm{AUTO}(f,a,c,e);$
$\quad\mathrm{AUTO}(f,c,b,e)$
$\mathbf{endif}$
\medskip
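A direct Python transcription of $\mathrm{AUTO}$, with the set $\mathcal S$ represented by a list and the priority $p_f$ passed as a callable:
\begin{lstlisting}[language=Python]
def auto(a, b, e, prio, out):
    """Keep [a, b] once p_f([a, b]) <= e, otherwise bisect recursively."""
    if prio(a, b) <= e:
        out.append((a, b))
    else:
        c = 0.5 * (a + b)
        auto(a, c, e, prio, out)
        auto(c, b, e, prio, out)
\end{lstlisting}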
Similarly to $\mathcal A_{\varepsilon,\delta},$ the algorithm consists of two steps. First $\mathrm{AUTO}$
is run for $e=\varepsilon'$ satisfying $\varepsilon'\to 0^+$ and $\varepsilon/\varepsilon'\to 0^+$ as $\varepsilon\to 0^+,$ e.g.,
$$\varepsilon'=\varepsilon^\kappa,
\quad\mbox{where}\quad 0<\kappa<1.$$ Then $\mathcal L_r(f)$ in \eqref{eNform} is replaced by
$\widetilde{\mathcal L}_r(f)$ given by \eqref{ElT}, and $N_\varepsilon$ found from \eqref{eNeps}.
The recursion is resumed with the target value $e=\varepsilon'',$ where
$$\varepsilon''=\widetilde{\mathcal L}_r(f)\,m_\varepsilon^{-(r+1)}.$$
The algorithm is complemented by the corresponding $n_\varepsilon$ random samples.
Observe that the number $m''$ of subintervals in the final partition is asymptotically at least $m_\varepsilon.$
Indeed, for any function $g\in C^{r}([a,b])$ with $g^{(r)}(x)=\big|f_\Delta^{(r)}(x)\big|$ we have
$\mathcal L_r(g)=\mathcal L_r(f)$ and
$$\frac{\gamma}{r!}\,\widetilde{\mathcal L}_r(f)\big(m''\big)^{-r}\,\lessapprox\,
\|g-L_{m'',r}g\|_{L^1(a,b)}\,\approx\,\frac{\gamma}{r!}\sum_{i=1}^{m''}(h_i'')^{r+1}g^{(r)}(\xi_i)
\approx\frac{\gamma}{r!}\sum_{i=1}^{m''}p_f(I_i'')\lessapprox\frac{\gamma}{r!}\,m''\varepsilon'',$$
where the first inequality above follows from Proposition 2 of \cite{PlaskotaSamoraj2022}. This implies
$$m''\,\gtrapprox\,\bigg(\frac{\widetilde{\mathcal L}_r(f)}{\varepsilon''}\bigg)^\frac{1}{r+1}\approx\,m_\varepsilon,$$
as claimed.
Denote the resulting approximation by $\mathcal A_{\varepsilon,\delta}^*f.$ Observe that its running time
is proportional to $N_\varepsilon$ since recursion can be implemented in linear time.
\begin{thm}\label{thm:auto}
We have
$$\mathrm{Prob}\big(\,\big|Sf-\mathcal A_{\varepsilon,\delta}^*f|>\varepsilon\big)\,\lessapprox\,\delta,
\qquad\mbox{as}\quad\varepsilon\to 0^+.$$
\end{thm}
\bigskip
\phantomsection
Now we present the outcomes of the second automatic procedure $\mathcal A_{\varepsilon,\delta}^*$ for the test integral
\begin{equation}\label{autoint}
\int_0^1\cos\bigg(\dfrac{100\,x}{x+10^{-4}}\bigg)\,\mathrm dx.
\end{equation}
Although the derivatives fluctuate and vanish many times in this case, we take $\Delta=0.$
We compare the outcomes for $r=2$ and $r=4.$ In each case, we compute the number of breaches (i.e., runs in which the absolute error exceeds $\varepsilon = 10^{-3}$) based on $K=10\,000$ independent executions of the code. We also take $\varepsilon' = \varepsilon^{1/2}.$ In our testing, we expect the empirical probability of a breach to be less than $\delta=0.05.$ For completeness, we also present the maximum error over all executions together with the obtained $N_\varepsilon.$
\begin{center}
\begin{table}[!htp]
\begin{tabular}{|r|c|c|c|c|c|}
\hline
& $\varepsilon$ & $\delta$ & $K$ & $N_\varepsilon$ & $e_{max}$ \\
\hline
$r = 2$ & $10^{-3}$ & 0.05 & 10\,000 & 3\,092 & $4.5 \cdot 10^{-5}$ \\
\hline
\hline
$r = 4$ & $10^{-3}$ & 0.05 & 10\,000 & 811 & $5.5 \cdot 10^{-6}$ \\
\hline
\end{tabular}
\caption{Performance of the second automatic algorithm for the integral \eqref{autoint}.}
\label{tab:automatic}
\end{table}
\FloatBarrier
\end{center}
\vspace{-1.0cm}
Note that in both cases we did not identify any breaches. The magnitude of the maximum errors indicates a serious overestimation of $N_\varepsilon,$ but the results are satisfactory given the upper bound estimate in Theorem \ref{thm:auto}.
\section{Appendix}\label{sec:appendix}
Below we present a crucial part of the code in the Python programming language, in which all the algorithms were implemented. In addition, we provide relevant comments linked to particular fragments of the code.
\begin{lstlisting}[language=Python, caption=Second adaptive algorithm $\overline M^{**}_{N,r}$ -- crucial part of the code., basicstyle=\ttfamily\tiny]
def second_adaptive_MC(a, b, N, main_nodes, f, r, node_type='uniform'): #(1)
    partial_quad = Decimal('0.0') #(2)
    if node_type == 'uniform': #(3)
        n = int(np.floor((N-1)/(1 + 2 * r)))
        m = int(np.floor((2 * r * (N-1))/(2 * r ** 2 - r - 1)))
    mc_init = MC_samples_nonunif(m, n) #(4)
    #(5)
    MnR = Decimal('0.0')
    l = 0
    for i in range(len(main_nodes) - 1):
        h_i = main_nodes[i+1] - main_nodes[i]
        #(6)
        if node_type == 'uniform':
            interpol_ix = main_nodes[i] * np.ones(r) + np.multiply(optimalt_equidistant(r), h_i)
            interpol_iy = []
            for s in range(1, r+1):
                interpol_iy.append(f(interpol_ix[s-1]))
        #(7)
        if r == 2:
            SL_mr = Decimal('0.5') * Decimal(h_i) * (Decimal(f(main_nodes[i])) + Decimal(f(main_nodes[i+1])))
        elif r == 4:
            SL_mr = Decimal('0.125') * Decimal(h_i) * Decimal((f(main_nodes[i]) \
                + 3 * f(main_nodes[i] + 1/3 * h_i) + 3 * f(main_nodes[i] + 2/3 * h_i) \
                + f(main_nodes[i+1])))
        #(8)
        while l < n and mc_init[l] < (i+1):
            mc_point = math.modf(mc_init[l])[0] * h_i + main_nodes[i] #(9)
            MnR = MnR + (Decimal(f(mc_point)) - Decimal(lagrange(interpol_ix, interpol_iy, mc_point))) * Decimal(h_i) #(10)
            l = l + 1
        partial_quad = partial_quad + SL_mr #(11)
    #(12)
    MnR = Decimal(MnR) * Decimal(m) / Decimal(n)
    partial_quad = partial_quad + MnR
    return partial_quad
\end{lstlisting}
\begin{enumerate}
\item [(1)] The almost optimal partition \texttt{main\_nodes} is computed outside of this function in order to save computation time when the trajectories are computed subsequently. Moreover, the \texttt{node\_type} argument lets the user supply their own partitions, e.g., those based on Chebyshev polynomials of the second kind.
\item [(2)] In order to minimize errors resulting from (possibly) adding relatively small adjustments to the estimated quadrature value, we use the \texttt{Decimal} library. It enables us to increase the precision of intermediate computations, which is here set to 28 decimal digits.
\item [(3)] In our case, the interpolating polynomial is based on an equidistant mesh including the endpoints of a subinterval $I_i$. By \texttt{np} we refer to the NumPy library.
\item [(4)] Initializing the variables that control the Monte Carlo adjustments of our quadrature. In particular, \texttt{l} stores the number of random points used so far while we loop through the subintervals.
\item [(5)] The program calculates all interpolation nodes in the interval $I_i.$ For that reason, the function \texttt{optimalt\_equidistant} is executed to provide distinct $z_1, \ldots, z_r \in [0,1].$
\item [(6)] Depending on the value of $r,$ different formulas for the (nonadaptive, deterministic) quadrature $SL_{m,r}$ are used.
\item [(7)] Below, we calculate the Monte Carlo adjustment on the interval $I_i.$
\item [(8)] This code yields the random points used for the Monte Carlo adjustment. The \texttt{MC\_samples\_nonunif} function reports them in the form of numbers from 0 to $m.$ The integer part indicates the index $i$ of the subinterval, while the fractional part gives the relative position within $I_i.$ Both parts are extracted using the \texttt{math.modf} function.
\item [(9)] For stability reasons, the coefficients of the interpolating polynomial in the canonical basis are not stored. Therefore, for every point, the \texttt{lagrange} function is invoked separately. The computational cost of this solution is $\Theta(r^2)$ per point and is hence regarded as negligible.
\item [(10)] We decided to accumulate $SL_{m,r}$ over the subintervals first and only then add the cumulative adjustments. Since the latter are usually much smaller than the quadrature values, adding them one by one might result in the actual adjustment values being neglected. Note that the \texttt{Decimal} library was also used to address this constraint.
\item [(11)] Finally, we add the Monte Carlo result to the deterministic approximation.
\end{enumerate}
As can be observed, the current solution enables the user to supply their own interpolation meshes, increase the precision of the computations, and extend the method to arbitrary regularity $r \in \mathbb{N}.$
| {
"timestamp": "2022-06-08T02:12:57",
"yymm": "2206",
"arxiv_id": "2206.03125",
"language": "en",
"url": "https://arxiv.org/abs/2206.03125",
"abstract": "The theme of the present paper is numerical integration of $C^r$ functions using randomized methods. We consider variance reduction methods that consist in two steps. First the initial interval is partitioned into subintervals and the integrand is approximated by a piecewise polynomial interpolant that is based on the obtained partition. Then a randomized approximation is applied on the difference of the integrand and its interpolant. The final approximation of the integral is the sum of both. The optimal convergence rate is already achieved by uniform (nonadaptive) partition plus the crude Monte Carlo; however, special adaptive techniques can substantially lower the asymptotic factor depending on the integrand. The improvement can be huge in comparison to the nonadaptive method, especially for functions with rapidly varying $r$th derivatives, which has serious implications for practical computations. In addition, the proposed adaptive methods are easily implementable and can be well used for automatic integration.",
"subjects": "Numerical Analysis (math.NA)",
"title": "Monte Carlo integration of $C^r$ functions with adaptive variance reduction: an asymptotic analysis",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9869795102691455,
"lm_q2_score": 0.8006920092299293,
"lm_q1q2_score": 0.7902666071461738
} |
https://arxiv.org/abs/1708.02399 | The bidirectional ballot polytope | A bidirectional ballot sequence (BBS) is a finite binary sequence with the property that every prefix and suffix contains strictly more ones than zeros. BBS's were introduced by Zhao, and independently by Bosquet-M{é}lou and Ponty as $(1,1)$-culminating paths. Both sets of authors noted the difficulty in counting these objects, and to date research on bidirectional ballot sequences has been concerned with asymptotics. We introduce a continuous analogue of bidirectional ballot sequences which we call bidirectional gerrymanders, and show that the set of bidirectional gerrymanders form a convex polytope sitting inside the unit cube, which we refer to as the bidirectional ballot polytope. We prove that every $(2n-1)$-dimensional unit cube can be partitioned into $2n-1$ isometric copies of the $(2n-1)$-dimensional bidirectional ballot polytope. Furthermore, we show that the vertices of this polytope are all also vertices of the cube, and that the vertices are in bijection with BBS's. An immediate corollary is a geometric explanation of the result of Zhao and of Bosquet-M{é}lou and Ponty that the number of BBS's of length $n$ is $\Theta(2^n/n)$. | \section{\@startsection {section}{1}{\z@}
{-30pt \@plus -1ex \@minus -.2ex}
{2.3ex \@plus.2ex}
{\normalfont\normalsize\bfseries\boldmath}}
\renewcommand\subsection{\@startsection{subsection}{2}{\z@}
{-3.25ex\@plus -1ex \@minus -.2ex}
{1.5ex \@plus .2ex}
{\normalfont\normalsize\bfseries\boldmath}}
\renewcommand{\@seccntformat}[1]{\csname the#1\endcsname. }
\makeatother
\theoremstyle{plain}
\newtheorem{theorem}{Theorem}
\newtheorem{lemma}{Lemma}
\newtheorem{conjecture}{Conjecture}
\newtheorem{proposition}{Proposition}
\newtheorem{corollary}{Corollary}
\theoremstyle{definition}
\newtheorem{definition}{Definition}
\newtheorem{example}{Example}
\begin{document}
\begin{center}
\uppercase{\bf The bidirectional ballot polytope}
\vskip 20pt
{\bf Steven J. Miller}\\
{\smallit Department of Mathematics and Statistics, Williams College, Williamstown, Massachusetts}\\
{\tt sjm1@williams.edu; Steven.Miller.MC.96@aya.yale.edu}\\
\vskip 10pt
{\bf Carsten Peterson}\\
{\smallit Department of Mathematics, University of Michigan, Ann Arbor, Michigan}\\
{\tt carstenp@umich.edu}\\
\vskip 10pt
{\bf Carsten Sprunger}\\
{\smallit Department of Mathematics, Stanford University, Palo Alto, California}\\
{\tt csprun@stanford.edu}\\
\vskip 10pt
{\bf Roger Van Peski}\\
{\smallit Department of Mathematics, Massachusetts Institute of Technology, Cambridge, Massachusetts}\\
{\tt rvp@mit.edu}\\
\end{center}
\vskip 30pt
\centerline{\smallit Received: , Revised: , Accepted: , Published: }
\vskip 30pt
\centerline{\bf Abstract}
\noindent
A bidirectional ballot sequence (BBS) is a finite binary sequence with the property that every prefix and suffix contains strictly more ones than zeros. BBS's were introduced by Zhao, and independently by Bosquet-M{\'e}lou and Ponty as $(1,1)$-culminating paths. Both sets of authors noted the difficulty in counting these objects, and to date research on bidirectional ballot sequences has been concerned with asymptotics. We introduce a continuous analogue of bidirectional ballot sequences which we call bidirectional gerrymanders, and show that the set of bidirectional gerrymanders form a convex polytope sitting inside the unit cube, which we refer to as the bidirectional ballot polytope. We prove that every $(2n-1)$-dimensional unit cube can be partitioned into $2n-1$ isometric copies of the $(2n-1)$-dimensional bidirectional ballot polytope. Furthermore, we show that the vertices of this polytope are all also vertices of the cube, and that the vertices are in bijection with BBS's. An immediate corollary is a geometric explanation of the result of Zhao and of Bosquet-M{\'e}lou and Ponty that the number of BBS's of length $n$ is $\Theta(2^n/n)$.
\pagestyle{myheadings}
\markright{\smalltt INTEGERS: 18 (2018)\hfill}
\thispagestyle{empty}
\baselineskip=12.875pt
\vskip 30pt
\section{Introduction}\label{sec:intr}
In \cite{Zh1}, Zhao introduced a family of combinatorial objects called bidirectional ballot sequences, defined as follows.
\begin{definition}
A finite 0-1 sequence is a \textbf{bidirectional ballot sequence} (\textbf{BBS}) if every prefix and every suffix contains strictly more ones than zeros. Let $B_n$ denote the number of bidirectional ballot sequences of length $n$.
\end{definition}
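For small $n$ the numbers $B_n$ can be computed by brute force directly from the definition; the following sketch (ours, for illustration only) does exactly that:
\begin{verbatim}
from itertools import product

def is_bbs(seq):
    # every prefix and every suffix must have strictly more 1's than 0's
    n = len(seq)
    pref = suff = 0
    for i in range(n):
        pref += 1 if seq[i] else -1
        suff += 1 if seq[n - 1 - i] else -1
        if pref <= 0 or suff <= 0:
            return False
    return True

def count_bbs(n):
    return sum(is_bbs(s) for s in product((0, 1), repeat=n))

# count_bbs(n) for n = 1, ..., 6 gives 1, 1, 1, 1, 2, 3
\end{verbatim}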
Bidirectional ballot sequences have a natural interpretation in terms of lattice paths. Suppose we start at $(0, 0)$ and take a finite number of steps either of the form $(1, 1)$ or $(1, -1)$. We call such a path a \textbf{standard lattice path}. We define the length of the path to be the number of steps it contains. We define the height of a point in the lattice path to be its $y$-coordinate. Bidirectional ballot sequences of length $n$ are in bijection with standard lattice paths of length $n$ whose unique minimum height is attained at the first point in the path, and whose unique maximum height is attained at the last point in the path. The bijection is given by identifying the digit `0' in a BBS with a step of the form $(1, -1)$ and the digit `1' with a step of the form $(1, 1)$ (for an example of this, see Section \ref{section:polytope_vertices}).
From this perspective, bidirectional ballot sequences were independently introduced by Bosquet-M{\'e}lou and Ponty \cite{BP} as a special type of what they call culminating paths. In particular, an $(a, b)$-culminating path is a sequence of lattice points starting at $(0, 0)$ such that each step is of the form $(1, a)$ or $(1, -b)$ and such that the unique minimum height is achieved at the first point and the unique maximum height is achieved at the last point. Thus bidirectional ballot sequences are in bijection with $(1, 1)$-culminating paths. In \cite{BP} it is noted that $(1, 1)$-culminating paths had been used in \cite{FGK} with connections to theoretical physics, and general $(a, b)$-culminating paths had been used in \cite{AGMML}, \cite{CR}, and \cite{PL} with connections to bioinformatics.
In both \cite{Zh1} and \cite{BP}, it is noted that unlike other easy-to-define classes of lattice paths (e.g. Dyck paths), the enumeration of BBS's is tricky; there is no obvious recursive structure to such paths. Both sets of authors focused on the asymptotics of $B_n$. In particular, \cite{BP} obtained a generating function in $n$ for the number of $(a, b)$-culminating paths of length $n$ with fixed height $k$ (the generating function for the $(1, 1)$ case was found in \cite{FGK}). Furthermore, they showed that $B_n \sim 2^n/4n$. Independently, \cite{Zh1} showed that $B_n = \Theta (2^n/n)$ and stated without detailed proof that $B_n \sim 2^n/4n$. Additionally in \cite{Zh1}, the author conjectured an even finer asymptotic expression for $B_n$. This conjecture was later proved by Hackl, Heuberger, Prodinger and Wagner \cite{HHPW}, who refined the asymptotic expression even further using techniques from analytic combinatorics.
The motivation for the study of culminating paths in \cite{BP} was the observation that such paths had been independently introduced and utilized in disparate contexts (theoretical physics and bioinformatics) as well as a general interest in understanding subfamilies of lattice paths. However, the motivation in \cite{Zh1}, as well as our original motivation for studying BBS's, arises from additive combinatorics. Let $A \subset \mathbb{Z}$ be a finite set of integers. We define the sumset $A + A$ as those elements in $\mathbb{Z}$ expressible as $a + b$ with $a, b \in A$. Similarly, the difference set $A - A$ is those elements expressible as $a - b$ with $a, b \in A$. We say that $A$ is a \textbf{more sums than differences} (\textbf{MSTD}) set if $|A + A| > |A - A|$. Because of the commutativity of addition, one may intuitively expect that in general $|A - A| \geq |A + A|$. This intuition turns out to be correct in some contexts (see \cite{HM}), in particular if each element in $[n] := \{1,2,\ldots,n\}$ is independently chosen to be in $A$ with some probability $p(n)$ tending to zero. Let $\rho_n$ be the proportion of subsets of $[n]$ which are MSTD. In \cite{MO}, it was shown that $\rho_n > 2 \times 10^{-7}$ for $n \geq 15$, and in \cite{Zh2} it was shown that $\lim_{n \to \infty} \rho_n$ exists and is positive; experimental data suggests this limit to be of order $10^{-4}$. Thus, in this sense, a positive proportion of sets are MSTD. However, the techniques in \cite{MO} are probabilistic, and to date no explicit constant density family of MSTD subsets of $[n]$ as $n \to \infty$ is known.
The best density explicit construction of MSTD sets is due to Zhao in \cite{Zh1} using BBS's. Let $B$ be a binary sequence of length $n$. We can associate to $B$ the set $A \subseteq [n]$ defined as $A := \{i : B_i = 1\}$. For example if $B = 01101$, then $A = \{2, 3, 5\}$. Those subsets $A$ of $[n]$ arising from BBS's have the property that $A + A = \{i : 2 \leq i \leq 2n\}$, which is to say that the sumset is as large as possible (similarly it turns out that the difference set is also as large as possible). Using this property, Zhao was able to translate those subsets of $[n]$ arising from BBS's and append extra elements to the fringes to obtain an MSTD set for each set arising from a BBS. From this, one immediately gets a density $\Theta(1/n)$ family of MSTD sets.
Motivated by the use of BBS's in additive combinatorics, in this paper we study the natural analogue of BBS's in a continuous setting, which we call \textbf{bidirectional gerrymanders}; in the related paper \cite{MP}, we use similar ideas as in this paper to study the analogue of MSTD sets in a continuous setting.
We first set some notation and then describe our main results. Let $\mathbb{I}_n$ denote the set of all subsets of $\mathbb{R}$ consisting of exactly $n$ disjoint open intervals such that the leftmost interval starts at 0. Suppose $\mathcal{A} \in \mathbb{I}_n$. If we translate $\mathcal{A}$, then the sumset and difference set merely translate as well. Thus, when studying additive behavior, we do not lose any generality by restricting our attention to collections of intervals such that the leftmost interval starts at zero. We can topologize $\mathbb{I}_n$ by identifying it with $\mathbb{R}^{2n-1}_{\geq 0}$, the non-negative orthant: let $\mathcal{A}= I_1 \cup I_2 \cup \dots \cup I_n \in \mathbb{I}_n$ with $I_i$ to the left of $I_j$ for $i < j$. Suppose $I_j = (a_j, b_j)$. We then identify $\mathcal{A}$ with the vector $v_{\mathcal{A}} = [b_1 - a_1, a_2 - b_1, b_2 - a_2, a_3 - b_2, \dots, b_n - a_n]$. Thus the first entry is the length of the first interval, the second entry is the size of the gap between the first and second intervals, the third entry is the length of the second interval, etc. We shall find it convenient to restrict our attention to the following set: let $\mathbb{J}_n \subset \mathbb{I}_n$ be the set of collections of $n$ non-overlapping intervals such that the leftmost interval starts at zero, the length of each interval is between 0 and 1, and the gap between adjacent intervals is between 0 and 1 (if we scale $\mathcal{A} \in \mathbb{I}_n$ by $\alpha \neq 0$, then the sumset and difference set scale by $\alpha$ as well, so $\alpha \mathcal{A}$ has the same essential additive behavior as $\mathcal{A}$; note that up to scaling, every element of $\mathbb{I}_n$ is an element of $\mathbb{J}_n$). We can topologize $\mathbb{J}_n$ by identifying it with $C_{2n-1} = [0, 1]^{2n-1}$, the $2n-1$ dimensional unit cube\footnote{Because the endpoints of an open interval cannot be equal, strictly speaking we are taking $\mathbb{I}_n$ to be the set of all weakly increasing $2n$-tuples of points on the real line and identifying these with collections of $n$ intervals by treating them as endpoints (and correspondingly for $\mathbb{J}_n$). However, in the edge case when $a_j=b_j$, we still allow an `empty' interval at $a_j$, which is included in the data of an element of $\mathbb{I}_n$. Including these degenerate cases allows us to indeed identify $\mathbb{J}_n$ with the closed unit cube.}. For other ways to topologize $\mathbb{I}_n$ and related spaces, see \cite{MP}.
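The identification of $\mathcal{A}$ with $v_{\mathcal{A}}$ is easy to make concrete; a small sketch of the map (ours, for illustration):
\begin{verbatim}
def interval_vector(intervals):
    # [b1-a1, a2-b1, b2-a2, ..., bn-an] for A = (a1,b1) u ... u (an,bn)
    v = []
    for k, (a, b) in enumerate(intervals):
        if k > 0:
            v.append(a - intervals[k - 1][1])  # gap before I_k
        v.append(b - a)                        # length of I_k
    return v

# interval_vector([(0, 0.3), (0.5, 0.9)]) == [0.3, 0.2, 0.4]
\end{verbatim}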
The bidirectional gerrymanders in $\mathbb{J}_n$ form a convex, compact polytope contained in $C_{2n-1}$ which we call the \textbf{bidirectional ballot polytope}, $\mathcal{P}_n$. This polytope has a number of extraordinary combinatorial features. In Section \ref{sec:vol_polytope} we formally define this polytope and show that $C_{2n-1}$ can be partitioned into $2n-1$ disjoint isometric copies of $\mathcal{P}_n$, which in particular shows that the volume of $\mathcal{P}_n$ is $1/(2n-1)$. In Section \ref{section:cube_vertices} we show that the vertices of $\mathcal{P}_n$ are vertices of $C_{2n-1}$. Finally in Section \ref{section:polytope_vertices} we show that the vertices of $\mathcal{P}_n$ are in bijection with $B_{2n+3}$, and that a particular subset of the vertices is in bijection with $B_{2n-1}$. From this we are able to immediately rederive geometrically that $B_n = \Theta (2^n/n)$, i.e., there are positive constants $\alpha$ and $\beta$ such that for all $n$ sufficiently large we have $\alpha 2^n/n \le B_n \le \beta 2^n/n$.
\section{The Bidirectional Ballot Cone and Polytope} \label{sec:vol_polytope}
We first set some notation. Let $m = 2n-1$ for some $n \in \mathbb{N}$.
\begin{definition}\label{def:bal_vecs}
Let the set of \textbf{left ballot vectors}, $L_n$, and the set of \textbf{right ballot vectors}, $R_n$, be the following sets of vectors in $\mathbb{R}^m$:
\begin{gather}
L_n := \{[1, -1, 0, \dots, 0], [1, -1, 1, -1, 0, \dots, 0], \dots, [1, -1, \dots, 1, -1, 0]\}, \\
R_n := \{[0, \dots, 0, -1, 1], [0, \dots, 0, -1, 1, -1, 1], \dots, [0, -1, 1, \dots, -1, 1] \}.
\end{gather}
We define $V_n$, the set of \textbf{ballot vectors}, as $V_n = L_n \cup R_n$.
\end{definition}
\begin{definition}
The \textbf{bidirectional ballot cone}, $\mathcal{B}_n$, is the set of $x \in \mathbb{R}^m$ such that $x \cdot w \geq 0$ for all $w \in V_n$. When the value of $n$ is obvious, we simply refer to it as $\mathcal{B}$.
\end{definition}
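Membership in $\mathcal{B}_n$ is a finite list of linear inequalities and can be tested directly; the following sketch (ours) constructs $V_n$ and checks the defining conditions:
\begin{verbatim}
import numpy as np

def ballot_vectors(n):
    # the left and right ballot vectors L_n, R_n in R^(2n-1)
    m = 2 * n - 1
    vecs = []
    for k in range(1, n):
        v = np.zeros(m); v[:2 * k] = [1, -1] * k          # left
        w = np.zeros(m); w[m - 2 * k:] = [-1, 1] * k      # right
        vecs += [v, w]
    return vecs

def in_ballot_cone(x, n):
    # x lies in B_n iff x . w >= 0 for every ballot vector w
    return all(np.dot(x, w) >= 0 for w in ballot_vectors(n))
\end{verbatim}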
We now define the continuous analogue of BBS's, and show in Proposition \ref{prop:zhao_analogue} that it is the right generalization.
\begin{definition}
Let $\mathcal{A} \in \mathbb{I}_n$. We call $\mathcal{A}$ a \textbf{bidirectional gerrymander} if $v_{\mathcal{A}} \in \mathcal{B}$.
\end{definition}
\begin{proposition}\label{prop:zhao_analogue}
Suppose $\mathcal{A} = I_1 \cup \dots \cup I_n \in \mathbb{I}_n$ with endpoints ordered as before. Suppose the right endpoint of $I_n$ is $b$. Then, $\mathcal{A}$ is a bidirectional gerrymander if and only if $\mu(\mathcal{A} \cap [0, t]) \geq t/2$ and $\mu(\mathcal{A} \cap [b-t, b]) \geq t/2$ for all $t \in [0, b]$.
\end{proposition}
\begin{proof}
Clearly if these measure conditions hold, then $\mathcal A$ is a bidirectional gerrymander, as setting $t$ to be left and right endpoints of the $I_i$ yields the nonnegativity conditions of pairing with the ballot vectors. The condition $\mu(\mathcal{A} \cap [0, t]) \geq t/2$ is equivalent to the non-negativity of $\mu(\mathcal{A} \cap [0,t]) - \mu((\ensuremath{\mathbb{R}} \setminus \mathcal{A}) \cap [0,t])$. For $t \in [0,b]$, $\mu(\mathcal{A} \cap [0,t]) - \mu((\ensuremath{\mathbb{R}} \setminus \mathcal{A}) \cap [0,t])$ takes a local minimum only if $t$ is a left endpoint of an interval $I_i$. Hence if $v_\mathcal{A} \cdot w \geq 0$ for all $w \in L_n$, then the function is nonnegative at its minima and so the first measure condition holds. Similarly, the second measure condition holds as well by the nonnegativity of pairing with the right ballot vectors.
\end{proof}
A BBS in the sense of \cite{Zh1} is a binary sequence for which any subsequence truncated on the left or right contains strictly more $1$'s than $0$'s, and Proposition \ref{prop:zhao_analogue} shows that a bidirectional gerrymander is a subset of $\mathbb{R}$ contained in $[0, b]$ for which any subset obtained by truncating on the left or right contains ``more'' points (in a measure theoretic sense) in the original set than points not in this set. It is thus clear that they are a natural analogue, but, as we shall see, what is surprising is that they can be used to prove results about standard (discrete) BBS's.
\begin{definition}
The \textbf{bidirectional ballot polytope} $\mathcal{P}_n$, is defined as $\mathcal{B}_n \cap C_{m}$. Equivalently, it is those vectors $v_{\mathcal{A}}$ such that $\mathcal{A} \in \mathbb{J}_n$ is a bidirectional gerrymander. When the value of $n$ is obvious, we shall refer to it simply as $\mathcal{P}$.
\end{definition}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.5]{generate_polytope2.eps}
\caption{The polytope $\mathcal{P}_2$ (red) sitting inside $C_3$. Notice that adding two additional copies of $\mathcal{P}_2$, rotated about the main diagonal of the cube by $2 \pi/3$ and $4 \pi/3$ respectively, would result in a partition of $C_3$ (neglecting overlap of boundaries).}
\end{center}
\end{figure}
\begin{definition}
Let $Z_m$ be the cyclic group of order $m$ with generator $\rho$. Let $Z_m$ act on $\mathbb{R}^{m}$ by cyclically permuting the entries (e.g. $\rho^2([0, 1, 2, 3, 4]) = [3, 4, 0, 1, 2]$). For a given set of vectors $V$ and $\sigma \in Z_m$, let $\sigma(V) := \{\sigma(v): v \in V\}$. For each $\sigma \in Z_m$, define $\mathcal{B}_\sigma$ by
\begin{gather}
\mathcal{B}_\sigma\ :=\ \{v \in \mathbb{R}^m_{\geq 0} : v \cdot w \geq 0 \ \textnormal{for all} \ w \in \sigma(V_n)\},
\end{gather}
and $\mathcal{P}_\sigma$ likewise. Note that $\mathcal{B}_\sigma = \sigma^{-1}(\mathcal{B})$, and that $\mathcal{B} = \mathcal{B}_{\textnormal{Id}}$ and $\mathcal{P} = \mathcal{P}_{\textnormal{Id}}$.
\end{definition}
\begin{theorem}\label{thm:permute_polytope}
The non-negative orthant, $\mathbb{R}^{m}_{\geq 0}$, is contained in $\bigcup_{\sigma \in Z_m} \mathcal{B}_\sigma$. Furthermore, for $\sigma_1 \neq \sigma_2$, the interiors of $\mathcal{B}_{\sigma_1}$ and $\mathcal{B}_{\sigma_2}$ are disjoint.
\end{theorem}
\begin{proof}
Let $\tau = \rho^2 \in Z_{m}$ be the cyclic shift by two places. Because $m$ is odd, $\tau$ generates $Z_{m}$. In particular, we see that the set of left and right ballot vectors $V_n$ as defined in Definition \ref{def:bal_vecs} is equal to
\begin{equation}
V_n\ =\ \left\{\sum_{i=0}^k \tau^{i}(w): 0 \leq k \leq 2n-3\right\},
\end{equation}
where $w=[1,-1,0,\ldots,0]$.
If $\ell < k \leq 2n-3$ then
\begin{equation}
\sum_{i=0}^k \tau^{i}(w)-\sum_{i=0}^\ell \tau^{i}(w)\ =\ \sum_{i=\ell+1}^k \tau^{i}(w)\ =\ \tau^{\ell+1} \sum_{i=0}^{k-\ell-1} \tau^{i}(w),
\end{equation}
and since $\sum_{i=0}^{2n-2} \tau^{i}(w) = [0,\ldots,0]$ we have similarly that, for $0 \leq k \leq \ell$,
\begin{equation}
\sum_{i=0}^k \tau^{i}(w)-\sum_{i=0}^\ell \tau^{i}(w)\ =\ \tau^{\ell+1} \sum_{i=0}^{(2n-2)+(k-\ell)} \tau^{i}(w).
\end{equation}
Then for each $\ell$ we have that
\begin{equation}
\left\{\sum_{i=0}^k \tau^{i}(w)-\sum_{i=0}^\ell \tau^{i}(w): 0 \leq k \leq 2n-2, k \neq \ell\right\} \ = \ \tau^{\ell+1}(V_n).
\end{equation}
Now let $w_k = \sum_{i=0}^k \tau^{i}(w)$, take any $v \in \ensuremath{\mathbb{R}}_{\geq 0}^{m}$, and choose $0 \leq \ell \leq 2n-2$ minimizing $v \cdot w_\ell$ (this $\ell$ may not be unique). Then
\begin{equation}
v \cdot \left(\sum_{i=0}^k \tau^{i}(w)-\sum_{i=0}^\ell \tau^{i}(w)\right)\ \geq\ 0
\end{equation}
for all $0 \leq k \leq 2n-2$. Therefore $v \cdot r \geq 0$ for all $r \in \tau^{\ell+1}(V_n)$, so $v \in \mathcal{B}_{\tau^{\ell+1}}$. This shows that $\ensuremath{\mathbb{R}}_{\geq 0}^{m} = \bigcup_{\sigma \in Z_{m}} \mathcal{B}_\sigma$. Intersecting with $C$ gives the corresponding result for $\mathcal{P}$.
Conversely, if $v \in \text{Int}(\mathcal{B}_{\tau^{\ell+1}}) \cap \text{Int}(\mathcal{B}_{\tau^{k+1}})$ and $\tau^{\ell+1} \neq \tau^{k+1}$, then (because taking the interior simply changes the inequalities defining $\mathcal{B}_{\tau^{\ell+1}}$ to strict ones) we have both
\begin{align*}
v \cdot \left(\sum_{i=0}^k \tau^{i}(w)-\sum_{i=0}^\ell \tau^{i}(w)\right)\ >\ 0 \\
v \cdot \left(\sum_{i=0}^\ell \tau^{i}(w)-\sum_{i=0}^k \tau^{i}(w)\right)\ >\ 0.
\end{align*}
This is a contradiction, so the interiors of distinct regions $\mathcal{B}_{\tau^{\ell+1}}$ are disjoint, and it follows immediately that the interiors of distinct regions $\mathcal{P}_{\tau^{\ell+1}}$ are disjoint.
\end{proof}
\begin{corollary}\label{cor:volume}
The unit cube $C_m$ equals $\bigcup_{\sigma \in Z_m} \mathcal{P}_\sigma$. Furthermore, for $\sigma_1 \neq \sigma_2$, the interiors of $\mathcal{P}_{\sigma_1}$ and $\mathcal{P}_{\sigma_2}$ are disjoint. Consequently, the volume of $\mathcal{P}$ is exactly $1/m$.
\end{corollary}
\begin{proof}
Intersecting the nonnegative orthant and the regions $\mathcal{B}_\sigma$ with $C_m$, Theorem \ref{thm:permute_polytope} yields that $C_m$ is partitioned into $m$ regions produced by permuting the coordinates of $\mathcal{P}$. Because the matrix representing $\tau = \rho^2$ is a permutation matrix, it has determinant $\pm 1$ and hence leaves volume invariant. Therefore, $\text{Vol}(\mathcal{P}_\sigma) =\text{Vol}(\mathcal{P})$ for all $\sigma \in Z_m$, so $\text{Vol}(\mathcal{P}) = 1/m$.
\end{proof}
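As a sanity check (not part of the proof), the corollary can be tested by Monte Carlo sampling: the fraction of uniformly random points of $C_m$ that are bidirectional gerrymanders should approach $1/m$. The following Python sketch is our own; the helper \texttt{is\_gerrymander} simply checks the left and right ballot inequalities, reading the vector forwards and backwards.
\begin{verbatim}
import random

def is_gerrymander(v):
    """Left and right ballot conditions for v of odd length m = 2n-1."""
    n = (len(v) + 1) // 2
    for u in (v, v[::-1]):            # left conditions, then right ones
        s = 0.0
        for k in range(n - 1):
            s += u[2*k] - u[2*k + 1]  # partial alternating sum
            if s < 0:
                return False
    return True

m, trials = 5, 200000
hits = sum(is_gerrymander([random.random() for _ in range(m)])
           for _ in range(trials))
print(hits / trials)                  # should be close to 1/m = 0.2
\end{verbatim}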
\begin{corollary}\label{cor:necklace}
For any vector $v \in \ensuremath{\mathbb{R}}_{\geq 0}^{m}$, there exists $\sigma \in Z_{m}$ such that the vector $v'=(v_1',v_2',\ldots,v_{m}')=\sigma(v)$ has the following property: for all $1 \leq k \leq n$ (with the convention $v'_0 = v'_{2n} = 0$),
\begin{equation}\label{eq:necklace1}
\sum_{i=1}^k (v_{2i-1}' - v'_{2i})\ \geq\ 0
\end{equation}
and
\begin{equation}\label{eq:necklace2}
\sum_{i=1}^k (v'_{2n-(2i-1)} - v'_{2n-2i}) \ \geq\ 0.
\end{equation}
If, furthermore, these sums are all strictly positive, then $\sigma$ is unique.
\end{corollary}
One interpretation of the above corollary is as follows. Suppose you have a necklace with an odd number of beads. On each bead you write a non-negative number. Then there exists some place where you can cut the necklace such that when you lay out the necklace and think of the sequence of values on the beads as a vector in $\mathbb{R}^{m}$, this vector is a bidirectional gerrymander. Furthermore, if the numbers you write on the beads are ``generic'', in the sense that the inequalities corresponding to \eqref{eq:necklace1} and \eqref{eq:necklace2} are strict, then there is exactly one such place you can cut the necklace.
\begin{center}
\begin{figure}[h]
\begin{tikzpicture}[scale = 1.2]
\draw (0, 1) circle [radius=0.4];
\draw ({-0.951*(1/3)}, {2/3*1 + (0.309)*(1/3)}) -- (-2/3*0.951, {1/3 + (0.309)*2/3});
\node at (0, 1) {\small 1.78};
\draw (-0.951, 0.309) circle [radius=0.4];
\draw ({2/3*(-0.951) + 1/3*(-0.588)}, {2/3*(0.309)+1/3*(-0.809)}) -- ({1/3*(-0.951) + 2/3*(-0.588)}, {1/3*(0.309) + 2/3*(-0.809)});
\node at (-0.951, 0.309) {\small 1.55};
\draw (-0.588, -0.809) circle [radius=0.4];
\draw ({2/3*(-0.588) + 1/3*(0.588)}, {2/3*(-0.809)+1/3*(-0.809)}) -- ({1/3*(-0.588) + 2/3*(0.588)}, {1/3*(-0.809) + 2/3*(-0.809)});
\node at (-0.588, -0.809) {\small 0.76};
\draw (0.588, -0.809) circle [radius=0.4];
\node at (0.588, -0.809) {\small 2.06};
\draw ({2/3*(0.588) + 1/3*(0.951)}, {2/3*(-0.809)+1/3*(0.309)}) -- ({1/3*(0.588) + 2/3*(0.951)}, {1/3*(-0.809) + 2/3*(0.309)});
\draw (0.951, 0.309) circle [radius=0.4];
\node at (0.951, 0.309) {\small 3.21};
\draw ({2/3*(0.951) + 1/3*(0)}, {2/3*(0.309)+1/3*(1)}) -- ({1/3*(0.951) + 2/3*(0)}, {1/3*(0.309) + 2/3*(1)});
\draw [dashed] ({0.5*(0.951 + 0.588) - 0.3*1}, {0.5*(0.309 - 0.809) + 0.3*((0.951 - 0.588)/(0.309 + 0.809))}) -- ({0.5*(0.951 + 0.588) + 0.3*1}, {0.5*(0.309 - 0.809) - 0.3*((0.951 - 0.588)/(0.309 + 0.809))});
\draw [->] (2, 0) -- (3, 0);
\draw (4, 0) circle [radius = 0.4];
\draw (5.2, 0) circle [radius = 0.4];
\draw (6.4, 0) circle [radius = 0.4];
\draw (7.6, 0) circle [radius = 0.4];
\draw (8.8, 0) circle [radius = 0.4];
\node at (4, 0) {\small 3.21};
\node at (5.2, 0) {\small 1.78};
\node at (6.4, 0) {\small 1.55};
\node at (7.6, 0) {\small 0.76};
\node at (8.8, 0) {\small 2.06};
\draw (3.4, 0) -- (3.6, 0);
\draw (4.4, 0) -- (4.8, 0);
\draw (5.6, 0) -- (6.0, 0);
\draw (6.8, 0) -- (7.2, 0);
\draw (8.0, 0) -- (8.4, 0);
\draw (9.2, 0) -- (9.4, 0);
\end{tikzpicture}
\caption{An example ``cut'' of a necklace as in Corollary \ref{cor:necklace}.}
\end{figure}
\end{center}
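Corollary \ref{cor:necklace} is effectively an algorithm: test each of the $m$ rotations of the bead vector against the ballot conditions. A small sketch of ours (reusing \texttt{is\_gerrymander} from the sketch after Corollary \ref{cor:volume}), applied to the beads in the figure, finds exactly one admissible cut.
\begin{verbatim}
def cut_necklace(v):
    """All rotations of v satisfying the conditions of the corollary."""
    return [r for r in range(len(v)) if is_gerrymander(v[r:] + v[:r])]

beads = [1.78, 1.55, 0.76, 2.06, 3.21]   # bead values from the figure
print(cut_necklace(beads))   # [4]: the unique cut starts the list at 3.21
\end{verbatim}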
\section{Vertices of the Bidirectional Ballot Polytope are Vertices of the Cube} \label{section:cube_vertices}
In this section we show that the vertices of $\mathcal{P}_n$ are also vertices of $C_m$, the unit cube. We previously defined $\mathcal{P}_n$ as the intersection of the unit cube with the ballot cone; equivalently, it is the set of vectors $[\ell_1, g_1, \dots, g_{n-1}, \ell_n]$ satisfying the following system of inequalities:
\begin{gather}
\label{big_matrix}
\begin{array}{@{}r@{}l}
\begin{array}[]{@{}r@{}r}
\text{cube vectors} & \left. \begin{array}{c} \vphantom{1} \\ \vphantom{-1} \\ \vphantom{1} \\ \vphantom{-1} \\ \vphantom{\vdots} \end{array} \right\{ \\
\text{left ballot vectors} & \left. \begin{array}{c} \vphantom{1} \\ \vphantom{1} \\ \vphantom{\vdots} \end{array} \right\{ \\
\text{right ballot vectors} & \left. \begin{array}{c} \vphantom{1} \\ \vphantom{1} \\ \vphantom{\vdots} \end{array} \right\{ \\
\end{array}
&
\left[
\begin{array}{c c c c c c c c c c c}
1 & 0 & 0 & 0 & 0 & \dots & 0 & 0 & 0 & 0 & 0 \\
-1 & 0 & 0 & 0 & 0 & \dots & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & \dots & 0 & 0 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 & 0 & \dots & 0 & 0 & 0 & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots & \vdots \\
1 & -1 & 0 & 0 & 0 & \dots & 0 & 0 & 0 & 0 & 0 \\
1 & -1 & 1 & -1 & 0 & \dots & 0 & 0 & 0 & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots & \vdots \\
0 & 0 & 0 & 0 & 0 & \dots & 0 & 0 & 0 & -1 & 1 \\
0 & 0 & 0 & 0 & 0 & \dots & 0 & -1 & 1 & -1 & 1 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots & \vdots \\
\end{array}
\right]
\end{array}
\begin{bmatrix}
\ell_1 \\
g_1 \\
\ell_2 \\
g_2 \\
\vdots \\
g_{n-1} \\
\ell_n
\end{bmatrix}
\ \geq \
\begin{bmatrix}
0 \\
-1 \\
0 \\
-1 \\
\vdots \\
0 \\
0 \\
\vdots \\
0 \\
0 \\
\vdots
\end{bmatrix}
\end{gather}
The first collection of rows in the above matrix is necessary to ensure that we only deal with points inside the unit cube. Thus we call any vector of the form $[0, \dots, 0, \pm 1, 0, \dots, 0]$ a \textbf{cube vector}.
Before proving the main result of this section, we must review a few concepts related to convex polytopes. We follow the terminology of \cite{BT}.
\begin{definition}
Let $P$ be a polytope in $\mathbb{R}^n$ defined by the inequalities $a_i^T x \geq b_i$ for $i \in \{1, 2, \dots, k\}$. Let $x^*$ be such that for some $i$, $a_i^T x^* = b_i$. Then, we say that the $i$\textsuperscript{th} constraint is \textbf{active} at $x^*$.
\end{definition}
\begin{definition}
A vector $x^* \in \mathbb{R}^n$ is called a \textbf{basic solution} if, among all of the constraints that are active at $x^*$, there is some collection of $n$ of them whose vectors $a_i$ are linearly independent. If $x^*$ is a basic solution that satisfies all of the constraints, then it is called a \textbf{basic feasible solution}.
\end{definition}
Part of what makes the study of convex polytopes interesting is that there are several equivalent but strikingly different ways of defining what the vertices of a polytope are. In particular, one definition is that a point $v$ is a vertex if and only if it is a basic feasible solution.
The following shorthand will be helpful in the proof of the main theorem of this section.
\begin{definition}
A matrix/vector is called \textbf{flat} if all of its entries are $0$, $1$, or $-1$.
\end{definition}
Let $Q_n$ denote the set of vertices in the polytope $\mathcal{P}_n$. Let $S_n$ denote the set of vertices of the unit cube $C_{m}$. The main result of this section is the following.
\begin{theorem}\label{theorem:cube}
All of the vertices of the bidirectional ballot polytope $\mathcal{P}_n$ are also vertices of the unit cube $C_{m}$; i.e., $Q_n \subset S_n$.
\end{theorem}
\begin{proof}
By the above discussion, we know that we must show that all basic feasible solutions are vertices of the cube. Throughout this proof, we let $n$ be fixed, and let $m = 2n-1$. Thus we unambiguously let $\mathcal{P} = \mathcal{P}_n$, $C = C_{2n-1}$, $Q = Q_n$, and $S = S_n$. Notice that $\mathbb{Z}^m \cap \mathcal{P} \subset S$. From this observation, we now describe the strategy for proving the theorem. Suppose $x^*$ is a basic solution whose corresponding constraints are $a_{i_1}$, $\dots$, $a_{i_m}$. Then $x^*$ satisfies
\begin{gather}
\label{matrix:A}
\begin{bmatrix}
\textnormal{---} a_{i_1} \textnormal{---} \\
\vdots \\
\textnormal{---} a_{i_m} \textnormal{---}
\end{bmatrix}
x^*\ =\ \begin{bmatrix}
b_{i_1} \\
\vdots \\
b_{i_m}
\end{bmatrix}.
\end{gather}
Let $A$ be the matrix in \eqref{matrix:A}. Let $b^*$ be the vector on the right hand side in \eqref{matrix:A}. Thus $x^* = A^{-1} b^*$. Note that $b^* \in \mathbb{Z}^m$ since its entries form a subset of the entries of the vector on the right hand side of \eqref{big_matrix}. If we can show that $\det(A) = \pm 1$, it will imply that $A^{-1}$ has integer entries (since $A^{-1}$ is the adjugate of $A$, an integer matrix, divided by $\det(A)$), and thus that $A^{-1} b^* \in \mathbb{Z}^m$. From the earlier observation, if $x^*$ is a basic feasible solution, then we must have that $A^{-1} b^* = x^* \in S$, which would prove the theorem.
Now we must show that if $A$ is invertible, then it has determinant $\pm 1$. In order to show this, we keep track of what happens to the determinant in the process of carrying out Gaussian elimination, which converts $A$ into the identity matrix. In particular, we show that at every step, the determinant changes by a factor of $\pm 1$. Since the identity matrix has determinant 1, we can then conclude that $A$ has determinant $\pm 1$. The only elementary row operation which potentially changes the absolute value of the determinant of a matrix is multiplying a row by a scalar. Thus it suffices to show that when Gaussian elimination is performed on $A$, no row is ever multiplied by a scalar other than $\pm 1$. In Gaussian elimination, a row is multiplied by a scalar to convert some non-zero entry in that row to a one. If every non-zero entry in that row is $\pm 1$, then we would simply need to multiply by $\pm 1$. Thus, we shall instead prove the stronger statement that at every step of Gaussian elimination, the intermediate matrix is flat, and hence all of its non-zero entries are $\pm 1$. This is the content of Lemma \ref{lemma_flat}.
\end{proof}
Before proving Lemma \ref{lemma_flat}, we include an example to illustrate the method. Here we omit row swapping for clarity, and we obtain a permutation matrix, which has determinant $\pm 1$. At each step, the leading nonzero term in the bolded row is used to clear the corresponding column.
\begin{gather}
A_0:
\begin{bmatrix}
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 \\
\mathbf{1} & \mathbf{-1} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\
0 & -1 & 1 & -1 & 1 \\
0 & 0 & 0 & -1 & 1
\end{bmatrix} \to \
A_1: \begin{bmatrix}
\mathbf{0} & \mathbf{1} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\
0 & 0 & 0 & 0 & 1 \\
1 & -1 & 0 & 0 & 0 \\
0 & -1 & 1 & -1 & 1 \\
0 & 0 & 0 & -1 & 1
\end{bmatrix} \to \
A_2: \begin{bmatrix}
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 0 \\
\mathbf{0} & \mathbf{0} & \mathbf{1} & \mathbf{-1} & \mathbf{1} \\
0 & 0 & 0 & -1 & 1
\end{bmatrix} \\
\to \
A_3: \begin{bmatrix}
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & -1 & 1 \\
\mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{-1} & \mathbf{1}
\end{bmatrix} \to \
A_4: \begin{bmatrix}
0 & 1 & 0 & 0 & 0 \\
\mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{1} \\
1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & -1
\end{bmatrix} \to \
A_5: \begin{bmatrix}
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0
\end{bmatrix}.
\end{gather}
\begin{lemma}\label{lemma_flat}
In carrying out Gaussian elimination on the matrix $A$ as in Theorem \ref{theorem:cube}, all intermediate matrices are flat.
\end{lemma}
\begin{proof}
We proceed by induction. Let $A_k$ denote the matrix resulting from the $k$\textsuperscript{th} step of Gaussian elimination (i.e. the matrix obtained after ``clearing'' the first $k$ columns). We shall show that for each $k$, every row of the matrix $A_k$ is of exactly one of six types depending on the form of the first $k$ entries of that row and the last $m - k$ entries of that row (in the sequel, we will refer to this as saying that every row is one of the six types with respect to $k$).
We now describe these six types. Let $\alpha_n$ denote any sequence of length $n$ consisting of alternating plus ones and minus ones (e.g. $\alpha_3 = [-1, 1, -1]$ or $\alpha_1 = [1]$). Let $\beta_n$ denote the sequence of length $n$ consisting of all zeros. Let $\gamma_n$ denote any binary sequence of length $n$ containing exactly one one (e.g. $\gamma_4 = [0, 0, 1, 0]$). Let $\oplus$ refer to the operation of vector concatenation (e.g. $[1, 2, 3] \oplus [4, 5] = [1, 2, 3, 4, 5]$). The six types (with respect to $k$) are listed in Table \ref{table:firsttable}.
\begin{table}[h]
\begin{center}
\begin{tabular}{l l l l}
Type & First $k$ & Last $m - k$ & Example ($k = 3$, $m = 7$) \\
\hline
1 & $\beta_k$ & $\beta_{\ell \geq 1} \ \oplus \ \alpha_{j \geq 1} \ \oplus \beta_{m - k - \ell - j \geq 1}$ & $[0, 0, 0 \ \big| \ 0, 1, -1, 0]$ \\
2 & $\beta_k$ & $\alpha_{\ell \geq 1} \ \oplus \ \beta_{m - k - \ell \geq 0}$ & $[0, 0, 0 \ \big| \ 1, -1, 1, 0]$ \\
3 & $\beta_k$ & $\beta_{\ell \geq 1} \ \oplus \ \alpha_{m-k-\ell \geq 0}$ & $[0, 0, 0 \ \big| \ 0, 0, 1, -1]$ \\
4 & $\gamma_k$ & $\beta_{\ell \geq 1} \ \oplus \alpha_{j \geq 0} \ \oplus \beta_{m-k-\ell-j \geq 1}$ & $[0, 1, 0 \ \big| \ 0, 0, 0, 0]$ \\
5 & $\gamma_k$ & $\alpha_{\ell \geq 1} \ \oplus \ \beta_{m - k - \ell \geq 0}$ & $[0, 1, 0 \ \big| \ 1, -1, 1, 0]$ \\
6 & $\gamma_k$ & $\beta_{\ell \geq 1} \ \oplus \ \alpha_{m-k-\ell \geq 0}$ & $[0, 1, 0 \ \big| \ 0, 0, 1, -1]$ \\
\end{tabular}
\caption{\label{table:firsttable} The six types with respect to $k$}
\end{center}
\end{table}
We now go through the inductive argument. For the base case, notice that when $k = 0$, the cube vectors are type 1 (except $\pm e_1$ and $\pm e_m$, which are of types 2 and 3, respectively), the left ballot vectors are type 2, and the right ballot vectors are type 3. Thus the claim is proven in the base case.
Now for the inductive step, we shall show that if all rows of $A_k$ are of one of the above types with respect to $k$, then all rows of $A_{k+1}$ are of one of the above types with respect to $k+1$. As described in the proof of Theorem \ref{theorem:cube}, at step $k$ we must first find some row whose first $k$ entries are zero, and whose $(k+1)$\textsuperscript{st} entry is $\pm 1$. We see then that we must select some row of type 2; call it $T$. We then subtract $T$ from all other rows whose $(k+1)$\textsuperscript{st} entry is non-zero. Thus the only types we must worry about are types 2 and 5. Notice that when we subtract $T$ from a row of type 2, we get a row either of type 1, type 2, or type 3 with respect to $k+1$. When we subtract $T$ from a row of type 5, we get a row either of type 4, 5, or 6 with respect to $k+1$. All other rows remain the same. Thus when we catalog the new rows with respect to $k+1$, we get that those of type 1 become either type 1 or type 2. As mentioned before, those of type 2 become those of type 1, 2, or 3, except for row $T$, which becomes of type 4 or 5. Type 3 becomes type 2 or 3. Type 4 remains type 4 or becomes type 5. As mentioned before, type 5 becomes type 4, 5, or 6. Lastly, type 6 becomes type 5 or type 6. Thus, by induction, we have proven the desired statement, implying in particular that the matrix is flat at every step.
\end{proof}
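As a numerical illustration (ours, not part of the argument) of Theorem \ref{theorem:cube} and Lemma \ref{lemma_flat}, one can check for small $n$ that every $m\times m$ submatrix of the constraint matrix in \eqref{big_matrix} has determinant in $\{-1,0,1\}$. The sketch below builds the rows for $n=3$ and tests all $5\times 5$ subdeterminants; the row construction is our own transcription of the cube and ballot vectors.
\begin{verbatim}
from itertools import combinations
import numpy as np

n = 3
m = 2 * n - 1
rows = []
for i in range(m):                    # cube vectors +-e_i
    e = [0] * m
    e[i] = 1
    rows.append(e)
    rows.append([-x for x in e])
for j in range(1, n):                 # left ballot vectors
    rows.append([(-1) ** i if i < 2 * j else 0 for i in range(m)])
for j in range(1, n):                 # right ballot vectors
    rows.append([(-1) ** (m - 1 - i) if i >= m - 2 * j else 0
                 for i in range(m)])

dets = {int(round(np.linalg.det(np.array(s, dtype=float))))
        for s in combinations(rows, m)}
print(dets)                           # expect a subset of {-1, 0, 1}
\end{verbatim}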
\section{Vertices of the cube in the ballot region} \label{section:polytope_vertices}
In this section, we demonstrate that bidirectional ballot sequences correspond in a natural way to the vertex set $Q_n$ (sequences of length $2n+3$ correspond to all of $Q_n$, and sequences of length $2n-1$ to a distinguished subset), and we rederive the growth rate given in \cite{Zh1} and \cite{BP}.
\begin{definition}
A \textbf{slope vector} is a vector $\lambda = [\lambda_1,\dots,\lambda_m] \in \ensuremath{\mathbb{R}}^m$ with $m\in \mathbb{N}$. To a slope vector $\lambda$, we associate the unique continuous piecewise linear function $f_\lambda: [0,m] \to \ensuremath{\mathbb{R}}$ such that $f_\lambda(0) = 0$ and $f_\lambda'(x) = \lambda_i$ for $x\in (i-1,i)$ for each $1\le i\le m$.
\end{definition}
Given any binary sequence $b = b_1\cdots b_m$, we associate to this sequence the graph of the function $f_\lambda$ where $\lambda = (\lambda_1,\dots,\lambda_m)$ with $\lambda_i \coloneqq (-1)^{b_i-1}$.
\begin{example}\label{ex:discrete_graph}
The bidirectional ballot sequence $11011001111$ corresponds to the path
\begin{center}
\begin{tikzpicture}
\draw[->] (0,0) -- (11,0) node[anchor=north] {};
\draw[->, dashed] (0,15/3) -- (11,15/3) node[] {};
\draw[->] (0,0) -- (0,5.5) node[anchor=east] {};
\draw (0,0) node[circle, fill, inner sep=2pt] {}
(1,1) node[circle, fill, inner sep=2pt] {}
(2,2) node[circle, fill, inner sep=2pt] {}
(3,1) node[circle, fill, inner sep=2pt] {}
(4,2) node[circle, fill, inner sep=2pt] {}
(5,3) node[circle, fill, inner sep=2pt] {}
(6,2) node[circle, fill, inner sep=2pt] {}
(7,1) node[circle, fill, inner sep=2pt] {}
(8,2) node[circle, fill, inner sep=2pt] {}
(9,3) node[circle, fill, inner sep=2pt] {}
(10,4) node[circle, fill, inner sep=2pt] {}
(11,5) node[circle, fill, inner sep=2pt] {};
\draw[thick] (0,0) -- (1,1) -- (2,2) -- (3,1) -- (4,2) -- (5,3) -- (6,2) -- (7,1) -- (8,2) -- (9,3) -- (10,4) -- (11,5);
\end{tikzpicture}
\end{center}
\end{example}
This is a bijection from binary sequences of length $m$ to graphs of functions $f_\lambda$ with $\lambda \in \{\pm 1\}^m$. Recall from Section \ref{sec:intr} that the graphs which correspond to bidirectional ballot sequences are those of functions $f_\lambda$ where $f_\lambda(0) < f_\lambda(t) < f_\lambda(m)$ for all $0 < t < m$.
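This criterion is easy to check by machine. The following sketch (the helper name \texttt{is\_bbs} is ours) walks the path of a binary sequence and compares the interior heights with the endpoint heights.
\begin{verbatim}
def is_bbs(b):
    """b is a BBS iff its path stays strictly between f(0) and f(m)."""
    h = [0]
    for bit in b:
        h.append(h[-1] + (1 if bit == 1 else -1))
    return h[-1] > 0 and all(h[0] < x < h[-1] for x in h[1:-1])

print(is_bbs([1,1,0,1,1,0,0,1,1,1,1]))  # True: 11011001111 as above
print(is_bbs([1,0,1,1,1]))              # False: the prefix 10 has a tie
\end{verbatim}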
\begin{comment}
Recall that these graphs can also be seen as representing a voting process with two candidates and $m$ voters: the voters stand in line at a single ballot box and vote for their candidate one-by-one; one may construct from this a path in the plane starting at the origin by taking a step in the $(1,1)$ direction when a voter votes for candidate $A$ and in the $(1,-1)$ direction if he/she votes for candidate $B$. Then (not necessarily bidirectional) ballot sequences are those for which candidate $A$ was always in the lead, and bidirectional ballot sequences are those for which candidate $A$ was always in the lead and would have also been had the line of voters been reversed. We will see that there is a similar interpretation for the continuous case, as well.
\end{comment}
Now we will draw a correspondence between $Q_n$ and $B_{2n+3}$ through these graphs, as well as a correspondence between a certain subset of $Q_n$ and $B_{2n-1}$. We do so by describing a way to interpret vectors $v\in C_{2n-1} = [0,1]^{2n-1}$ as paths, as in the discrete case, in such a way that the vertices of the ballot polytope are realized as exactly the graphs above. Given a vector $v = [v_1,\dots,v_{2n-1}] \in C_{2n-1}$, define the slope vector $\lambda_v = [\lambda_1,\dots,\lambda_{2n-1}]$ by $\lambda_i \coloneqq (-1)^{i-1} (2v_i-1)$, and associate to $v$ the graph of the function $f_{\lambda_v}$.
\begin{example}\label{ex:graph}
The gap-parametrization vector $v= \left[\frac34, \frac23, \frac12, \frac13, 1\right] \in [0,1]^5$ gives the slope vector $\l_v = \left[\frac12, -\frac13, 0, \frac13, 1 \right]$, which gives the following graph of the function $f_{\l_v}$, where the values next to the points indicate the distance above the $x$-axis:
\begin{center}
\begin{tikzpicture}
\draw[->] (0,0) -- (11,0) node[anchor=north] {};
\draw[->, dashed] (0,6) -- (11,6) node[] {};
\draw (0,-0.5) node[] {$0$}
(2,-0.5) node[] {$1$}
(4,-0.5) node[] {$2$}
(6,-0.5) node[] {$3$}
(8,-0.5) node[] {$4$}
(10,-0.5) node[] {$5$};
\draw[->] (0,0) -- (0,6.5) node[anchor=east] {};
\draw (0,0) node[circle, fill, inner sep=2pt] {}
(2,2) node[circle, fill, inner sep=2pt](a) {}
(4,4/6) node[circle, fill, inner sep=2pt](b) {}
(6,4/6) node[circle, fill, inner sep=2pt](c) {}
(8,2) node[circle, fill, inner sep=2pt](d) {}
(10,6) node[circle, fill, inner sep=2pt](e) {};
\node[left=0.1cm of a] {\tiny $\frac12$};
\node[above=0.05cm of b] {\tiny $\frac16$};
\node[above=0.05cm of c] {\tiny $\frac16$};
\node[left=0.1cm of d] {\tiny $\frac12$};
\node[below=0.05cm of e] {\tiny $\frac32$};
\draw[thick] (0,0) -- (2,2) -- (4,4/6) -- (6,4/6) -- (8,2) -- (10,6);
\end{tikzpicture}
\end{center}
\end{example}
Although the function $f_{\l_v}$ in Example \ref{ex:graph} has the property that it achieves its global minimum and maximum values at its left and right endpoints (respectively), we will see that this is not always the case (see Example \ref{ex:process}). We determine this behavior more precisely now.
If $v = [v_1,\dots,v_{2n-1}] \in C_{2n-1}$, then for $0\le k\le 2n-1$ we have
\begin{equation}\label{eq:gpfunc1}
f_{\lambda_v}(k)\ =\ \sum_{j=1}^k (-1)^{j-1}(2v_j-1) = \begin{cases}
2\sum_{j=1}^k (-1)^{j-1} v_j & \mbox{$k$ is even} \\
-1 + 2\sum_{j=1}^k (-1)^{j-1} v_j & \mbox{$k$ is odd,}
\end{cases}
\end{equation}
and similarly
\begin{equation}\label{eq:gpfunc2}
f_{\lambda_v}(2n-1-k)\ =\ \begin{cases}
f_{\lambda_v}(2n-1) - 2\sum_{j=1}^k (-1)^{j-1} v_{2n-j} & \mbox{$k$ is even} \\
f_{\lambda_v}(2n-1) + 1 - 2\sum_{j=1}^k (-1)^{j-1} v_{2n-j} & \mbox{$k$ is odd.}
\end{cases}
\end{equation}
One can see now that, even if $v\in \mathcal{P}_n$, it is possible for the graph to fail the property stated above, i.e., to achieve a global maximum or minimum at a point in the interior of its interval of definition (again, see Example \ref{ex:process} for an explicit example). However, one can also see that if $v\in\mathcal{P}_n$, it cannot fail this property to a great extent; namely, the values at the left and right endpoints will be within a distance of 1 from the maximum and minimum values, since the sums on the right-hand sides of \eqref{eq:gpfunc1} and \eqref{eq:gpfunc2} will be non-negative. Nonetheless, we would like the graphs of the functions $f_{\l_v}$ with $v\in Q_n$ to match the graphs of bidirectional ballot sequences in $B_{2n+3}$, and for that reason we give a way to modify a vector $v\in Q_n$ before associating it to a graph. Namely, we will add a sort of buffer to each side of the vector, so that the left and right endpoints get a leg up.
\begin{definition}
If $v=[v_1,\dots,v_{2n-1}]\in C_{2n-1}$, we define
\begin{equation*}
\alpha(v)\ \coloneqq\ [1,0,v_1,v_2,\dots,v_{2n-2},v_{2n-1},0,1].
\end{equation*}
\end{definition}
We now present two correspondences, the first stated more naturally, and the second proven more naturally, which are nonetheless very closely related. The first correspondence is as follows.
\begin{theorem}\label{thm:ballot_vertices} The set
$Q_n$ is in bijection with $B_{2n+3}$, induced by the map
\begin{equation}\label{eq:bijection}
v \mapsto f_{\lambda_{\alpha(v)}}.
\end{equation}
\end{theorem}
Before we prove Theorem \ref{thm:ballot_vertices}, we give an example of the process that induces the bijection.
\begin{example}\label{ex:process}
Consider the gap-parametrization vector $v = [0,0,1,0,0] \in [0,1]^5$, an element of $Q_3$. We shall obtain a bidirectional ballot sequence from $v$.
\emph{
We see that $v$ gives the slope vector $\l_v = [-1,1,1,1,-1]$. The graph of $f_{\l_v}$ is the following, where the values next to the points indicate the distance above the $x$-axis:
}
\begin{center}
\begin{tikzpicture}
\draw[->] (0,2) -- (11,2) node[anchor=north] {};
\draw[->, dashed] (0,4) -- (11,4) node[] {};
\draw (2, 1.5) node[] {$1$}
(4, 1.5) node[] {$2$}
(6, 1.5) node[] {$3$}
(8, 1.5) node[] {$4$}
(10, 1.5) node[] {$5$};
\draw[<->] (0,-0.5) -- (0,7) node[anchor=east] {};
\draw (-0.5,2) node[] {$0$};
\draw (0,2) node[circle, fill, inner sep=2pt] {}
(2,0) node[circle, fill, inner sep=2pt](a) {}
(4,2) node[circle, fill, inner sep=2pt](b) {}
(6,4) node[circle, fill, inner sep=2pt](c) {}
(8,6) node[circle, fill, inner sep=2pt](d) {}
(10,4) node[circle, fill, inner sep=2pt](e) {};
\node[left=0.1cm of a] {\tiny $-1$};
\node[above=0.05cm of b] {\tiny $0$};
\node[above=0.05cm of c] {\tiny $1$};
\node[left=0.1cm of d] {\tiny $2$};
\node[below=0.05cm of e] {\tiny $1$};
\draw[thick] (0,2) -- (2,0) -- (4,2) -- (6,4) -- (8,6) -- (10,4);
\end{tikzpicture}
\end{center}
\emph{
This is \emph{not} the graph of a bidirectional ballot sequence. Namely, the graph passes below the $x$-axis and above the line $y = f_{\l_v}(5)$. Let's now consider $\alpha(v) = [1,0,0,0,1,0,0,0,1] \in [0,1]^9$, which gives slope vector $\l_{\alpha(v)} = [1,1,-1,1,1,1,-1,1,1]$ and leads to the following graph of $f_{\l_{\alpha(v)}}$.
}
\begin{center}
\begin{tikzpicture}
\draw[->] (0,0) -- (10,0) node[anchor=north] {};
\draw[->, dashed] (0,5) -- (10,5) node[] {};
\draw (0, -0.5) node[] {$0$}
(1, -0.5) node[] {$1$}
(2, -0.5) node[] {$2$}
(3, -0.5) node[] {$3$}
(4, -0.5) node[] {$4$}
(5, -0.5) node[] {$5$}
(6, -0.5) node[] {$6$}
(7, -0.5) node[] {$7$}
(8, -0.5) node[] {$8$}
(9, -0.5) node[] {$9$};
\draw[->] (0,0) -- (0,6) node[anchor=east] {};
\draw[dashed] (2,0) -- (2,5) node[] {};
\draw[dashed] (7,0) -- (7,5) node[] {};
\draw (0,0) node[circle, fill, inner sep=2pt] {}
(1,1) node[circle, fill, inner sep=2pt](a) {}
(2,2) node[circle, fill, inner sep=2pt](b) {}
(3,1) node[circle, fill, inner sep=2pt](c) {}
(4,2) node[circle, fill, inner sep=2pt](d) {}
(5,3) node[circle, fill, inner sep=2pt](e) {}
(6,4) node[circle, fill, inner sep=2pt](f) {}
(7,3) node[circle, fill, inner sep=2pt](g) {}
(8,4) node[circle, fill, inner sep=2pt](h) {}
(9,5) node[circle, fill, inner sep=2pt](i) {};
\node[above=0.05cm of a] {\tiny $1$};
\node[right=0.05cm of b] {\tiny $2$};
\node[below=0.05cm of c] {\tiny $1$};
\node[above=0.05cm of d] {\tiny $2$};
\node[above=0.05cm of e] {\tiny $3$};
\node[above=0.05cm of f] {\tiny $4$};
\node[left=0.05cm of g] {\tiny $3$};
\node[above=0.05cm of h] {\tiny $4$};
\node[above=0.05cm of i] {\tiny $5$};
\draw[thick] (0,0) -- (1,1) -- (2,2) -- (3,1) -- (4,2) -- (5,3) -- (6,4) -- (7,3) -- (8,4) -- (9,5);
\end{tikzpicture}
\end{center}
\emph{
The portion of the graph between the vertical dotted lines is simply the graph of $f_{\l_v}$ translated in the plane by the vector $[2,2]$. This graph \emph{does} correspond to a bidirectional ballot sequence, namely $110111011$. We now prove that this process gives a bijection as in the statement of the theorem.
}
\end{example}
\begin{proof}[Proof of Theorem \ref{thm:ballot_vertices}]
By the correspondence between bidirectional ballot sequences and graphs of certain functions given in Example \ref{ex:discrete_graph}, it suffices to show that the map of \eqref{eq:bijection} puts $Q_n$ in bijection with
\begin{equation}\label{eq:defF}
F\ =\ \{f_\mu: \mu \in \{\pm 1\}^{2n+3},\; f_\mu(0)\ <\ f_\mu(t)\ <\ f_\mu(2n+3)\ \text{ for all }\ t\in(0,2n+3)\}.
\end{equation}
If $v\in C_{2n-1}$ is any gap-parametrization vector, then, in light of \eqref{eq:gpfunc1}, \eqref{eq:gpfunc2}, and the fact that $f_{\lambda_v}$ achieves maxima and minima only at integer values, we have that $f_{\lambda_v}(0)-1 \le f_{\lambda_v}(t) \le f_{\lambda_v}(2n-1)+1$ for $t\in [0,2n-1]$ if and only if $v$ is a bidirectional gerrymander. Furthermore, if $v$ is a vertex of the cube $C_{2n-1}$, then $\alpha(v)$ is a vertex of $C_{2n+3} = [0,1]^{2n+3}$, so that $f_{\l_{\alpha(v)}}$ takes integers to integers. Moreover, for any $v\in C_{2n-1}$ we have $f_{\l_{\alpha(v)}}(k+2) = f_{\l_v}(k) + 2$ for $0\le k \le 2n-1$, $f_{\l_{\alpha(v)}}(i) = i$ for $i=0,1,2$, and $f_{\l_{\alpha(v)}}(2n+1+i) = f_{\l_{\alpha(v)}}(2n+1)+i$ for $i=1,2$. Thus if $v$ is a vertex of $ C_{2n-1}$ then $f_{\l_{\alpha(v)}}(0) < f_{\l_{\alpha(v)}}(t) < f_{\l_{\alpha(v)}}(2n+3)$ for all $t\in(0,2n+3)$ if and only if $v\in Q_n$. It follows then that, since $\lambda_{\alpha(v)} \in \{\pm 1\}^{2n+3}$ when $v\in Q_n$, we indeed have that $f_{\l_{\alpha(v)}} \in F$, and so the map in \eqref{eq:bijection} does indeed take $Q_n$ to graphs of bidirectional ballot sequences in $B_{2n+3}$.
Injectivity of the map is clear. To show that the map is surjective, we provide an inverse. For a bidirectional ballot sequence $b=b_1\cdots b_{2n+3}$ of length $2n+3$, we define the vector $w = [w_1,\dots,w_{2n-1}]$, where
\begin{equation}
w_j\ \coloneqq\ \begin{cases}
1 & \mbox{if $j \equiv b_{j+2} \pmod 2$} \\
0 & \mbox{if $j \not\equiv b_{j+2} \pmod 2$.}
\end{cases}
\end{equation}
It is easily verified that the graph of $f_{\l_{\alpha(w)}}$ is the one associated to $b$. Moreover, the two statements directly following \eqref{eq:defF} imply that, since $w\in\{0, 1\}^{2n-1}$ and the graph of $f_{\l_{\alpha(w)}}$ is that of a bidirectional ballot sequence, we must have that $w\in Q_n$. It is clear that this map is both a right- and left-inverse of the map given by \eqref{eq:bijection}.
\end{proof}
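The maps in the proof are concrete enough to implement directly. The sketch below (function names ours) reproduces Example \ref{ex:process}: the vertex $v=[0,0,1,0,0]$ maps to $110111011$, and the displayed inverse recovers $v$.
\begin{verbatim}
def slopes(v):
    """lambda_i = (-1)^(i-1) (2 v_i - 1), with i starting at 1."""
    return [(-1) ** i * (2 * x - 1) for i, x in enumerate(v)]

def alpha(v):
    """Pad v with the buffer [1,0] on the left and [0,1] on the right."""
    return [1, 0] + list(v) + [0, 1]

def to_bbs(v):
    """For a cube vertex v in Q_n, all slopes of alpha(v) are +-1."""
    return [1 if s == 1 else 0 for s in slopes(alpha(v))]

def from_bbs(b):
    """w_j = 1 iff j = b_{j+2} (mod 2); here b is 0-indexed."""
    return [1 if j % 2 == b[j + 1] % 2 else 0
            for j in range(1, len(b) - 3)]

v = [0, 0, 1, 0, 0]
print(''.join(map(str, to_bbs(v))))   # 110111011
print(from_bbs(to_bbs(v)) == v)       # True
\end{verbatim}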
\begin{comment}
One may think of these graphs as in the discrete case as representing a voting process with $2n-1$ voters: the process is exactly the same as before, except that the voters have more choice -- instead of being forced to fully support one of the candidates, they can choose to partially support either of them on a scale from 0 to 1, and the candidate with the most total support wins. Now, bidirectional gerrymanders correspond to the voting processes where candidate $A$ is in the lead the entire time and would have also been had the line of voters been reversed, and among those, bidirectional ballot sequences correspond to groups of very polarized voters.
Let $I \subseteq \mathcal{P} \subseteq C_{2n-1}$ denote the set of bidirectional gerrymanders $v_A$ for which $v_A\cdot w > 0$ for all $w\in V_n$ (notation introduced in Lemma \ref{lem_gerryvectors}).
\end{comment}
We now give the second correspondence. Let $\mathcal{I}_n$ denote the interior of $\mathcal{B}_n$ in $\ensuremath{\mathbb{R}}^{2n-1}$. Let $T_n = \mathcal{I}_n \cap Q_n$, i.e. those vertices of $\mathcal{P}_n$ in the interior of $\mathcal{B}_n$.
\begin{corollary}\label{cor:ballot_vertices}
The set $T_n$ is in bijection with $B_{2n-1}$, induced by the map
\begin{equation}
v\ \mapsto\ f_{\l_v}.
\end{equation}
\end{corollary}
\begin{proof}
The proof here is essentially the same as that of Theorem \ref{thm:ballot_vertices}. The point is that, when $v\in T_n$, we already have $f_{\l_v}(0) < f_{\l_v}(t) < f_{\l_v}(2n-1)$, by reasoning similar to that in the statements directly following \eqref{eq:defF}.
\end{proof}
\begin{comment}
The relationship between Theorem \ref{thm:ballot_vertices} is the same as the fact that the voting processes with $2n-1$ people where candidate $A$ is never behind by more than 1 vote and would have been had the line been reversed correspond (by adding two supporters of candidate $A$ to the beginning and end of the line) to voting processes with $2n+3$ voters which correspond to bidirectional ballot sequences.
\end{comment}
Lastly, we use these correspondences along with our previous analysis of $\mathcal{P}_n$ and its copies $\mathcal{P}_\sigma$ to obtain the growth rate in \cite{Zh1}.
\begin{corollary}\label{cor:upper}
For $\ell$ odd,
\begin{gather}
B_\ell\ \geq\ \frac{2^\ell}{16 (\ell-4)}.
\end{gather}
\end{corollary}
\begin{proof}
The inequality is trivial if $\ell \in \{1,3\}$, so assume $\ell \geq 5$. Let $m = \ell - 4$; this is $2n-1$ for some $n \in \mathbb{N}$. By Theorem \ref{thm:ballot_vertices}, we know that the vertices of $\mathcal{P}_n$ are in bijection with $B_{m+4}$, and by Theorem \ref{theorem:cube} they are all vertices of the cube. From Corollary \ref{cor:volume}, we know that every vertex of $C_{2n-1}$ is contained in $\mathcal{P}_\sigma$ for some $\sigma \in Z_m$. Since there are $m$ such copies of $\mathcal{P}$, each containing at most $B_{m+4}$ vertices of the cube, we have
\begin{gather}
m B_{m+4}\ \geq\ 2^m.
\end{gather}
By rearrangement we get
\begin{gather}
B_\ell\ \geq\ \frac{2^\ell}{16 (\ell-4)}.
\end{gather}
\end{proof}
\begin{corollary}\label{cor:lower}
For $\ell$ odd,
\begin{gather}
B_\ell\ \leq\ \frac{2^\ell}{\ell}.
\end{gather}
\end{corollary}
\begin{proof}
Suppose $\ell = 2n-1$. From Corollary \ref{cor:ballot_vertices}, we know that the vertices of $\mathcal{P}_n$ which are in the interior of $\mathcal{B}_n$, namely $T_n$, are in bijection with $B_\ell$. Since the interiors of $\mathcal{B}_{\sigma_1}$ and $\mathcal{B}_{\sigma_2}$ are disjoint if $\sigma_1 \neq \sigma_2$, we have that $\sigma_1 (T_n) \cap \sigma_2 (T_n) = \emptyset$ for $\sigma_1 \neq \sigma_2$. Therefore, summing over all the vertices in $\sigma(T_n)$ for each $\sigma \in Z_\ell$, we count every vertex of the cube at most once. That is,
\begin{gather}
\ell B_\ell\ \leq\ 2^\ell.
\end{gather}
Rearranging yields
\begin{gather}
B_\ell\ \leq\ \frac{2^\ell}{\ell}.
\end{gather}
\end{proof}
\begin{corollary}
For all $\ell$, the growth rate of $B_\ell$ is $\Theta(2^\ell/\ell)$.
\end{corollary}
\begin{proof}
By Corollaries \ref{cor:upper} and \ref{cor:lower}, we know that for $\ell$ odd, the growth rate is $\Theta(2^\ell/\ell)$. The only additional insight needed is that for all $\ell$, $B_{\ell+1} \geq B_\ell$. To see this, note that given a BBS of length $\ell$, by appending a 1 to the end of it, we obtain a BBS of length $\ell+1$. Thus, up to fixed constants, the inequalities in Corollaries \ref{cor:upper} and \ref{cor:lower} are correct for even $\ell$ as well, and hence, for all $\ell$, $B_\ell$ grows like $\Theta(2^\ell/\ell)$.
\end{proof}
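For small odd lengths, both corollaries can be confirmed by brute-force enumeration. The self-contained sketch below (ours) counts all BBS's of length $\ell$ and prints the count next to the lower bound $2^\ell/(16(\ell-4))$ and the upper bound $2^\ell/\ell$.
\begin{verbatim}
from itertools import product

def is_bbs(b):
    h = [0]
    for bit in b:
        h.append(h[-1] + (1 if bit else -1))
    return h[-1] > 0 and all(0 < x < h[-1] for x in h[1:-1])

for l in (5, 7, 9, 11):
    count = sum(is_bbs(b) for b in product((0, 1), repeat=l))
    print(l, count, 2**l / (16 * (l - 4)), 2**l / l)
\end{verbatim}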
\section{Conclusion}
Our methods reveal a rich combinatorial structure underlying bidirectional ballot sequences. In previous papers on BBS's (\cite{Zh1}, \cite{BP}, \cite{HHPW}), analytic techniques were used to obtain asymptotics, but our techniques reveal a geometric interpretation for the $\Theta (2^n/n)$ growth rate. Interestingly, in the final section of \cite{Zh1}, Zhao states without detailed proof that $n B_n/2^n$ goes to $1/4$, but claims his proof is ``calculation-heavy''. He then posits that ``[t]here should be some natural, combinatorial explanation, perhaps along the lines of grouping all possible walks into orbits of size mostly $n$ under some symmetry, so that almost every orbit contains exactly one walk with the desired property.'' Zhao's statement is strikingly similar to the ideas presented in our paper. Though we have made some effort, we have not been able to derive that $n B_n/2^n \to 1/4$ using the techniques of our paper, but we feel that there is hope for such a proof.
The second, more general takeaway from this paper is the potential of the ideas originally presented in \cite{MP}, from which the ideas in this paper in fact evolved. In passing to the continuous setting, several problems in additive number theory and combinatorics reveal a rich structure which was not otherwise visible. We believe that there is even greater potential still in such ideas and techniques.
\bigbreak
| {
"timestamp": "2018-08-21T02:09:41",
"yymm": "1708",
"arxiv_id": "1708.02399",
"language": "en",
"url": "https://arxiv.org/abs/1708.02399",
"abstract": "A bidirectional ballot sequence (BBS) is a finite binary sequence with the property that every prefix and suffix contains strictly more ones than zeros. BBS's were introduced by Zhao, and independently by Bosquet-M{é}lou and Ponty as $(1,1)$-culminating paths. Both sets of authors noted the difficulty in counting these objects, and to date research on bidirectional ballot sequences has been concerned with asymptotics. We introduce a continuous analogue of bidirectional ballot sequences which we call bidirectional gerrymanders, and show that the set of bidirectional gerrymanders form a convex polytope sitting inside the unit cube, which we refer to as the bidirectional ballot polytope. We prove that every $(2n-1)$-dimensional unit cube can be partitioned into $2n-1$ isometric copies of the $(2n-1)$-dimensional bidirectional ballot polytope. Furthermore, we show that the vertices of this polytope are all also vertices of the cube, and that the vertices are in bijection with BBS's. An immediate corollary is a geometric explanation of the result of Zhao and of Bosquet-M{é}lou and Ponty that the number of BBS's of length $n$ is $\\Theta(2^n/n)$.",
"subjects": "Combinatorics (math.CO)",
"title": "The bidirectional ballot polytope",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9869795121840872,
"lm_q2_score": 0.8006920068519378,
"lm_q1q2_score": 0.7902666063324233
} |
https://arxiv.org/abs/1307.5642 | Optimal exponents in weighted estimates without examples | We present a general approach for proving the optimality of the exponents on weighted estimates. We show that if an operator $T$ satisfies a bound like $$ \|T\|_{L^{p}(w)}\le c\, [w]^{\beta}_{A_p} \qquad w \in A_{p}, $$ then the optimal lower bound for $\beta$ is closely related to the asymptotic behaviour of the unweighted $L^p$ norm $\|T\|_{L^p(\mathbb{R}^n)}$ as $p$ goes to 1 and $+\infty$, which is related to Yano's classical extrapolation theorem. By combining these results with the known weighted inequalities, we derive the sharpness of the exponents, without building any specific example, for a wide class of operators including maximal-type, Calderón--Zygmund and fractional operators. In particular, we obtain a lower bound for the best possible exponent for Bochner-Riesz multipliers. We also present a new result concerning a continuum family of maximal operators on the scale of logarithmic Orlicz functions. Further, our method allows to consider in a unified way maximal operators defined over very general Muckenhoupt bases. | \section{Introduction and statement of the main result}
\subsection{Introduction}
A main problem in modern Harmonic Analysis is the study of sharp norm inequalities for some of the classical operators on weighted Lebesgue spaces $L^p(w), \, 1<p<\infty$. The usual examples include the Hardy--Littlewood maximal operator, the Hilbert transform and more general Calder\'on-Zygmund operators (C--Z operators). Here $w$ denotes a non--negative, locally integrable function, that is, a weight. The class of weights for which these operators $T$ are bounded on $L^p(w)$ was identified in \cite{Muckenhoupt:Ap} and in the later works \cite{HMW}, \cite{CF}. This class consists of the Muckenhoupt $A_{p}$ weights defined by the condition
\begin{equation*}
[w]_{A_p}:=\sup_{Q}\left(\frac{1}{|Q|}\int_{Q}w(y)\ dy \right)\left(\frac{1}{|Q|}\int_{Q}w(y)^{1-p'}\ dy \right)^{p-1}<\infty,
\end{equation*}
where the supremum is taken over all the cubes $Q$ in $\mathbb{R}^n$, $1<p<\infty$ and as usual $p'$ stands for the dual exponent of $p$ satisfying $1/p+1/p'=1$.
Given any of these operators $T$, the first part of this problem is to look for quantitative bounds on the norm $\|T\|_{L^p(w)}$ in terms of the $A_p$ constant of the weight. The following step is then to find the sharp dependence, typically with respect to the power of $[w]_{A_p}$. In recent years, the answer to this last question has led to fruitful activity and the development of new tools in Harmonic Analysis.
The first classical example is the case of the Hardy--Littlewood maximal function defined as
\begin{equation*}
Mf(x)=\sup_{x\in Q}\Xint-_Q |f(y)| \ dy,
\end{equation*}
where the supremum is taken over all cubes containing the point $x$ and with sides parallel to the coordinate axes. As usual, we denote by $\Xint-_A f$ the average of the function $f$ over the set $A$. It is well known that if $M$ is the maximal function, then
\begin{equation}\label{eq:buckley}
\|M\|_{L^{ p }(w)} \le c \, [w]^{ \frac{1}{p-1} }_{A_{ p}}, \qquad w \in A_{ p},
\end{equation}
and the exponent is sharp, namely $\frac{1}{p-1}$ cannot be replaced with $\frac{1-\varepsilon}{p-1}$, $\varepsilon>0$. This is due to Buckley \cite{Buckley}.
Similarly
S. Petermichl showed in \cite{Petermichl:Riesz} that
\begin{equation}\label{eq:Petermichl}
\|T\|_{L^{ p }(w)} \le c \, [w]^{ \max\{1, \frac{1}{p-1} \}}_{A_{ p}}, \qquad w \in A_{ p}
\end{equation}
is sharp when $T$ is any Riesz transform. In each of these papers, the optimality of the exponent is shown by exhibiting specific examples adapted to the operator under analysis. In the case of the Riesz transforms, the examples are specific to the range $1<p<2$; the sharpness for large $p$ is then obtained by duality.
Similar weighted estimates are known to be true for other classical operators, such as commutators $[b,T]$ of C--Z operators and BMO functions, the dyadic square function $S_d$, vector valued maximal operators $\overline{M}_q$ for $1\le p,q\le \infty$, Bochner-Riesz multipliers $B^\lambda$ and fractional integrals $I_\alpha$. In the case of sharp bounds with respect to the power of the $A_p$ constant of the weight $w$, the sharpness is always proved by constructing specific examples for each operator.
\subsection{Main results}
The main purpose of this article is to present a new approach to test sharpness of weighted estimates. We provide a very general scheme that can be applied to most of the classical operators in Harmonic Analysis. In particular, we show that there is no need to build such examples and that the sharpness is intimately related to the behaviour of the unweighted $L^p$ norm of the operator $T$ as $p$ gets close to the endpoints $p=1$ and $p=\infty$. The key ingredient is an application of the so-called Rubio de Francia iteration algorithm. This is a basic but powerful technique that has been fruitful ever since it was first applied to the factorization of weights and to extrapolation. In particular, we will be using some ideas from the new proof of the extrapolation theorem from \cite{Javi-Duo-JFA} and also from \cite{CMP-Book}.
To illustrate the aim of the next definition, consider the following example. Let $H$ be the Hilbert transform. Then, it is known that the size of its kernel implies that the unweighted $L^p$ norm satisfies
\begin{equation}\label{eq:endpointH}
\|H\|_{L^p(\mathbb{R})}\approx \frac{1}{p-1} \quad \text{as } p\to 1.
\end{equation}
This condition is a particular case of the classical Yano condition related to the well known extrapolation theorem of Yano, as shown in \cite{Yano} (see also \cite[p. 61, Theorem 3.5.1]{Guzman-real} for more details and \cite{Carro-JFA} for a generalization of these ideas). In particular, the above condition allows one to prove endpoint boundedness properties for the operator in appropriate $L\log L$ spaces at the local level. However, the relevant feature for our purpose is that the operator norm \emph{blows up} with order $1$ and no less.
Influenced by this condition we give a precise definition which tries to capture this \emph{endpoint order} by looking at the asymptotic behaviour of the $L^p$ norm of a general operator $T$.
\begin{definition}\label{def:orders}
Given a bounded operator $T$ on $L^p(\mathbb{R}^n)$ for $1<p<\infty$, we define $\alpha_T$ to be the ``endpoint order'' of $T$ as follows:
\begin{equation}\label{eq:endpoint1}
\alpha_T:=\sup\{\alpha\ge 0: \forall \varepsilon>0, \limsup_{p \to 1 } (p-1)^{\alpha-\varepsilon} \|T\|_{L^p(\mathbb{R}^n)} =\infty\}.
\end{equation}
The analogue of \eqref{eq:endpoint1} for $p$ large is the following. Let $\gamma_T$ be defined as follows
\begin{equation}\label{eq:endpointINF}
\gamma_T:=\sup\{\gamma\ge 0: \forall \varepsilon>0, \limsup_{p \to \infty } \,\frac{\|T\|_{L^p(\mathbb{R}^n)}}{p^{\gamma-\varepsilon}} =\infty\}.
\end{equation}
\end{definition}
This definition may have appeared previously in the literature, but we are not aware of a reference.
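To make the definition concrete, one can bound $\|M\|_{L^p(\mathbb{R})}$ from below with a single test function. For the uncentered maximal operator and $f=\chi_{[0,1]}$, one has $Mf(x)=1$ on $[0,1]$, $Mf(x)=1/x$ for $x>1$ and $Mf(x)=1/(1-x)$ for $x<0$, so that $\|Mf\|_{L^p(\mathbb{R})}^p = 1+\frac{2}{p-1}$. The numerical sketch below is our own illustration: it shows $(p-1)\|M\|_{L^p}$ staying bounded away from zero as $p\to 1$, in line with $\alpha_M=1$.
\begin{verbatim}
# Lower bound ||M||_{L^p} >= ||M chi_[0,1]||_p / ||chi_[0,1]||_p
# = (1 + 2/(p-1))^(1/p) for the uncentered maximal operator on R.
for p in (1.5, 1.1, 1.01, 1.001):
    lower = (1 + 2 / (p - 1)) ** (1 / p)
    print(p, lower, (p - 1) * lower)   # last column stays bounded below
\end{verbatim}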
Now we can state our main result.
\begin{theorem} \label{thm:AbstractBuckley}
Let $T$ be an operator (not necessarily linear). Suppose further that
for some $1<p_0<\infty$ and for any $w \in A_{ p_0}$
\begin{equation}\label{eq:weighted}
\|T\|_{L^{ p_{0} }(w)} \le c \, [w]^{\beta}_{A_{ p_0}}.
\end{equation}
Then $\beta\ge \max\left \{\gamma_T;\frac{\alpha_T}{p_0-1}\right \}$.
\end{theorem}
The novelty here is that we can exhibit a close connection between the weighted estimate and the unweighted behaviour of the operator at the endpoints $p=1$ and $p=\infty$. This result can be applied to the known inequalities \eqref{eq:buckley} and \eqref{eq:Petermichl} to derive the sharpness without building any particular example for each operator.
In addition, we can observe that this is a sort of template suitable for any operator. Indeed, as an application we can derive the optimal exponent that one could expect in a weighted estimate for a maximal operator associated to a generic Muckenhoupt basis.
Note that in this latter case it is not even possible to have an example working for a general basis. However, our method allows us to avoid the use of examples and to deal with all the bases at once.
We also obtain new results for a class of maximal functions defined in terms of Orlicz averages. For $\Phi_\lambda(t)=t\log(e+t)^\lambda$, $\lambda \in [0,\infty)$, we prove new weighted estimates for the Orlicz maximal operator $M_{\Phi_\lambda}$ which, in addition, are sharp
as a consequence of Theorem \ref{thm:AbstractBuckley}. We refer to Section \ref{sec:orlicz} for the precise definitions of these operators,
which can be seen as continuous versions of the iterated Hardy-Littlewood maximal functions. This continuity is reflected in the exponent of the weighted estimates proved in Theorem \ref{thm:Orlicz}. The operators $M_{\Phi_\lambda}$ are special cases of more general Orlicz maximal operators $M_{\Phi}$ introduced in \cite{perez95} to study sharp sufficient ``bump'' type conditions for the so called two-weight problem for the Hardy-Littlewood maximal operator. Similar conditions were also considered in the two-weight context for fractional integrals in \cite{perez94:indiana} and very recently in the context of Calder\'on--Zygmund operators \cite{Lerner:simpleA2}, where they are used to solve the so called ``bump conjecture''. The special case of $M_{\Phi_\lambda}$ was used in \cite{perez94:london} to derive a very sharp two-weight estimate of the form $(w,M_{\Phi_\varepsilon}(w))$,
\begin{equation*}
\|Tf\|_{L^{1,\infty}(w)}\leq c_{\varepsilon,T}
\int_{\mathbb{R}^n} |f(x)|\,M_{L(\log L)^{\epsilon}} (w)(x)\,dx,
\quad w\geq 0.
\end{equation*}
where $T$ is any Calder\'on--Zygmund operator and where $\varepsilon>0$ is arbitrarily small. Similar sharp estimates were also obtained in the case $p>1$.
Even in cases where a sharp weighted estimate is not known, we obtain a lower bound for the exponent of the $A_p$ constant. This is the case of the Bochner-Riesz multipliers treated in Section \ref{sec:CZ}, Corollary \ref{cor:BR}.
\subsection{Outline}
This article is organized as follows. In Section \ref{sec:proofs} we prove the main result. Then, in Section \ref{sec:CZ} we show how to derive the sharpness of some known weighted estimates for Calder\'on--Zygmund operators with large kernels. We also exhibit lower bounds for the optimal exponent in the case of Bochner--Riesz multipliers. In Section \ref{sec:maximal-and-square} we study maximal type operators and dyadic square functions. In Section \ref{sec:fractional} we obtain results for fractional integral operators by using similar ideas and off-diagonal extrapolation techniques. Finally, in Section \ref{sec:muckenhoupt-bases} our method is used to obtain optimal exponents in the case of maximal functions defined over general Muckenhoupt bases.
\section{Proof of Theorem \ref{thm:AbstractBuckley} }\label{sec:proofs}
We present here the proof of the main results. The key tool is Rubio de Francia's iteration scheme, or algorithm, which produces $A_1$ weights with precise control of the constant of the weight; the main underlying idea comes from extrapolation theory.
The same ideas that we use here were already used to prove sharp weighted estimates for the Hilbert transform with $A_1$ weights in \cite{Fefferman-Pipher}. A more precise and general version was obtained recently in \cite{Javi-Duo-JFA}. We remark that the first part of the proof, namely the proof of inequality \eqref{eq:CF} below, is a consequence of the extrapolation result from \cite{Javi-Duo-JFA} (see Theorem 3.1, first inequality of (3.2), p. 1889). We choose to include the proof for the sake of completeness. For our inequality \eqref{eq:CF-dual}, which is the analogue for large $p$, we perform a slightly different proof.
\begin{proof}[Proof of Theorem \ref{thm:AbstractBuckley}]
We first consider the bound $\beta\ge\frac{\alpha_T}{p_0-1}$. The first step is to prove the following inequality, which can be seen as an unweighted Coifman-Fefferman type inequality relating the operator $T$ to the Hardy--Littlewood maximal function. We have that
\begin{equation} \label{eq:CF}
\|T\|_{L^{p}(\mathbb{R}^n) } \leq c\, \|M\|_{L^{p}(\mathbb{R}^n) } ^{\beta(p_0-p)} \qquad 1<p<p_0.
\end{equation}
Let us start by defining, for $1<p<p_0$, the operator $R$ as follows:
\begin{equation*}
R(h)= \sum_{k=0}^\infty \frac1{2^k}\frac{M^k
(h)}{\|M\|_{L^{p}(\mathbb{R}^n)}^k}
\end{equation*}
Then we have
(A) \quad $h\le R(h)$
\vspace{.2cm}
(B) \quad $\|R(h)\|_{L^{p}(\mathbb{R}^n)}\le
2\,\|h\|_{L^{p}(\mathbb{R}^n)}$
\vspace{.2cm}
(C) \quad $[R(h)]_{A_{1}}\leq 2\, \|M\|_{L^{p}(\mathbb{R}^n) }$
\
To verify \eqref{eq:CF}, consider $1<p<p_0$ and apply H\"older's inequality to obtain
\begin{eqnarray*}
\|T(f)\|_{L^{p}(\mathbb{R}^n)} & = & \Big( \int_{\mathbb{R}^n} |Tf|^{p}\, (Rf)^{-(p_{0}-p) \frac{p}{p_0}}\,(Rf)^{(p_{0}-p) \frac{p}{p_0}}\,dx \Big)^{1/p}\\
& \le & \Big( \int_{\mathbb{R}^n} |Tf|^{p_{0}}\, (Rf)^{-(p_{0}-p) }\,dx \Big)^{1/p_{0}}\,
\Big( \int_{\mathbb{R}^n} (Rf)^{p}\,dx \Big)^{\frac{p_{0}-p}{pp_{0}}}\\
\end{eqnarray*}
For clarity in the exposition, we denote $w:=(Rf)^{-(p_{0}-p)}$. Then, by the key hypothesis \eqref{eq:weighted} together with properties $(A)$ and $(B)$ of the Rubio de Francia's algorithm, we have that
\begin{eqnarray*}
\|T(f)\|_{L^{p}(\mathbb{R}^n)} & \le & c\, [w]_{A_{p_{0}}}^{\beta}\, \Big( \int_{\mathbb{R}^n} |f|^{p_{0}}\, w\,dx \Big)^{1/p_{0}}\|f\|_{L^{p}(\mathbb{R}^n) }^{\frac{p_{0}-p}{p_{0}}}\\
&\leq & c\, [w]_{A_{p_{0}}}^{\beta}\, \Big( \int_{\mathbb{R}^n} |f|^{p}\, dx \Big)^{1/p_{0}}
\|f\|_{L^{p}(\mathbb{R}^n) }^{1- \frac{p}{p_{0}}}\\
& = & c\, [w]_{A_{p_{0}}}^{ \beta}\,
\|f\|_{L^{p}(\mathbb{R}^n) }\\
& = & c\, [w^{1-p_0'}]_{A_{p'_{0}}}^{\beta(p_0-1)}
\|f\|_{L^{p}(\mathbb{R}^n) }
\end{eqnarray*}
since $[w]_{A_q}= [w^{1-q'}]^{q-1}_{A_{q'}}$. Now, since $\frac{p_0-p}{p_0-1}<1$, we can use Jensen's inequality to estimate the constant of the weight as follows
\begin{equation*}
[w^{1-p_0'}]_{A_{p'_{0}}}=[(Rf)^\frac{p_0-p}{p_0-1}]_{A_{p'_{0}}}
\le [R(f)]_{A_{p'_{0}}}^{\frac{p_0-p}{p_0-1} }
\le [R(f)]_{ A_{ 1} }^{\frac{p_0-p}{p_0-1} }
\end{equation*}
Finally, by making use of property (C), we conclude that
\begin{equation*}
\|T(f)\|_{L^{p}(\mathbb{R}^n)} \le c\, \|M\|_{L^{p}(\mathbb{R}^n) } ^{ \beta (p_0-p) }\,
\|f\|_{L^{p}(\mathbb{R}^n) },
\end{equation*}
which clearly implies \eqref{eq:CF}.
Once we have proved the key inequality \eqref{eq:CF}, we can relate the exponent on the weighted estimate to the endpoint order of $T$. To that end, we will use the known asymptotic behaviour of the unweighted $L^p$ norm of the maximal function.
It is well known that when $p$ is close to $1$, there is a dimensional constant $c$ such that
\begin{equation}\label{eq:maximal-pto1}
\| M \|_{L^{p}(\mathbb{R}^n)} \leq c\,
\frac{1}{p-1}.
\end{equation}
Then, for $p$ close to 1, we obtain
\begin{equation}
\|T\|_{L^{p}(\mathbb{R}^n) } \le c\, (p-1)^{-\beta (p_0-p)} \le c\, (p-1)^{-\beta (p_0-1)}
\end{equation}
Therefore, multiplying by $(p-1)^{\alpha_T-\varepsilon}$, using the definition of $\alpha_T$ and taking upper limits we have,
\begin{equation*}
+\infty=\limsup_{p\to1}\, (p-1)^{\alpha_T-\varepsilon}\|T\|_{L^{p}(\mathbb{R}^n) }\le c\,\limsup_{p\to1}\, (p-1)^{\alpha_T-\varepsilon-\beta(p_0-1)}.
\end{equation*}
For the right-hand side to be infinite we need $\alpha_T-\varepsilon-\beta(p_0-1)< 0$, and since $\varepsilon>0$ is arbitrary this implies that $\beta\ge \frac{\alpha_T}{p_0-1}$, so we conclude the first part of the proof of the theorem.
For the proof of the other inequality, $\beta\ge\gamma_T$, we follow the same line of ideas, but with a twist
involving the dual space $L^{p'}(\mathbb{R}^n)$. Fix $p>p_0$. We perform the iteration technique $R'$ as before, replacing $p$ by $p'$:
\begin{equation*}
R'(h)= \sum_{k=0}^\infty \frac1{2^k}\frac{M^k
(h)}{\|M\|_{L^{p'}(\mathbb{R}^n)}^k}
\end{equation*}
Then we have
(A') \quad $h\le R'(h)$
\vspace{.2cm}
(B') \quad $\|R'(h)\|_{L^{p'}(\mathbb{R}^n)}\le
2\,\|h\|_{L^{p'}(\mathbb{R}^n)}$
\vspace{.2cm}
(C') \quad $[R'(h)]_{A_{1}}\leq 2\, \|M\|_{L^{p'}(\mathbb{R}^n) }$
\vspace{.2cm}
Fix $f\in L^p(\mathbb{R}^n)$. By duality there exists a non-negative
function $h\in L^{p'}(\mathbb{R}^n)$, $\|h\|_{L^{p'}(\mathbb{R}^n)}=1$, such that,
\begin{eqnarray*}
\|Tf\|_{L^p(\mathbb{R}^n)} & = & \int_{\mathbb{R}^n} |Tf(x)| h(x)\,dx\\
& \le &\int_{\mathbb{R}^n} |Tf| (R' h)^{ \frac{p-p_0}{p_0(p-1)} } \,h^{ \frac{p(p_0-1)}{ p_0(p-1) } } \,dx\\
& \le &\left(\int_{\mathbb{R}^n} |Tf|^{p_0} (R' h)^{ \frac{p-p_0}{p-1} }\,dx\right)^{1/p_0} \left(\int_{\mathbb{R}^n} h^{p'}\,dx\right)^{1/p_0'} \\
& = & \left(\int_{\mathbb{R}^n} |Tf|^{p_0} (R' h)^{ \frac{p-p_0}{p-1} }\,dx\right)^{1/p_0}.
\end{eqnarray*}
Now we use the key hypothesis \eqref{eq:weighted} and H\"older's inequality to obtain
\begin{eqnarray*}
\|Tf\|_{L^p(\mathbb{R}^n)} & \le & c\, [(R' h)^{ \frac{p-p_0}{p-1} } ]_{A_{p_0}} ^{\beta}
\left(\int_{\mathbb{R}^n} |f|^{p_0} (R' h)^{ \frac{p-p_0}{p-1} }\, dx\right)^{1/p_0} \\
& \le & c\, [(R' h)^{ \frac{p-p_0}{p-1} } ]_{A_{p_0}} ^{\beta}
\left(\int_{\mathbb{R}^n} |f|^{p} dx\right)^{1/p} \left(\int_{\mathbb{R}^n} (R' h)^{ p' }\,dx\right)^{\frac{1}{p'}\frac{p-p_0}{ p_0(p-1) } } \\
& \le & c\, [(R' h)^{ \frac{p-p_0}{p-1} } ]_{A_{p_0}} ^{\beta}
\left(\int_{\mathbb{R}^n} |f|^{p} dx\right)^{1/p} \qquad \mbox{by (B') }.\\
& \le & c\, [R' h ]_{A_{p_0}} ^{\beta \frac{p-p_0}{p-1} }
\left(\int_{\mathbb{R}^n} |f|^{p} dx\right)^{1/p} \qquad \mbox{by Jensen's }\\
& \le & c\, \|M\|_{ L^{p'}(\mathbb{R}^n) }^{\beta \frac{p-p_0}{p-1}}
\left(\int_{\mathbb{R}^n} |f|^{p} dx\right)^{1/p} \quad \mbox{by (C')}.
\end{eqnarray*}
Hence,
\begin{equation}\label{eq:CF-dual}
\|T\|_{L^{p}(\mathbb{R}^n) } \leq c\, \|M\|_{L^{p'}(\mathbb{R}^n) }^{\beta \frac{p-p_0}{p-1}} \qquad p>p_0.
\end{equation}
This estimate is similar and, in some sense, dual to \eqref{eq:CF}. To finish the proof we recall that, for large $p$, namely $p>p_1>p_0$, we have the estimate $\| M \|_{L^{p'}(\mathbb{R}^n)} \le \frac{c}{p'-1}= c\,(p-1)\le c\, p$. Therefore, we have that
\begin{equation*}
\|T\|_{L^{p}(\mathbb{R}^n) } \le c\, p ^{\beta\frac{p-p_0}{p-1}}\le c\, p^\beta
\end{equation*}
since $p>p_1>p_0>1$. As before, dividing by $p^{\gamma_T-\varepsilon}$ and taking upper limits, we obtain
\begin{equation*}
+\infty=\limsup_{p\to \infty}\, \frac{\|T\|_{L^{p}(\mathbb{R}^n) }}{p^{\gamma_T-\varepsilon}}\le c\, \limsup_{p\to \infty} \, p^{\beta-\gamma_T+\varepsilon}.
\end{equation*}
Arguing as before, this last inequality implies that $\beta\ge \gamma_T$, so we conclude the proof of the theorem.
\end{proof}
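The iteration algorithm in the proof is concrete enough to experiment with. The following discrete toy version is entirely our own: the grid maximal operator and the surrogate norm bound \texttt{K} are numerical stand-ins, not the objects of the proof. Property (A) holds term by term, (C) follows by telescoping for any \texttt{K}, and (B) holds whenever \texttt{K} dominates the true operator norm, which is why the sampled ratio is inflated.
\begin{verbatim}
import numpy as np

def maximal(h):
    """Discrete uncentered maximal operator: max average over windows."""
    n = len(h)
    return np.array([max(h[a:b].mean()
                         for a in range(i + 1)
                         for b in range(i + 1, n + 1))
                     for i in range(n)])

rng = np.random.default_rng(0)
p, n = 1.5, 30
h = rng.random(n)
# Surrogate for ||M||: inflate the largest ratio seen on random samples.
K = 2 * max(np.linalg.norm(maximal(f), p) / np.linalg.norm(f, p)
            for f in rng.random((20, n)))
Rh, Mk = np.zeros(n), h.copy()
for k in range(40):                    # truncate the geometric series
    Rh += Mk / (2 * K) ** k
    Mk = maximal(Mk)
print(np.all(h <= Rh + 1e-9))                        # property (A)
print(np.linalg.norm(Rh, p) / np.linalg.norm(h, p))  # <= 2 if K >= ||M||
print(np.all(maximal(Rh) <= 2 * K * Rh + 1e-6))      # property (C)
\end{verbatim}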
\subsection{Two remarks on sharpening the sharp bounds}\label{sec:sharper}
In the previous section, we showed how to prove sharp weighted bounds while avoiding the use of specific examples. We studied sharpness with respect to the power of the $A_p$ constant of the weight.
However, there are several further improvements that can be made. First, we will consider mixed $A_p-A_\infty$ bounds in the spirit of \cite{HP} (see also \cite{HPR1}, and \cite{LM}).
We also show refined estimates beyond the scale of power functions.
\subsubsection{Mixed bounds} \label{subsec:mixed}
Here we address the problem of finding sharp ``mixed bounds''. More precisely, it was shown in \cite{HP} that the maximal function satisfies
\begin{equation}\label{eq:buckmixed}
\|M\|_{L^{ p }(w)} \le c \, [w] ^{ \frac{1}{p} }_{A_{ p }} [\sigma] ^{ \frac{1}{p} }_{A_{ \infty }}, \qquad w \in A_{ p},
\end{equation}
where $\sigma=\sigma_p=w^{1-p'}$ and where
\begin{equation*}
[\sigma]_{A_{\infty}}:= \sup_{Q}\frac{1}{\sigma(Q)}\int_Q M(\chi_Q \sigma)dx
\end{equation*}
is the Fujii-Wilson $A_{\infty}$ constant, which is much smaller than the usual (Hru\v{s}\v{c}ev) $A_{\infty}$ constant defined in terms of the exponential average.
Estimate \eqref{eq:buckmixed} was proved in \cite{HP}, where it was used to improve the $A_2$ theorem from \cite{Hytonen:A2}. A better argument for proving \eqref{eq:buckmixed} was obtained in \cite{HPR1}. In addition, let us remark that in \cite{PR-twoweight} there is a new proof of this result which completely avoids the use of the delicate reverse H\"older property of $A_\infty$ weights.
We have the following corollary of Theorem \ref{thm:AbstractBuckley}.
\begin{corollary} \label{cor:AbstractBuckley-mixed}
Let $T$ be an operator (not necessarily linear). Suppose that, for some $1<p_0<\infty$ and any $w \in A_{p_0}$, with $\sigma=w^{1-p_0'}$, we have
\begin{equation}\label{eq:weighted-mixed}
\|T\|_{L^{ p_{0} }(w)}
\le c\, [w] ^{\beta_1}_{A_{ p_0 }} [\sigma] ^{\beta_2}_{A_{ \infty }}.
\end{equation}
Then $\beta_1+\frac{\beta_2}{p_0-1}\ge \max\left \{\gamma_T;\frac{\alpha_T}{p_0-1}\right \}$.
\end{corollary}
\begin{proof}
The proof of this variant reduces to a simple observation based on the duality properties of Muckenhoupt weights, namely $[w]_{A_p}=[w^{1-p'}]_{A_{p'}}^{p-1}$ and $[v]_{A_\infty}\le[v]_{A_{p'}}$. More precisely, for any $A_p$ weight $w$ and any pair of positive exponents $\beta_1$ and $\beta_2$, we have that
\begin{equation*}
[w]^{\beta_1}_{A_p}[w^{1-p'}]^{\beta_2}_{A_\infty}\le
[w]^{\beta_1}_{A_p}[w^{1-p'}]^{\beta_2}_{A_{p'}}=
[w]^{\beta_1}_{A_p}[w]^{\frac{\beta_2}{p-1}}_{A_p}=
[w]^{\beta_1+\frac{\beta_2}{p-1}}_{A_p}.
\end{equation*}
By Theorem \ref{thm:AbstractBuckley}, we conclude that $\beta_1+\frac{\beta_2}{p_0-1}\ge \max\left \{\gamma_T;\frac{\alpha_T}{p_0-1}\right \}$.
\end{proof}
Note that this result implies that the exponents in \eqref{eq:buckmixed} cannot be replaced by smaller ones.
The same argument can be used to show the sharpness of mixed bounds for C--Z operators. For a given C--Z operator $T$ satisfying a size condition on the kernel (see condition \eqref{eq:kernel} below) we have that $\alpha_T=1$. Therefore, any pair of exponents $(\beta_1,\beta_2)$ lying on the \emph{sharpness line} defined by $\beta_1(p-1)+\beta_2=1$ is sharp: we cannot replace either of them by a smaller quantity. For example, for any such $T$ it is proved in \cite{HP} that
\begin{equation}\label{eq:mixedCZ1}
\|T\|_{L^p(w)}\le c\, [w]_{A_p}^{2/p}[\sigma]_{A_\infty}^{2/p-1}
\end{equation}
for any $p\in (1,2]$ and any $w\in A_p$. We conclude that this pair of exponents is sharp, although this does not mean that it is the best possible result. Indeed, by moving along the sharpness line, we can balance the exponents, replacing some power of $[w]_{A_p}$ by the corresponding power of $[\sigma]_{A_\infty}$. Clearly, the best bounds are those involving a larger power in the $A_\infty$ factor of the weight.
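Indeed, a direct computation shows that the exponents in \eqref{eq:mixedCZ1} lie on the sharpness line:
\begin{equation*}
\beta_1+\frac{\beta_2}{p-1}=\frac{2}{p}+\frac{\frac{2}{p}-1}{p-1}=\frac{2(p-1)+2-p}{p(p-1)}=\frac{1}{p-1}=\frac{\alpha_T}{p-1}.
\end{equation*}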
In the case of commutators, we have from \cite{HP} that, for $1<p\le 2$,
\begin{equation*}
\|[b,T]\|_{L^p(w)}\le c\, [w]_{A_p}^{4/p}[\sigma]_{A_\infty}^{4/p-2}.
\end{equation*}
The exponents in this estimate are also sharp, because the commutator satisfies $\alpha_{[b,T]}=2$.
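Again, one can check that this pair of exponents lies on the corresponding sharpness line $\beta_1(p-1)+\beta_2=2$:
\begin{equation*}
\beta_1+\frac{\beta_2}{p-1}=\frac{4}{p}+\frac{\frac{4}{p}-2}{p-1}=\frac{4(p-1)+4-2p}{p(p-1)}=\frac{2}{p-1}=\frac{\alpha_{[b,T]}}{p-1}.
\end{equation*}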
\subsubsection{Beyond power functions}
We start this section by recalling that from Buckley's original example one can conclude that inequality \eqref{eq:buckley} is sharp under \emph{arbitrary} perturbations. Our method also allows us to conclude the same perturbation result. More precisely, suppose that for some $p_0 \in(1,\infty)$ and some non-decreasing function $\varphi:[1,\infty) \to [0,\infty)$ such that
\begin{equation*}
\lim_{t\to \infty}\frac{\varphi(t)}{t^\frac{1}{p_0-1}}=0
\end{equation*}
we have that, for any $w\in A_{p_0}$,
\begin{equation*}
\|M\|_{L^{p_0}(w)} \le c\, \varphi([w]_{A_{p_0}}).
\end{equation*}
We will show that this cannot hold. To see this, we argue as in Theorem \ref{thm:AbstractBuckley} and obtain, for some positive constants $c_1,c_2$ and for $1<p<p_1<p_0$, that
\begin{eqnarray*}
\|Mf\|_{L^{p}(\mathbb{R}^n)} & \le & c_1\, \varphi([Rf]_{A_{1}}^{p_0-p})
\|f\|_{L^p(\mathbb{R}^n)}\\
& \le & c_1\, \varphi(c_2(p-1)^{-(p_0-1)})\|f\|_{L^p(\mathbb{R}^n)}
\end{eqnarray*}
for any function $f\in L^p(\mathbb{R}^n)$. Since $\|M\|_{L^p(\mathbb{R}^n)} \geq c\,\frac{1}{p-1}$ for $p$ close to $1$, we obtain that
\begin{equation*}
(p-1)^{-1} \le c_1\, \varphi(c_2(p-1)^{-(p_0-1)})
\end{equation*}
which contradicts the assumption on $\varphi$, as the following computation makes explicit.
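Indeed, writing $t_p:=c_2(p-1)^{-(p_0-1)}$, so that $t_p^{\frac{1}{p_0-1}}=c_2^{\frac{1}{p_0-1}}(p-1)^{-1}$, the hypothesis on $\varphi$ yields
\begin{equation*}
c_1\,\varphi(t_p)=c_1\,\frac{\varphi(t_p)}{t_p^{\frac{1}{p_0-1}}}\, t_p^{\frac{1}{p_0-1}}=o\big((p-1)^{-1}\big) \qquad (p\to 1^+),
\end{equation*}
which is incompatible with the lower bound $(p-1)^{-1}\le c_1\,\varphi(t_p)$. Therefore, no such improvement of \eqref{eq:buckley} is possible.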
A similar argument can be used to derive an analogous result for a generic operator $T$ whenever the precise endpoint behavior of $T$ is known.
\section{Operators with large kernel and commutators}\label{sec:CZ}
Firstly, we address the problem of proving the sharpness of weighted estimates for Calder\'on--Zygmund operators, their commutators with BMO functions, and vector-valued extensions. We prove here the following corollary of our main result.
\begin{corollary}\label{cor:CZ-Commutators}
Let $T$ be a Calder\'on--Zygmund operator. Denote by $[b,T]$ its commutator with a BMO function $b$ and, more generally, by $T_b^k$ its $k$-th iterated commutator, defined recursively by
\begin{equation*}
T_b^k:=[T_b^{k-1},b],\qquad k\ge 1,
\end{equation*}
where $T_b^0:=T$. The following weighted estimates are sharp:
\begin{equation}\label{eq:CZ}
\|T\|_{L^{ p }(w)} \le c \, [w]^{ \max\{1, \frac{1}{p-1} \}}_{A_{ p}}, \qquad w \in A_{ p},
\end{equation}
\begin{equation}\label{eq:commutator}
\|[b,T]\|_{L^{ p }(w)} \le c \, \|b\|_{BMO}\, [w]^{ 2\max\{1, \frac{1}{p-1} \}}_{A_{ p}}, \qquad w \in A_{ p},
\end{equation}
\begin{equation}\label{eq:k-commutator}
\|T_b^k\|_{L^{ p }(w)} \le c \, \|b\|_{BMO}\, [w]^{ (k+1)\max\{1, \frac{1}{p-1} \} }_{A_{ p}}, \qquad w \in A_{ p}.
\end{equation}
\end{corollary}
We also have the following application to vector--valued extensions.
\begin{corollary}\label{cor:vector-valued-CZ}
Given a C--Z operator $T$ we define as usual the vector--valued extension $\overline{T}_{q}$ as
\[
\overline{T}_{q}f(x)=
\left(
\sum_{j=1}^{\infty} |Tf_{j}(x)|^{q}
\right)^{1/q}
\]
where ${f}=\{f_j\}_{j=1}^{\infty}$ is a vector-valued function. Then the following estimate is sharp:
\begin{equation}\label{eq:CZ-vector-valued}
\|\overline T_q(f)\|_{L^{ p }(w)} \le c\, [w]^{ \max\{1,\frac{1}{p-1} \}}_{A_{p}}
\left\|\overline{f}_q\right\|_{L^p(w)}, \qquad w \in A_{ p}
\end{equation}
where $ \overline{f}_q(x)= \left(\sum_{j=1}^{\infty} |f_{j}(x)|^{q}\right)^{1/q}$.
\end{corollary}
\begin{proof}[Proof of Corollary \ref{cor:CZ-Commutators} and Corollary \ref{cor:vector-valued-CZ}]
All the previous inequalities are known to be true (see \cite{Hytonen:A2} for the case of C--Z operators and \cite{CPP} for the case of commutators). The bound in \eqref{eq:CZ-vector-valued} is a very recent result from \cite{HH} (see also \cite{scurry} for an alternative proof).
The sharpness follows immediately from Theorem \ref{thm:AbstractBuckley} if we check the appropriate values of $\alpha_T$ and $\gamma_T$ for each case.
In order to apply Theorem \ref{thm:AbstractBuckley} here, we need to exploit the bad behaviour of $T$ at the endpoint. We remark that the upper bound in \eqref{eq:endpointH} holds for any C--Z operator, but we need to focus on those operators $T$ for which the upper bound in \eqref{eq:endpointH} is attained.
A general condition for this can be found in \cite[p. 42]{ste93}: suppose that the kernel $K$ of a C--Z operator $T$ on $\mathbb{R}^n$ satisfies
\begin{equation}\label{eq:kernel}
|K(x,y)|\ge\frac{c}{|x-y|^n}
\end{equation}
for some $c>0$ and all $x \neq y$. Then $T$ exhibits the same endpoint behaviour as the Hilbert transform in \eqref{eq:endpointH}:
\begin{equation}\label{eq:endpointCZ}
\|T\|_{L^p(\mathbb{R}^n)}\sim \frac{1}{p-1}, \qquad p\to 1,
\end{equation}
which clearly implies that $\alpha_T=1$ (we can consider the Hilbert transform $H$ as a model example of this phenomenon in $\mathbb{R}$, and the Riesz transforms in $\mathbb{R}^n$, $n\ge 2$). Therefore, for any such operator $T$, we have that $\alpha_T=1$. The same kind of argument shows that $\gamma_T=1$, and then we conclude that \eqref{eq:CZ} is sharp. Since it is clear that the same holds for the vector-valued extension, we conclude that \eqref{eq:CZ-vector-valued} is also sharp.
For the commutator $[b,T]$, if $T$ is a C--Z operator with a kernel $K$ satisfying \eqref{eq:kernel}, we have that $\alpha_{[b,T]}=\gamma_{[b,T]}=2$. Similarly, for the $k$-th iterated commutator $T_b^k$, we have that $\alpha_{T_b^k}=\gamma_{T_b^k}=k+1$. This concludes the proof of the corollary.
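For instance, in the case of $T_b^k$, Theorem \ref{thm:AbstractBuckley} applied with $p_0=p$ gives the lower bound
\begin{equation*}
\beta\ge \max\Big\{\gamma_{T_b^k},\,\frac{\alpha_{T_b^k}}{p-1}\Big\}=\max\Big\{k+1,\,\frac{k+1}{p-1}\Big\}=(k+1)\max\Big\{1,\frac{1}{p-1}\Big\},
\end{equation*}
which is exactly the exponent in \eqref{eq:k-commutator}.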
\end{proof}
As a final application of this result for large kernels, we present the following consequence of Theorem \ref{thm:AbstractBuckley} for the optimality of weighted estimates for Bochner-Riesz multipliers. For $\lambda>0$ and $R>0$, these operators are defined by the formula
\begin{equation}\label{eq:def-BR}
(B^\lambda_R f)(x)=\int_{\mathbb R^n}\left(1-\left(|\xi|/R\right)^2\right)^\lambda_+ \hat f(\xi)
e^{2\pi i \xi\cdot x}\ d\xi,
\end{equation}
where $\hat f$ denotes the Fourier transform of $f$.
For $R=1$ we write simply $B^\lambda$. It is a known fact that this operator has a kernel $K_\lambda(x)$ defined by
\begin{equation}\label{eq:kernel-BR}
K_\lambda(x)= \frac{\Gamma(\lambda+1)}{\pi^\lambda}\frac{J_{n/2+\lambda}(2\pi|x|)}{|x|^{n/2+\lambda}},
\end{equation}
where $\Gamma$ is the Gamma function and $J_\eta$ is the Bessel function of order $\eta$ (see \cite[p.197]{GrafakosCF}).
\begin{corollary}\label{cor:BR}
Let $1<p<\infty$. Suppose further that the following estimate holds
\begin{equation}\label{eq:weighted-BR}
\|B^{(n-1)/2}\|_{L^p(w)} \le c \, [w]^{\beta}_{A_p},
\end{equation}
for any $w\in A_p$, where the constant $c$ is independent of the weight. Then $\beta\ge \max\left \{1;\frac{1}{p-1}\right \}$.
\end{corollary}
\begin{proof}
The proof is immediate once we check that the kernel has the size required in \eqref{eq:kernel}. To see this, we use the known asymptotics for Bessel functions, namely
\begin{equation*}
J_\eta(r) = cr^{-1/2}\cos(r-\tau)+O(r^{-3/2})
\end{equation*}
for some constants $c,\tau>0$ and $r$ large (see \cite[p.338, Example 1.4]{ste93}). Combining this with \eqref{eq:kernel-BR}, we obtain that
\begin{equation*}
K_{(n-1)/2}(x)\sim \frac{\cos(|x|-\tau)+O(|x|^{-1})}{|x|^{n}}.
\end{equation*}
Although this kernel does not satisfy \eqref{eq:kernel} pointwise, testing on the indicator function of the unit cube and integrating over the set where $|\cos(|x|-\tau)|>1/2$ shows that the endpoint behaviour \eqref{eq:endpointCZ} still holds, and therefore $\alpha_{B^{(n-1)/2}}=\gamma_{B^{(n-1)/2}}=1$.
\end{proof}
In particular, this result shows that the claimed norm inequality for the maximal Bochner-Riesz operator from \cite{Li-Sun} cannot hold (see also \cite{Li-Sun-corrigendum}).
\section{Maximal operators and square functions}\label{sec:maximal-and-square}
In this section we show how to derive sharp bounds for maximal-type operators. We also include a new result for the $k$-iterated Hardy--Littlewood maximal operator.
\subsection{Iterated maximal operator}\label{sec:iterated-maximal}
Let $k$ be any positive integer; the $k$-th iteration of the maximal function is defined inductively by $M^k=M(M^{k-1})$, with $M^1:=M$. For this operator, we have the following sharp weighted estimate.
\begin{corollary} \label{cor:iterated-maximal}
Let $M$ be the Hardy--Littlewood maximal function, let $1<p<\infty$ and let $w\in A_p$. Then
\begin{equation}\label{eq:k-maximal}
\|M^k\|_{L^p(w)}\leq c\,[w]_{A_p}^{\frac{k}{p-1}},
\end{equation}
and the exponent is sharp.
\end{corollary}
\begin{proof}
The bound follows directly by iterating Buckley's estimate \eqref{eq:buckley}, and the sharpness is a consequence of the main result, Theorem \ref{thm:AbstractBuckley}, since it is not difficult to verify that in this case $\alpha_{M^k}=k$.
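Concretely, since the operator norm is submultiplicative under composition, $k$ applications of \eqref{eq:buckley} give
\begin{equation*}
\|M^k\|_{L^p(w)}\le \|M\|_{L^p(w)}^{k}\le \big(c\,[w]_{A_p}^{\frac{1}{p-1}}\big)^{k}=c^k\,[w]_{A_p}^{\frac{k}{p-1}}.
\end{equation*}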
For the iterated maximal function we also have that, for large $p$,
\begin{equation*}
\|M^k\|_{L^p(\mathbb{R}^n)}\sim 1.
\end{equation*}
Therefore, we have that $\gamma_{M^k}=0$ and then \eqref{eq:k-maximal} is sharp.
\end{proof}
\subsection{Orlicz-type Maximal functions}\label{sec:orlicz}
In this section we study maximal operators defined in terms of Orlicz norms. This kind of maximal operator allows us to consider intermediate operators between the integer iterations of $M$. To be more precise, let us briefly recall some definitions and properties.
A function $\Phi:[0,\infty) \rightarrow [0,\infty)$ is called a Young function if it is continuous, convex, increasing and satisfies $\Phi(0)=0$ and $\Phi(t) \rightarrow \infty$ as $t \rightarrow \infty$. The space $L_{\Phi}$ is a Banach function space with the Luxemburg norm defined by
\[
\|f\|_{\Phi} =\inf\left\{ t >0: \int_{\mathbb{R}^n}
\Phi\left( \frac{ |f|}{t }\right) \, dx \le 1 \right\}.
\]
Given a cube $Q$, we can also define the localized Luxemburg norm on $Q$ as
\begin{equation*}
\|f\|_{\Phi,Q}= \inf\left\{t >0:
\frac{1}{|Q|}\int_{Q} \Phi\left(\frac{ |f|}{ t }\right) \,
dx \le 1\right\}.
\end{equation*}
The corresponding maximal function is
\begin{equation}\label{eq:maximaltype}
M_{\Phi}f(x)= \sup_{x\in Q} \|f\|_{\Phi,Q}.
\end{equation}
We are interested here in the logarithmic scale given by the functions $\Phi_\lambda(t):=t\log^\lambda(e+t)$, $\lambda \in [0,\infty)$. Note that the case $\lambda=0$ corresponds to $M$. The case $\lambda=k\in \mathbb N$ corresponds to $M_{L(\log L)^{k}}$, which is pointwise comparable to $M^{k+1}$ (see, for example, \cite{perez95:JFA}).
For noninteger values of $\lambda$, we denote by $M_{\Phi_\lambda}=M_{L(\log L)^\lambda }$ the associated maximal operator.
By Corollary \ref{cor:iterated-maximal} and the pointwise comparison above, the sharp exponent in weighted estimates for these operators is $1/(p-1)$ for $\lambda=0$ and $(k+1)/(p-1)$ for $\lambda=k\in\mathbb N$. The following theorem provides a sharp bound for the intermediate values $\lambda\in\mathbb R_{+} \setminus \mathbb N$.
\begin{theorem} \label{thm:Orlicz}
Let $\lambda>0$, $1<p<\infty$ and $w\in A_p$. Then
\begin{equation}\label{eq:orlicz}
\|M_{\Phi_\lambda}\|_{L^p(w)}\leq c\, [w]_{A_p}^{\frac{1}{p}}[\sigma]_{A_\infty}^{\frac{1}{p}+\lambda} \leq c\, [w]_{A_p}^{\frac{1+\lambda}{p-1}},
\end{equation}
%
where $\sigma=w^{1-p'}$. Furthermore, the exponents are sharp.
\end{theorem}
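The second inequality in \eqref{eq:orlicz} follows from the first one: since $[\sigma]_{A_\infty}\le[\sigma]_{A_{p'}}=[w]_{A_p}^{\frac{1}{p-1}}$, the exponents combine as
\begin{equation*}
\frac{1}{p}+\frac{1}{p-1}\Big(\frac{1}{p}+\lambda\Big)=\frac{(p-1)+1+\lambda p}{p(p-1)}=\frac{1+\lambda}{p-1}.
\end{equation*}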
\begin{proof}
We start with the following variant of the classical Fefferman-Stein inequality which holds for any weight $w$. For $t>0$ and any nonnegative function $f$, we have that
\begin{equation}\label{eq:FeffStein-MPhi}
w\left(\left\{x\in \mathbb{R}^n: M_{\Phi_\lambda}f(x)>t \right\}\right)\le c\int_{\mathbb {R}^n}
\Phi_\lambda\left(\frac{f(x)}{t}\right)\, Mw(x)\ dx,
\end{equation}
where $M$ is the usual Hardy--Littlewood maximal function and $c$ is a constant independent of the weight $w$. The result
can be obtained using a Calder\'on--Zygmund decomposition adapted to $M_{\Phi_\lambda}$ as in Lemma 4.1 from \cite{perez95}. We leave the details for the interested reader.
Now, if the weight $w$ is in $A_1$, then inequality \eqref{eq:FeffStein-MPhi} yields the linear dependence on $[w]_{A_1}$,
\begin{equation*}
w\left(\left\{x\in \mathbb{R}^n:M_{\Phi_\lambda}f(x)>t \right\}\right)\le c\,[w]_{A_1}\int_{\mathbb {R}^n}
\Phi_\lambda\left(\frac{f(x)}{t}\right)\, w(x)\ dx.
\end{equation*}
From this estimate, using an extrapolation-type argument as in \cite[Section 4.1]{perez-lecturenotes}, we easily derive that, for any $w\in A_p$,
\begin{equation}\label{eq:linearAp-MPhi}
w\left(\left\{x\in \mathbb{R}^n:M_{\Phi_\lambda}f(x)>t \right\}\right)\le c\,[w]_{A_p}\int_{\mathbb {R}^n}
\Phi_\lambda\left(\frac{f(x)}{t}\right)^p\, w(x)\ dx.
\end{equation}
Now, we follow the same ideas from \cite[Theorem 1.3]{HPR1}. We write the $L^p$ norm as
\begin{equation*}
\|M_{\Phi_\lambda} f\|_{L^p(w)}^p \leq c \int_{0}^{\infty} t^{p} w \{x\in \mathbb{R}^n:M_{\Phi_\lambda} f_t(x) > t\}
\frac{dt}{t}
\end{equation*}
where $f_t:=f\chi_{\{f>t\}}$. Since $w\in A_p$, by the precise open property of the $A_p$ classes we have that $w\in A_{p-\varepsilon}$ with $\varepsilon\sim \frac{1}{[\sigma]_{A_\infty}}$. Moreover, the constants satisfy $[w]_{A_{p-\varepsilon}}\le c[w]_{A_p}$ (see \cite[Theorem 1.2]{HPR1}). We apply \eqref{eq:linearAp-MPhi} with $p-\varepsilon$ in place of $p$ to obtain, after a change of variables,
\begin{eqnarray*}
\|M_{\Phi_\lambda} f\|_{L^p(w)}^p & \leq & c\, [w]_{A_{p}}\int_{\mathbb{R}^n} f^p \int_{1}^{\infty} \frac{\Phi_\lambda(t)^{p-\varepsilon}}{t^p}\frac{dt}{t}\ w \ dx \\
& \le & c\, [w]_{A_{p}}\int_1^\infty \frac{(\log(e+ t))^{p\lambda}}{t^\varepsilon}\frac{dt}{t}\ \|f\|^p_{L^p(w)}\\
& \le & c\, [w]_{A_{p}}\left(\frac{1}{\varepsilon}\right)^{\lambda p+1}\ \|f\|^p_{L^p(w)}\\
& \le & c\, [w]_{A_{p}}[\sigma]_{A_\infty}^{\lambda p+1}\ \|f\|^p_{L^p(w)}.
\end{eqnarray*}
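The third inequality follows, for $0<\varepsilon\le 1$, from the substitution $u=\log t$ followed by $s=\varepsilon u$:
\begin{equation*}
\int_1^\infty \frac{(\log(e+ t))^{p\lambda}}{t^\varepsilon}\frac{dt}{t}
\lesssim \int_0^\infty (1+u)^{p\lambda}\,e^{-\varepsilon u}\,du
=\frac{1}{\varepsilon}\int_0^\infty \Big(1+\frac{s}{\varepsilon}\Big)^{p\lambda} e^{-s}\,ds
\lesssim \Big(\frac{1}{\varepsilon}\Big)^{\lambda p+1}.
\end{equation*}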
Taking $p$-th roots, we obtain the desired estimate \eqref{eq:orlicz}.
Regarding the sharpness, we will prove now that the exponent in the last term of \eqref{eq:orlicz} cannot be improved. This follows from Theorem \ref{thm:AbstractBuckley} since it is easy to verify that
\begin{equation*}
\|M_{\Phi_\lambda}\|_{L^p(\mathbb{R}^n)}\sim \frac{1}{(p-1)^{1+\lambda}}.
\end{equation*}
From this estimate we conclude that the endpoint order is $\alpha_T=1+\lambda$ for $T=M_{\Phi_\lambda}$. As a final remark, we mention that the exponents of the middle term in \eqref{eq:orlicz} are also sharp, by the same argument as in Section \ref{subsec:mixed}.
\end{proof}
\subsection{Vector valued maximal functions}\label{sec:vector-valued-maximal}
We now consider the vector-valued extension of the H--L maximal function. For $1<q<\infty$ and $1<p<\infty$, this operator is defined as
\begin{equation*}
\overline{M}_qf(x)=\Big( \sum_{j=1}^{\infty} (Mf_j(x))^q \Big)^{1/q},
\end{equation*}
where ${f}=\{f_j\}_{j=1}^{\infty}$ is a vector-valued function. For this operator we obtain this corollary.
\begin{corollary}\label{cor:vector-valued-maximal}
For $1<p<\infty$ and any $w\in A_p$, the following norm inequality is sharp:
\begin{equation}\label{eq:vector-valued-maximal}
\|\overline{M}_qf \|_{L^p(w)}\le c\, [w]_{A_p}^{\max\{\frac{1}{q},\frac{1}{p-1}\}}\|\overline{f}_q\|_{L^p(w)}, \qquad w \in A_{ p}.
\end{equation}
\end{corollary}
\begin{proof}
The bound was proved in \cite{CMP-ADV}. For the sharpness of \eqref{eq:vector-valued-maximal}, although we cannot apply our main Theorem \ref{thm:AbstractBuckley} directly, it is easy to see that once we write the $L^p$ norm of $\left(\sum_j M(f_j)^q\right)^{1/q}$, the same arguments yield the desired result, namely, the analogue of Theorem \ref{thm:AbstractBuckley} in the vector-valued setting. Therefore, the sharpness will follow once we check the values of $\alpha_{\overline{M}_q}$ and $\gamma_{\overline{M}_q}$. The fact that $\alpha_{\overline{M}_q}=1$ can be verified in the same way as in the case $q=1$. For $\gamma_{\overline{M}_q}$, one can find an example of a vector-valued function satisfying $\|\overline{M}_qf\|_{L^p}\ge c\, p^{1/q}\|\overline{f}_q\|_{L^p}$, which implies that $\gamma_{\overline{M}_q}=1/q$. This was already known; see \cite[p.75]{ste93} for the classic proof.
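With these values, the vector-valued analogue of Theorem \ref{thm:AbstractBuckley} yields
\begin{equation*}
\beta\ge\max\Big\{\gamma_{\overline{M}_q},\,\frac{\alpha_{\overline{M}_q}}{p-1}\Big\}=\max\Big\{\frac{1}{q},\,\frac{1}{p-1}\Big\},
\end{equation*}
which is exactly the exponent in \eqref{eq:vector-valued-maximal}.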
\end{proof}
\subsection{Square functions}\label{sec:square}
We include here the case of the dyadic square function $S_d$, since it behaves similarly to the vector--valued maximal function. It is defined as follows. Let $\Delta$ denote the collection of dyadic cubes in $\mathbb{R}^n$. Given $Q\in\Delta$, let $\hat Q$ be its dyadic parent, that is, the unique dyadic cube containing $Q$ whose side-length is twice that of $Q$. Then, the dyadic square function is the operator
\begin{equation*}
S_df(x) = \left(\sum_{Q\in\Delta} (f_Q-f_{\hat Q})^2\chi_Q(x)\right)^{1/2}
\end{equation*}
where $f_Q = \Xint-_Qf(x)\ dx$.
For this operator the result is the following corollary of Theorem \ref{thm:AbstractBuckley}.
\begin{corollary}\label{cor:square-dyadic}
For $1<p<\infty$, and for any $w\in A_p$, the following norm inequality is sharp:
\begin{equation}\label{eq:square-dyadic}
\|S_df\|_{L^p(w)}\le c\,[w]^{\max\{\frac{1}{2},\frac{1}{p-1}\}}\|f\|_{L^p(w)}.
\end{equation}
\end{corollary}
\begin{proof}
Again, the inequality is known to be true (see \cite{CMP-ADV} and references therein). For the sharpness, we just check the values of the two endpoint orders. We first note that $\alpha_{S_d}=1$ by looking at the indicator function of the unit cube (as in the case of the maximal function). The value of $\gamma_{S_d}=\frac{1}{2}$ was previously known, see for instance \cite[p. 434]{CMP-ADV}. In particular, there is an explicit example of a function $f$ such that
$\|S_d f\|_{L^p}\ge c p^{1/2}\|f\|_{L^p}$.
It should be mentioned that the case $p\to\infty$ was already implicitly considered in \cite{Fefferman-Pipher}.
\end{proof}
We remark that this type of argument for the sharpness of weighted estimates already appeared in \cite{CMP-ADV} for the square function and the vector-valued maximal function. However, it was used only for these two cases and only for large values of $p$.
\section{Fractional integral operators }\label{sec:fractional}
In the same spirit as in the previous sections, we can prove the sharpness of weighted estimates for fractional integral operators. For $0<\alpha<n$, the fractional integral operator or Riesz
potential $I_\alpha$ is defined by
\begin{equation*}
I_\alpha f(x)=\int_{\mathbb{R}^n} \frac{f(y)}{|x-y|^{n-\alpha}}\,dy.
\end{equation*}
We also consider the related fractional maximal operator $M_\alpha$ given by
\begin{equation*}
M_\alpha f(x)=\sup_{Q\ni x} \frac{1}{|Q|^{1-\alpha/n}}\int_Q |f(y)| \ dy.
\end{equation*}
It is well known (see \cite{MW-fractional}) that these operators are bounded from $L^p(w^p)$ to $L^q(w^q)$ if and only if the exponents $p$ and $q$ are related by the equation $1/p-1/q=\alpha/n$ and $w$ satisfies the so-called $A_{p,q}$ condition. More precisely, $w\in A_{p,q}$ if
\begin{equation*}
[w]_{A_{p,q}}\equiv \sup_Q\left(\frac{1}{|Q|}\int_Q w^q \ dx\right)\left(\frac{1}{|Q|}\int_Q w^{-p'}\ dx\right)^{q/p'}<\infty.
\end{equation*}
%
An extrapolation theorem for these classes of weights, often called the off-diagonal extrapolation theorem, was obtained for the first time by Harboure, Mac\'ias and Segovia in \cite{HMS}, although we will use a more recent version from \cite{Javi-Duo-JFA}.
We have the following proposition.
\begin{proposition}
Suppose that $0\leq \alpha <n$, $1<p<n/\alpha$ and $q$ is defined by the relationship $1/q=1/p-\alpha/n$. If $w\in A_{p,q}$, then the following inequalities are sharp:
\begin{equation}\label{eq:frac-maximal}
\|M_\alpha \|_{L^p(w^p) \to L^q(w^q)} \leq c\,
[w]_{A_{p,q}}^{\frac{p'}{q}(1-\frac{\alpha}{n})}.
\end{equation}
and
\begin{equation}\label{eq:frac-integral}
\|I_\alpha\|_{L^p(w^p) \to L^q(w^q)}\leq
c\,[w]_{A_{p,q}}^{(1-\frac{\alpha}{n})\max\{1,\frac{p'}{q}\}}.
\end{equation}
\end{proposition}
\begin{proof}
Both inequalities are known to be true, and the proofs can be found in \cite{LMPT}, where the sharpness was also proved by constructing appropriate examples. We show here that we can derive the sharpness by using a version of our approach adapted to the setting of off-diagonal extrapolation. Let $1< p_0<\infty$ and $0<q_0<\infty$ be such that $1/p_0-1/q_0=\alpha/n$. Suppose that we have, for some $\beta>0$, the following inequality:
\begin{equation*}
\|M_\alpha \|_{L^{p_0}(w^{p_0}) \to L^{q_0}(w^{q_0})} \leq c\,
[w]_{A_{p_0,q_0}}^\beta,
\end{equation*}
for any $w\in A_{p_0,q_0}$. We apply Theorem 5.1 from \cite{Javi-Duo-JFA} to obtain, for any $\frac{n}{n-\alpha}<q<q_0$, the unweighted estimate
\begin{equation}\label{eq:fract-unweighted}
\|M_\alpha f \|_{ L^q(\mathbb{R}^n)} \leq c\, \|M\|_{L^{q\frac{n-\alpha}{n}}(\mathbb{R}^n)}^{\beta(q_0-q)\frac{n-\alpha}{n}}\|f \|_{ L^p(\mathbb{R}^n)}
\end{equation}
where $M$ is the usual H--L maximal operator and $p$ is defined by $\frac{1}{p}=\frac{1}{q}+\frac{\alpha}{n}$. Now we need to use the analogue of the endpoint order for the fractional maximal operator. From \eqref{eq:fract-unweighted} we can derive the following inequality:
\begin{equation}\label{eq:fract-orders}
\left(q-\frac{n}{n-\alpha}\right)^{-1/q} \leq c\, \left(q-\frac{n}{n-\alpha}\right)^{-\beta(q_0-q)\frac{n-\alpha}{n}}.
\end{equation}
This can be done by estimating the operator norm of the fractional maximal operator.
On the left hand side of \eqref{eq:fract-orders} we used the fact that
\begin{equation*}
\|M_\alpha\|^q_{L^p(\mathbb{R}^n)\to L^q(\mathbb{R}^n)}\ge \frac{1}{q-\frac{n}{n-\alpha}}.
\end{equation*}
On the right hand side of \eqref{eq:fract-orders} we use again that $\|M\|_{L^r(\mathbb{R}^n)}\sim 1/(r-1)$ for $r$ close to $1$. Arguing as before, letting $q$ tend to the critical value $\frac{n}{n-\alpha}$, we obtain that
\begin{equation*}
\beta\ge (1-\alpha/n)\frac{1}{q_0\frac{n-\alpha}{n}-1}=(1-\alpha/n)\frac{p'_0}{q_0}.
\end{equation*}
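The last equality uses the relation $\frac{1}{q_0}=\frac{1}{p_0}-\frac{\alpha}{n}$: indeed,
\begin{equation*}
q_0\,\frac{n-\alpha}{n}-1 = q_0\Big(1-\frac{\alpha}{n}-\frac{1}{q_0}\Big) = q_0\Big(1-\frac{1}{p_0}\Big)=\frac{q_0}{p_0'}.
\end{equation*}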
The sharpness for the case of the fractional integral, namely inequality \eqref{eq:frac-integral}, follows essentially the same steps. We need to prove that the inequality
\begin{equation*}
\|I_\alpha\|_{L^p(w^p) \to L^q(w^q)}\leq
c\,[w]_{A_{p,q}}^\beta, \qquad w\in A_{p,q}
\end{equation*}
implies that $\beta\ge (1-\frac{\alpha}{n})\max\{1,\frac{p'}{q}\}$. For the bound $\beta\ge(1-\frac{\alpha}{n})\frac{p'}{q}$ we can repeat the previous proof, since the fractional integral also satisfies that
\begin{equation*}
\|I_\alpha\|^q_{L^p(\mathbb{R}^n)\to L^q(\mathbb{R}^n)}\ge \frac{1}{q-\frac{n}{n-\alpha}}.
\end{equation*}
The other case, namely $\beta\ge 1-\frac{\alpha}{n}$, follows easily by duality. We leave the details to the interested reader.
\end{proof}
\section{Muckenhoupt bases}\label{sec:muckenhoupt-bases}
In this section we address the problem of finding optimal exponents for maximal operators defined over Muckenhoupt bases. Recall that given a family $\mathcal{B}$ of open sets, we can define the maximal operator $M_\mathcal{B}$ as
\begin{equation*}
M_\mathcal{B}f(x)=\sup_{x\in B\in \mathcal{B}}\Xint-_B |f(y)| \ dy,
\end{equation*}
if $x$ belongs to some set $B\in \mathcal{B}$, and $M_\mathcal{B}f(x)=0$ otherwise. The natural class of weights associated to this operator is defined in the same way as the classical Muckenhoupt classes: $w\in A_{p,\mathcal{B}}$ if
\begin{equation*}
[w]_{A_{p,\mathcal{B}}}:=\sup_{B\in\mathcal{B}}\left(\frac{1}{|B|}\int_{B}w(y)\ dy \right)\left(\frac{1}{|B|}\int_{B}w(y)^{1-p'}\ dy \right)^{p-1}<\infty.
\end{equation*}
We say that a basis $\mathcal{B}$ is a Muckenhoupt basis if $M_\mathcal{B}$ is bounded on $L^p(w)$ whenever $w\in A_{p,\mathcal{B}}$ (see \cite{perez-pubmat}).
In this generality, we can also prove a lower bound for the best possible exponent in a weighted estimate. The only requirement on the operator $M_\mathcal{B}$ is that its $L^p$ norm must blow up as $p$ goes to $1$ (no matter the rate of blow-up). Precisely, we have the following theorem.
\begin{theorem}\label{thm:muckenhoupt-bases}
Let $\mathcal{B}$ be a Muckenhoupt basis. Suppose in addition that the associated maximal operator $M_\mathcal{B}$ satisfies the following weighted estimate:
\begin{equation}
\|M_\mathcal{B}\|_{L^{p_0}(w)}\leq c\, [w]_{A_{p_0,\mathcal{B}}}^{\beta}.
\end{equation}
If $\displaystyle\limsup_{p\to 1^+}\|M_\mathcal{B}\|_{L^p(\mathbb{R}^n)}=+\infty$, then $\beta\ge \frac{1}{p_0-1}$.
\end{theorem}
\begin{proof}
The idea is to perform the iteration technique from Theorem \ref{thm:AbstractBuckley} but with $M_\mathcal{B}$ instead of the standard H--L maximal operator. Then we obtain, for $ 1<p<p_0$, that
\begin{equation} \label{eq:CF-muckenhoupt-bases}
\|M_\mathcal{B}\|_{L^{p}(\mathbb{R}^n) } \leq c\, \|M_\mathcal{B}\|_{L^{p}(\mathbb{R}^n) } ^{\beta(p_0-p)}\le c\, \|M_\mathcal{B}\|_{L^{p}(\mathbb{R}^n) } ^{\beta(p_0-1)}.
\end{equation}
The last inequality holds since $\|M_\mathcal{B}\|_{L^{p}(\mathbb{R}^n) }\ge 1$.
We remark here that, since we are comparing $M_\mathcal{B}$ with itself, it is irrelevant to know the precise quantitative behaviour of its $L^p$ norm for $p$ close to $1$. In fact, we cannot use any estimate like \eqref{eq:maximal-pto1}, since we are dealing with a generic basis. Just knowing that the $L^p$ norm blows up as $p$ goes to $1$ allows us to conclude that $\beta\ge \frac{1}{p_0-1}$, as the following computation shows.
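Indeed, writing $X_p:=\|M_\mathcal{B}\|_{L^{p}(\mathbb{R}^n)}$, inequality \eqref{eq:CF-muckenhoupt-bases} can be rewritten as
\begin{equation*}
X_p^{\,1-\beta(p_0-1)}\le c, \qquad 1<p<p_0,
\end{equation*}
and since, by hypothesis, $X_p\to+\infty$ along a sequence $p\to 1^+$, this forces $1-\beta(p_0-1)\le 0$, that is, $\beta\ge\frac{1}{p_0-1}$.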
\end{proof}
As an example of this result, we can show that the estimate for Calder\'on weights from \cite{DMRO-calderon} is sharp. Precisely, for the basis $\mathcal{B}_0$ of open sets in $\mathbb{R}$ of the form $(0,b)$, $b>0$, the authors prove that the associated maximal operator $N$, defined as
\begin{equation*}
Nf(t)=\sup_{b>t}\frac{1}{b}\int_0^b |f(x)|\ dx
\end{equation*}
is bounded on $L^p(w)$ if and only if $w\in A_{p,\mathcal{B}_0}$ and, moreover, that
\begin{equation*}
\|N\|_{L^p(w)}\le c\, [w]_{A_{p,\mathcal{B}_0}}^{\frac{1}{p-1}}.
\end{equation*}
By the preceding result, this inequality is sharp with respect to the exponent on the characteristic of the weight.
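Note that the blow-up hypothesis of Theorem \ref{thm:muckenhoupt-bases} indeed holds in this case: for nonnegative $f$ we have $Nf(t)\ge \frac{1}{t}\int_0^t f(x)\,dx$ for $t>0$, so by the sharpness of the constant in the classical Hardy inequality,
\begin{equation*}
\|N\|_{L^p(\mathbb{R})}\ge \frac{p}{p-1}\longrightarrow +\infty \qquad (p\to 1^+).
\end{equation*}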
We can also apply Theorem \ref{thm:muckenhoupt-bases} to the basis of rectangles in $\mathbb R^n$ with sides parallel to the coordinate axes. We detail this case in the following subsection.
\subsection{The strong maximal function}\label{sec:strong}
All the sharp results we have obtained here concern the classical or one-parameter theory, where the operators commute with one-parameter dilations of $\mathbb{R}^n$. A natural question would be to study this kind of sharp quantitative estimate for {\em multi-parameter} operators. As a first step, we have tried to apply the approach presented here to the most basic example of the multi-parameter theory, that is, the strong maximal function. However, we have not obtained a satisfactory answer even in this case.
Let us first recall some definitions and known estimates in order to understand why our \emph{template} does not work for this operator. For a locally integrable function $f$ on $\mathbb{R}^n$ we denote by $M_s f$ the strong maximal function:
\begin{align*}
M_sf(x)= \sup_{\substack{R\ni x}} \frac{1}{|R|} \int_R |f(y)|dy,\quad x\in\mathbb R^n,
\end{align*}
where the supremum is taken over all rectangles in $\mathbb R^n$ with sides parallel to the coordinate axes. This operator is bounded on $L^p(\mathbb{R}^n)$. Indeed,
\begin{equation}\label{eq:strong R}
\| M_s \|_{L^p(\mathbb{R}^n)} \approx (p')^n
\end{equation}
where $1<p<\infty$. We will say that $w$ belongs to the class $A_p ^*$, $1<p<\infty$, whenever
\begin{align*}
[w]_{A_p ^*}=\sup_{R} \bigg(\frac{1}{|R|}\int_R w \bigg) \bigg( \frac{1}{|R|}\int_R w^{1-p'} \bigg)^{p-1}<+\infty
\end{align*}
where the supremum is taken over all rectangles in $\mathbb R^n$ with sides parallel to the coordinate axes. Thus $A_p^*$ is the class of weights naturally associated with $n$-dimensional intervals. As happens with the Hardy--Littlewood maximal function, this class of weights completely characterizes the boundedness of the strong maximal function on weighted Lebesgue spaces. In fact, it is not difficult to see that
\begin{equation}\label{eq:strong}
\|M_s\|_{L^{ p }(w)} \le c \, [w]^{ \frac{n}{p-1} }_{A_{ p}^*}, \qquad w \in A_{ p}^*.
\end{equation}
To study what the sharp exponent in the last inequality should be, we could reproduce the proof of Theorem \ref{thm:AbstractBuckley}, replacing the maximal function by the strong maximal function in Rubio de Francia's algorithm and making suitable use of estimate \eqref{eq:strong R}. The analogue in the multi-parameter setting is the following result. Suppose that a given operator $T$, bounded on $L^p(\mathbb{R}^n)$ for $1<p<\infty$, satisfies a weighted inequality of the form
\begin{equation*}\label{eq:strong-weighted}
\|T\|_{L^p(w)}\le c\, [w]_{A^*_p}^\beta
\end{equation*}
for any $w\in A_p^*$. In addition, define the endpoint order $\alpha_T$ as before. Then, the same arguments as in Theorem \ref{thm:AbstractBuckley} allow us to conclude that the exponent in \eqref{eq:strong-weighted} must satisfy
\begin{equation*}
\beta\geq\frac{\alpha_{T}}{n(p-1)}.
\end{equation*}
Going back to the case of the strong maximal function, we have that $\alpha_{M_s}=n$, according to Definition \ref{def:orders} and estimate \eqref{eq:strong R}. Therefore we only obtain the trivial lower bound $\beta\ge\frac{n}{n(p-1)}=\frac{1}{p-1}$, which is far from the exponent $\frac{n}{p-1}$ in \eqref{eq:strong}.
\section{Acknowledgements}
We are deeply indebted to Javier Duoandikoetxea for many valuable comments and suggestions on this problem. In particular, he brought to our attention the application of our results to Bochner-Riesz multipliers and Muckenhoupt bases.
The first author is supported by the Spanish Ministry of Science and Innovation grant MTM2012-30748; the second and third authors are also supported by the Junta de Andaluc\'ia, grant FQM-4745.
\bibliographystyle{alpha}
\section{Introduction and statement of the main result}
\subsection{Introduction}
A central problem in modern Harmonic Analysis is the study of sharp norm inequalities for some of the classical operators on weighted Lebesgue spaces $L^p(w)$, $1<p<\infty$. The usual examples include the Hardy--Littlewood (H--L) maximal operator, the Hilbert transform and more general Calder\'on-Zygmund operators (C--Z operators). Here $w$ denotes a non-negative, locally integrable function, that is, a weight. The class of weights for which these operators $T$ are bounded on $L^p(w)$ was identified in \cite{Muckenhoupt:Ap} and in the later works \cite{HMW}, \cite{CF}. This class consists of the Muckenhoupt $A_{p}$ weights, defined by the condition
\begin{equation*}
[w]_{A_p}:=\sup_{Q}\left(\frac{1}{|Q|}\int_{Q}w(y)\ dy \right)\left(\frac{1}{|Q|}\int_{Q}w(y)^{1-p'}\ dy \right)^{p-1}<\infty,
\end{equation*}
where the supremum is taken over all the cubes $Q$ in $\mathbb{R}^n$.
Given any of these operators $T$, the first part of this problem is to look for quantitative bounds on the norm $\|T\|_{L^p(w)}$ in terms of the $A_p$ constant of the weight. The following step is then to find the sharp dependence, typically with respect to the power of $[w]_{A_p}$. In recent years, the answer to this last question has led to a fruitful activity and the development of new tools in Harmonic Analysis. Firstly, Buckley \cite{Buckley} identified the sharp exponent in the case of the H--L maximal function, i.e.,
\begin{equation}\label{eq:buckley}
\|M\|_{L^{ p }(w)} \le c \, [w]^{ \frac{1}{p-1} }_{A_{ p}}, \qquad w \in A_{ p},
\end{equation}
and $\frac{1}{p-1}$ cannot be replaced with $\frac{1-\varepsilon}{p-1}$ for any $\varepsilon>0$.
Afterwards, Petermichl \cite{Petermichl:Riesz} showed that
\begin{equation}\label{eq:Petermichl}
\|T\|_{L^{ p }(w)} \le c \, [w]^{ \max\{1, \frac{1}{p-1} \}}_{A_{ p}}, \qquad w \in A_{ p}
\end{equation}
is sharp when $T$ is any Riesz transform.
Similar weighted estimates are known to be true for other classical operators, such as commutators of C--Z operators with BMO functions, the dyadic square function, vector valued maximal operators and fractional integrals. In the case of sharp bounds with respect to the power of the $A_p$ constant of the weight $w$, the sharpness is most frequently proved by constructing specific examples for each operator.
Throughout this paper, we will use the notation $A\lesssim B$ to indicate that there is a constant $c>0$ independent of $A$ and $B$ such that $A\le c B$. By $A\sim B$ we mean that both $A\lesssim B$ and $B\lesssim A$ hold.
\subsection{Main results}
In order to state our main results, we need to introduce the notion of endpoint order for a given operator $T$. To illustrate the aim of the next definition, consider the following example. Let $H$ be the Hilbert transform. Then, it is known that the size of its kernel implies (see \cite[p. 42]{ste93}) that the unweighted $L^p$ norm satisfies
\begin{equation}\label{eq:endpointH}
\|H\|_{L^p(\mathbb{R}^n)}\sim \frac{1}{p-1}.
\end{equation}
The next definition tries to capture this \emph{endpoint order} by looking at the asymptotic behaviour of the $L^p$ norm of a general operator $T$.
\begin{definition}\label{def:orders}
Given a bounded operator $T$ on $L^p(\mathbb{R}^n)$ for $1<p<\infty$, we define the ``endpoint order'' $\alpha_T$ of $T$ as follows:
\begin{equation}\label{eq:endpoint1}
\alpha_T:=\sup\{\alpha\ge 0: \forall \varepsilon>0, \limsup_{p \to 1 } (p-1)^{\alpha-\varepsilon} \|T\|_{L^p(\mathbb{R}^n)} =\infty\}.
\end{equation}
The analogue of \eqref{eq:endpoint1} for large $p$ is the quantity $\gamma_T$, defined as
\begin{equation}\label{eq:endpointINF}
\gamma_T:=\sup\{\gamma\ge 0: \forall \varepsilon>0, \limsup_{p \to \infty } \,\frac{\|T\|_{L^p(\mathbb{R}^n)}}{p^{\gamma-\varepsilon}} =\infty\}.
\end{equation}
\end{definition}
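To illustrate the definition, from \eqref{eq:endpointH} we obtain, for every $\varepsilon>0$,
\begin{equation*}
\limsup_{p \to 1 }\,(p-1)^{1-\varepsilon}\, \|H\|_{L^p(\mathbb{R})}\sim \limsup_{p \to 1 }\,(p-1)^{-\varepsilon}=\infty,
\end{equation*}
while for any $\alpha>1$ one may take $\varepsilon<\alpha-1$ and the corresponding upper limit is $0$; hence $\alpha_H=1$. Similarly, since $H^*=-H$, duality gives $\|H\|_{L^p(\mathbb{R})}=\|H\|_{L^{p'}(\mathbb{R})}\sim p$ for large $p$, whence $\gamma_H=1$.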
This definition may have appeared previously in the literature, but we are not aware of a reference.
Now we can state our main result.
\begin{theorem} \label{thm:AbstractBuckley}
Let $T$ be an operator (not necessarily linear). Suppose further that
for some $1<p_0<\infty$ and for any $w \in A_{ p_0}$
\begin{equation}\label{eq:weighted}
\|T\|_{L^{ p_{0} }(w)} \le c \, [w]^{\beta}_{A_{ p_0}}.
\end{equation}
Then $\beta\ge \max\left \{\gamma_T;\frac{\alpha_T}{p_0-1}\right \}$.
\end{theorem}
The novelty here is that we can exhibit a close connection between the weighted estimate and the unweighted behaviour of the operator at the endpoints $p=1$ and $p=\infty$.
As an application of the method of proof we can derive a lower bound for the optimal exponent that one could expect in a weighted estimate for a maximal operator associated to a generic Muckenhoupt basis $M_\mathcal{B}$ (see Section \ref{sec:muckenhoupt-bases}).
We note that it is not even possible to exhibit an example that works for a general basis.
The only requirement on the operator $M_\mathcal{B}$ is that its $L^p$ norm must blow up as $p$ goes to $1$ (no matter the rate of blow-up). Precisely, we have the following theorem.
\begin{theorem}\label{thm:muckenhoupt-bases}
Let $\mathcal{B}$ be a Muckenhoupt basis. Suppose in addition that the associated maximal operator $M_\mathcal{B}$ satisfies the following weighted estimate:
\begin{equation}
\|M_\mathcal{B}\|_{L^{p_0}(w)}\leq c\, [w]_{A_{p_0,\mathcal{B}}}^{\beta}.
\end{equation}
If $\displaystyle\limsup_{p\to 1^+}\|M_\mathcal{B}\|_{L^p(\mathbb{R}^n)}=+\infty$, then $\beta\ge \frac{1}{p_0-1}$.
\end{theorem}
We also obtain new results for a class of maximal functions defined in terms of Orlicz averages (see Section \ref{sec:orlicz}). For $\Phi_\lambda(t)=t\log^\lambda(e+t)$, $\lambda \in [0,\infty)$, we prove new weighted estimates for the Orlicz maximal operator $M_{\Phi_\lambda}$ which, in addition, are sharp
as a consequence of Theorem \ref{thm:AbstractBuckley}. These operators can be seen as continuous versions of the iterated Hardy-Littlewood maximal function. This continuity is reflected in the exponent of the weighted estimates proved in Theorem \ref{thm:Orlicz}.
The operators $M_{\Phi_\lambda}$ are relevant in many situations, in particular for the study of the so-called ``$A_p$ bump conjectures'' (see \cite[p.187]{CMP-Book}).
Even in cases where a sharp weighted estimate is not known, we obtain a lower bound for the exponent of the $A_p$ constant. This is the case for the Bochner-Riesz multipliers treated in Section \ref{sec:CZ}, Corollary \ref{cor:BR}.
\subsection{Outline}
This article is organized as follows. In Section \ref{sec:proofs} we prove the main result. Then, in Section \ref{sec:applications} we show how to derive the sharpness of some weighted estimates for several classical operators. Finally, in Section \ref{sec:muckenhoupt-bases} our method is used to obtain optimal exponents in the case of maximal functions defined over general Muckenhoupt bases.
\section{Proof of Theorem \ref{thm:AbstractBuckley} }\label{sec:proofs}
We present here the proof of the main result. The key tool is the Rubio de Francia iteration scheme, or algorithm, which produces $A_1$ weights with precise control of the constant of the weight; the main underlying idea comes from extrapolation theory.
The same ideas that we use here were already used to prove sharp weighted estimates for the Hilbert transform with $A_1$ weights in \cite{Fefferman-Pipher}. A more precise and general version was obtained recently in \cite{Javi-Duo-JFA}. We remark that the first part of the proof, namely the proof of inequality \eqref{eq:CF} below, is a consequence of the extrapolation result from \cite{Javi-Duo-JFA} (see Theorem 3.1, first inequality of (3.2), p. 1889). We choose to include the proof for the sake of completeness. For our inequality \eqref{eq:CF-dual}, which is the analogue for large $p$, we perform a slightly different proof.
\begin{proof}[Proof of Theorem \ref{thm:AbstractBuckley}]
We first consider the bound $\beta\ge\frac{\alpha_T}{p_0-1}$. The first step is to prove the following inequality, which can be seen as an unweighted Coifman-Fefferman type inequality relating the operator $T$ to the Hardy--Littlewood maximal function. We have that
\begin{equation} \label{eq:CF}
\|T\|_{L^{p}(\mathbb{R}^n) } \leq c\, \|M\|_{L^{p}(\mathbb{R}^n) } ^{\beta(p_0-p)} \qquad 1<p<p_0.
\end{equation}
Let us start by defining, for $1<p<p_0$, the operator $R$ as follows:
\begin{equation*}
R(h)= \sum_{k=0}^\infty \frac1{2^k}\frac{M^k
(h)}{\|M\|_{L^{p}(\mathbb{R}^n)}^k}.
\end{equation*}
Then we have
(A) \quad $h\le R(h)$
\vspace{.2cm}
(B) \quad $\|R(h)\|_{L^{p}(\mathbb{R}^n)}\le
2\,\|h\|_{L^{p}(\mathbb{R}^n)}$
\vspace{.2cm}
(C) \quad $[R(h)]_{A_{1}}\leq 2\, \|M\|_{L^{p}(\mathbb{R}^n) }$
\vspace{.2cm}
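Properties (A) and (B) are immediate from the definition of $R$, and (C) follows from the sublinearity of $M$:
\begin{equation*}
M(R(h))\le \sum_{k=0}^\infty \frac1{2^k}\frac{M^{k+1}(h)}{\|M\|_{L^{p}(\mathbb{R}^n)}^k}
=2\,\|M\|_{L^{p}(\mathbb{R}^n)}\sum_{j=1}^\infty \frac1{2^{j}}\frac{M^{j}(h)}{\|M\|_{L^{p}(\mathbb{R}^n)}^{j}}
\le 2\,\|M\|_{L^{p}(\mathbb{R}^n)}\,R(h).
\end{equation*}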
To verify \eqref{eq:CF}, consider $1<p<p_0$ and apply H\"older's inequality to obtain
\begin{eqnarray*}
\|T(f)\|_{L^{p}(\mathbb{R}^n)} & = & \Big( \int_{\mathbb{R}^n} |Tf|^{p}\, (Rf)^{-(p_{0}-p) \frac{p}{p_0}}\,(Rf)^{(p_{0}-p) \frac{p}{p_0}}\,dx \Big)^{1/p}\\
& \le & \Big( \int_{\mathbb{R}^n} |Tf|^{p_{0}}\, (Rf)^{-(p_{0}-p) }\,dx \Big)^{1/p_{0}}\,
\Big( \int_{\mathbb{R}^n} (Rf)^{p}\,dx \Big)^{\frac{p_{0}-p}{pp_{0}}}.
\end{eqnarray*}
For clarity in the exposition, we denote $w:=(Rf)^{-(p_{0}-p)}$. Then, by the key hypothesis \eqref{eq:weighted} together with properties $(A)$ and $(B)$ of the Rubio de Francia's algorithm, we have that
\begin{eqnarray*}
\|T(f)\|_{L^{p}(\mathbb{R}^n)} & \le & c\, [w]_{A_{p_{0}}}^{\beta}\, \Big( \int_{\mathbb{R}^n} |f|^{p_{0}}\, w\,dx \Big)^{1/p_{0}}\|f\|_{L^{p}(\mathbb{R}^n) }^{\frac{p_{0}-p}{p_{0}}}\\
&\leq & c\, [w]_{A_{p_{0}}}^{\beta}\, \Big( \int_{\mathbb{R}^n} |f|^{p}\, dx \Big)^{1/p_{0}}
\|f\|_{L^{p}(\mathbb{R}^n) }^{1- \frac{p}{p_{0}}}\\
& = & c\, [w]_{A_{p_{0}}}^{ \beta}\,
\|f\|_{L^{p}(\mathbb{R}^n) }\\
& = & c\, [w^{1-p_0'}]_{A_{p'_{0}}}^{\beta(p_0-1)}
\|f\|_{L^{p}(\mathbb{R}^n) }
\end{eqnarray*}
since $[w]_{A_q}= [w^{1-q'}]^{q-1}_{A_{q'}}$. Now, since $\frac{p_0-p}{p_0-1}<1$, we can use Jensen's inequality to estimate the constant of the weight as follows:
\begin{equation*}
[w^{1-p_0'}]_{A_{p'_{0}}}=[(Rf)^\frac{p_0-p}{p_0-1}]_{A_{p'_{0}}}
\le [R(f)]_{A_{p'_{0}}}^{\frac{p_0-p}{p_0-1} }
\le [R(f)]_{ A_{ 1} }^{\frac{p_0-p}{p_0-1} }.
\end{equation*}
Finally, by making use of property (C), we conclude that
\begin{equation*}
\|T(f)\|_{L^{p}(\mathbb{R}^n)} \le c\, \|M\|_{L^{p}(\mathbb{R}^n) } ^{ \beta (p_0-p) }\,
\|f\|_{L^{p}(\mathbb{R}^n) },
\end{equation*}
which clearly implies \eqref{eq:CF}.
Once we have proved the key inequality \eqref{eq:CF}, we can relate the exponent on the weighted estimate to the endpoint order of $T$. To that end, we will use the known asymptotic behaviour of the unweighted $L^p$ norm of the maximal function.
It is well known that when $p$ is close to $1$, there is a dimensional constant $c$ such that
\begin{equation}\label{eq:maximal-leq-pto1}
\| M \|_{L^{p}(\mathbb{R}^n)} \leq c\,
\frac{1}{p-1}.
\end{equation}
Then, for $p$ close to 1, we obtain
\begin{equation}
\|T\|_{L^{p}(\mathbb{R}^n) } \le c\, (p-1)^{-\beta (p_0-p)} \le c\, (p-1)^{-\beta (p_0-1)} .
\end{equation}
Therefore, multiplying by $(p-1)^{\alpha_T-\varepsilon}$, using the definition of $\alpha_T$ and taking upper limits we have,
\begin{equation*}
+\infty=\limsup_{p\to1}\, (p-1)^{\alpha_T-\varepsilon}\|T\|_{L^{p}(\mathbb{R}^n) }\le c\,\limsup_{p\to1}\, (p-1)^{\alpha_T-\varepsilon-\beta(p_0-1)}.
\end{equation*}
This last inequality implies that $\beta\ge \frac{\alpha_T}{p_0-1}$, so we conclude the first part of the proof of the theorem.
For the proof of the other inequality, $\beta\ge\gamma_T$, we follow the same line of ideas, but with a twist involving the dual space $L^{p'}(\mathbb{R}^n)$. Fix $p>p_0$. We define the iteration operator $R'$ as before, replacing $p$ by $p'$:
\begin{equation*}
R'(h)= \sum_{k=0}^\infty \frac1{2^k}\frac{M^k
(h)}{\|M\|_{L^{p'}(\mathbb{R}^n)}^k}.
\end{equation*}
Then we have
(A') \quad $h\le R'(h)$
\vspace{.2cm}
(B') \quad $\|R'(h)\|_{L^{p'}(\mathbb{R}^n)}\le
2\,\|h\|_{L^{p'}(\mathbb{R}^n)}$
\vspace{.2cm}
(C') \quad $[R'(h)]_{A_{1}}\leq 2\, \|M\|_{L^{p'}(\mathbb{R}^n) }$
\vspace{.2cm}
Fix $f\in L^p(\mathbb{R}^n)$. By duality there exists a non-negative
function $h\in L^{p'}(\mathbb{R}^n)$, $\|h\|_{L^{p'}(\mathbb{R}^n)}=1$, such that,
\begin{eqnarray*}
\|Tf\|_{L^p(\mathbb{R}^n)} & = & \int_{\mathbb{R}^n} |Tf(x)| h(x)\,dx\\
& \le &\int_{\mathbb{R}^n} |Tf| (R' h)^{ \frac{p-p_0}{p_0(p-1)} } \,h^{ \frac{p(p_0-1)}{ p_0(p-1) } } \,dx\\
& \le &\left(\int_{\mathbb{R}^n} |Tf|^{p_0} (R' h)^{ \frac{p-p_0}{p-1} }\,dx\right)^{1/p_0} \left(\int_{\mathbb{R}^n} h^{p'}\,dx\right)^{1/p_0'} \\
& = & \left(\int_{\mathbb{R}^n} |Tf|^{p_0} (R' h)^{ \frac{p-p_0}{p-1} }\,dx\right)^{1/p_0}.
\end{eqnarray*}
Now we use the key hypothesis \eqref{eq:weighted} and H\"older's inequality to obtain
\begin{eqnarray*}
\|Tf\|_{L^p(\mathbb{R}^n)} & \le & c\, [(R' h)^{ \frac{p-p_0}{p-1} } ]_{A_{p_0}} ^{\beta}
\left(\int_{\mathbb{R}^n} |f|^{p_0} (R' h)^{ \frac{p-p_0}{p-1} }\, dx\right)^{1/p_0} \\
& \le & c\, [(R' h)^{ \frac{p-p_0}{p-1} } ]_{A_{p_0}} ^{\beta}
\left(\int_{\mathbb{R}^n} |f|^{p} dx\right)^{1/p} \left(\int_{\mathbb{R}^n} (R' h)^{ p' }\,dx\right)^{\frac{1}{p'}\frac{p-p_0}{ p_0(p-1) } } \\
& \le & c\, [(R' h)^{ \frac{p-p_0}{p-1} } ]_{A_{p_0}} ^{\beta}
\left(\int_{\mathbb{R}^n} |f|^{p} dx\right)^{1/p} \qquad \mbox{by (B') }.\\
& \le & c\, [R' h ]_{A_{p_0}} ^{\beta \frac{p-p_0}{p-1} }
\left(\int_{\mathbb{R}^n} |f|^{p} dx\right)^{1/p} \qquad \mbox{by Jensen's }\\
& \le & c\, \|M\|_{ L^{p'}(\mathbb{R}^n) }^{\beta \frac{p-p_0}{p-1}}
\left(\int_{\mathbb{R}^n} |f|^{p} dx\right)^{1/p} \quad \mbox{by (C')}.
\end{eqnarray*}
Hence,
\begin{equation}\label{eq:CF-dual}
\|T\|_{L^{p}(\mathbb{R}^n) } \leq c\, \|M\|_{L^{p'}(\mathbb{R}^n) }^{\beta \frac{p-p_0}{p-1}} \qquad p>p_0.
\end{equation}
This estimate is similar to, and in some sense dual to, \eqref{eq:CF}. To finish the proof we recall that, for large $p$, namely $p>p_1>p_0$, we have the asymptotic estimate $\| M \|_{L^{p'}(\mathbb{R}^n)} \sim \frac{1}{p'-1}\le p$. Therefore, we have that
\begin{equation*}
\|T\|_{L^{p}(\mathbb{R}^n) } \le c\, p ^{\beta\frac{p-p_0}{p-1}}\le c\, p^\beta
\end{equation*}
since $p>p_1>p_0>1$. As before, dividing by $p^{\gamma_T-\varepsilon}$ and taking upper limits, we obtain
\begin{equation*}
+\infty=\limsup_{p\to \infty}\, \frac{\|T\|_{L^{p}(\mathbb{R}^n) }}{p^{\gamma_T-\varepsilon}}\le c\, \limsup_{p\to \infty} \, p^{\beta-\gamma_T+\varepsilon}.
\end{equation*}
This last inequality implies that $\beta\ge \gamma_T$, so we conclude the proof of the theorem.
\end{proof}
\begin{remark}
The techniques used in the proof of Theorem \ref{thm:AbstractBuckley} actually allow us to deduce sharper results for some particular cases. For the H--L maximal function $M$, by considering the indicator function of the unit cube, it is easy to conclude that
\begin{equation}\label{eq:maximal-sim-p-1}
\|M\|_{L^p(\mathbb{R}^n)} \sim (p-1)^{-1},
\end{equation}
for $p$ close to 1. This precise endpoint behavior allows us to prove that we cannot replace in the weighted inequality \eqref{eq:buckley} the function $t\mapsto t^{(p-1)^{-1}}$ by any other \emph{smaller} growth function $\varphi$. To be more precise, the following inequality fails
\begin{equation*}
\|M\|_{L^{p}(w)} \le c\, \varphi([w]_{A_{p}})
\end{equation*}
for any non-decreasing function $\varphi:[0,\infty) \to [0,\infty)$ such that
\begin{equation*}
\lim_{t\to \infty}\frac{\varphi(t)}{t^\frac{1}{p-1}}=0.
\end{equation*}
The proof follows the same ideas as Theorem \ref{thm:AbstractBuckley}; we leave the details to the interested reader. A similar argument can be used to derive an analogous result for a generic operator $T$ whenever the precise endpoint behavior of $T$ is known.
\end{remark}
\section{Applications}\label{sec:applications}
In this section we show how to derive from our general result in Theorem \ref{thm:AbstractBuckley} the sharpness of several known weighted inequalities. This will follow from Theorem \ref{thm:AbstractBuckley} if we check the appropriate values of $\alpha_T$ and $\gamma_T$ for each case.
\subsection{Operators with large kernel and commutators}\label{sec:CZ}
Consider any C--Z operator whose kernel $K$ satisfies
\begin{equation}\label{eq:kernel}
|K(x,y)|\ge\frac{c}{|x-y|^n}
\end{equation}
for some $c>0$ and all $x \neq y$ (we can consider the Hilbert transform $H$ as a model example of this phenomenon in $\mathbb{R}$, and the Riesz transforms in $\mathbb{R}^n$, $n\ge 2$). Then it is true (see \cite[p. 42]{ste93}) that, for $p\to 1$,
\begin{equation}\label{eq:endpointCZ}
\|T\|_{L^p(\mathbb{R}^n)}\sim \frac{1}{p-1},
\end{equation}
which clearly implies that $\alpha_T=1$. By duality we can see that $\gamma_T=1$. Further, for the commutator $[b,T]$ we use the example from \cite[Section 5, p. 755]{perez97}. There, for the choice $b(x)=\log|x|$ and considering the Hilbert transform $H$, it is shown that
\begin{equation}\label{eq:endpointCZ-commutator}
\|[b,H]\|_{L^p(\mathbb{R})}\gtrsim \frac{1}{(p-1)^2},
\end{equation}
which implies that $\alpha_{[b,H]}=2$. More generally, the $k$-th iterated commutator, defined recursively by
\begin{equation*}
T_b^k:=[b, T_b^{k-1}],\qquad k\in \mathbb N,
\end{equation*}
with $T_b^0:=T$, satisfies $\alpha_{H_b^k}=\gamma_{H_b^k}=k+1$. The value of $\gamma_{H_b^k}$ follows by duality, as in the case of C--Z operators.
We then obtain, as an immediate consequence of Theorem \ref{thm:AbstractBuckley}, that the following known weighted inequalities are sharp (for the proofs, see \cite{Hytonen:A2} for the case of C--Z operators and \cite{CPP} for the case of commutators).
\begin{equation}\label{eq:CZ}
\|T\|_{L^{ p }(w)} \le c \, [w]^{ \max\{1, \frac{1}{p-1} \}}_{A_{ p}}, \qquad w \in A_{ p},
\end{equation}
\begin{equation}\label{eq:commutator}
\|[b,T]\|_{L^{ p }(w)} \le c \, \|b\|_{BMO}\, [w]^{ 2\max\{1, \frac{1}{p-1} \}}_{A_{ p}}, \qquad w \in A_{ p},
\end{equation}
\begin{equation}\label{eq:k-commutator}
\|T_b^k\|_{L^{ p }(w)} \le c \, \|b\|_{BMO}\, [w]^{ (k+1)\max\{1, \frac{1}{p-1} \} }_{A_{ p}}, \qquad w \in A_{ p}.
\end{equation}
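For instance, for \eqref{eq:commutator} the lower bound from Theorem \ref{thm:AbstractBuckley} reads
\begin{equation*}
\beta\ge\max\Big\{\gamma_{[b,T]},\,\frac{\alpha_{[b,T]}}{p-1}\Big\}=\max\Big\{2,\,\frac{2}{p-1}\Big\}=2\max\Big\{1,\frac{1}{p-1}\Big\},
\end{equation*}
which matches the exponent in \eqref{eq:commutator}; the same computation with $k+1$ in place of $2$ gives the sharpness of \eqref{eq:k-commutator}.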
As a final application of this result for large kernels, we present the following consequence of Theorem \ref{thm:AbstractBuckley} for the optimality of weighted estimates for Bochner-Riesz multipliers. For $\lambda>0$ and $R>0$, these operators are defined by the formula
\begin{equation}\label{eq:def-BR}
(B^\lambda_R f)(x)=\int_{\mathbb R^n}\left(1-\left(|\xi|/R\right)^2\right)^\lambda_+ \hat f(\xi)
e^{2\pi i \xi\cdot x}\ d\xi,
\end{equation}
where $\hat f$ denotes the Fourier transform of $f$.
For $R=1$ we write simply $B^\lambda$. It is a known fact that this operator has a kernel $K_\lambda(x)$ defined by
\begin{equation}\label{eq:kernel-BR}
K_\lambda(x)= \frac{\Gamma(\lambda+1)}{\pi^\lambda}\frac{J_{n/2+\lambda}(2\pi|x|)}{|x|^{n/2+\lambda}},
\end{equation}
where $\Gamma$ is the Gamma function and $J_\eta$ is the Bessel function of order $\eta$ (see \cite[p. 352]{GrafacosMF}).
\begin{corollary}\label{cor:BR}
Let $1<p<\infty$. Suppose further that the following estimate holds
\begin{equation}\label{eq:weighted-BR}
\|B^{(n-1)/2}\|_{L^p(w)} \le c \, [w]^{\beta}_{A_p},
\end{equation}
for any $w\in A_p$ and where the constant $c$ is independent of the weight. Then $\beta\ge \max\left \{1;\frac{1}{p-1}\right \}$.
\end{corollary}
\begin{proof}
We use the known asymptotics for Bessel functions, namely
\begin{equation*}
J_\eta(r) = cr^{-1/2}\cos(r-\tau)+O(r^{-3/2})
\end{equation*}
for some constants $c,\tau>0$, $\tau=\tau_{\eta}$, and $r>r_0\gg 1$ (see \cite[p.338, Example 1.4.1, eq. (14)]{ste93}). Combining this with \eqref{eq:kernel-BR}, we obtain that
\begin{equation}\label{eq:kernel-BR2}
K_{(n-1)/2}(x)\sim \frac{\cos(|x|-\tau)+\varphi(|x|)}{|x|^{n}}
\end{equation}
for some $\varphi:\mathbb R\to \mathbb R$ such that $|\varphi(r)|\lesssim r^{-1}$. We see that this kernel does not satisfy the size condition \eqref{eq:kernel}. However, \eqref{eq:kernel-BR2} is sufficient to conclude that $\alpha_{B^{(n-1)/2}}=\gamma_{B^{(n-1)/2}}=1$.
Testing on the indicator function of the unit cube (we use again \cite[p. 42]{ste93}) we obtain, after a change of variables and for some
$r_1\ge r_0$,
\begin{equation*}\label{eq:endpoint-BR1}
\|B^{(n-1)/2}\|^p_{L^p(\mathbb{R}^n)} \gtrsim \int_{r>r_1}
\frac{\left|\cos(r-\tau)+\varphi(r)\right|^p}{r^p}\ dr.
\end{equation*}
We choose $r_2\ge r_1$ large enough so that $|\varphi(r)|<1/4$ for $r>r_2$, and consider the set $A=\{r\in \mathbb R: r > r_2,\ |\cos(r-\tau)|>1/2\}$. We obtain that
\begin{equation*}\label{eq:endpoint-BR2}
\|B^{(n-1)/2}\|^p_{L^p(\mathbb{R}^n)} \gtrsim \int_{A} \frac{1}{r^p}\ dr \gtrsim \int_{r>1} \frac{1}{r^p}\ dr \gtrsim \frac{1}{p-1}
\end{equation*}
for $p$ close to 1. The middle estimate follows from the monotonicity of the function $t\mapsto t^{-p}$, taking into account that the set $A$ can be explicitly described as a union of intervals. The value $\gamma_{B^{(n-1)/2}}=1$ follows by duality.
\end{proof}
In particular, this result shows that the claimed weighted norm inequality for the maximal Bochner-Riesz operator from \cite{Li-Sun} cannot hold (see also \cite{Li-Sun-corrigendum}).
\subsection{Maximal operators and square functions}\label{sec:maximal-and-square}
For $k\in\mathbb N$ the $k$-th iteration of the maximal function is defined by $M^k=M(M^{k-1})$. In this case we have that $\alpha_{M^k}=k$. The case $k=1$ is \eqref{eq:maximal-sim-p-1} and an induction argument yields the case $k>1$.
The fact that $\gamma_{M^k}=0$ is trivial. Then the following weighted inequality is sharp:
\begin{equation}\label{eq:k-maximal}
\|M^k\|_{L^p(w)}\leq c\,[w]_{A_p}^{\frac{k}{p-1}}, \qquad w\in A_p.
\end{equation}
We now consider the vector-valued extension of the H--L maximal function. For $1<q<\infty$ and $1<p<\infty$, this operator is defined as
\begin{equation*}
\overline{M}_qf(x)=\Big( \sum_{j=1}^{\infty} (Mf_j(x))^q \Big)^{1/q},
\end{equation*}
where ${f}=\{f_j\}_{j=1}^{\infty}$ is a vector-valued function. Here, as usual, we adopt the notation $\overline{f}_q:=\Big( \sum_{j=1}^{\infty} |f_j|^q \Big)^{1/q}$. The fact that $\alpha_{\overline{M}_q}=1$ can be verified in the same way as in the case $q=1$. For $\gamma_{\overline{M}_q}$, one can find an example of a vector-valued function satisfying $\|\overline{M}_qf\|_{L^p(\mathbb{R}^n)}\ge c\, p^{1/q}\|\overline{f}_q\|_{L^p(\mathbb{R}^n)}$, which implies that $\gamma_{\overline{M}_q}=1/q$. This is already known; see \cite[p.75]{ste93} for the classic proof. Then the following inequality is sharp:
\begin{equation}\label{eq:vector-valued-maximal}
\|\overline{M}_qf \|_{L^p(w)}\le c\, [w]_{A_p}^{\max\{\frac{1}{q},\frac{1}{p-1}\}}\|\overline{f}_q\|_{L^p(w)}, \qquad w \in A_{ p}.
\end{equation}
We include here the case of the dyadic square function $S_d$ defined as
\begin{equation*}
S_df(x) = \left(\sum_{Q\in\Delta} (f_Q-f_{\hat Q})^2\chi_Q(x)\right)^{1/2}
\end{equation*}
where $f_Q = \Xint-_Qf(x)\ dx$, $\Delta$ is the lattice of dyadic cubes and $\hat Q$ stands for the dyadic parent of a given cube $Q$. We first note that $\alpha_{S_d}=1$ by looking at the indicator function of the unit cube (as in the case of the maximal function). The value of $\gamma_{S_d}=\frac{1}{2}$ was previously known, see for instance \cite[p. 434]{CMP-ADV}. As before, we conclude that the following inequality is sharp.
\begin{equation}\label{eq:square-dyadic}
\|S_df\|_{L^p(w)}\le c\,[w]_{A_{p}}^{\max\{\frac{1}{2},\frac{1}{p-1}\}}\|f\|_{L^p(w)}, \qquad w\in A_p.
\end{equation}
The proof of inequalities \eqref{eq:square-dyadic} and \eqref{eq:vector-valued-maximal} can be found in \cite{CMP-ADV}.
\subsection{Orlicz-type Maximal functions}\label{sec:orlicz}
Given a Young function $\Phi$, we define the maximal function as
\begin{equation}\label{eq:maximaltype}
M_{\Phi}f(x)= \sup_{x\in Q} \|f\|_{\Phi,Q},
\end{equation}
where $\|f\|_{\Phi,Q}$ is the localized Luxemburg norm on a cube $Q$. We refer to \cite[p. 97]{CMP-Book} for the precise definitions and properties.
We are interested here in the logarithmic scale given by the functions $\Phi_\lambda(t):=t\log^\lambda(e+t)$, $\lambda \in [0,\infty)$. Note that the case $\lambda=0$ corresponds to $M$. The case $\lambda=k\in \mathbb N$ corresponds to $M_{L(\log L)^{k}}$, which is pointwise comparable to $M^{k+1}$ (see, for example, \cite{perez95:JFA}).
For noninteger values of $\lambda$, we denote by $M_{\Phi_\lambda}=M_{L(\log L)^\lambda }$ the associated maximal operator.
We have seen that the sharp exponent in weighted estimates for these operators is $1/(p-1)$ for $\lambda=0$ and $k/(p-1)$ for $\lambda=k\in\mathbb N$. The following theorem provides a sharp bound for the intermediate values $\lambda\in\mathbb R_{+} \setminus \mathbb N$. This theorem is a mixed $A_{p}$--$A_{\infty}$ result involving the Fujii--Wilson $A_{\infty}$ constant, defined as
\begin{equation*}
[w]_{A_{\infty}}:= \sup_{Q}\frac{1}{w(Q)}\int_Q M(\chi_Q w)dx.
\end{equation*}
\begin{theorem} \label{thm:Orlicz}
Let $\lambda>0$, $1<p<\infty$ and $w\in A_p$. Then
\begin{equation}\label{eq:orlicz}
\|M_{\Phi_\lambda}\|_{L^p(w)}
\leq c\, [w]_{A_p}^{\frac{1}{p}}[\sigma]_{A_\infty}^{\frac{1}{p}+\lambda}
\end{equation}
%
where $\sigma=w^{1-p'}$. As a consequence we have
\begin{equation*}
\|M_{\Phi_\lambda}\|_{L^p(w)} \leq c\, [w]_{A_p}^{\frac{1+\lambda}{p-1}}.
\end{equation*}
%
Furthermore, the exponent is sharp.
\end{theorem}
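For the reader's convenience, we note how the second estimate follows from \eqref{eq:orlicz}: combining the standard bounds $[\sigma]_{A_\infty}\leq c\,[\sigma]_{A_{p'}}=c\,[w]_{A_p}^{p'-1}$ with $p'-1=\frac{1}{p-1}$ gives
\begin{equation*}
[w]_{A_p}^{\frac{1}{p}}[\sigma]_{A_\infty}^{\frac{1}{p}+\lambda}
\leq c\,[w]_{A_p}^{\frac{1}{p}+\frac{1}{p-1}\left(\frac{1}{p}+\lambda\right)}
= c\,[w]_{A_p}^{\frac{1+\lambda}{p-1}}.
\end{equation*}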
Results of this type were first proved in \cite{HP}, where they were used to improve the $A_2$ theorem from \cite{Hytonen:A2}.
\begin{proof}
We start with the following variant of the classical Fefferman-Stein inequality which holds for any weight $w$. For $t>0$ and any nonnegative function $f$, we have that
\begin{equation}\label{eq:FeffStein-MPhi}
w\left(\left\{x\in \mathbb{R}^n: M_{\Phi_\lambda}f(x)>t \right\}\right)\le c\int_{\mathbb {R}^n}
\Phi_\lambda\left(\frac{f(x)}{t}\right)\, Mw(x)\ dx,
\end{equation}
where $M$ is the usual Hardy--Littlewood maximal function and $c$ is a constant independent of the weight $w$. The result
can be obtained using a Calder\'on--Zygmund decomposition adapted to $M_{\Phi_\lambda}$ as in Lemma 4.1 from \cite{perez95}. We leave the details for the interested reader.
Now, if the weight $w$ is in $A_1$, then inequality \eqref{eq:FeffStein-MPhi} yields the linear dependence on $[w]_{A_1}$,
\begin{equation*}
w\left(\left\{x\in \mathbb{R}^n:M_{\Phi_\lambda}f(x)>t \right\}\right)\le c\,[w]_{A_1}\int_{\mathbb {R}^n}
\Phi_\lambda\left(\frac{f(x)}{t}\right)\, w(x)\ dx.
\end{equation*}
From this estimate, using an extrapolation-type argument as in \cite[Section 4.1]{perez-lecturenotes}, we easily derive that, for any $w\in A_p$,
\begin{equation}\label{eq:linearAp-MPhi}
w\left(\left\{x\in \mathbb{R}^n:M_{\Phi_\lambda}f(x)>t \right\}\right)\le c\,[w]_{A_p}\int_{\mathbb {R}^n}
\Phi_\lambda\left(\frac{f(x)}{t}\right)^p\, w(x)\ dx.
\end{equation}
Now we follow the same ideas as in \cite[Theorem 1.3]{HPR1}. We estimate the $L^p$ norm as
\begin{equation*}
\|M_{\Phi_\lambda} f\|_{L^p(w)}^p \leq c \int_{0}^{\infty} t^{p} w \{x\in \mathbb{R}^n:M_{\Phi_\lambda} f_t(x) > t\}
\frac{dt}{t}
\end{equation*}
where $f_t:=f\chi_{f>t}$. Since $w\in A_p$, then by the precise open property of $A_p$ classes, we have that $w\in A_{p-\varepsilon}$ where $\varepsilon\sim \frac{1}{[\sigma]_{A_\infty}}$. Moreover, the constants satisfy that $[w]_{A_{p-\varepsilon}}\le c[w]_{A_p}$ (see \cite[Theorem 1.2]{HPR1}). We apply \eqref{eq:linearAp-MPhi} with $p-\varepsilon$ instead of $p$ to obtain after a change of variable
\begin{eqnarray*}
\|M_{\Phi_\lambda} f\|_{L^p(w)}^p & \leq & c\, [w]_{A_{p}}\int_{\mathbb{R}^n} f^p \int_{1}^{\infty} \frac{\Phi_\lambda(t)^{p-\varepsilon}}{t^p}\frac{dt}{t}\ w \ dx \\
& \le & c\, [w]_{A_{p}}\int_1^\infty \frac{(\log(e+ t))^{p\lambda}}{t^\varepsilon}\frac{dt}{t}\ \|f\|^p_{L^p(w)}\\
& \le & c\, [w]_{A_{p}}\left(\frac{1}{\varepsilon}\right)^{\lambda p+1}\ \|f\|^p_{L^p(w)}\\
& \le & c\, [w]_{A_{p}}[\sigma]_{A_{\infty}}^{\lambda p+1}\ \|f\|^p_{L^p(w)}.\\
\end{eqnarray*}
Taking $p$-roots we obtain the desired estimate \eqref{eq:orlicz}.
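The elementary integral estimate used in the third inequality above can be verified, for $0<\varepsilon\leq 1$, via the substitutions $t=e^u$ and then $v=\varepsilon u$:
\begin{equation*}
\int_1^\infty \frac{(\log(e+t))^{p\lambda}}{t^{\varepsilon}}\,\frac{dt}{t}
\leq c\int_0^\infty (1+u)^{p\lambda}e^{-\varepsilon u}\,du
\leq c\left(\frac{1}{\varepsilon}\right)^{p\lambda+1}\int_0^\infty (1+v)^{p\lambda}e^{-v}\,dv,
\end{equation*}
where we used $\log(e+e^u)\leq c\,(1+u)$ and $1+\frac{v}{\varepsilon}\leq \frac{1+v}{\varepsilon}$; the last integral is a finite constant depending only on $p$ and $\lambda$.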
Regarding the sharpness, we now prove that the exponent on the right-hand side of \eqref{eq:orlicz} cannot be improved. This follows from Theorem \ref{thm:AbstractBuckley}, since it is easy to verify (again by testing on the indicator of the unit cube) that
\begin{equation*}
\|M_{\Phi_\lambda}\|_{L^p(\mathbb{R}^n)}\sim \frac{1}{(p-1)^{1+\lambda}}.
\end{equation*}
From this estimate we conclude that the endpoint order satisfies $\alpha_T=1+\lambda$ for $T=M_{\Phi_\lambda}$.
\end{proof}
\subsection{Fractional integral operators }\label{sec:fractional}
For $0<\alpha<n$, the fractional integral operator or Riesz
potential $I_\alpha$ is defined by
\begin{equation*}
I_\alpha f(x)=\int_{\mathbb{R}^n} \frac{f(y)}{|x-y|^{n-\alpha}}dy.
\end{equation*}
We also consider the related fractional maximal operator $M_\alpha$ given by
\begin{equation*}
M_\alpha f(x)=\sup_{Q\ni x} \frac{1}{|Q|^{1-\alpha/n}}\int_Q |f(y)| \ dy.
\end{equation*}
It is well known (see \cite{MW-fractional}) that these operators are bounded from $L^p(w^p)$ to $L^q(w^q)$ if and only if the exponents $p$ and $q$ are related by the equation $1/p-1/q=\alpha/n$ and $w$ satisfies the so-called $A_{p,q}$ condition. More precisely, $w\in A_{p,q}$ if
\begin{equation*}
[w]_{A_{p,q}}:= \sup_Q\left(\frac{1}{|Q|}\int_Q w^q \ dx\right)\left(\frac{1}{|Q|}\int_Q w^{-p'}\ dx\right)^{q/p'}<\infty.
\end{equation*}
%
We first note that
\begin{equation*}
\|M_\alpha\|^q_{L^p(\mathbb{R}^n)\to L^q(\mathbb{R}^n)}\gtrsim \frac{1}{q-\frac{n}{n-\alpha}}.
\end{equation*}
This can be seen again by considering the indicator of the unit cube. Now we can use an off-diagonal version of the extrapolation theorem for $A_{p,q}$ classes from \cite[Theorem 5.1]{Javi-Duo-JFA}. Then we obtain, by the same line of ideas as in Theorem \ref{thm:AbstractBuckley}, that the following inequality is sharp.
\begin{equation}\label{eq:frac-maximal}
\|M_\alpha \|_{L^p(w^p) \to L^q(w^q)} \leq c\,
[w]_{A_{p,q}}^{\frac{p'}{q}(1-\frac{\alpha}{n})},
\end{equation}
where $0\leq \alpha <n$, $1<p<n/\alpha$, $q$ is defined by the relationship $1/q=1/p-\alpha/n$, and $w\in A_{p,q}$.
For the case of the fractional integral we can easily compute that
\begin{equation*}
\|I_\alpha\|^q_{L^p(\mathbb{R}^n)\to L^q(\mathbb{R}^n)}\ge \frac{1}{q-\frac{n}{n-\alpha}}.
\end{equation*}
Then, arguing as above we conclude that the following weighted inequality is also sharp.
\begin{equation}\label{eq:frac-integral}
\|I_\alpha\|_{L^p(w^p) \to L^q(w^q)}\leq
c\,[w]_{A_{p,q}}^{(1-\frac{\alpha}{n})\max\{1,\frac{p'}{q}\}}.
\end{equation}
The proof of inequalities \eqref{eq:frac-maximal} and \eqref{eq:frac-integral} can be found in \cite{LMPT}.
\section{Muckenhoupt bases}\label{sec:muckenhoupt-bases}
In this section we address the problem of finding optimal exponents for maximal operators defined over Muckenhoupt bases. Recall that given a family $\mathcal{B}$ of open sets, we can define the maximal operator $M_\mathcal{B}$ as
\begin{equation*}
M_\mathcal{B}f(x)=\sup_{x\in B\in \mathcal{B}}\Xint-_B |f(y)| \ dy,
\end{equation*}
if $x$ belongs to some set $B\in \mathcal{B}$ and $M_\mathcal{B}f(x)=0$ otherwise. The natural classes of weights associated to this operator are defined in the same way as the classical Muckenhoupt classes: $w\in A_{p,\mathcal{B}}$ if
\begin{equation*}
[w]_{A_{p,\mathcal{B}}}:=\sup_{B\in\mathcal{B}}\left(\frac{1}{|B|}\int_{B}w(y)\ dy \right)\left(\frac{1}{|B|}\int_{B}w(y)^{1-p'}\ dy \right)^{p-1}<\infty.
\end{equation*}
We say that a basis $\mathcal{B}$ is a Muckenhoupt basis if $M_\mathcal{B}$ is bounded on $L^p(w)$ whenever $w\in A_{p,\mathcal{B}}$ (see \cite{perez-pubmat}).
\begin{proof}[Proof of Theorem \ref{thm:muckenhoupt-bases}]
The idea is to perform the iteration technique from Theorem \ref{thm:AbstractBuckley} but with $M_\mathcal{B}$ instead of the standard H--L maximal operator. Then we obtain, for $ 1<p<p_0$, that
\begin{equation} \label{eq:CF-muckenhoupt-bases}
\|M_\mathcal{B}\|_{L^{p}(\mathbb{R}^n) } \leq c\, \|M_\mathcal{B}\|_{L^{p}(\mathbb{R}^n) } ^{\beta(p_0-p)}\le c\, \|M_\mathcal{B}\|_{L^{p}(\mathbb{R}^n) } ^{\beta(p_0-1)}.
\end{equation}
The last inequality holds since $\|M_\mathcal{B}\|_{L^{p}(\mathbb{R}^n) }\ge 1$.
We remark here that, since we are comparing $M_\mathcal{B}$ to itself, it is irrelevant to know the precise quantitative behaviour of its $L^p$ norm for $p$ close to 1. In fact, we cannot use any estimate like \eqref{eq:maximal-leq-pto1} since we are dealing with a generic basis. Just knowing that the $L^p$ norm blows up as $p$ goes to 1 allows us to conclude that $\beta\ge \frac{1}{p_0-1}$.
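Explicitly, \eqref{eq:CF-muckenhoupt-bases} can be rewritten as
\begin{equation*}
\|M_\mathcal{B}\|_{L^{p}(\mathbb{R}^n)}^{\,1-\beta(p_0-1)}\leq c,
\end{equation*}
with $c$ independent of $p$, and if we had $\beta(p_0-1)<1$, the left-hand side would be unbounded as $p\to 1^{+}$.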
\end{proof}
As an application of this result, we can show that the estimate for Calder\'on weights from \cite{DMRO-calderon} is sharp. Precisely, for the basis $\mathcal{B}_0$ of open sets in $\mathbb{R}$ of the form $(0,b)$, $b>0$, the authors prove that the associated maximal operator $N$ defined as
\begin{equation*}
Nf(t)=\sup_{b>t}\frac{1}{b}\int_0^b |f(x)|\ dx
\end{equation*}
is bounded on $L^p(w)$ if and only if $w\in A_{p,\mathcal{B}_0}$ and, moreover, that
\begin{equation*}
\|N\|_{L^p(w)}\le c\, [w]_{A_{p,\mathcal{B}_0}}^{\frac{1}{p-1}}.
\end{equation*}
By the preceding result, this inequality is sharp with respect to the exponent on the characteristic of the weight.
As another example of a Muckenhoupt basis we can consider the basis $\mathcal{R}$ of rectangles with edges parallel to the axes. The corresponding maximal operator $M_\mathcal{R}$ is bounded on $L^p(\mathbb{R}^n)$. Indeed,
\begin{equation}\label{eq:strong R}
\| M_\mathcal{R} \|_{L^p(\mathbb{R}^n)} \sim
(p')^n
\end{equation}
where $1<p<\infty$. In addition, it is not difficult to see that
\begin{equation}\label{eq:strong}
\|M_{\mathcal{R}}\|_{L^{ p }(w)} \le c \, [w]^{ \frac{n}{p-1} }_{A_{ p, \mathcal{R}}}, \qquad w \in A_{ p, \mathcal{R}}.
\end{equation}
From our Theorem \ref{thm:muckenhoupt-bases} we can only deduce that the exponent on the weight must be greater than or equal to $1/(p-1)$, as was already known. Therefore, the problem of finding the sharp dependence for $M_{\mathcal{R}}$ is still open.
\section*{Acknowledgements}
We are deeply indebted to Javier Duoandikoetxea for many valuable comments and suggestions on this problem. In particular, he brought to our attention the application of our results to Bochner--Riesz multipliers and Muckenhoupt bases. We would also like to thank the anonymous referee for their constructive comments.
The first author is supported by the Spanish Ministry of Science and Innovation grant MTM2012-30748,
the second and third authors are also supported by the Junta de Andaluc\'ia, grant FQM-4745.
\bibliographystyle{mrl}
| {
"timestamp": "2013-12-02T02:14:00",
"yymm": "1307",
"arxiv_id": "1307.5642",
"language": "en",
"url": "https://arxiv.org/abs/1307.5642",
"abstract": "We present a general approach for proving the optimality of the exponents on weighted estimates. We show that if an operator $T$ satisfies a bound like $$ \\|T\\|_{L^{p}(w)}\\le c\\, [w]^{\\beta}_{A_p} \\qquad w \\in A_{p}, $$ then the optimal lower bound for $\\beta$ is closely related to the asymptotic behaviour of the unweighted $L^p$ norm $\\|T\\|_{L^p(\\mathbb{R}^n)}$ as $p$ goes to 1 and $+\\infty$, which is related to Yano's classical extrapolation theorem. By combining these results with the known weighted inequalities, we derive the sharpness of the exponents, without building any specific example, for a wide class of operators including maximal-type, Calderón--Zygmund and fractional operators. In particular, we obtain a lower bound for the best possible exponent for Bochner-Riesz multipliers. We also present a new result concerning a continuum family of maximal operators on the scale of logarithmic Orlicz functions. Further, our method allows to consider in a unified way maximal operators defined over very general Muckenhoupt bases.",
"subjects": "Classical Analysis and ODEs (math.CA)",
"title": "Optimal exponents in weighted estimates without examples",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9869795079712153,
"lm_q2_score": 0.8006920092299293,
"lm_q1q2_score": 0.7902666053062394
} |
https://arxiv.org/abs/2107.04460 | New bounds for Ramsey numbers $R(K_k-e,K_l-e)$ | Let $R(H_1,H_2)$ denote the Ramsey number for the graphs $H_1, H_2$, and let $J_k$ be $K_k{-}e$. We present algorithms which enumerate all circulant and block-circulant Ramsey graphs for different types of graphs, thereby obtaining several new lower bounds on Ramsey numbers including: $49 \leq R(K_3,J_{12})$, $36 \leq R(J_4,K_8)$, $43 \leq R(J_4,J_{10})$, $52 \leq R(K_4,J_8)$, $37 \leq R(J_5,J_6)$, $43 \leq R(J_5,K_6)$, $65\leq R(J_5,J_7)$. We also use a gluing strategy to derive a new upper bound on $R(J_5,J_6)$. With both strategies combined, we prove the value of two Ramsey numbers: $R(J_5,J_6)=37$ and $R(J_5,J_7)=65$. We also show that the 64-vertex extremal Ramsey graph for $R(J_5,J_7)$ is unique. Furthermore, our algorithms also allow to establish new lower bounds and exact values on Ramsey numbers involving wheel graphs and complete bipartite graphs, including: $R(W_7,W_4) = 21$, $R(W_7,W_7) = 19$, $R(K_{3,4},K_{3,4}) = 25$, and $R(K_{3,5}, K_{3,5})=33$. | \section{Introduction}
\label{section:intro}
In this paper all graphs are simple and undirected. A graph $G=(V,E)$ consists of a set of vertices $V$ and a set of edges $E$. A graph $G' = (V',E')$ is a \textit{subgraph} of $G$ if $V' \subseteq V$ and $E' \subseteq E$. If $G'$ is a subgraph of $G$ and $\forall\, v, w \in V'$ the following holds: $\{v,w\} \in E \Rightarrow \{v,w\} \in E'$, then $G'$ is called an \textit{induced} subgraph of $G$. In that case we also refer to $G'$ as the subgraph of $G$ induced by the set of vertices $X = V'$ (denoted by~$G[X]$). We refer to~\cite{graph_theory_diestel} for any standard graph theory terminology which is not explicitly defined here.
For two graphs $H_1$, $H_2$, the \textit{Ramsey number \(R(H_1,H_2)\)} is defined as the smallest integer~$n$ such that every assignment of two colours (e.g.\ blue and red) to the edges of the complete graph $K_n$ contains $H_1$ as a blue subgraph, or $H_2$ as a red subgraph.
A two-coloured $K_n$ containing no blue copy of $H_1$ nor a red copy of $H_2$ is called an \( (H_1,H_2;n) \)-(Ramsey)-graph.
The set of all \( (H_1,H_2;n) \)-graphs is denoted by \( \mathcal{R}(H_1,H_2;n) \).
\( (H_1,H_2;n) \)-graphs with \( n= R(H_1,H_2) -1 \) are called \textit{extremal} Ramsey graphs for $R(H_1,H_2)$.
The concept of Ramsey numbers also generalises to $c$ colours (i.e.\ $R(H_1,H_2,...,H_c)$), but in this article we focus on two colours.
The most-studied Ramsey numbers are those where \( H_1\) and \(H_2 \) are complete graphs; these are also called the \textit{classical} Ramsey numbers. In this paper we will mainly study Ramsey numbers involving \( J_k:=K_k{-}e \), i.e.\ complete graphs with one edge removed.
Finding the exact value of Ramsey numbers is a challenging problem. This line of research already started in 1955 with the computation of \(R(K_3,K_4)\) and \(R(K_3,K_5)\)~\cite{greenwood1955combinatorial}, but in the meantime only a handful of new classical Ramsey numbers have been fully determined. For most cases, there are only lower and upper bounds.
For classical Ramsey numbers, the last exact value was determined by McKay and Radziszowski in 1995~\cite{R45} when they showed that \( R(K_4,K_5)=25 \). The most recent improvement on a bound for a (small) classical Ramsey number dates from 2018 when Angeltveit and McKay~\cite{angeltveitmckay} improved the upper bound $R(K_5,K_5) \leq 49$ (which had been standing since 1997~\cite{mckay1997subgraph}) to 48.
For Ramsey numbers involving \( J_k \), the most recently obtained exact value was \( R(K_3,J_{10})=37 \), which was determined by Goedgebeur and Radziszowski in 2013~\cite{staszek13-2}.
An overview of more values and bounds for Ramsey numbers with small parameters can be found in Radziszowski's dynamic survey~\cite{dynSur}.
The lower bounds on small Ramsey numbers are very often derived from Ramsey graphs which result from heuristic searches, for example using simulated annealing or tabu search as was done in~\cite{exoo2012ramsey, exoo2013some}.
These methods are suitable to find graphs that have no apparent structure, but often fail to generalise to larger cases. In this article however, we will not use heuristic algorithms. Instead we will search for Ramsey graphs in a more constructive and exhaustive way by designing efficient algorithms to generate all circulant and block-circulant Ramsey graphs for various parameters.
This article is organised as follows. In Section~\ref{sect:UB} we improve the upper bound of \(\ram{5}{6}\) from 38 (which was established by Lidick{\'y} and Pfender in~\cite{SemiDef}) to 37 using a gluing technique. Edge-counting restrictions will allow to do this without the help of computers.
In Section~\ref{sect:LB} we present exhaustive algorithms to generate circulant and block-circulant Ramsey graphs. Using our implementation of these algorithms, we manage to improve lower bounds on a large variety of Ramsey numbers. By combining this with the new upper bounds from Section~\ref{sect:UB}, we determine the value of two new Ramsey numbers: \( \ram{5}{6}=37 \) and \( \ram{5}{7}=65 \). (The previous bounds for these Ramsey numbers were $31 \leq \ram{5}{6} \leq 38$ and $40 \leq \ram{5}{7} \leq 65$; the previous lower and upper bounds were established in~\cite{Exoo2000} and~\cite{SemiDef}, respectively.)
Moreover, we also show that the 64-vertex extremal Ramsey graph for \( \ram{5}{7} \) is unique.
Our algorithms also allow us to establish new lower bounds and exact values of Ramsey numbers involving wheel graphs and complete bipartite graphs, including: $R(W_7,W_4) = 21$, $R(W_7,W_7) = 19$, $R(K_{3,4},K_{3,4})=25$, and $R(K_{3,5}, K_{3,5})=33$, which is also discussed in more detail in Section~\ref{sect:LB}.
Finally, we end this article in Section~\ref{sect:further_research} with an open problem and some suggestions for further research.
\section{Improving upper bounds on Ramsey numbers}
\label{sect:UB}
The primary goal of this section is to improve the upper bound on \( \ram{5}{6} \) from 38 to 37, for which we will use a variation of the well-known gluing method. The previous upper bound was established by Lidick{\'y} and Pfender in~\cite{SemiDef}.
\subsection{Definitions and preliminaries}
\label{subsect:UB_prelim}
Let \( G=(V,E) \) be a complete graph whose edges are 2-coloured by the function $c:E\rightarrow\{1,2\}$. We now define:
\begin{itemize}
\item \( \neigh{G}{i}{v} := G[\{w \in V \ |\ c(\{v,w\})=i \}]\), i.e.\ the subgraph of $G$ induced by all vertices $w$ adjacent to $v$ for which the edge $\{v,w\}$ has colour $i$, with the same colouring $c$.
\item \( \deg_i(v) := |\neigh{G}{i}{v}| \)
\item \( e_{i}(G) := |\{e \in E\ | \ c(e)=i \}| \)
\item \(\bar n_j :=|\{v\in V \ |\ \deg_1(v)=j \}| \), i.e.\ the number of vertices having degree $j$ in colour~1.
\item \( E_{i}(J_k,J_l;n) := \max\{ e_{i}(G)\ |\ G \in \mathcal{R}(J_k,J_l;n) \} \), for \( n < R(J_k,J_l) \)
\end{itemize}
Let $G$ be a \( (J_k,J_l;n) \)-graph, then for an arbitrary vertex $v$, \( \neigh{G}{1}{v} \) is a \( (J_{k-1},J_l;n_1) \)-graph and \( \neigh{G}{2}{v} \) is a \( (J_{k},J_{l-1};n_2) \)-graph, where \( n_1+n_2=n-1 \).
Gluing methods reverse this process and go from these neighbourhoods to the completely-coloured graph by colouring the edges in between the neighbourhoods (see Figure~\ref{fig:glueing}). This method has been used many times to compute upper bounds on Ramsey numbers, for example for \( R(K_4,K_5) \)~\cite{R45}.
To use this gluing strategy for \( \ram{5}{6} \) we need appropriate collections of \( (J_4,J_6) \)-graphs and \( (J_5,J_5) \)-graphs. More specifically, for \(\mathcal{R}(J_5,J_6;37)\) we have \( n_1+n_2 = 36 \), but also \( n_1<17 = R(J_4,J_6) \)~\cite{mcnamara1991ramsey} and \( n_2<22=R(J_5,J_5) \)~\cite{clapham1989ramsey}. Therefore we only need the sets with \( n_1 \geq 15 \) and \( n_2 \geq 20 \).
The first set was fully computed by Radziszowski~\cite{sprCom}. The second set was computed for orders 20 and 21 in~\cite{SRK5e}.
We independently verified the correctness of the first set of graphs by writing a plugin for the generator \verb|geng|~\cite{nauty-website, mckay_14} to generate $(J_4,J_6)$-graphs.
For the mentioned \( (J_5,J_5; n) \)-graphs, we wrote a gluing algorithm based on the techniques explained below, and generated all Ramsey graphs for $19\leq n \leq 21$.
The results can be found in Table~\ref{table:countJ4J6} and Table~\ref{table:countJ5J5} in the Appendix.
\begin{figure}
\centering
\begin{tikzpicture}
\path[every node/.append style={circle, fill=black, minimum size=4pt, label distance=0pt, inner sep=0pt}]
(0.5,0) node[label={[label distance=5pt]180:\(v\)}] (v) {}
(1.7,1) node (0) {}
(3.2,0.8) node (1) {}
(2.5,1.3) node (2) {}
(3.3,1.2) node (3) {}
(2.3,0.7) node (4) {}
(2.5,-0.7) node (A) {}
(1.6,-1) node (B) {}
(3,-1) node (C) {}
(2.2,-1.3) node (D) {};
\draw (3) edge[red] (2) edge[red] (4);
\draw (2) edge[red] (4);
\draw (0) edge[blue] (2) edge[blue] (4);
\draw (1) edge[blue] (4) edge[blue] (3);
\draw (A) edge[blue] (B) edge[blue] (C);
\draw (B) edge[red] (C) edge[blue] (D);
\draw (D) edge[red] (C);
\draw (v) -- (1,1) [blue];
\draw (v) -- (1.1,0.79) [blue];
\draw (v) -- (1.3,0.62) [blue];
\draw (v) -- (1,-1) [red];
\draw (v) -- (1.1,-0.79) [red];
\draw (v) -- (1.3,-0.62) [red];
\draw (2.2,-0.4) -- (2.2,0.4) [dashed];
\draw (2.7,-0.4) -- (2.7,0.4) [dashed];
\draw (2.45, 0) node {?};
\draw[] (2.5, 1) ellipse (1.5 and 0.6) node[label={[label distance=20pt]25:$N_G^1(v)$}] {};
\draw[] (2.5, -1) ellipse (1.5 and 0.6)node[label={[label distance=22pt]10:$N_G^2(v)$}] {};
\end{tikzpicture}
\caption{Illustration of the gluing approach.}
\label{fig:glueing}
\end{figure}
\subsection{Triangle constraints}
\label{subsect:UB_tf}
To prove the upper bound \( \ram{5}{6} \leq 37 \), we need to consider every pair of graphs \( G_1 \in \mathcal{R}(J_4,J_6;n_1) \) and \( G_2 \in \mathcal{R}(J_5,J_5;n_2) \) (with $n_1+n_2=36$), and use them as \( \neigh{G}{1}{v} \) and \( \neigh{G}{2}{v} \) respectively. In this way we obtain a graph which is completely coloured except for the edges between \( \neigh{G}{1}{v} \) and \( \neigh{G}{2}{v} \). If we can show that for every such pair it is impossible to colour these remaining edges without creating a \( J_5 \) in the first colour or a \( J_6 \) in the second, we have proved the theorem.
Not all of these pairs $(G_1, G_2)$ have to be considered. By counting the number of monochromatic triangles in the hypothetical Ramsey graphs resulting from such a gluing, we can eliminate certain pairs.
Let $G$ be a two-coloured $K_n$. If the degrees of the vertices in each colour are known, the number of monochromatic triangles \( T(G) \) in $G$ can be computed using the following theorem by Goodman.
\begin{lemma}[Goodman~\cite{Goodman}] Let $G$ be a two-coloured $K_n$. Then
\label{thm:Goodman}
\[
T(G) =\binom{n}{3} - \frac{1}{2}\sum_{i = 0}^{n-1}[\bar n_i\cdot i \cdot (n-1-i)]
\]
\end{lemma}
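A quick way to see this is to observe that every triangle which is \emph{not} monochromatic has exactly two vertices at which its two incident triangle edges receive different colours, so that
\begin{equation*}
\binom{n}{3}-T(G)=\frac{1}{2}\sum_{v\in V}\deg_1(v)\deg_2(v)=\frac{1}{2}\sum_{i=0}^{n-1}\bar n_i\cdot i\cdot(n-1-i).
\end{equation*}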
\noindent
On the other hand, the number of monochromatic triangles in $G$ can also be counted by considering the number of edges coloured with colour $i$ (for $i \in \{1,2\}$) in the neighbourhood $\neigh{G}{i}{v}$ of each vertex~$v$.
\begin{lemma}
\label{thm:triangles}
\[
T(G) = \frac{1}{3}\sum_{v \in V}{[ e_1(\neigh{G}{1}{v}) + e_2(\neigh{G}{2}{v})] }
\]
\end{lemma}
\begin{proof}
Each edge with colour $i$ (for $i \in \{1,2\}$) in \( \neigh{G}{i}{v} \) extends to a monochromatic triangle with colour $i$ when combined with $v$. Summing over all vertices counts each triangle three times.
\end{proof}
From the definitions it is clear that if $G = (V, E)$ is a \( (J_k,J_l;n) \)-graph, then \( \forall \, v \in V: \ e_1(\neigh{G}{1}{v})\leq E_1(J_{k-1},J_l;\deg_1(v)) \), and similarly for the second colour.
For a \( (J_k,J_l) \)-graph, the difference in the number of edges from the ``extremal'' case is expressed by the following \textit{deficiency function}:
\[
\delta_G(v) := E_1(J_{k-1},J_l;\deg_1(v)) - e_{1}(\neigh{G}{1}{v}) + E_2(J_k,J_{l-1};\deg_2(v)) - e_2(\neigh{G}{2}{v})
\]
It follows for a \( (J_k,J_l) \)-graph $G$ that \(\forall \, v \in V: \delta_G(v)\geq 0\) and therefore also that \( \sum_{v \in V}{\delta_G(v)}\geq 0 \).
Combining the previous results leads to (see~\cite{SRK5e} for details):
\begin{equation}\label{eq_sum_delta}
\sum_{v \in V}{\delta_G(v)}
= - 3\binom{n}{3} + \sum_{i = 0}^{n-1}
{ \bar n_i [E_1(J_{k-1},J_{l};i) + E_2(J_k,J_{l-1};n-1-i) + \frac{3i\cdot(n-1-i)}{2}] }
\end{equation}
Here $n$ denotes the order of $G$. This gives us the sum of the deficiencies as a function of only the degree sequence of $G$ (given that we know the values of $E_1$ and $E_2$). If this sum is negative, then such a Ramsey graph cannot exist.
\begin{theorem}
\label{thm:ub_r56}
\( \ram{5}{6} \leq 37 \).
\end{theorem}
\begin{proof}
We use the framework outlined above.
From the counts in Table~\ref{table:countJ4J6} and Table~\ref{table:countJ5J5} it follows that:
\begin{tabular}{l l}
\( E_1(J_4,J_6;15)=45 \), & \( E_1(J_4,J_6;16)=50 \), \\
\( E_2(J_5,J_5;20)=100 \), & \( E_2(J_5,J_5;21)=105 \). \\
\end{tabular}
\noindent
Let $G = (V,E)$ be a \( (J_5,J_6;37) \)-graph, then -- as already mentioned in Section~\ref{subsect:UB_prelim} -- every vertex of $G$ has either 15 or 16 neighbours in the first colour.
Equation~(\ref{eq_sum_delta}) now leads to:
\begin{align*}
\sum_{v \in V}{\delta_G(v)}
&= - 23310 + \bar n_{15}\cdot (45 + 105 + 472.5) + \bar n_{16}\cdot (50 + 100 + 480) \\
&= - 23310 + \bar n_{15}\cdot 622.5 + \bar n_{16} \cdot 630
\end{align*}
Under the given constraints, this sum is maximal for $\bar n_{16}=37$ and gives \( \sum_{v \in V}{\delta_G(v)} = 0 \); all other combinations lead to a negative sum.
Therefore the only remaining possibility for a \( (J_5,J_6;37) \)-graph $G$ is one where every vertex $v$ has deficiency $0$, and 16 neighbours in the first colour.
Hence for each $v \in V$, \( \neigh{G}{1}{v} \) contains exactly 50 edges with colour~1. Since each of those edges leads to a monochromatic triangle with colour~1 in $G$ and we count every triangle three times, there must be exactly \( \frac{37\cdot50}{3} \) monochromatic triangles with colour~1 in $G$. However, this is not an integer, hence $G$ does not exist.
\end{proof}
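As a sanity check on the arithmetic in this proof, the following small C program (our illustration; it is not part of the code accompanying the paper) evaluates twice the deficiency sum for every degree distribution with $\bar n_{15}+\bar n_{16}=37$ and confirms the divisibility obstruction:
\begin{verbatim}
/* Check the counting argument: only n16 = 37 gives a non-negative
   deficiency sum, and 37*50 is not divisible by 3. */
#include <stdio.h>

int main(void) {
    for (int n16 = 0; n16 <= 37; n16++) {
        int n15 = 37 - n16;
        /* work with 2*sum(delta) to stay in integers:
           2*622.5 = 1245 and 2*630 = 1260 */
        long twice = -2L * 23310 + 1245L * n15 + 1260L * n16;
        if (twice >= 0)
            printf("n16 = %d: 2*sum(delta) = %ld\n", n16, twice);
    }
    printf("37*50 mod 3 = %d\n", (37 * 50) % 3);
    return 0;
}
\end{verbatim}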
With this new upper bound, the classical inequality on Ramsey numbers yields the following corollary.
\begin{corollary}\label{corr:j5j7}
$R(J_5,J_7) \leq R(J_4,J_7) + R(J_5,J_6) \leq 28 + 37 = 65.$
\end{corollary}
It should be noted that this upper bound $R(J_5,J_7) \leq 65$ was recently already established in~\cite{SemiDef} by Lidick{\'y} and Pfender using computational techniques. In the next section we will show that the upper bounds $R(J_5,J_6) \leq 37$ and $R(J_5,J_7) \leq 65$ are tight.
\section{Improving lower bounds on Ramsey numbers}
\label{sect:LB}
To establish a lower bound on a Ramsey number, it suffices to construct a single Ramsey graph. In this section we try to find new lower bounds based on circulant and block-circulant graphs, and we do this in an exhaustive way.
\subsection{Circulant graphs}
\label{subsect:circ}
A graph is called \textit{circulant} if it is of the form \( G=(V,E) \) where \( V=\{0,\dots,n-1\} \) and \( \{i,j\}\in E \Leftrightarrow (i-j)\pmod n \in D\) for some \( D \subseteq \{1,\dots,n-1\} \), which is closed under additive inverses modulo $n$.
The adjacency matrix of such a graph has the property that every row can be obtained by rotating the preceding row by one position (always in the same direction). This is also called a \textit{circulant matrix}. The set $D$ is called the \textit{generating set} of this matrix. To see this as a Ramsey graph, let all edges of $G$ be blue and those in the complementary graph (which is also circulant) be red, which yields a two-coloured complete graph.
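To make this concrete, the following minimal C sketch (our illustration, not the authors' code; the function name \verb|build_circulant| is ours) expands a generating set $D$, given as a 0/1 indicator array closed under additive inverses modulo $n$, into the adjacency matrix just described:
\begin{verbatim}
#include <stdio.h>

#define MAXN 64

/* adj[i][j] = 1 iff (i-j) mod n lies in the generating set D;
   every row is the previous row rotated by one position. */
void build_circulant(int n, const int D[], int adj[MAXN][MAXN]) {
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            adj[i][j] = D[((i - j) % n + n) % n];
}

int main(void) {
    int D[5] = {0, 1, 0, 0, 1};   /* generating set {1,4}: the 5-cycle */
    int adj[MAXN][MAXN];
    build_circulant(5, D, adj);
    for (int i = 0; i < 5; i++) {
        for (int j = 0; j < 5; j++) printf("%d", adj[i][j]);
        printf("\n");
    }
    return 0;
}
\end{verbatim}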
For classical Ramsey numbers, \( R(K_3,K_3) \), \( R(K_3,K_4) \), \( R(K_3,K_5) \), \( R(K_3,K_9) \), \( R(K_4,K_4) \), and \( R(K_4,K_5) \) all have extremal graphs which are circulant. In some cases these graphs are even unique as extremal graphs, e.g.\ for $R(K_3,K_9)$~\cite{staszek13}.
In this work, we designed and implemented an algorithm which enumerates all circulant Ramsey graphs for various parameters.
It is based on a backtracking algorithm and exploits circularity to speed up the search. For example, to determine if a certain clique is present in the graph, one can limit this to cliques containing certain ``canonical'' edges (see~\cite{thesissteven} for more details). The structure in the adjacency matrix of these graphs also allows us to perform bitwise operations to accelerate the search.
No improvements on lower bounds of classical Ramsey numbers were found, agreeing with what was reported in~\cite{HaKr}. Kuznetsov~\cite{Kuznetsov} also calculated the best-possible bounds for certain classical Ramsey numbers within the class of \textit{distance graphs}, a generalisation of circulant graphs.
Our algorithm was extended to generate circulant graphs for \( R(H_1,H_2) \), where \( H_i \) is one of the following graphs:
\( K_n \), \( J_n \), \( C_n \), \( W_n \), \( K_{n,m} \).
Here \(C_n\) denotes a cycle of length $n$, \(W_n\) is a wheel graph on $n$ vertices (i.e.\ a graph obtained by connecting a single vertex to all vertices of a $C_{n-1}$), and $K_{n,m}$ is the complete bipartite graph with partite sets of orders $n$ and $m$.
For these cases, several lower bounds could be improved (shown in Table~\ref{tbl:dropEdge} and Table~\ref{tbl:otherLower} from Section~\ref{subsect:results}), but most of them were later further improved using block-circulant graphs (see Section~\ref{subsect:blockcirc}).
Our algorithm is also suitable to search for multi-colour Ramsey graphs.
\begin{claim}
None of the lower bounds for Ramsey numbers of the form \( R(H_1,H_2) \) with $H_i \in \{ K_n, J_n, C_n, W_n, K_{n,m} \}$ (for $i \in \{1,2\}$) reported in Table~\ref{tbl:dropEdge} or Table~\ref{tbl:otherLower}, or mentioned in any table in~\cite{dynSur} can be improved using circulant Ramsey graphs on 64 or fewer vertices.
\end{claim}
\subsection{Block-circulant graphs}
\label{subsect:blockcirc}
The structure of circulant graphs turns out to be too restrictive to improve challenging lower bounds on small Ramsey numbers.
Therefore we also considered a natural generalisation of circulant graphs: block-circulant graphs.
These are graphs whose adjacency matrix is composed of equally-sized circulant matrices:
\[ A =
\begin{bmatrix}
C_{11} & C_{12} & \dots & C_{1k}\\
C_{21} & C_{22} & \dots & C_{2k}\\
\vdots & \vdots & & \vdots\\
C_{k1} & C_{k2} & \dots & C_{kk}
\end{bmatrix}
\]
For this adjacency matrix to represent a simple graph, it is necessary that \( C_{i,j}=C_{j,i}^T \) for all \( i, j \). This is possible because the transpose of a circulant matrix is also circulant. If a graph $G$ has an adjacency matrix of the form of $A$, we say that it is a block-circulant graph on $k$ blocks. It is uniquely determined by giving the generating set (i.e.\ the first row) of each block in the upper triangle of $A$. Let \(D_{i,j}\) denote the generating set of \(C_{i,j}\).
An example of a block-circulant matrix on 3 blocks would be
\[ A_1 =
\begin{bmatrix}
(3,4,5,6) & (0,1,2) & (0,2,4)\\
(0,7,8) & (2,3,6,7) & (0,4,8)\\
(0,5,7) & (0, 1, 5) & (1,3,6,8)
\end{bmatrix}_{9}
\text{,}
\]
where the brackets denote the generating sets, and the subscript indicates that we are working modulo 9. This represents the adjacency matrix of $O_6^-(2)$, the unique extremal graph for $R(J_4,J_7)$~\cite{mcnamara1991ramsey}.
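As an illustration (ours, not the code from the repository mentioned below), the following C sketch expands the generating sets of $A_1$ into the full $27\times 27$ adjacency matrix, filling each block below the diagonal as the transpose of its mirror block; the generating sets of the diagonal blocks must be closed under additive inverses modulo 9, which holds here. It checks the degree of vertex 0, consistent with the 10-regularity of $O_6^-(2)$:
\begin{verbatim}
#include <stdio.h>

#define K 3            /* number of blocks      */
#define M 9            /* block size (modulo 9) */
#define N (K*M)

int D[K][K][M];        /* D[i][j][d] = 1 iff d generates C_{i,j} (i <= j) */
int adj[N][N];

void set_gen(int i, int j, const int *ds, int len) {
    for (int t = 0; t < len; t++) D[i][j][ds[t]] = 1;
}

int main(void) {
    /* generating sets of the upper triangle of A_1 */
    int d11[] = {3,4,5,6}, d12[] = {0,1,2}, d13[] = {0,2,4};
    int d22[] = {2,3,6,7}, d23[] = {0,4,8}, d33[] = {1,3,6,8};
    set_gen(0,0,d11,4); set_gen(0,1,d12,3); set_gen(0,2,d13,3);
    set_gen(1,1,d22,4); set_gen(1,2,d23,3); set_gen(2,2,d33,4);

    for (int bi = 0; bi < K; bi++)
        for (int bj = bi; bj < K; bj++)
            for (int r = 0; r < M; r++)
                for (int c = 0; c < M; c++) {
                    int e = D[bi][bj][((c - r) % M + M) % M];
                    adj[bi*M + r][bj*M + c] = e;
                    adj[bj*M + c][bi*M + r] = e;  /* transpose block */
                }

    int deg = 0;
    for (int j = 0; j < N; j++) deg += adj[0][j];
    printf("degree of vertex 0: %d\n", deg);      /* prints 10 */
    return 0;
}
\end{verbatim}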
These block-circulant graphs have been used before in search of lower bounds on Ramsey numbers. For example in~\cite{exoo1998some, Exoo2000} heuristic searches were performed for block-circulant Ramsey graphs for Ramsey numbers of the form \(R(K_k, K_l)\) and \(\ram{k}{l}\). They were also used to generate starting points for local search algorithms~\cite{ExT}.
However, we will follow an exhaustive approach and we will see that in some cases these earlier heuristic searches missed the best-possible values.
\paragraph*{}
The basic idea behind our exhaustive generation algorithm for block-circulant graphs is similar to the idea behind the algorithm for circulant graphs from Section~\ref{subsect:circ}: perform a backtracking search over the free parameters of the graph, in this case the generating sets of the circulant matrices in the upper triangle of the adjacency matrix.
Pruning is done whenever a forbidden subgraph $H_1$ or $H_2$ is formed (when searching for Ramsey graphs for $R(H_1,H_2)$), and the search for these subgraphs is restricted to subgraphs containing a representative of the newly-coloured edges.
Extra care was taken to avoid generating isomorphic (partially-coloured) graphs. As will be shown in the next paragraphs, many isomorphisms can be detected directly from the structure of the block-circulant graphs. We did not eliminate further sporadic isomorphisms.
In the following, let \( G \) be a block-circulant graph of order $n$ on $k$ blocks with an adjacency matrix as depicted at the beginning of this subsection.
\begin{lemma}
\label{lem:perm}
Let \( \pi: \range{k}\rightarrow \range{k} \) be a permutation.
Then the graph \( G' \) with adjacency matrix \( (C_{\pi(i),\pi(j)})_{i,j\in\range{k}} \) is also block-circulant and is isomorphic to $G$.
\end{lemma}
Most isomorphisms of this kind can be avoided by defining an ordering on every possible generating set and accepting an adjacency matrix only if the generating sets of the blocks on the diagonal are in non-decreasing order. That is: \( \forall \, i<j: D_{i,i}\leq D_{j,j} \), where ``$\leq$'' denotes the chosen ordering. Only when some blocks on the diagonal are equal can these isomorphisms remain undetected.
Another form of structural isomorphism in block-circulant graphs originates from ``rotating'' one block relative to the other blocks:
\begin{lemma}
\label{lem:rot}
Let \( d \in \range{k} \) and \( r \in \mathbb{N} \). Then the graph \( G' \) constructed from \( G \) by rotating every \( C_{i,d} \) $r$ times to the right, and \( C_{d,i} \) $r$ times to the left \( (i \neq d) \), is also block-circulant and is isomorphic to $G$ (where rotating $C_{i,d}$ means cyclically rotating each row of that submatrix).
\end{lemma}
To avoid generating graphs which are isomorphic because of this reason, we fix a certain rotation of each block. For this it is mostly sufficient to demand that $C_{1,d}$, the first block of each column, is generated by a \textit{Lyndon word}, i.e.\ a bitstring which is the lexicographically smallest among all of its circular rotations. If there are multiple rotations of $C_{1,d}$ giving the same Lyndon word, then isomorphisms might still occur because of the other blocks $C_{d',d}$. These ties can then be broken by requiring $C_{2,d}$ to be lexicographically smallest among all rotations that fix $C_{1,d}$ etc.
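The rotation test described above can be implemented directly; the following C sketch (ours, with the hypothetical name \verb|is_min_rotation|) checks whether a bitstring is lexicographically minimal among its circular rotations, which suffices for the block sizes considered here:
\begin{verbatim}
#include <stdio.h>

/* Return 1 iff s[0..m-1] is lexicographically smallest among
   all m of its circular rotations (ties are allowed). */
int is_min_rotation(const int *s, int m) {
    for (int r = 1; r < m; r++)          /* compare s to its rotation by r */
        for (int i = 0; i < m; i++) {
            int a = s[i], b = s[(i + r) % m];
            if (b < a) return 0;         /* a strictly smaller rotation exists */
            if (b > a) break;            /* this rotation is larger; next r */
        }
    return 1;
}

int main(void) {
    int a[] = {0,0,1,0,1};               /* minimal among its rotations */
    int b[] = {0,1,0,0,1};               /* rotation 00101 is smaller */
    printf("%d %d\n", is_min_rotation(a,5), is_min_rotation(b,5));
    return 0;
}
\end{verbatim}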
\begin{lemma}
\label{lem:mult}
If $q$ is co-prime with $n/k$ (i.e. $q\in \mathbb{Z}_{n/k}^{*}$), then applying \( D\mapsto q\cdot D :=\{q\cdot d \pmod {n/k} \ | \ d\in D\} \) to each \(D_{i,j}\) leads to a graph which is isomorphic to $G$.
\end{lemma}
We can avoid nearly all isomorphisms of this kind by only proceeding with the search if the sequence $(D_{1,1},\dots,D_{k,k})$ is the lexicographically smallest among all multiples \( (q\cdot D_{1,1},\dots,q\cdot D_{k,k}) \), with $q$ co-prime with $n/k$ (again using the ordering defined in Lemma~\ref{lem:perm}). Note that this criterion can already be checked in partially-filled matrices if it is used in combination with the criterion of Lemma~\ref{lem:perm}. Also notice that, since all blocks on the diagonal are symmetric, multiplication with $-1\in \mathbb{Z}_{n/k}^{*}$ will fix all blocks on the diagonal. Therefore, an extra condition can be added (looking at non-diagonal blocks) to decide whether we will accept this graph or its multiplication with $-1$.
\begin{theorem}
\label{thm:canon}
For every block-circulant graph $G$, there exists at least one block-circulant graph $G'\cong G$ that meets all of the above criteria.
\end{theorem}
\begin{proof}
Starting from $G$, we perform the following operations consecutively to make the labelling ``canonical'':
\begin{itemize}
\item Compute a $q \in \mathbb{Z}_{n/k}^{*}$ for which $\{q\cdot D_{i,i}\ | \ {1\leq i \leq k}\}$ is lexicographically minimal (seen as a multiset). Multiply all blocks with this $q$.
\item Apply a permutation of the blocks, such that the blocks on the diagonal are in non-decreasing order. I.e.: sort the diagonal blocks.
\item Rotate each column until all $C_{1,i}$ are generated by Lyndon words, $1<i\leq k$. If there are multiple rotations that minimise $C_{1,i}$, choose the one among them that minimises $C_{2,i}$, and so on.
\item If $D_{1,2}$ is now bigger than the Lyndon rotation of $(-1)\cdot D_{1,2}$, multiply all blocks by $-1$ and repeat step 3.
\end{itemize}
The resulting graph $G'$ is isomorphic to $G$ and is still block-circulant.
\end{proof}
To illustrate this process, note that the adjacency matrix $A_1$ depicted at the beginning of this subsection would pass the tests from Lemma~\ref{lem:rot} and Lemma~\ref{lem:mult}, but the generating sets on the diagonal are not in increasing order. Applying the operations described in Theorem~\ref{thm:canon} results in the following adjacency matrix, which is also the only one generated by our program for $R(J_4,J_7;28)$:
\[ A_1^* =
\begin{bmatrix}
(1,3,6,8) & (0,1,5) & (0,2,4)\\
(0,4,8) & (2,3,6,7) & (0,1,2)\\
(0,5,7) & (0,7,8) & (3,4,5,6)
\end{bmatrix}_{9}
\]
We now give counts for some concrete cases, to indicate how many isomorphic graphs are avoided by the generator and how many remain. There are 32076 block-circulant Ramsey-$(J_4,J_8;27)$-graphs on three blocks. Of those, only 17 are non-isomorphic. With all of the above restrictions, our program generated 44 graphs.
Of the 26 block-circulant Ramsey-$(K_4,K_8;54)$-graphs generated on 3 blocks, 23 are non-isomorphic.
For block-circulant Ramsey-$(K_4,J_7;36)$-graphs on 4 blocks, the program outputs 2 graphs, which are non-isomorphic.
\paragraph*{}
The order in which the edges are coloured was also taken into consideration. To find illegal subgraphs as soon as possible, we opted to build the partial graphs in such a way that the already-coloured edges are only between a small number of vertices.
In general, this gives a higher probability of creating cliques. We achieve this by filling in the adjacency matrix column by column, from left to right. We refer to~\cite{thesissteven} for more details on the algorithm. The source code of our implementation of this algorithm can be obtained from \url{https://github.com/Steven-VO/circulant-Ramsey}.
\subsection{Other graphs}
\label{subsect:other}
In search of new lower bounds, we also tested all vertex-transitive graphs up to 47 vertices for their Ramsey properties concerning \( J_k \) and \( K_k \). This set of graphs was computed by Holt and Royle~\cite{holt2020census}.
Some improved bounds were found on Ramsey numbers of the form \( \ram{k}{l} \) and \( R(K_k,J_l) \), but all of them could also be reached (or even improved) using block-circulant graphs, so we do not list them separately here.
Many known interesting strongly-regular graphs were also checked. These are regular graphs where every two adjacent vertices share the same number of common neighbours, and the same is true for non-adjacent vertices. For some parameter sets, all strongly-regular graphs have been enumerated (see e.g.~\cite{coolsaet2006strongly}). Other sporadic interesting cases are described in~\cite{StrongReg}. This led to the discovery of \( VO_{6}^{-}(2) \) as a \( (J_5,J_7) \)-graph, which improved the previous lower bound of $\ram{5}{7}$~\cite{Exoo2000} by 25 to 65. (See the next section for details.)
\subsection{Results}
\label{subsect:results}
With the techniques from Section~\ref{sect:LB}, no lower bounds on classical Ramsey numbers were improved.
However, several new lower bounds were found for Ramsey numbers of the form \( \ram{k}{l} \) and \( R(K_k,J_l) \), including two exact values: \( \ram{5}{6}=37 \) and \( \ram{5}{7}=65 \). These results are presented in Table~\ref{tbl:dropEdge}.
The Ramsey graphs which establish the new lower bounds (and the source code of our algorithms to generate circulant and block-circulant Ramsey graphs) can be obtained from \url{https://github.com/Steven-VO/circulant-Ramsey} as well as from the \textit{House of Graphs}~\cite{hog} through the links in Table~\ref{tbl:dropEdge}. In each case we computationally verified that these graphs are indeed Ramsey graphs for the given parameters using two independent programs (see Section~\ref{subsect:testing} for details).
Many best-known lower bounds could be reproduced within seconds of CPU time using our generators for circulant and block-circulant Ramsey graphs. The complexity of the exhaustive non-existence results for block-circulant graphs grows very rapidly with increasing parameters, and we therefore limited such searches to about 3 days of CPU-time for each case.
Sometimes the largest block-circulant Ramsey graph found could be extended by one extra vertex, connected in a specific way to the other vertices. This is denoted by ``+1'' in Table~\ref{tbl:otherLower}.
In some cases we were able to construct larger Ramsey graphs by performing a \textit{local search} on a block-circulant Ramsey graph, that is: we remove some vertices and then add more new vertices in all possible ways and check if any of them is still a Ramsey graph. This is denoted by ``LS'' in Table~\ref{tbl:dropEdge}.
We also found an interesting link between three different Ramsey numbers. McNamara and Radziszowski~\cite{mcnamara1991ramsey} showed that there is a unique extremal graph for \( \ram{4}{7} \), known as the complement of the Schl\"afli graph. This graph can also be constructed as \( O^{-}_{6}(2) \): the orthogonal points on an elliptic quadric in \( PG(5,2) \) (see~\cite{StrongReg} for details).
\( NO^{-}_6(2) \) is a geometrically related graph, which we found with the block-circulant generator, and which we proved to be extremal as a \( (J_5,J_6) \)-graph. These graphs are combined in \( VO_{6}^{-}(2) \), which turned out to be an extremal Ramsey graph for \( \ram{5}{7} \). These three graphs are all vertex-transitive, strongly-regular and block-circulant.
\begin{theorem}
\( \ram{5}{6}=37 \).
\end{theorem}
\begin{proof}
The upper bound follows from Theorem~\ref{thm:ub_r56}. The lower bound is established by \( NO^{-}_6(2) \) for which we computationally verified that it is a $(J_5,J_6;36)$-graph using two independent programs.
\end{proof}
\begin{theorem}
\( \ram{5}{7}=65 \) and the graph \( VO_{6}^{-}(2) \) is the only extremal Ramsey graph for $\ram{5}{7}$.
\end{theorem}
\begin{proof}
The upper bound follows from~\cite{SemiDef} or Corollary~\ref{corr:j5j7}.
The lower bound is established by \( VO_{6}^{-}(2) \) for which we computationally verified that it is a $(J_5,J_7;64)$-graph using two independent programs.
Since \(O^{-}_{6}(2) \) is unique as extremal \((J_4,J_7) \)-graph~\cite{mcnamara1991ramsey}, and \( \ram{5}{7} = \ram{5}{6} + \ram{4}{7}\), it follows from the simple arguments of Section~\ref{sect:UB} that every \( (J_5,J_7;64) \)-graph $G$ must have the following property: \( \forall \, v \in V: \neigh{G}{1}{v} \cong O^{-}_{6}(2) \). The graphs with this property have been completely characterised (there are only two of them which are connected)~\cite{buekenhout}. The first is \( VO_{6}^{-}(2) \); the other graph with this property is known as \(TO_{6}^{-}(2) \), but has independence number~$7$.
Therefore \( VO_{6}^{-}(2) \) is the only extremal Ramsey graph for $\ram{5}{7}$.
\end{proof}
\begin{table}[htb!]
\centering
\setlength{\tabcolsep}{4pt}
\small
\begin{tabular}{l|c|c|c|c}
Old bounds & New LB & Method & Old LB reference & HoG id\\ \hline
$ 47 \leq R(K_3,J_{12})\leq 53 $ & 49 & Block-circulant
& Implied by $R(K_3,K_{11})$ & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=44120}{44120}\\
$ 60\leq R(K_3,J_{14}) \leq 71 $ & 61 & Block-circulant
& Implied by $R(K_3,K_{13})$ & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=44122}{44122}\\
\( 29\leq R(J_4,K_8) \leq 39\) & 36 & Block-circulant
& Implied by $R(J_4,K_7)$ & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=44116}{44116}\\
$ 36\leq R(J_4,K_9)\leq 56 $ & 41& Block-circulant
& Implied by $R(K_3,K_9)$ & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=44132}{44132} \\
$ 41\leq R(J_4,J_{10})\leq 63 $ & 43& Block-circulant
& Exoo (2000)~\cite{Exoo2000} & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=45620}{45620} \\
\( 41 \leq R(J_4,K_{10}) \leq 65 \) & 49 & Block-circulant
& Implied by $R(J_4,J_{10}$) & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=44124}{44124}\\
\( 74 \leq R(J_4, J_{16}) \) & 82 & Strongly regular graph
& Implied by \( R(K_3,K_{15}) \) & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=962}{962} \\
\(49 \leq R(K_4,J_8)\leq 74 \) & 52 & Block-circulant
& Implied by $R(K_4,K_7)$ & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=44112}{44112}\\
\(59 \leq R(K_4,J_9)\leq 105 \) & 62 & Circulant
& Implied by $R(K_4,K_8)$ & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=44130}{44130}\\
\( 31 \leq R(J_5,J_6) \leq 38 \) & \textbf{37} & Block-circulant
& Exoo (2000)~\cite{Exoo2000} & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=34470}{34470}\\
\( 37 \leq R(J_5,K_6)\leq 53 \) & 43 & Block-circulant (+LS)
& Exoo (2000)~\cite{Exoo2000} & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=44118}{44118}\\
\( 40 \leq R(J_5,J_7)\leq 65 \) & \textbf{65} & Strongly regular graph
& Exoo (2000)~\cite{Exoo2000} & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=35441}{35441}\\
\( 80 \leq R(K_5,J_8) \leq 175 \) & 81 & Circulant
& Implied by \( R(K_5,K_7) \) & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=45622}{45622}\\
\( 101 \leq R(K_5, J_9) \leq 275 \) & 121 & Strongly regular graph
& Implied by \( R(K_5,K_8) \) & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=44126}{44126}\\
\( 80 \leq R(J_6,J_8) \leq 218 \) & 83 & Circulant
& Implied by \( R(K_5,K_7) \) & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=44128}{44128}\\
\end{tabular}
\caption{Improved lower bounds for Ramsey numbers of the form \( R(J_k,J_l) \), together with the best-known bounds prior to this work. Exact values are marked in bold. ``LS'' stands for local search. The last column refers to the ids of these new Ramsey graphs in the \textit{House of Graphs}~\cite{hog}.}
\label{tbl:dropEdge}
\end{table}
We focussed on Ramsey numbers involving \( J_k \), but also executed our algorithms on other combinations of parameters, including multi-colour Ramsey numbers. We obtained several improvements over the current lower bounds, including some exact values for Ramsey numbers on wheels and complete bipartite graphs. These results are shown in Table~\ref{tbl:otherLower}. Note that \(W_n\) (i.e.\ a wheel graph on $n$ vertices) is not contained in \(W_{n+1}\). Therefore these Ramsey numbers are not necessarily increasing in $n$. We believe that more improvements could be possible by applying the same techniques to Ramsey numbers with different sets of parameters. But as there is a very large number of parameter combinations, we focussed on the most common cases.
\begin{table}[htb!]
\centering
\begin{tabular}{l|c|c|c}
Old bounds & New LB & Method & HoG id\\ \hline
\( R(W_7,W_4) \leq 21 \) & \textbf{21} & Block-circulant & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=35445}{35445}\\
\( R(W_7,W_7)\leq 19 \) & \textbf{19} & Block-circulant & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=35443}{35443} \\
\( R(W_9,W_9) \) & 21 & Block-circulant& \href{https://hog.grinvin.org/ViewGraphInfo.action?id=44138}{44138} \\
\( R(K_6,W_6) \leq 40 \) & 34 & Block-circulant & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=44134}{44134} \\
\( 43 \leq R(K_7, W_5)\leq 50 \) & 45 & Block-circulant & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=44136}{44136} \\
\( R(K_7,W_6) \leq 55 \) & 45 & Block-circulant & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=44136}{44136}\\
\( 39 \leq R(K_{11},C_4)\leq 44 \) & 40 & Block-circulant & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=44167}{44167}\\
24 \(\leq R(K_{2,6},K_{2,8}) \leq 25 \) & \textbf{25} & Block-circulant & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=44140}{44140}\\
28 \(\leq R(K_{2,7},K_{2,10}) \leq 31 \) & 29 & Block-circulant & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=44142}{44142}\\
32 \(\leq R(K_{2,8},K_{2,10}) \leq 33 \) & \textbf{33} & Block-circulant & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=44144}{44144} \\
\(R(K_{2,8},K_{2,11}) \leq 35 \) & \textbf{35} & Block-circulant & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=44146}{44146}\\
\( 36\leq R(K_{2,9},K_{2,11}) \leq 37 \) & \textbf{37} & Block-circulant & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=44148}{44148}\\
\( R(K_{3,4},K_{2,5}) \leq 20 \) & \textbf{20} & Block-circulant (+1) & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=44150}{44150}\\
\( R(K_{3,4},K_{3,3}) \leq 20 \) & 19 & Block-circulant & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=44152}{44152}\\
\( R(K_{3,4},K_{3,4}) \leq 25 \) & \textbf{25} & Block-circulant & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=44154}{44154}\\
\( R(K_{3,5},K_{2,4}) \leq 20 \) & 19 & Block-circulant & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=45601}{45601} \\
\( R(K_{3,5},K_{2,5}) \leq 23 \) & 21 & Circulant & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=44156}{44156} \\
\( R(K_{3,5},K_{3,3}) \leq 24 \) & 21 & Circulant & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=44156}{44156}\\
\( R(K_{3,5},K_{3,4}) \leq 29 \) & 25 & Block-circulant & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=44158}{44158}\\
\(30 \leq R(K_{3,5},K_{3,5}) \leq 33 \) & \textbf{33} & Block-circulant & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=44160}{44160}\\
\(30 \leq R(K_{4,4},K_{4,4}) \leq 49 \) & 33 & Block-circulant & \href{https://hog.grinvin.org/ViewGraphInfo.action?id=44162}{44162}\\
\( 30 \leq R(K_3,J_4,K_4)\leq 40 \) & 31 & Block-circulant & (\href{https://github.com/Steven-VO/circulant-Ramsey}{GitHub})\\
\( 28 \leq R(K_4,J_4,C_4)\leq 36 \) & 29 & Block-circulant & (\href{https://github.com/Steven-VO/circulant-Ramsey}{GitHub})\\
\end{tabular}
\caption{Improved lower bounds for various Ramsey numbers. Exact values are marked in bold. ``+1'' denotes a one-vertex extension. The upper bounds are all from~\cite{SemiDef} or~\cite{dynSur}. The last column refers to the ids of these new Ramsey graphs in the \textit{House of Graphs}~\cite{hog}. The three-coloured graphs are only available on \href{https://github.com/Steven-VO/circulant-Ramsey}{GitHub}.}
\label{tbl:otherLower}
\end{table}
\subsection{Correctness testing}
\label{subsect:testing}
All programs were written in the programming language C. The counts of the generated \( (J_5,J_5) \)-graphs on 20 and 21 vertices (cf.\ Table~\ref{table:countJ5J5}) are in complete agreement with previous results in~\cite{SRK5e}. As a partial verification for the correctness of \( \mathcal{R}(J_5,J_5;19) \), up to 2 vertices were removed and added again in every possible way for every graph in \( \mathcal{R}(J_5,J_5;19) \). This led to exactly the same set of Ramsey graphs.
The correctness of the counts in Table~\ref{table:countJ4J6} was tested by writing a plugin for the program \verb|geng|~\cite{nauty-website, mckay_14}. This yielded exactly the same graphs as those we received from Radziszowski~\cite{sprCom}.
The graphs witnessing the lower bounds reported in Table~\ref{tbl:dropEdge} and Table~\ref{tbl:otherLower} were all independently verified using the Graph-package in \textit{SageMath}.
Together with the source code of our generators, they can be obtained from \url{https://github.com/Steven-VO/circulant-Ramsey}.
\section{Further research}
\label{sect:further_research}
\begin{problem}
Is \( NO^{-}_6(2) \) the only extremal Ramsey graph for $\ram{5}{6}$?
\end{problem}
We strongly suspect that this is the case as the closely related extremal Ramsey graphs for \( \ram{4}{7} \) and \( \ram{5}{7} \) are unique as well. The following computational evidence also seems to indicate that \(NO^{-}_6(2) \) is the only $(J_5,J_6;36)$-graph:
\begin{itemize}
\item A local search was performed on $NO^{-}_6(2)$: we removed 2 vertices and then readded them in all possible ways and this did not yield any additional $(J_5,J_6;36)$-graphs. So within a distance of 2 no other $(J_5,J_6;36)$-graphs exist.
\item Up to isomorphism $NO^{-}_6(2)$ is the only block-circulant \( (J_5,J_6;36) \)-graph on 6 blocks or less.
\end{itemize}
It can be observed that Ramsey numbers of the form \( \ram{k}{l} \) seem to ``behave better'' than the Ramsey numbers \( R(K_k,J_l) \): there are extremal graphs with a more apparent structure, and they are often closer to the theoretical upper bound.
More specifically, some ``strictly smaller'' cases than \( \ram{5}{6} \) and \( \ram{5}{7} \) are still unsolved: \(30 \leq R(K_4,J_6)\leq 32 \) and \( 30 \leq R(J_5,K_5) \leq 33 \).
\subsection*{Acknowledgements}
We would like to thank Gunnar Brinkmann and Stanis{\l}aw Radziszowski for useful suggestions.
Several of the computations for this work were carried out using the supercomputer infrastructure provided by the VSC (Flemish Supercomputer Center), funded by the Research Foundation Flanders (FWO) and the Flemish Government.
\bibliographystyle{plain}
| {
"timestamp": "2021-07-12T02:18:47",
"yymm": "2107",
"arxiv_id": "2107.04460",
"language": "en",
"url": "https://arxiv.org/abs/2107.04460",
"abstract": "Let $R(H_1,H_2)$ denote the Ramsey number for the graphs $H_1, H_2$, and let $J_k$ be $K_k{-}e$. We present algorithms which enumerate all circulant and block-circulant Ramsey graphs for different types of graphs, thereby obtaining several new lower bounds on Ramsey numbers including: $49 \\leq R(K_3,J_{12})$, $36 \\leq R(J_4,K_8)$, $43 \\leq R(J_4,J_{10})$, $52 \\leq R(K_4,J_8)$, $37 \\leq R(J_5,J_6)$, $43 \\leq R(J_5,K_6)$, $65\\leq R(J_5,J_7)$. We also use a gluing strategy to derive a new upper bound on $R(J_5,J_6)$. With both strategies combined, we prove the value of two Ramsey numbers: $R(J_5,J_6)=37$ and $R(J_5,J_7)=65$. We also show that the 64-vertex extremal Ramsey graph for $R(J_5,J_7)$ is unique. Furthermore, our algorithms also allow to establish new lower bounds and exact values on Ramsey numbers involving wheel graphs and complete bipartite graphs, including: $R(W_7,W_4) = 21$, $R(W_7,W_7) = 19$, $R(K_{3,4},K_{3,4}) = 25$, and $R(K_{3,5}, K_{3,5})=33$.",
"subjects": "Combinatorics (math.CO); Discrete Mathematics (cs.DM)",
"title": "New bounds for Ramsey numbers $R(K_k-e,K_l-e)$",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9932024686508317,
"lm_q2_score": 0.7956580976404297,
"lm_q1q2_score": 0.7902495867784993
} |
https://arxiv.org/abs/1909.00177 | Functions with ultradifferentiable powers | We study the regularity of smooth functions $f$ defined on an open set of $\mathbb{R}^n$ and such that, for certain integers $p\geq 2$, the powers $f^p :x\mapsto (f(x))^p$ belong to a Denjoy-Carleman class $\mathcal{C}_M$ associated with a suitable weight sequence $M$. Our main result is a statement analogous to a classic theorem of H. Joris on $\mathcal{C}^\infty$ functions: if a function $f:\mathbb{R}\to\mathbb{R}$ is such that both functions $f^p$ and $f^q$ with $\gcd(p,q)=1$ are of class $\mathcal{C}_M$ on $\mathbb{R}$, and if the weight sequence $M$ satisfies the so-called moderate growth assumption, then $f$ itself is of class $\mathcal{C}_M$. Various ancillary results, corollaries and examples are presented. | \section*{Introduction}
It is generally difficult to relate the regularity of a real or complex-valued function $f$ defined on an open set of $\mathbb{R}^n$ to regularity assumptions on some of its powers $f^p :x\mapsto (f(x))^p$ with $p\in \mathbb{N}$, $p\geq 2$. However, in 1982, H. Joris \cite{Jor} proved the following striking result: if a function $f:\mathbb{R}\to \mathbb{R}$ is such that both functions $f^2$ and $f^3$, or more generally $f^p$ and $f^q$ with $\gcd(p,q)=1$, are of class $\mathcal{C}^\infty$ on $\mathbb{R}$, then $f$ itself is of class $\mathcal{C}^\infty$. As pointed out in \cite{DKP, JP}, the result also holds for complex-valued functions. Various generalizations were subsequently established around the notion of pseudo-immersion \cite{DKP, JP, Rai}.
In spite of its innocent-looking statement, Joris's theorem is not easy to establish. The original proof involved an intricate study of the vanishing of the derivatives of $f$ at points of flatness, based on combinatorial relations arising from the Fa\`a di Bruno formula.
However, a much simpler and shorter proof was published in 1989 by I. Amemiya and K. Masuda \cite{AM}. Its key argument is an algebraic lemma stating that the ring of power series with coefficients in a ring $R$ inherits a suitable property of $R$ relative to powers of its elements.
Unexpectedly, in 2018, as Joris's theorem was discussed on the \emph{MathOverflow} website, the anonymous contributor nicknamed ``fedja''
outlined a remarkable alternative proof based on a characterization of smooth functions on the real line by holomorphic approximation. Fedja's argument \cite{Fed} actually yields an even stronger result, as it works for finite differentiability classes: roughly speaking, given $p$ and $q$ with $\gcd(p,q)=1$, there is an integer $m$, depending only on $p$ and $q$, such that for $k$ large enough, the function $f$ is of class $\mathcal{C}^k$ as soon as $f^p$ and $f^q$ are of class $\mathcal{C}^{mk}$, and the proof provides crude estimates for $m$.\\
The main goal of the present paper is to show that the property described by Joris's theorem holds in Denjoy-Carleman ultradifferentiable classes $\mathcal{C}_M$, provided the weight sequence $M$ that defines the class satisfies the so-called \emph{moderate growth} assumption. Our approach will follow closely the path of the aforementioned proof of Fedja \cite{Fed}, while making suitable modifications needed in the Denjoy-Carleman setting.\\
The paper is organized as follows.
Section \ref{DCclasses} gathers the definitions and required material pertaining to weight sequences and Denjoy-Carleman classes.
Section \ref{exposit} begins with a review of some known results on the regularity of $\mathcal{C}^\infty$ functions $f:\mathbb{R}\to\mathbb{R}$ such that $f^p$ is of class $\mathcal{C}_M$ for a given integer $p\geq 2$. Incidentally, Proposition \ref{answer} answers a question asked in \cite{Th3}. These mostly negative results serve as a motivation for a $\mathcal{C}_M$ version of Joris's theorem, which is stated in the second part of Section \ref{exposit} (Theorem \ref{main}). Various comments and corollaries are then given. In particular, the case of functions of several variables is briefly discussed.
Sections \ref{technical} and \ref{final} are entirely devoted to the proof of Theorem \ref{main}. In Section \ref{technical}, we gather the main technical ingredients needed in the proof. In particular, an approximation-theoretic characterization of $\mathcal{C}_M$ regularity on a real interval is established; this result (Proposition \ref{approx}) may be of independent interest. In Section \ref{final}, the technical tools of Section \ref{technical} are finally used to complete the proof of Theorem \ref{main}, following the general pattern of Fedja's argument \cite{Fed}.
\section{Denjoy-Carleman classes}\label{DCclasses}
\subsection{Some properties of sequences}\label{sequences}
A sequence $M=(M_j)_{j\geq 0}$ of positive real numbers will be called a \emph{weight sequence} if it satisfies the following assumptions:
\begin{equation}\label{norm}
M \text{ is increasing and } M_0=1,
\end{equation}
\begin{equation}\label{logc}
M \text{ is logarithmically convex},
\end{equation}
\begin{equation}\label{nonana}
\lim_{j\to\infty} (M_j)^{1/j}=\infty.
\end{equation}
Property \eqref{logc} amounts to saying that the sequence $(M_{j+1}/M_j)_{j\geq 0}$ is nondecreasing. Together with \eqref{norm}, it implies
\begin{equation*}
M_jM_k\leq M_{j+k}\ \textrm{ for any } (j,k)\in\mathbb{N}^2.
\end{equation*}
We say that a weight sequence $M$ has \emph{moderate growth} if there is a positive constant $A$ such that we have
\begin{equation}\label{modg}
M_{j+k}\leq A^{j+k} M_jM_k\ \textrm{ for any } (j,k)\in\mathbb{N}^2.
\end{equation}
We say that a weight sequence $M$ satisfies the \emph{strong non-quasianalyticity} condition if there is a positive constant $A$ such that we have
\begin{equation}\label{snqa}
\sum_{j\geq k}\frac{M_j}{(j+1)M_{j+1}}\leq A \frac{M_k}{M_{k+1}} \textrm{ for any } k\in\mathbb{N}.
\end{equation}
Property \eqref{snqa} obviously implies the classical Denjoy-Carleman \emph{non-quasiana\-lyt\-icity} condition
\begin{equation}\label{nqa}
\sum_{j\geq 0}\frac{M_j}{(j+1)M_{j+1}}<\infty.
\end{equation}
A weight sequence $M$ is said to be \emph{strongly regular} if it satisfies \eqref{modg} and \eqref{snqa}.
\begin{exam}\label{exgev}
Let $ \alpha $ and $\beta$ be real numbers, with $\alpha> 0$. One can define a strongly regular weight sequence $M$ by setting $M_j=(j!)^\alpha(\ln j)^{\beta j}$ for $j$ large enough and choosing suitable first terms. This is the case, in particular, for Gevrey sequences $M_j=(j!)^\alpha$.
\end{exam}
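For the pure Gevrey case $M_j=(j!)^\alpha$, the moderate growth property \eqref{modg} can be checked in one line; the following computation is a small verification added for illustration. Since $\binom{j+k}{j}\leq 2^{j+k}$, we have
\begin{equation*}
M_{j+k}=\big((j+k)!\big)^\alpha=\left(\binom{j+k}{j}\,j!\,k!\right)^{\alpha}\leq \big(2^{\alpha}\big)^{j+k}M_jM_k,
\end{equation*}
so that \eqref{modg} holds with $A=2^\alpha$.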
\begin{exam}
For any real $\beta>0$, one can also define a weight sequence $M$ with $M_j=(\ln j)^{\beta j}$ for $j$ large enough. This sequence has moderate growth, and it satisfies the non-quasianalyticity property \eqref{nqa} if and only if $\beta>1$. It does not satisfy the strong non-quasianalyticity property \eqref{snqa}.
\end{exam}
\begin{exam}\label{qgev}
For any real $\lambda>0$, the weight sequence $M^\lambda$ defined by $M^\lambda_j=\exp\big(\frac{\lambda}{4}j^2\big)$ satisfies \eqref{snqa} but it does not have moderate growth. The sequences $M^\lambda$ will reappear in the examples of Section \ref{exposit}.
\end{exam}
With every weight sequence $M$, it is a standard procedure to associate the function $h_M$ defined by $h_M(t)=\inf_{j\geq 0}t^jM_j $ for any real $ t>0 $, and $ h_M(0)=0 $. Using \eqref{norm}, \eqref{logc} and \eqref{nonana}, it is easy to see that
$h_M(t)=t^jM_j$ for $j\geq 1$ and $\frac{M_j}{M_{j+1}}\leq t< \frac{M_{j-1}}{M_j}$, and $ h_M(t)=1 $ for $t\geq 1/M_1 $. In particular, $h_M$ is continuous, nondecreasing and it fully determines $M$ since we have
\begin{equation*}
M_j=\sup_{t>0}t^{-j}h_M(t)\, \text{ for any }\, j\in\mathbb{N}.
\end{equation*}
Setting $t_j=\frac{M_j}{M_{j+1}}$, we also obtain
\begin{equation}\label{Legendre2}
M_j= t_j^{-j}h_M(t_j)\, \text{ with }\, \lim_{j\to\infty}t_j=0.
\end{equation}
\begin{exam}
Let $M$ be as in Example \ref{exgev}, and set $\eta(t)=\exp(-(t\vert\ln t\vert^\beta)^{-1/\alpha})$ for $t>0$ small enough. Elementary computations show that there are constants $a>0$, $b>0$ such that $\eta(at)\leq h_M(t)\leq \eta(bt)$ as $t$ tends to $0$.
\end{exam}
It can be derived from \cite[Proposition 3.6]{Kom} that the moderate growth assumption \eqref{modg} is equivalent to the existence, for any real $s\geq 1$, of a constant $\kappa_s\geq 1 $ such that
\begin{equation}\label{hfunct2}
h_M(t)\leq \big(h_M(\kappa_s t)\big)^s\text{ for any }t\geq 0.
\end{equation}
Other equivalent conditions for \eqref{modg}, or for the strong non-quasianalyticity property \eqref{snqa}, can be found in the state-of-the-art study of weight sequences and weight functions carried out in the recent works \cite{Jim, JSS1, JSS2}, originating in J. Sanz's work on proximate orders \cite{San}.
As a consequence of \eqref{hfunct2} and of the definition of $h_M$, it is easy to see that if a weight sequence $M$ has moderate growth, then we have
\begin{equation}\label{hfunct3}
t^{-j}h_M(t)\leq \kappa_2^jM_j h_M(\kappa_2t)\text{ for any }t> 0 \text{ and any }j\in \mathbb{N}.
\end{equation}
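For completeness, here is the short derivation of \eqref{hfunct3}: taking $s=2$ in \eqref{hfunct2} and using $h_M(\kappa_2 t)\leq (\kappa_2 t)^jM_j$, which holds by the very definition of $h_M$, we get
\begin{equation*}
h_M(t)\leq \big(h_M(\kappa_2 t)\big)^2\leq (\kappa_2 t)^j M_j\, h_M(\kappa_2 t),
\end{equation*}
and it suffices to divide both sides by $t^j$.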
\subsection{Definition of Denjoy-Carleman classes}
In what follows, we denote the length $j_1+\cdots+j_n$ of a multi-index $J=(j_1,\ldots,j_n)\in\mathbb{N}^n$ by the corresponding lower case letter $j$, and we put $\partial^J=\partial^j/\partial x_1^{j_1}\cdots\partial x_n^{j_n}$.
Let $\Omega$ be an open subset of $\mathbb{R}^n$, and let $M$ be a weight sequence. We say that a $\mathcal{C}^\infty$ function $f:\Omega\to \mathbb{C}$ belongs to the \emph{Denjoy-Carleman class} $\mathcal{C}_M(\Omega)$ if for any compact subset $X$ of $\Omega$, one can find a real number $\sigma>0$ and a constant $C\geq 0$ such that
\begin{equation}
\vert \partial^Jf(x)\vert \leq C\sigma^j j!M_j\ \text{ for any }\, J\in\mathbb{N}^n\, \text{ and }\, x\in X.
\end{equation}
A germ of function at the origin in $\mathbb{R}^n$ is said to be of class $\mathcal{C}_M$ if it has a representative in $\mathcal{C}_M(\Omega)$ for some open neighborhood $\Omega$ of $0$. We denote by $\mathcal{C}_M(\mathbb{R}^n,0)$ the set of all such germs.
Corresponding definitions for functions on segments of $\mathbb{R}$ instead of an open set will be needed.
Given a segment $[a,b]$ of $\mathbb{R}$, a real number $\sigma>0$, and a $\mathcal{C}^\infty$ function $f:[a,b]\to \mathbb{C}$, we set
\begin{equation*}
\Vert f\Vert_{[a,b],\sigma}=\sup_{x\in [a,b],\ j\in \mathbb{N}}\frac{\vert f^{(j)}(x)\vert}{\sigma^j j!M_j}.
\end{equation*}
We then say that the function $f$ belongs to the space $\mathcal{C}_{M,\sigma}([a,b])$ if it satisfies $\Vert f\Vert_{[a,b],\sigma}<\infty$. It is easy to see that $\mathcal{C}_{M,\sigma}([a,b])$ is a Banach space for the norm $\Vert\cdot\Vert_{[a,b],\sigma}$. Finally, we define the \emph{Denjoy-Carleman class} $\mathcal{C}_M([a,b])$ as the union of all spaces $\mathcal{C}_{M,\sigma}([a,b])$ for $\sigma>0$. Given an open subset $\Omega$ of $\mathbb{R}$, it is clear that a function $f:\Omega\to \mathbb{C}$ belongs to $\mathcal{C}_M(\Omega)$ if and only if its restriction to every segment $[a,b]$ contained in $\Omega$ belongs to $\mathcal{C}_M([a,b])$.
We end this section with a brief review of the relationship between conditions on the sequence $M$ and properties of the corresponding classes; we refer to \cite{Th2} for details and references. Conditions \eqref{norm} and \eqref{logc} imply that $\mathcal{C}_M(\Omega)$, $\mathcal{C}_M(\mathbb{R}^n,0)$ and $\mathcal{C}_M([a,b])$ are algebras, and that $\mathcal{C}_M$ regularity is stable under composition. Condition \eqref{nonana} ensures that $\mathcal{C}_M(\Omega)$ (resp. $\mathcal{C}_M(\mathbb{R}^n,0)$) strictly contains the algebra of real-analytic functions in $\Omega$ (resp. real-analytic germs at the origin). The moderate growth assumption \eqref{modg} can be interpreted in terms of stability of $\mathcal{C}_M$ regularity under the action of so-called ultradifferential operators; see \cite{Kom}. It clearly implies the weaker condition
\begin{equation}\label{stabder}
M_{j+1}\leq A^{j+1}M_j \ \textrm{ for any } j\in\mathbb{N}
\end{equation}
which characterizes the stability of $\mathcal{C}_M$ classes under derivation. The non-quasi\-an\-a\-lyt\-icity property \eqref{nqa} characterizes the existence of a non-trivial element of $\mathcal{C}_M(\mathbb{R}^n,0)$ which is flat at $0$, whereas the stronger condition \eqref{snqa} is a necessary and sufficient condition for a $\mathcal{C}_M$ version of Borel's extension theorem.
\section{Functions with ultradifferentiable powers}\label{exposit}
\subsection{Background and known results}\label{background}
Let $M$ be a weight sequence and let $f$ be a germ of complex-valued
function of class $\mathcal{C}^\infty$ at the origin in $\mathbb{R}$. Assume that there is an integer $p\geq 2$ such that the germ $f^p: x\mapsto (f(x))^p$ belongs to $\mathcal{C}_M(\mathbb{R},0)$.
As observed in \cite[Remark 1]{Th3}, it is not difficult to check that if $\mathcal{C}_M(\mathbb{R},0)$ is stable under derivation and quasianalytic, then $f$ also belongs to $\mathcal{C}_M(\mathbb{R},0)$.
This is no longer true in the non-quasianalytic case: indeed, for any real $\lambda>0$, set
\begin{equation}\label{glam}
g_\lambda(x)=\exp\left(-\frac{1}{\lambda}(\ln x)^2\right)\, \text{ for }\, x>0\, \text{ and }\, g_\lambda(x)=0\, \text{ for }\,x\leq 0.
\end{equation}
The proof of \cite[Lemma 1]{Th3} shows that $g_\lambda$ belongs to $\mathcal{C}_{M^\lambda}(\mathbb{R},0)$, where $M^\lambda$ is defined in Example \ref{qgev}, but not to any strictly smaller ring $\mathcal{C}_M(\mathbb{R},0)$. In particular, for $f=g_{p\lambda}$, we see that $f^p$ belongs to $\mathcal{C}_{M^\lambda}(\mathbb{R},0)$ whereas $f$ does not. Thus, the result fails for the weight sequences $M^\lambda$, even though the associated classes are stable under derivation and strongly non-quasianalytic. Since $M^\lambda$ does not have moderate growth, it was asked in \cite{Th3} whether the result would hold for tamer sequences $M$, namely strongly regular ones. The answer is still negative, as shown by the following proposition.
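The elementary identity behind these examples is worth recording: for any $\mu>0$ and any integer $k\geq 1$, we have, for $x>0$,
\begin{equation*}
(g_\mu(x))^k=\exp\left(-\frac{k}{\mu}(\ln x)^2\right)=g_{\mu/k}(x),
\end{equation*}
so that, in particular, $(g_{p\lambda})^p=g_\lambda$.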
\begin{prop}\label{answer}
Let $M$ be a strongly regular weight sequence. For every integer $p\geq 2$, there is a smooth function germ $f$ at the origin in $\mathbb{R}$ such that $f^p\in\mathcal{C}_M(\mathbb{R},0)$ and $f\notin \mathcal{C}_M(\mathbb{R},0)$.
\end{prop}
\begin{proof} We start with a counter-example in two variables, slightly generalizing a construction of \cite{Th1}. By \cite[Lemma 3.6]{Th1b}, there is an element $\eta$ of $\mathcal{C}_M(\mathbb{R})$ which vanishes at infinite order at the origin and satisfies $\eta(t)\geq h_M(b\vert t\vert)$ for some suitable constant $b>0$. Given an integer $m\geq 2$, we then set, for $(x,y)\in \mathbb{R}^2$,
\begin{equation*}
F(x,y)=(x^2+y^{2m})\left(1+\frac{x^2\eta(y)}{x^2+y^{2m}}\right)^{1/p}.
\end{equation*}
Since $\eta$ is flat at $0$, the $\mathcal{C}^\infty$-smoothness of $F$ is immediate. Moreover, we have
$(F(x,y))^p=(x^2+y^{2m})^p+ x^2(x^2+y^{2m})^{p-1}\eta(y)$, hence $F^p \in \mathcal{C}_M(\mathbb{R}^2,0)$.
Using the power series expansion of $(1+t)^{1/p}$, we obtain, for $(x,y)$ close enough to $(0,0)$, the expansion
\begin{equation*}
F(x,y)=x^2+y^{2m}+ \frac{1}{p}x^2\eta(y)+\sum_{j=1}^{+\infty}(-1)^ja_j\frac{x^{2j+2}}{y^{2mj}}\left(1+\frac{x^2}{y^{2m}}\right)^{-j}(\eta(y))^{j+1}
\end{equation*}
with $a_j= \frac{(p-1)(2p-1)\cdots (jp-1)}{p^{j+1}(j+1)!}$ for $j\geq 1$. Assume $0\leq x< y^m$. Expanding $\left(1+\frac{x^2}{y^{2m}}\right)^{-j}$ in power series, we then obtain the absolutely convergent expansion
\begin{equation}\label{expan1}
F(x,y)=G(x,y)+\sum_{j=1}^{+\infty}\sum_{k=0}^{+\infty}(-1)^{j+k}a_j \binom{j+k-1}{j-1}\frac{x^{2(j+k)+2}}{y^{2m(j+k)}}(\eta(y))^{j+1}
\end{equation}
with $G(x,y)= x^2\big(1+\frac{1}{p}\eta(y)\big)+y^{2m}$. We set $l=j+k$ and exchange the order of summation, so that \eqref{expan1} becomes
\begin{equation}\label{expan}
F(x,y)=G(x,y)+\sum_{l=1}^{+\infty}(-1)^l c_l(y)x^{2l+2} \, \text{ for }\, 0\leq x<y^m,
\end{equation}
with
\begin{equation*}
c_l(y)=y^{-2ml}\sum_{j=1}^l a_j \binom{l-1}{j-1}(\eta(y))^{j+1}\, \text{ for }\, l\geq 1.
\end{equation*}
Clearly, \eqref{expan} implies
\begin{equation*}
\frac{\partial^{2l+2} F}{\partial x^{2l+2}}(0,y)=(-1)^l(2l+2)!\, c_l(y)\, \text{ for }\, y>0\, \text{ and }\, l\geq 1.
\end{equation*}
Observe that $c_l(y)\geq y^{-2ml}a_1 (\eta(y))^2\geq a_1(y^{-ml}h_M(by))^2$. Moreover, by \eqref{Legendre2}, there is a sequence $(y_l)_{l\geq 0}$ of positive real numbers such that $\lim_{l\to\infty} y_l=0$ and $h_M(by_l)=(by_l)^{ml}M_{ml}$, hence $c_l(y_l)\geq a_1 b^{2ml}(M_{ml})^2$. Using \eqref{logc} and \eqref{modg}, we also have $(M_{ml})^2\geq A^{-2ml}M_{2ml}\geq A^{-2ml}(M_{2l})^m\geq A^{-4ml-2m}M_2^{-m}(M_{2l+2})^m$. Thus, we finally see that there is a constant $C>0$ such that
\begin{equation}\label{noreg}
\left\vert\frac{\partial^{2l+2} F}{\partial x^{2l+2}}(0,y_l)\right\vert\geq C^{l+1} (2l+2)!(M_{2l+2})^m,\, \text{ with }\, \lim_{l\to\infty}y_l=0,
\end{equation}
which clearly implies $F\notin\mathcal{C}_M(\mathbb{R}^2,0)$. The existence of a similar counter-example in one variable is now a direct consequence of the results in \cite[Section 3]{KMR}: starting from \eqref{noreg}, it is possible to construct a curve $\gamma:\mathbb{R}\to \mathbb{R}^2$, with components in $\mathcal{C}_M(\mathbb{R})$, such that $\gamma(0)=0$ and $F\circ\gamma\notin \mathcal{C}_M(\mathbb{R},0)$. Thus, setting $f=F\circ\gamma$, we have $f^p= (F)^p\circ\gamma\in \mathcal{C}_M(\mathbb{R},0)$ and $f\notin\mathcal{C}_M(\mathbb{R},0)$.
\end{proof}
As in the classic $\mathcal{C}^\infty$ case of Joris's theorem, it turns out, however, that a positive result can be obtained with assumptions on two suitable powers of $f$.
\subsection{Joris's theorem for Denjoy-Carleman classes}
Due to the local nature of the problem, it is convenient to also state the main result of this article in terms of function germs.
\begin{thm}\label{main}
Let $M$ be a weight sequence that satisfies the moderate growth condition. Let $f$ be a germ of complex-valued function at the origin in $\mathbb{R}$. Assume there is a couple $(p,q)$ of non-zero natural integers with $\gcd(p,q)=1$ such that both germs $f^p$ and $f^q$ belong to $\mathcal{C}_M(\mathbb{R},0)$. Then $f$ belongs to $\mathcal{C}_M(\mathbb{R},0)$.
\end{thm}
Postponing the proof to Sections \ref{technical} and \ref{final}, we shall devote the rest of the present section to comments and corollaries.
\begin{rem}
Obviously, the above statement implies that if $\Omega$ is an open subset of $\mathbb{R}$ and $f:\Omega\to \mathbb{C}$ is a function such that $f^p$ and $f^q$ belong to $\mathcal{C}_M(\Omega)$, with $\gcd(p,q)=1$, then $f$ belongs to $\mathcal{C}_M(\Omega)$.
\end{rem}
\begin{rem}
The result is no longer true without the moderate growth assumption. A counter-example is once again provided by the functions $g_\lambda$ defined in \eqref{glam}. Indeed, assume for instance $p<q$ and set $f=g_{p\lambda}$. We then have $f^p=g_\lambda\in\mathcal{C}_{M^\lambda}(\mathbb{R},0)$ and $f^q=g_{\lambda'}\in \mathcal{C}_{M^{\lambda'}}(\mathbb{R},0)$ with $\lambda'=\frac{p}{q}\lambda<\lambda$, hence $f^q\in \mathcal{C}_{M^\lambda}(\mathbb{R},0)$. However $f$ does not belong to $\mathcal{C}_{M^\lambda}(\mathbb{R},0)$.
\end{rem}
\begin{rem}
As already mentioned in Section \ref{background}, the quasianalytic case does not require moderate growth, but the much weaker assumption of stability under derivation, and the result can then be obtained by straightforward arguments. The interest of Theorem \ref{main} therefore lies in the non-quasianalytic case, although non-quasianalyticity will not be used in the proof.
\end{rem}
As noticed in the article of Joris \cite{Jor}, in the $\mathcal{C}^\infty$ case, a generalization to functions of several variables is immediate, thanks to the classical result of Boman \cite{Bom} stating that $\mathcal{C}^\infty$ smoothness can be tested along curves. Analogously, for non-quasianalytic classes, the contents of \cite[Section 3]{KMR} immediately yield the following corollary of Theorem \ref{main}.
\begin{cor}
Let $M$ be a weight sequence that satisfies the moderate growth and non-quasianalyticity conditions. Let $f$ be a germ of complex-valued function at the origin in $\mathbb{R}^n$. Assume there is a couple $(p,q)$ of non-zero natural integers with $\gcd(p,q)=1$ such that both germs $f^p$ and $f^q$ belong to $\mathcal{C}_M(\mathbb{R}^n,0)$. Then $f$ belongs to $\mathcal{C}_M(\mathbb{R}^n,0)$.
\end{cor}
The quasianalytic case is of a different nature, and the results in \cite{Jaf} and \cite{Rai2} show that it cannot be treated directly by an argument of reduction to lower dimensions. The particular situation of quasianalytic classes obtained as intersections of non-quasianalytic ones as in \cite{KMR2} does not seem more immediately tractable, as the classes defining the intersections may not have suitable properties of logarithmic convexity or moderate growth. \\
We now proceed with the proof of Theorem \ref{main}.
\section{Preparations}\label{technical}
\subsection{Uniform estimates for Cauchy-Riemann equations}\label{dbarsol}
In what follows, for $1\leq p\leq \infty$, we denote by $\Vert\cdot\Vert_p$ the usual norm on the space $L^p(\mathbb{C})$ associated with the standard Lebesgue measure $\lambda$. For $z\in \mathbb{C}$ and $r>0$, we denote by $D(z,r)$ the open disk $\{\zeta\in\mathbb{C}: \vert z-\zeta\vert<r\}$. We write $\mathbbm{1}_A$ for the indicator function of a set $A$.
Let $\mathcal{K}$ denote the Cauchy kernel in $\mathbb{C}$, that is, $\mathcal{K}(z)=\frac{1}{\pi z}$.
Let $U$ be a bounded open subset of $\mathbb{C}$. By elementary arguments, for any element $w$ of $L^\infty(\mathbb{C})$ such that $w=0$ in $\mathbb{C}\setminus U$, the convolution $v=\mathcal{K}*w$ defines a bounded continuous function in $\mathbb{C}$ that satisfies $\partial v/\partial\bar{z}=w$
in the sense of distributions in $\mathbb{C}$, and
\begin{equation}\label{estimconvol1}
\Vert v\Vert_\infty\leq C \Vert w\Vert_\infty
\end{equation}
for some suitable constant $C$ depending only on $\max_{\zeta\in U}\vert \zeta\vert$. In order to follow the pattern of \cite{Fed}, more subtle uniform estimates on $v$ are needed. These estimates are described by the following lemma.
\begin{lem}\label{estimconvol2}
Let $U$, $w$ and $v$ be as above. Then for any real number $r\in (0,\frac{1}{2}]$ and any $z\in U$, we have
\begin{equation*}
\vert v(z)\vert\leq C \left(r \Vert w\Vert_\infty+\left(\vert\ln r\vert\right)^{1/2}\Vert w\Vert_2\right)
\end{equation*}
for some suitable constant $C$ depending only on $\max_{\zeta\in U}\vert \zeta\vert$.
\end{lem}
\begin{proof} For the reader's convenience, we include the proof sketched in \cite{Fed}. Choose $R\geq 1$ such that $U\subset D\big(0,\frac{R}{2}\big)$. For $z\in U$ and $\vert \zeta\vert \geq R$ we have $\vert z-\zeta\vert >\frac{R}{2}$, hence $w(z-\zeta)=0$. We can therefore write
$v(z)=\int_{D(0,R)}\mathcal{K}(\zeta)w(z-\zeta)\, \mathrm{d}\lambda(\zeta)= \int_{D(0,r)}\mathcal{K}(\zeta)w(z-\zeta)\, \mathrm{d}\lambda(\zeta)+\int_{\{r\leq\vert \zeta\vert<R\}}\mathcal{K}(\zeta)w(z-\zeta)\, \mathrm{d}\lambda(\zeta)$. A crude majorization immediately yields $\left\vert \int_{D(0,r)}\mathcal{K}(\zeta)w(z-\zeta)\, \mathrm{d}\lambda(\zeta)\right\vert\leq \int_{D(0,r)}\frac{\, \mathrm{d}\lambda(\zeta)}{\pi\vert \zeta\vert}\Vert w\Vert_\infty=2r\Vert w\Vert_\infty$. By the Cauchy-Schwarz inequality, we also have $\left\vert \int_{\{r\leq\vert \zeta\vert< R\}}\mathcal{K}(\zeta)w(z-\zeta)\, \mathrm{d}\lambda(\zeta)\right\vert\leq \left(\int_{\{r\leq\vert \zeta\vert< R\}}\frac{\, \mathrm{d}\lambda(\zeta)}{\pi^2\vert \zeta\vert^2}\right)^{1/2}\Vert w\Vert_2= \big(\frac{2}{\pi}\ln({R}/{r})\big)^{1/2}\Vert w\Vert_2$. The result easily follows.
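For the record, the two integrals above are evaluated in polar coordinates:
\begin{equation*}
\int_{D(0,r)}\frac{\, \mathrm{d}\lambda(\zeta)}{\pi\vert \zeta\vert}=\frac{1}{\pi}\int_0^{2\pi}\!\!\int_0^r \mathrm{d}\rho\, \mathrm{d}\theta=2r,
\qquad
\int_{\{r\leq\vert \zeta\vert<R\}}\frac{\, \mathrm{d}\lambda(\zeta)}{\pi^2\vert \zeta\vert^2}=\frac{1}{\pi^2}\int_0^{2\pi}\!\!\int_r^R \frac{\mathrm{d}\rho}{\rho}\, \mathrm{d}\theta=\frac{2}{\pi}\ln\frac{R}{r}.
\end{equation*}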
\end{proof}
\subsection{Technical estimates in ellipses}\label{objects}
\begin{defin}\label{ellipses}
For any $\varepsilon>0$, we put $\Omega_\varepsilon=\varphi_\varepsilon (S)$, where $S$ is the strip $\{z\in\mathbb{C} :\vert \Im z\vert<1\}$ and $\varphi_\varepsilon$ is the mapping of the complex plane defined by $\varphi_\varepsilon(z)=\sin (\varepsilon z)$.
\end{defin}
In other words, the open set $\Omega_\varepsilon$ is the interior of the ellipse with vertices $\pm \cosh\varepsilon$ and co-vertices $ \pm i\sinh\varepsilon $. It contains the real interval $[-1,1]=\varphi_\varepsilon(\mathbb{R})$, and becomes narrower as $\varepsilon$ tends to $0$. \\
The following covering lemma is elementary.
\begin{lem}\label{cover}
For any real number $\varepsilon$ with $0<\varepsilon\leq 1$, there is a radius $\eta_\varepsilon>0$ and a finite family of disks $D(z_{j,\varepsilon},\eta_\varepsilon)$, $j=1,\ldots,N_\varepsilon$, with the following properties:
\begin{equation}\label{cover1}
\Omega_{\varepsilon/2}\subset\bigcup_{j=1}^{N_\varepsilon}D(z_{j,\varepsilon}, \eta_\varepsilon),
\end{equation}
\begin{equation}\label{cover2}
\overline{D(z_{j,\varepsilon},2\eta_\varepsilon)}\subset \Omega_\varepsilon\, \text{ for }\, j=1,\ldots,N_\varepsilon,
\end{equation}
\begin{equation}\label{cover3}
N_\varepsilon \leq C \varepsilon^{-3}\, \text{ for some absolute constant }C.
\end{equation}
\end{lem}
\begin{proof} Basic arguments show that $\dist(\partial\Omega_{\varepsilon/2}, \partial\Omega_{\varepsilon})\geq \frac{1}{4} \varepsilon^2$. Thus, any closed disk of radius $\frac{1}{8}\varepsilon^2$ that intersects $\Omega_{\varepsilon/2}$ is contained in $\Omega_\varepsilon$. Set $\eta_\varepsilon=\frac{1}{16}\varepsilon^2$ and notice that $\Omega_{\varepsilon/2}$ is contained in a rectangle of length $ 2\cosh(\varepsilon/2)$ and width $2\sinh(\varepsilon/2)$. It is an easy exercise to check that such a rectangle can be covered by a family $\mathcal{F}_\varepsilon$ of open disks of radius $\eta_\varepsilon$ with $\card{\mathcal{F}_\varepsilon} \leq C\varepsilon^{-3}$ for some absolute constant $C$. Keeping only the elements of $\mathcal{F}_\varepsilon$ that intersect $\Omega_{\varepsilon/2}$, we obtain a family of disks having all the desired properties.
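To make the count explicit (one crude possibility among many): a disk of radius $\eta_\varepsilon$ contains a square of side $\sqrt{2}\,\eta_\varepsilon$, so a grid of such squares covers the rectangle with at most
\begin{equation*}
\left(\frac{2\cosh(\varepsilon/2)}{\sqrt{2}\,\eta_\varepsilon}+1\right)\left(\frac{2\sinh(\varepsilon/2)}{\sqrt{2}\,\eta_\varepsilon}+1\right)
\end{equation*}
disks; for $0<\varepsilon\leq 1$, using $\cosh(\varepsilon/2)\leq \cosh(1/2)$, $\sinh(\varepsilon/2)\leq \varepsilon$ and $\eta_\varepsilon=\frac{1}{16}\varepsilon^2$, this quantity is indeed bounded by $C\varepsilon^{-3}$ for an absolute constant $C$.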
\end{proof}
We can now obtain technical estimates following closely a key statement in \cite{Fed}, with slight modifications required in our framework. For the reader's convenience, we give a complete proof.
\begin{lem}\label{l2estim}
Let $\varepsilon$ be a real number with $0<\varepsilon\leq 1$, let $g$ be a bounded holomorphic function in $\Omega_\varepsilon$, and let $K$ be a real number such that $\vert g\vert\leq K$ in $\Omega_\varepsilon$. For any real number $r>0$, we have
\begin{equation*}
\int_{\Omega_{\varepsilon/2}}\vert g'\vert^2 \mathbbm{1}_{\{\vert g\vert<r\}} \, \mathrm{d}\lambda \leq C\frac{r^2}{\varepsilon^3}\ln\left(\frac{K^2}{r^2}+1\right)
\end{equation*}
for some absolute constant $C$.
\end{lem}
\begin{proof}
For $j=1,\ldots, N_\varepsilon$, consider the disk $D(z_{j,\varepsilon},\eta_\varepsilon)$ of Lemma \ref{cover}. It is easy to see that
\begin{equation}\label{chvar}
\int_{D(z_{j,\varepsilon},\eta_\varepsilon)} \vert g'\vert^2 \mathbbm{1}_{\{\vert g\vert<r\}} \, \mathrm{d}\lambda=\int_{D(0,\frac{1}{2})}\vert g_{j,\varepsilon}'\vert^2 \mathbbm{1}_{\{\vert g_{j,\varepsilon}\vert<r\}} \, \mathrm{d}\lambda
\end{equation}
where $g_{j,\varepsilon}$ is defined by
\begin{equation*}
g_{j,\varepsilon}(\zeta)=g(z_{j,\varepsilon}+2\eta_\varepsilon \zeta).
\end{equation*}
Property \eqref{cover2} and the assumptions on $g$ ensure that the function $g_{j,\varepsilon}$ is holomorphic in a neighborhood of $\overline{D(0,1)}$. Set
\begin{equation*}
\Psi_{j,\varepsilon}=\ln \left(\vert g_{j,\varepsilon}\vert^2+r^2\right).
\end{equation*}
Then $\Psi_{j,\varepsilon}$ is a smooth subharmonic function in a neighborhood of $\overline{D(0,1)}$ and its Laplacian is
\begin{equation*}
\Delta \Psi_{j,\varepsilon}=4r^2 \frac{\vert g_{j,\varepsilon}'\vert^2}{(\vert g_{j,\varepsilon}\vert^2+r^2)^2}.
\end{equation*}
In particular, we have $\Delta \Psi_{j,\varepsilon}\geq \frac{1}{r^2}\vert g_{j,\varepsilon}'\vert^2 \mathbbm{1}_{\{\vert g_{j,\varepsilon}\vert<r\}}$. Thus, we get
\begin{equation}\label{majorint}
\begin{split}
\int_{D(0,\frac{1}{2})}\vert g_{j,\varepsilon}'\vert^2 \mathbbm{1}_{\{\vert g_{j,\varepsilon}\vert<r\}} \, \mathrm{d}\lambda &\leq r^2\int_{D(0,\frac{1}{2})} \Delta \Psi_{j,\varepsilon} \, \mathrm{d}\lambda \\
& \leq \frac{r^2}{\ln 2} \int_{D(0,\frac{1}{2})} \Delta \Psi_{j,\varepsilon}(\zeta)\ln\left(\frac{1}{\vert \zeta\vert}\right)\, \mathrm{d}\lambda(\zeta)\\
& \leq \frac{r^2}{\ln 2} \int_{D(0,1)} \Delta \Psi_{j,\varepsilon}(\zeta)\ln\left(\frac{1}{\vert \zeta\vert}\right)\, \mathrm{d}\lambda(\zeta).
\end{split}
\end{equation}
Using Green's formula for the Laplacian, together with the obvious estimates $\Psi_{j,\varepsilon}\leq \ln (K^2+r^2)$ and $\Psi_{j,\varepsilon}(0)\geq \ln r^2$, we see that
\begin{equation}\label{green}
\begin{split}
\int_{D(0,1)}\Delta \Psi_{j,\varepsilon}(\zeta)\ln\left(\frac{1}{\vert \zeta\vert}\right)\, \mathrm{d}\lambda(\zeta) &=\int_0^{2\pi} \Psi_{j,\varepsilon}(e^{i\theta})\, \mathrm{d}\theta- 2\pi \Psi_{j,\varepsilon}(0) \\
&\leq 2\pi (\ln(K^2+r^2)-\ln r^2).
\end{split}
\end{equation}
Gathering \eqref{chvar}, \eqref{majorint} and \eqref{green}, we obtain
\begin{equation*}
\int_{D(z_{j,\varepsilon},\eta_\varepsilon)} \vert g'\vert^2 \mathbbm{1}_{\{\vert g\vert<r\}} \, \mathrm{d}\lambda\leq \frac{2\pi}{\ln 2} \ln\left(\frac{K^2}{r^2}+1\right).
\end{equation*}
Together with \eqref{cover1} and \eqref{cover3}, this implies the desired result.
\end{proof}
We end this section with a lemma which, roughly speaking, means that for bounded holomorphic functions in $\Omega_\varepsilon$, a suitable property of ``smallness'' on the interval $[-1,1]$ still holds in $\Omega_{\varepsilon/2}$, up to constants.
\begin{lem}\label{threelines}
Let $\varepsilon$ be a positive real number and let $g$ be a function holomorphic in $\Omega_\varepsilon$ and continuous up to the boundary. Assume that the weight sequence $M$ satisfies the moderate growth property \eqref{modg}, and let $L$, $a_1$ and $a_2$ be positive numbers such that
\begin{equation*}
\vert g\vert \leq L \text{ in }\, \Omega_\varepsilon\quad \text{and }\quad\vert g\vert \leq a_1 h_M(a_2\varepsilon)\, \text{ on }\, [-1,1].
\end{equation*}
Then we have
\begin{equation*}
\vert g\vert\leq a_3 h_M(a_4 \varepsilon)\, \text{ in }\, \Omega_{\varepsilon/2},
\end{equation*}
for suitable positive numbers $a_3$ and $a_4$ depending only on $L$, $a_1$, $a_2$ and on the sequence $M$.
\end{lem}
\begin{proof} With the notation of Definition \ref{ellipses}, put $f=\frac{1}{a_1}g\circ \varphi_\varepsilon$. The function $f$ is holomorphic in the strip $S$ and continuous up to the boundary. Setting $K=\max(1,\frac{L}{a_1})$, we have $\vert f\vert \leq K$ in $S$ and $\vert f\vert\leq h_M(a_2\varepsilon)$ on $\mathbb{R}$. Using Hadamard's three-lines theorem \cite[pp. 33--34]{RS}, we get $\vert f(z)\vert\leq (h_M(a_2\varepsilon))^{1-\vert \Im z\vert}K^{\vert \Im z\vert}$ for every $z\in S$. Notice that $h_M(a_2\varepsilon)\leq 1$ and $K\geq 1$. Since any point $w$ in $\Omega_{\varepsilon/2}$ can be written $w=\varphi_\varepsilon(z)$ with $z\in S$ and $\vert\Im z\vert\leq 1/2$, we therefore get the estimate $\vert g(w)\vert\leq a_1(K h_M(a_2\varepsilon))^{1/2}$ for any such $w$. Since $M$ has moderate growth, it then suffices to use \eqref{hfunct2} to obtain the desired result, with $a_3=\max(a_1,(a_1L)^{1/2})$ and $a_4=\kappa_2a_2$.
\end{proof}
\subsection{An approximation-theoretic characterization of ultradifferentiable functions}
The approach of Joris's theorem in \cite{Fed} relies on a characterization of
$\mathcal{C}^k$ regularity of a function $f$ on a bounded interval $I$ in terms of the rate of approximation of $f$ by uniformly bounded families of holomorphic functions in
narrow neighborhoods of $I$ in $\mathbb{C}$. In this section, we obtain, in the same spirit, a characterization of $\mathcal{C}_M$ regularity under the moderate growth assumption.
\begin{defin}\label{pm}
Let $M$ be a weight sequence. We shall say that a complex-valued function $f$ defined on $[-1,1]$ satisfies property $(\mathcal{P}_M)$ if there are positive constants $K$, $c_1$, $c_2$ and a family $(f_\varepsilon)_{0<\varepsilon\leq \varepsilon_0}$ of continuous functions
in $\mathbb{C}$ such that, for any $\varepsilon\in (0,\varepsilon_0]$, the following conditions are satisfied:
\begin{align}
& \text{the function } f_\varepsilon \text{ is holomorphic in } \Omega_\varepsilon, \label{pm1}\\
& \vert f_\varepsilon\vert \leq K\, \text{ in }\, \Omega_\varepsilon, \label{pm2} \\
& \vert f-f_\varepsilon\vert\leq c_1 h_M(c_2\varepsilon)\, \textrm{ on }\, [-1,1]. \label{pm3}
\end{align}
\end{defin}
\begin{prop}\label{approx}
Every element of $\mathcal{C}_M([-1,1])$ satisfies property $(\mathcal{P}_M)$. Conversely, if a complex-valued function defined on $[-1,1]$ satisfies $(\mathcal{P}_M)$, then it belongs to $\mathcal{C}_M([-b,b])$ for any real number $b$ with $0<b<1$.
\end{prop}
\begin{proof} Let $f$ be an element of $\mathcal{C}_M([-1,1])$. By Dynkin's theorem on $\bar\partial$-flat extensions \cite{Dy}, there are positive constants $c_1$ and $c_2$, and a function $g$ of class $\mathcal{C}^1$ with compact support in $\mathbb{C}$, such that $g=f$ on $[-1,1]$ and, for any $z\in \mathbb{C}$,
\begin{equation}\label{dbarflat}
\left\vert\frac{\partial g}{\partial \bar z}(z)\right\vert\leq c_1 h_M(c_2 \dist(z,[-1,1])).
\end{equation}
For every $\varepsilon\in (0,1]$, put
\begin{equation*}
w_\varepsilon=\mathbbm{1}_{\Omega_\varepsilon}\frac{\partial g}{\partial \bar z}.
\end{equation*}
Then $w_\varepsilon$ is an element of $L^\infty(\mathbb{C})$, with $ w_\varepsilon=0$ in $\mathbb{C}\setminus \Omega_\varepsilon$. Besides, it is easy to see that for $z\in \Omega_\varepsilon$, we have $\dist(z,[-1,1]) \leq C \varepsilon$ for some absolute constant $C$. After multiplying $c_2$ by $C$, \eqref{dbarflat} implies
\begin{equation}\label{majin2e}
\Vert w_\varepsilon\Vert_\infty\leq c_1 h_M(c_2\varepsilon).
\end{equation}
Now, set $v_\varepsilon=\mathcal{K}*w_\varepsilon$ where $\mathcal{K}$ is the Cauchy kernel. As explained in Section \ref{dbarsol}, $v_\varepsilon$ is a continuous function in $\mathbb{C}$ such that
$\partial v_\varepsilon/\partial\bar{z}=w_\varepsilon$
in the sense of distributions in $\mathbb{C}$, hence
\begin{equation}\label{eqd1}
\frac{\partial v_\varepsilon}{\partial \bar z}=\frac{\partial g}{\partial \bar z}\, \text{ in }\, \Omega_\varepsilon.
\end{equation}
Moreover, by \eqref{estimconvol1} and \eqref{majin2e}, it satisfies
\begin{equation}\label{majsol}
\Vert v_\varepsilon\Vert_\infty\leq c_1 h_M(c_2\varepsilon)
\end{equation}
after multiplying $c_1$ by a suitable absolute constant. Define $f_\varepsilon=g-v_\varepsilon$. Then $f_\varepsilon$ is a bounded continuous function in $\mathbb{C}$ and we have $\Vert f_\varepsilon\Vert_\infty\leq \Vert g\Vert_\infty + c_1 h_M(c_2\varepsilon)$, hence \eqref{pm2} with $K= \Vert g\Vert_\infty + c_1 h_M(c_2)$. By \eqref{eqd1}, we have $ \partial f_\varepsilon/\partial\bar{z}= 0$ in $\Omega_\varepsilon$, hence \eqref{pm1}. Finally, \eqref{majsol} implies \eqref{pm3} since $f$ and $g$ coincide on $[-1,1]$. Thus, property $(\mathcal{P}_M)$ is established, with $\varepsilon_0=1$.
Conversely, let $f:[-1,1]\to \mathbb{C}$ be a function that satisfies $(\mathcal{P}_M)$. For $0<\varepsilon\leq \varepsilon_0/2$, it is readily seen that the function $f_\varepsilon-f_{2\varepsilon}$ meets the assumptions of Lemma \ref{threelines} with $L=2K$, $a_1=2c_1$ and $a_2=2c_2$. We therefore get
\begin{equation}\label{majg}
\vert f_\varepsilon-f_{2\varepsilon}\vert\leq a_3 h_M(a_4 \varepsilon)\, \text{ in }\, \Omega_{\varepsilon/2},
\end{equation}
for some suitable constants $a_3$ and $a_4$ depending only on $K$, $c_1$ and $c_2$.
Now, let $b$ be a real number with $0<b<1$. By elementary geometric considerations, there is an absolute positive constant $C$ such that for any $x\in [-b,b]$, the closed disk centered at $x$ with radius $C(1-b)\varepsilon$ is contained in $\Omega_{\varepsilon/2}$. Using the Cauchy formula and \eqref{majg}, we therefore get $\vert (f_\varepsilon-f_{2\varepsilon})^{(j)}(x)\vert \leq a_3(C(1-b))^{-j} j! \varepsilon^{-j} h_M(a_4\varepsilon)$ for any $x\in [-b,b]$ and any $j\in \mathbb{N}$. Taking \eqref{hfunct3} into account, we get
\begin{equation}
\Vert f_\varepsilon-f_{2\varepsilon}\Vert_{[-b,b],\sigma}\leq a_3h_M(a_5\varepsilon)
\end{equation}
with $\sigma=\kappa_2a_4(C(1-b))^{-1}$ and $a_5=\kappa_2a_4$. Since $h_M(a_5\varepsilon)\leq a_5M_1\varepsilon$, this clearly implies the absolute convergence of the series $f_{\varepsilon_0}+\sum_{j\geq 1} \big(f_{\varepsilon_02^{-j}}-f_{\varepsilon_02^{-(j-1)}}\big)$ in the Banach space $\mathcal{C}_{M,\sigma}([-b,b])$. Let $g$ denote its sum. For every integer $J\geq 1$, we have
\begin{equation*}
g=f_{\varepsilon_02^{-J}}+\sum_{j\geq J+1} \big(f_{\varepsilon_02^{-j}}-f_{\varepsilon_02^{-(j-1)}}\big).
\end{equation*}
For $x\in [-b,b]$, we infer $\vert f(x)-g(x)\vert\leq \big\vert f(x)-f_{\varepsilon_02^{-J}}(x)\big\vert+\sum_{j\geq J+1} \big\vert f_{\varepsilon_02^{-j}}(x)-f_{\varepsilon_02^{-(j-1)}}(x)\big\vert \leq c_1h_M(c_2\varepsilon_02^{-J})+\sum_{j\geq J+1} \big\Vert f_{\varepsilon_02^{-j}}-f_{\varepsilon_02^{-(j-1)}}\big\Vert_{[-b,b],\sigma} $. Letting $J$ tend to $\infty$, we obtain $f(x)=g(x)$, hence $f\in\mathcal{C}_M([-b,b])$.
\end{proof}
\begin{rem}
The moderate growth assumption is crucial in the proof of the converse part of Proposition \ref{approx}, but the fact that the elements of $\mathcal{C}_M([-1,1])$ satisfy property $(\mathcal{P}_M)$ is still true under the weaker condition \eqref{stabder} of stability under derivation, which is required by Dynkin's result on $\bar\partial$-flat extensions.
\end{rem}
\section{Proof of the main result}\label{final}
\subsection{Reduction to a special case} Consider two positive integers $p$ and $q$ such that $\gcd(p,q)=1$ and let $f$ be a function germ at the origin in $\mathbb{R}$ such that $f^p$ and $f^q$ belong to $\mathcal{C}_M(\mathbb{R},0)$. Up to a linear change of variable, we can assume that $f^p$ and $f^q$ belong to $\mathcal{C}_M([-1,1])$. One can easily find $m\in\mathbb{N}$ such that any integer $j\geq m$ can be written
$j=pk+ql$ with $(k,l)\in\mathbb{N}^2$; an explicit choice of $m$ is recalled after \eqref{red} below. We then have $f^j=(f^p)^k(f^q)^l$ and, since $\mathcal{C}_M([-1,1])$ is an algebra, we see that $f^j$ belongs to $\mathcal{C}_M([-1,1])$. In particular, we have
\begin{equation}\label{red}
f^m\in \mathcal{C}_M([-1,1])\, \text{ and }\, f^{m+1} \in\mathcal{C}_M([-1,1]).
\end{equation}
In order to conclude that $f$ belongs to $\mathcal{C}_M(\mathbb{R},0)$, it then suffices to prove that \eqref{red} implies $f\in \mathcal{C}_M([-b,b])$ for $0<b< 1$.
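Concerning the integer $m$: by the classical Sylvester-Frobenius result, the largest integer that cannot be written $pk+ql$ with $(k,l)\in\mathbb{N}^2$ is $pq-p-q$, so the choice $m=(p-1)(q-1)$ always works. For instance, for $(p,q)=(2,3)$ one may take $m=2$, since
\begin{equation*}
2=2\cdot 1,\quad 3=3\cdot 1,\quad 4=2\cdot 2,\quad 5=2+3,\quad 6=3\cdot 2,\,\ldots
\end{equation*}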
\subsection{Construction of approximants}
By Proposition \ref{approx}, there are constants $K\geq 1$, $c_1>0$, $c_2>0$ and families $(g_\varepsilon)_{0<\varepsilon\leq \varepsilon_0}$ and $(h_\varepsilon)_{0<\varepsilon\leq \varepsilon_0}$ of bounded continuous functions in $\mathbb{C}$ such that for $0<\varepsilon\leq \varepsilon_0$, we have the following properties:
\begin{align}
& \text{the functions } g_\varepsilon\text{ and } h_\varepsilon \text{ are holomorphic in }\Omega_{\varepsilon}, \\
& \vert g_\varepsilon\vert \leq K\, \text{ and }\, \vert h_\varepsilon\vert_\infty\leq K\, \text{ in }\, \Omega_{\varepsilon}, \label{fgh2} \\
& \vert f^m-g_\varepsilon\vert\leq c_1h_M(c_2\varepsilon)\, \text{ and }\, \vert f^{m+1}-h_\varepsilon\vert\leq c_1h_M(c_2\varepsilon)\,\text{ on }\, [-1,1]. \label{fgh3}
\end{align}
In view of the above, the intuitive candidate for a holomorphic approximation of $f$ on $[-1,1]$ is the quotient ${h_\varepsilon}/{g_\varepsilon}$, but it has to be modified to avoid small denominators. We therefore define \begin{equation*}
u_\varepsilon=\chi_\varepsilon \frac{\overline{g_\varepsilon}h_\varepsilon}{(\max(\vert g_\varepsilon\vert, r_\varepsilon))^2}
\end{equation*}
where $r_\varepsilon$ is a positive real number, and $\chi_\varepsilon:\mathbb{C}\to [0,1]$ is a smooth cutoff function with $\chi_\varepsilon=1$ in $\Omega_{\varepsilon/2}$ and $\supp\chi_\varepsilon\subset \Omega_{\varepsilon}$. The function $u_\varepsilon$ is well-defined, continuous with compact support in $\mathbb{C}$ and it coincides with $ h_\varepsilon/g_\varepsilon$ in $\Omega_{\varepsilon/2} \cap \{\vert g_\varepsilon\vert>r_\varepsilon\}$, but it is obviously not holomorphic in a whole neighborhood of $[-1,1]$. In the rest of the proof, we shall however see that for a suitable choice of $r_\varepsilon$, this function satisfies uniform bounds and is ``close enough'' to $f$ on $[-1,1]$, and we shall then recover a holomorphic approximant \emph{via} a $\bar\partial$-problem.\\
Using \eqref{fgh2}, \eqref{fgh3} and the elementary inequality
$\vert z^j-\zeta^j\vert\leq j\max(\vert z\vert, \vert \zeta\vert)^{j-1}\vert z-\zeta\vert$
with $j=m$ and with $j=m+1$, we see that there is a constant $c_3$ depending only on $K$, $c_1$ and $m$, such that $\vert h_\varepsilon^m-g_\varepsilon^{m+1}\vert \leq c_3h_M(c_2\varepsilon)$ on $[-1,1]$. Moreover, $h_\varepsilon^m-g_\varepsilon^{m+1}$ is holomorphic in $\Omega_\varepsilon$, continuous up to the boundary and we have $\vert h_\varepsilon^m-g_\varepsilon^{m+1}\vert\leq 2K^{m+1}$ in $\Omega_\varepsilon$. Thus, applying Lemma \ref{threelines} with $L=2K^{m+1}$, $a_1=c_3$ and $a_2=c_2$, we obtain
\begin{equation}\label{hg}
\vert h_\varepsilon^m-g_\varepsilon^{m+1}\vert \leq c_4h_M(c_5\varepsilon)\, \text{ in }\, \Omega_{\varepsilon/2},
\end{equation}
where $c_4$ and $c_5$ depend only on $K$, $c_1$, $c_2$ and $m$. We shall now set
\begin{equation}\label{defdelta}
\delta_\varepsilon=c_4 h_M(c_5\varepsilon)\ \text{ and }\ r_\varepsilon = \delta_\varepsilon^\frac{1}{m+1}.
\end{equation}
Since we can obviously assume
$c_4\geq c_1$ and $c_5\geq c_2$, it is convenient to rewrite \eqref{fgh3} and \eqref{hg} as
\begin{equation}\label{delta}
\begin{split}
&\vert f^{m+1}-h_\varepsilon\vert\leq \delta_\varepsilon\, \text{ and }\, \vert f^m-g_\varepsilon\vert\leq \delta_\varepsilon\, \text{ on }\, [-1,1], \\
&\vert h_\varepsilon^m-g_\varepsilon^{m+1}\vert\leq \delta_\varepsilon\, \text{ in }\, \Omega_{\varepsilon/2}.
\end{split}
\end{equation}
Also, notice that we have $\delta_\varepsilon\leq r_\varepsilon\leq 1$ for $\varepsilon$ small enough.
\begin{lem}\label{ubound}
For any sufficiently small $\varepsilon>0$, we have
\begin{equation*}
\vert u_\varepsilon\vert \leq (2K)^{1/m}\, \text{ in }\, \Omega_{\varepsilon/2}.
\end{equation*}
\end{lem}
\begin{proof} By \eqref{delta}, in $\Omega_{\varepsilon/2}$, we have $\vert h_\varepsilon\vert\leq (\vert g_\varepsilon\vert^{m+1}+\vert h_\varepsilon^m-g_\varepsilon^{m+1}\vert)^{1/m} \leq (\vert g_\varepsilon\vert^{m+1}+r_\varepsilon^{m+1})^{1/m}\leq 2^{1/m} (\max(\vert g_\varepsilon\vert, r_\varepsilon))^\frac{m+1}{m}$, hence $\vert u_\varepsilon\vert\leq 2^{1/m} \vert g_\varepsilon\vert (\max(\vert g_\varepsilon\vert, r_\varepsilon))^{-1+\frac{1}{m}}\leq 2^{1/m}(\max(\vert g_\varepsilon\vert, r_\varepsilon))^\frac{1}{m}$. The result then follows from \eqref{fgh2}.
\end{proof}
\begin{lem}\label{fmu}
There is a constant $c_6$ depending only on $K$ and $m$, such that, for any sufficiently small $\varepsilon>0$, we have
\begin{equation*}
\vert f -u_\varepsilon\vert\leq c_6 \delta_\varepsilon^\frac{1}{m(m+1)}\, \text{ on }\, [-1,1].
\end{equation*}
\end{lem}
\begin{proof} The estimate will be proved separately on the sets $F_\varepsilon=[-1,1]\cap\{\vert g_\varepsilon\vert\leq r_\varepsilon\}$ and $G_\varepsilon=[-1,1]\cap\{\vert g_\varepsilon\vert>r_\varepsilon\}$. On the set $F_\varepsilon$, we have
$f-u_\varepsilon= f- r_\varepsilon^{-2}\, \overline{g_\varepsilon}h_\varepsilon$, hence
\begin{equation*}
\vert f-u_\varepsilon\vert\leq \vert f\vert+r_\varepsilon^{-2}\,\vert g_\varepsilon\vert \vert h_\varepsilon\vert\leq \vert f\vert+r_\varepsilon^{-1} \vert h_\varepsilon\vert.
\end{equation*}
By \eqref{delta}, we also have
$\vert f\vert\leq (\vert g_\varepsilon\vert+\vert f^m -g_\varepsilon\vert)^{1/m}\leq (r_\varepsilon+\delta_\varepsilon)^{1/m}\leq (2r_\varepsilon)^{1/m}$ and $\vert h_\varepsilon\vert\leq (\vert g_\varepsilon^{m+1}\vert+\vert h_\varepsilon^m-g_\varepsilon^{m+1}\vert)^{1/m}\leq (r_\varepsilon^{m+1}+\delta_\varepsilon)^{1/m}=(2r_\varepsilon^{m+1})^{1/m}=r_\varepsilon (2r_\varepsilon)^{1/m}$. Setting $c_7=2^{1+\frac{1}{m}}$, we finally derive
\begin{equation}\label{surF}
\vert f-u_\varepsilon\vert\leq c_7 r_\varepsilon^{1/m}= c_7\delta_\varepsilon^\frac{1}{m(m+1)}\, \text{ on }\, F_\varepsilon.
\end{equation}
On the set $G_\varepsilon$, we have
\begin{equation*}
f-u_\varepsilon = f-\frac{h_\varepsilon}{g_\varepsilon}= \frac{f(g_\varepsilon-f^m)+f^{m+1}-h_\varepsilon}{g_\varepsilon}
\end{equation*}
with
$\vert f\vert\leq (\vert g_\varepsilon\vert+\vert f^m -g_\varepsilon\vert)^{1/m}\leq (K+\delta_\varepsilon)^{1/m}\leq (K+1)^{1/m}$. Thus, using \eqref{delta}, it is easy to obtain
\begin{equation}\label{surG}
\vert f-u_\varepsilon\vert\leq c_8 \frac{\delta_\varepsilon}{r_\varepsilon}=c_8\delta_\varepsilon^\frac{m}{m+1}\, \text{ on }\, G_\varepsilon,
\end{equation}
with $c_8=(K+1)^{1/m}+1$. The lemma clearly follows from \eqref{surF} and \eqref{surG}.
\end{proof}
Now we proceed to obtain a holomorphic modification of $u_\varepsilon$. As a starting point, we need basic information on $\partial u_\varepsilon/\partial\bar{z}$.
\begin{lem} The distributional derivative $\partial u_\varepsilon/\partial\bar{z}$ is an element of $L^\infty(\mathbb{C})$ and we have
\begin{equation}\label{calcdbar}
\frac{\partial u_\varepsilon}{\partial\bar{z}}=\frac{1}{r_\varepsilon^2}\,\overline{g_\varepsilon'}h_\varepsilon \mathbbm{1}_{\{\vert g_\varepsilon\vert<r_\varepsilon\}}\, \text{ in }\, \Omega_{\varepsilon/2}.
\end{equation}
\end{lem}
\begin{proof} We introduce the sets $X_\varepsilon=\Omega_{\varepsilon/2}\cap\{\vert g_\varepsilon\vert<r_\varepsilon\}$, $Y_\varepsilon=\Omega_{\varepsilon/2}\cap\{\vert g_\varepsilon\vert>r_\varepsilon\}$ and $Z_\varepsilon=\Omega_{\varepsilon/2}\cap\{\vert g_\varepsilon\vert=r_\varepsilon\}$. Since $g_\varepsilon$ is holomorphic in $\Omega_\varepsilon$, either the set $Z_\varepsilon$ has measure zero, or $g_\varepsilon$ is constant. In the latter case, $u_\varepsilon$ is a constant times $h_\varepsilon$ and the conclusion of the lemma is immediate. We therefore focus on the general case of a non-constant $g_\varepsilon$. Since $\supp \chi_\varepsilon\subset \Omega_{\varepsilon}$ and $\vert g_\varepsilon\vert^2$ is smooth in $\Omega_{\varepsilon}$, it is readily seen that the denominator $\max(\vert g_\varepsilon\vert^2, r_\varepsilon^2)$ is Lipschitz and bounded away from zero in a neighborhood of $\supp\chi_\varepsilon$. Taking into account the smoothness of $\overline{g_\varepsilon}h_\varepsilon$ in $\Omega_{\varepsilon}$, we infer that $u_\varepsilon$ is a bounded Lipschitz function in $\mathbb{C}$, hence it belongs to the Sobolev space $W^{1,\infty}(\mathbb{C})$ (see \cite[Proposition 9.3]{Bre} or \cite[Theorem 6.12]{Hei}). Thus, the distribution $\partial u_\varepsilon/\partial\bar{z}$ is an element of $L^\infty(\mathbb{C})$. Since $\Omega_{\varepsilon/2}=X_\varepsilon\cup Y_\varepsilon\cup Z_\varepsilon$ and $Z_\varepsilon$ has measure zero, it then suffices to check \eqref{calcdbar} in each of the open sets $X_\varepsilon$ and $Y_\varepsilon$, which boils down to an explicit computation using the holomorphicity of $g_\varepsilon$ and $h_\varepsilon$ in those sets. In $X_\varepsilon$, we have $u_\varepsilon=r_\varepsilon^{-2}\,\overline{g_\varepsilon}h_\varepsilon$, hence $\partial u_\varepsilon/\partial\bar{z}=r_\varepsilon^{-2}\,\overline{g_\varepsilon'}h_\varepsilon$. In $Y_\varepsilon$, we have $u_\varepsilon=h_\varepsilon/g_\varepsilon$, hence $\partial u_\varepsilon/\partial\bar{z}=0$. The lemma is proved.
\end{proof}
We now set
\begin{equation*}
w_\varepsilon= \mathbbm{1}_{\Omega_{\varepsilon/2}}\frac{\partial u_\varepsilon}{\partial\bar{z}}\quad \text{and}\quad v_\varepsilon=\mathcal{K}*w_\varepsilon.
\end{equation*}
The function $w_\varepsilon$ is an element of $L^\infty(\mathbb{C})$ with $w_\varepsilon=0$ in $\mathbb{C}\setminus\Omega_{\varepsilon/2}$. Thus, as explained in Section \ref{dbarsol}, $v_\varepsilon$ is a bounded continuous function in $\mathbb{C}$ that satisfies $\partial v_\varepsilon/\partial\bar{z}=w_\varepsilon$ in the sense of distributions in $\mathbb{C}$, hence
\begin{equation}\label{eqd}
\frac{\partial v_\varepsilon}{\partial\bar{z}}=\frac{\partial u_\varepsilon}{\partial\bar{z}}\, \text{ in }\, \Omega_{\varepsilon/2}.
\end{equation}
The last ingredient of the proof will be an estimate for $v_\varepsilon$ in $\Omega_{\varepsilon/2}$.
\begin{lem}\label{vbound}
Let $s$ be a real number, with $s>m(m+1)$. For $\varepsilon>0$ small enough, we have
\begin{equation*}
\vert v_\varepsilon\vert\leq c_9 \delta_\varepsilon^{1/s}\, \text{ in }\, \Omega_{\varepsilon/2},
\end{equation*}
where $c_9$ is a constant depending only on $K$, $m$ and $s$.
\end{lem}
\begin{proof} By Lemma \ref{estimconvol2}, there is a constant $C$ such that for any $\varepsilon>0$ small enough, we have
\begin{equation}\label{estimv}
\vert v_\varepsilon\vert\leq C \left(r_\varepsilon \Vert w_\varepsilon\Vert_\infty+\left(\vert\ln r_\varepsilon\vert\right)^{1/2}\Vert w_\varepsilon\Vert_2\right)\, \text{ in }\, \Omega_{\varepsilon/2}.
\end{equation}
Using \eqref{delta}, we see that in the open set $\Omega_{\varepsilon/2}\cap{\{\vert g_\varepsilon\vert<r_\varepsilon\}}$, we have $\vert h_\varepsilon\vert\leq (\vert g_\varepsilon\vert^{m+1}+\delta_\varepsilon)^{1/m}\leq (r_\varepsilon^{m+1}+\delta_\varepsilon)^{1/m}=2^{1/m}r_\varepsilon^{\frac{m+1}{m}}$. This implies
\begin{equation}\label{majorw}
\vert w_\varepsilon\vert \leq 2^{1/m} r_\varepsilon^{\frac{1}{m}-1}\vert g'_\varepsilon\vert\mathbbm{1}_{\{\vert g_\varepsilon\vert<r_\varepsilon\}}.
\end{equation}
Now recall that $g_\varepsilon$ is holomorphic in $\Omega_\varepsilon$, with $\vert g_\varepsilon\vert\leq K$.
Since any closed disk of radius $\frac{1}{8}\varepsilon^2$ centered in $\Omega_{\varepsilon/2}$ is contained in $\Omega_\varepsilon$, the Cauchy formula then yields
$\vert g'_\varepsilon\vert \leq 8K \varepsilon^{-2}$ in $\Omega_{\varepsilon/2}$. Together with \eqref{majorw}, this implies the uniform estimate
\begin{equation}\label{winfin}
\Vert w_\varepsilon\Vert_\infty\leq c_{10}\frac{r_\varepsilon^{\frac{1}{m}-1}}{\varepsilon^2},
\end{equation}
with $c_{10}= 8\cdot 2^{1/m}K$. Using Lemma \ref{l2estim} and \eqref{majorw}, we also get the $L^2$ estimate
\begin{equation}\label{wl2}
\Vert w_\varepsilon\Vert_2\leq c_{11}\frac{r_\varepsilon^{1/m}}{\varepsilon^{3/2}}\left(\ln\left(\frac{K^2}{r_\varepsilon^2}+1\right)\right)^{1/2}
\end{equation}
for a positive constant $c_{11}$ depending only on $m$. Since $r_\varepsilon=\delta_\varepsilon^\frac{1}{m+1}$ and $\delta_\varepsilon=o(\varepsilon^j)$ for every integer $j\geq 1$, the desired result follows from \eqref{estimv}, \eqref{winfin} and \eqref{wl2}.
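Spelling out this final step: write $\frac{1}{m(m+1)}=\frac{1}{s}+\theta$ with $\theta>0$. Since $r_\varepsilon^{1/m}=\delta_\varepsilon^{\frac{1}{m(m+1)}}$ and $\vert\ln r_\varepsilon\vert=\frac{1}{m+1}\vert\ln\delta_\varepsilon\vert$, the estimates \eqref{winfin} and \eqref{wl2} yield
\begin{equation*}
r_\varepsilon \Vert w_\varepsilon\Vert_\infty+\left(\vert\ln r_\varepsilon\vert\right)^{1/2}\Vert w_\varepsilon\Vert_2\leq C'\,\delta_\varepsilon^{1/s}\,\delta_\varepsilon^{\theta}\,\varepsilon^{-2}\left(1+\vert\ln \delta_\varepsilon\vert\right)
\end{equation*}
for a constant $C'$ depending only on $K$ and $m$, and the factor $\delta_\varepsilon^{\theta}\varepsilon^{-2}\left(1+\vert\ln \delta_\varepsilon\vert\right)$ remains bounded as $\varepsilon$ tends to $0$ because $\delta_\varepsilon=o(\varepsilon^j)$ for every $j$.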
\end{proof}
It is now possible to complete the proof of Theorem \ref{main}.
\subsection{End of the proof.} We consider $f_\varepsilon=u_{2\varepsilon}-v_{2\varepsilon}$ for $\varepsilon>0$ small enough.
The function $f_\varepsilon$ is continuous in $\mathbb{C}$, and it is holomorphic in $\Omega_\varepsilon$, since, by \eqref{eqd}, we also have $ \partial f_\varepsilon/\partial\bar{z}=0$ in the sense of distributions in $\Omega_\varepsilon$. Lemma \ref{ubound} and Lemma \ref{vbound} imply
\begin{equation*}
\vert f_\varepsilon \vert\leq K'\, \text{ in }\, \Omega_\varepsilon,
\end{equation*}
with $K'=(2K)^{1/m}+c_9$. Finally, choose a real number $s$ with $s>m(m+1)$. By Lemma \ref{fmu} and Lemma \ref{vbound}, we have $\vert f- f_\varepsilon\vert \leq \vert f-u_\varepsilon\vert+\vert v_\varepsilon\vert\leq c_{12}\delta_{2\varepsilon}^{1/s}$ on $[-1,1]$, for some suitable constant $c_{12}>0$. Using \eqref{defdelta} and the moderate growth property \eqref{hfunct2}, we get $\delta_{2\varepsilon}^{1/s}\leq c_{13} h_M(c_{14}\varepsilon)$ with $c_{13}=c_4^{1/s}$ and $c_{14}=2\kappa_sc_5$. Thus, we obtain
\begin{equation*}
\vert f-f_\varepsilon\vert\leq c'_1 h_M(c'_2\varepsilon)\, \text{ on }\, [-1,1],
\end{equation*} with $c'_1=c_{12}c_{13}$ and $c'_2= c_{14}$. We have therefore proved that, for $\varepsilon'_0$ small enough, the family $(f_\varepsilon)_{0<\varepsilon\leq \varepsilon'_0}$ meets the requirements of property $(\mathcal{P}_M)$. Thus, by Proposition \ref{approx}, the function $f$ belongs to $\mathcal{C}_M([-b,b])$ for any $b$ with $0<b<1$, and Theorem \ref{main} is now established.
| {
"timestamp": "2019-09-04T02:06:16",
"yymm": "1909",
"arxiv_id": "1909.00177",
"language": "en",
"url": "https://arxiv.org/abs/1909.00177",
"abstract": "We study the regularity of smooth functions $f$ defined on an open set of $\\mathbb{R}^n$ and such that, for certain integers $p\\geq 2$, the powers $f^p :x\\mapsto (f(x))^p$ belong to a Denjoy-Carleman class $\\mathcal{C}_M$ associated with a suitable weight sequence $M$. Our main result is a statement analogous to a classic theorem of H. Joris on $\\mathcal{C}^\\infty$ functions: if a function $f:\\mathbb{R}\\to\\mathbb{R}$ is such that both functions $f^p$ and $f^q$ with $\\gcd(p,q)=1$ are of class $\\mathcal{C}_M$ on $\\mathbb{R}$, and if the weight sequence $M$ satisfies the so-called moderate growth assumption, then $f$ itself is of class $\\mathcal{C}_M$. Various ancillary results, corollaries and examples are presented.",
"subjects": "Classical Analysis and ODEs (math.CA); Complex Variables (math.CV)",
"title": "Functions with ultradifferentiable powers",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9808759615719875,
"lm_q2_score": 0.8056321889812553,
"lm_q1q2_score": 0.790225248040334
} |
https://arxiv.org/abs/1305.3433 | Monte Carlo approximation to optimal investment | This paper sets up a methodology for approximately solving optimal investment problems using duality methods combined with Monte Carlo simulations. In particular, we show how to tackle high dimensional problems in incomplete markets, where traditional methods fail due to the curse of dimensionality. | \section{Numerical performance}\label{sec:numerics}
In this Section, we shall compare the results of the Monte Carlo solutions with
special cases of the problem \eqref{Vdef} where we either know the solution
in closed form, or we know highly accurate numerical schemes for approximating the solution.
We start off by analysing complete markets where some of the analysis in the
previous Section simplifies. Recall that, in a complete market
the asset volatility matrix $\sigma$ is invertible.
This means we have a unique\footnote{Up to a multiplicative constant still to
be found.} state-price density for the problem, given by
\begin{align}
\zeta_t = \zeta_0 \exp\left[ - \int_0^t \kappa_s\cdot dW_s - \int_0^t \left( r_s + \frac{1}{2} | \kappa_s |^2 \right)ds \right],
\end{align}
where $\kappa_s \equiv \sigma_s^{-1} (\mu_s - r_s {\bf 1})$. Therefore,
provided that \eqref{dual0} holds, our Monte Carlo method should be able to find
the optimal path exactly, modulo numerical errors coming from Monte Carlo
approximation of the expectation operator in \eqref{gdef}, approximating
the derivatives in \eqref{dual1} and \eqref{Vww}, and finally the numerical
optimisation over the (scalar!) value $\zeta$ in \eqref{gdef}. The positive
side is that all these errors can be made small provided we use enough computational power.
With that in mind, we start off with two examples of problems dealing with complete markets
where the benchmark answers are reliable; and finish by analysing runs in incomplete
markets where we provide estimated error bounds, but where
no other solution methods are available.
\subsection{The Merton problem}\label{Merton}
We start by comparing our results to the solutions of the Merton problem, which
are available in closed form in multiple dimensions. Recall that the Merton problem
assumes that functions $r$, $\mu$ and $\sigma$ in \eqref{wdyn} are constant, and the
utility functions $U$ and $\varphi$ in \eqref{Vdef} take a particular form:
\begin{align}
U(t, c) & = e^{-\rho t} u(c), \\
\varphi(w) & = A u(w),
\end{align}
where $A$ and $\rho$ are positive constants, and $u$ is a
constant relative risk aversion utility:
\begin{align}
u(c) = \frac{c^{1- R}}{1-R},
\end{align}
for $R>0$, $R \neq 1$. Then the optimal solution takes the form:
\begin{align}
V(t, w, X) & = f(t) u(w), \\
\theta_t & = \pi_M w_t, \\
c_t &= \gamma(t) w_t,
\end{align}
where
\begin{align}
f(t) & = \left\{ A^{1/ R} e^{-b(T-t) } +
\frac{e^{-\rho t/R}}{b+\rho /R}
( 1 - e^{-(b+\rho/R)(T-t) }) \right\}^{R} ,\\
\pi_M & = R^{-1} (\sigma \sigma^T)^{-1} (\mu - r {\bf 1}), \\
\gamma(t) & = e^{-\rho t / R} f(t) ^{-1/R},
\end{align}
where $b = (R-1)(r + |\kappa|^2/2R)/R$; see \cite{Rogers:2013}, Section 2.1.
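As a concrete illustration, this closed-form benchmark is straightforward to evaluate numerically. The short Python sketch below is not part of the original algorithm; the parameter values are hypothetical and chosen only for the example.
\begin{verbatim}
import numpy as np

# Hypothetical parameters, for illustration only.
R, rho, r, A, T = 3.0, 0.03, 0.05, 1.0, 5.0
mu = np.array([0.10, 0.15, 0.20])        # drift vector
sigma = np.diag([0.20, 0.25, 0.30])      # invertible volatility matrix

kappa = np.linalg.solve(sigma, mu - r * np.ones(3))  # market price of risk
b = (R - 1.0) * (r + kappa @ kappa / (2.0 * R)) / R

def f(t):
    # Coefficient f(t) of the value function V(t, w) = f(t) u(w).
    return (A ** (1.0 / R) * np.exp(-b * (T - t))
            + np.exp(-rho * t / R) / (b + rho / R)
            * (1.0 - np.exp(-(b + rho / R) * (T - t)))) ** R

# Constant optimal investment proportions and consumption rate.
pi_M = np.linalg.solve(sigma @ sigma.T, mu - r * np.ones(3)) / R
gamma = lambda t: np.exp(-rho * t / R) * f(t) ** (-1.0 / R)
\end{verbatim}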
Figure \ref{fig:Example1} shows the results of the simulation runs
for the 3-dimensional version of the problem using $M = 1000$ paths.
The top left panel shows the running estimate
of the value function $V_M(t, w_M(t))$ along a particular realization
of Brownian motion $W$.
The top right and bottom left panels show investment and consumption proportions,
respectively. Finally, the bottom right panel depicts the estimated wealth process
compared to the Merton wealth process.
As we see, all the graphs give a very satisfactory approximation to the Merton solution.
This is especially remarkable taking into account that we are already in dimension $3$,
and we have used relatively few paths.
We now study how the accuracy of the solutions to the Merton problem
varies with the number of simulations $M$ and the number of dimensions
$K$. We found that the number of time steps $N$ used to discretize the integral
in \eqref{greduced} does not greatly influence the accuracy of the solutions.
We compare the estimates of the optimal starting $\zeta_0$ found by the procedure
\eqref{dual0} in Algorithm \ref{algo1}. For each test, we keep the initial data
of \emph{Step $1$} fixed. We then run \emph{Step $2$} of Algorithm \ref{algo1}, each
time approximating the function $g$ with a different set of Monte Carlo paths.
This way, we can investigate how sensitive our optimized values of $\zeta_0$ are
to the Monte Carlo procedure for approximating the expectation operator.
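Schematically, this sensitivity experiment can be organised as in the following Python sketch. Here \texttt{simulate\_paths} and \texttt{g\_hat} are hypothetical stand-ins for the simulation of a fresh set of Monte Carlo paths and for the resulting approximation of $g$ in \eqref{gdef}, and we assume that the dual function is minimised over the scalar $\zeta$.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def zeta0_statistics(simulate_paths, g_hat, n_repeats=20):
    # simulate_paths(seed) -> a fresh set of Monte Carlo paths;
    # g_hat(zeta, paths)   -> Monte Carlo approximation of g at zeta.
    estimates = []
    for seed in range(n_repeats):
        paths = simulate_paths(seed)
        res = minimize_scalar(lambda z: g_hat(z, paths),
                              bounds=(1e-6, 1e2), method="bounded")
        estimates.append(res.x)
    estimates = np.asarray(estimates)
    return estimates.mean(), estimates.std(ddof=1)
\end{verbatim}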
Table \ref{table3} and Table \ref{table4} present the results of the simulations for
different number of Monte Carlo paths to calculate $g$, $M = 1000$ and $M = 10000$,
respectively. We see that the numerical results work reasonably
well for $K \leq 6$ when we choose to use $1000$ Monte Carlo paths. The average
$\zeta_0$ is pretty close to the true value, and the volatility of the estimates
stays modest. However, for larger values of $K$, we see that the estimates are
either not as accurate, or become more volatile.
For $M=10000$, the results look much better. For $K \leq 9$, we see a considerable
drop in the volatility of the estimates, and all of them lie within two standard
deviations of the true value, with most of them being less than one
standard deviation away.
These results are very encouraging. They show that, even in dimensions up to $10$,
a reasonably modest number of Monte Carlo paths ($10000$) can provide satisfactory
results when solving the Merton problem. This is particularly interesting since the
traditional HJB approach would struggle in these dimensions unless the problem has
a particular structure such that we can work out the value function explicitly.
One might think that the accuracy of the method relies on the special structure of
the Merton problem. We now show that this is not the case. We consider departures
from the basic problem where accurate numerical solutions are available.
\begin{table}
\begin{center}
{\footnotesize
\begin{tabular}{ | c | c | c | c | c | c | c | c | c | c | c | }
\hline
& $K = 1$ & $K =2$ & $K =3 $ & $K =4$ & $K =5$ & $K =6$ & $K =7$ & $K =8$ & $K =9$ & $K =10$ \\
\hline
Merton & 9.97 & 9.49 & 8.92 & 8.61 & 8.17 & 8.02 & 7.73 & 7.44 & 7.11 & 6.68\\
\hline
Average($\zeta_0$) & 9.72 & 9.33 & 8.64 & 8.86 & 7.53 & 7.85 & 7.36 & 7.54 & 6.44 & 5.31 \\
\hline
Stdev($\zeta_0$) & 0.12 & 0.14 & 0.23 & 0.34 & 0.30 & 0.30 & 0.36 & 0.60 & 0.24 & 0.48 \\
\hline
Time / run (min) & 0.67 & 2.32 & 2.95 & 3.51 & 4.08 & 4.63 & 5.15 & 5.61 & 6.41 & 6.83 \\
\hline
\end{tabular}
}
\end{center}
\vspace{-5mm}
\caption{Comparison of the $\zeta_0$ for the Merton problem and the values
found using the Monte Carlo method for different values of the dimension
parameter $K$. The number of Monte Carlo paths each time was equal to
$\boldsymbol{M = 1000}$. For each set of simulated Monte Carlo paths,
we find the optimal implied value of $\zeta_0$. We then take the average
as the estimate, and calculate its standard deviation.
Here we take $r = 0.05$, $\rho = 0.03$, $R = 3$, $w_0 = 1$, $a = 1$,
$b = 1$, $N = 100$, $dt = 0.05$. The parameters $\mu$ and $\sigma$ were generated randomly: $\mu$ had a $U[10\%, 50\%]$ distribution, while the entries of $\sigma$ were drawn from $U[-1, 1]$ until the resulting matrix was positive definite.}
\label{table3}
\end{table}
\begin{table}
\begin{center}
{\footnotesize
\begin{tabular}{ | c | c | c | c | c | c | c | c | c | c | c | }
\hline
& $K = 1$ & $K =2$ & $K =3 $ & $K =4$ & $K =5$ & $K =6$ & $K =7$ & $K =8$ & $K =9$ & $K =10$ \\
\hline
Merton & 10.10 & 9.67 & 9.34 & 8.63 & 8.35 & 8.13 & 7.33 & 7.10 & 6.83 & 6.48\\
\hline
Average($\zeta_0$) & 10.15 & 9.58 & 9.35 & 8.76 & 8.29 & 8.33 & 7.30 & 7.16 & 6.63 & 6.95 \\
\hline
Stdev($\zeta_0$) & 0.04 & 0.08 & 0.06 & 0.08 & 0.08 & 0.12 & 0.15 & 0.19 & 0.12 & 0.49 \\
\hline
Time / run (min) & 6.87 & 22.90 & 28.65 & 34.42 & 40.06 & 46.25 & 52.08 & 57.27& 61.26 & 68.09 \\
\hline
\end{tabular}
}
\end{center}
\vspace{-5mm}
\caption{Comparison of the $\zeta_0$ for the Merton problem and the values
found using the Monte Carlo method for different values of the dimension
parameter $K$. The number of Monte Carlo paths each time was equal to
$\boldsymbol{M = 10000}$. For each set of simulated Monte Carlo paths,
we find the optimal implied value of $\zeta_0$. We then take the
average as the estimate, and calculate its standard deviation.
Here we take $r = 0.05$, $\rho = 0.03$, $R = 3$, $w_0 = 1$, $a = 1$,
$b = 1$, $N = 100$, $dt = 0.05$. The parameters $\mu$ and $\sigma$ were generated randomly: $\mu$ had a $U[10\%, 50\%]$ distribution, while the entries of $\sigma$ were drawn from $U[-1, 1]$ until the resulting matrix was positive definite.}
\label{table4}
\end{table}
\subsection{Non-constant relative risk aversion}
The example of the Merton problem has shown us that the Monte Carlo method can handle
situations where we deal with a multi-dimensional Brownian motion. However, the
multiplicative scaling property of the CRRA utility function $u$ means that we are
unable to assess the accuracy in predicting $\theta$. The remarkable accuracy in
prediction in Figure \ref{fig:Example1} is caused by the fact that $g(t, \zeta, X) =
\zeta^{1 - 1/R} \tilde{g}(t,X)$, for some function $\tilde{g}$, and the fact
that the optimal $\theta$ satisfies \eqref{theta1}.
It will therefore be informative to consider an example where the proportion
of money invested in the risky assets varies with wealth. This can be done,
although the price to pay is dimensionality. In this Section, we assume that
the financial market has constant coefficients and that there is only one asset in the market.
For $R_1 > 1 > R_2 > 0$, we define the agent's marginal utility as
\begin{align}\label{ICRRA}
I(t, y) & = a_1^{1 / R_1} e^{-\rho t / R_1} y^{- 1 / R_1} + a_2^{1 / R_2}
e^{-\rho t / R_2} y^{- 1 / R_2}, \\
I_\varphi(y) & = b_1^{1 / R_1} y^{-1 / R_1} + b_2^{1 / R_2} y^{-1/R_2}.
\end{align}
What this means is that, for small values of wealth $w$, the agent's relative risk
aversion is close to $R_1$ and the agent behaves similarly to the Merton investor
from Section \ref{Merton} with $R = R_1$, $a = a_1$ and $b = b_1$, and value
function $V_1(t, w)$. Conversely, the investor for large values of $w$ is less
risk averse, with risk aversion $R_2$. He behaves like a Merton investor from
Section \ref{Merton} with $R = R_2$, $a = a_2$, and $b = b_2$, and value function $V_2(t, w)$.
In dimension one, there are two very effective methods for solving this problem:
policy improvement and quantisation\footnote{Both of which are difficult to
generalise to dimensions more than one, though.}. We proceed by briefly describing
each one of them, and then by comparing their performance with the Monte Carlo
scheme we proposed earlier.
\vspace{3mm}
\noindent \textbf{Policy improvement}. We follow the approach described in
Section $3.4$ of \cite{Rogers:2013}. The HJB equation for our problem is
\begin{align}\label{HJBCRRA}
0 = \sup_{c, \theta} \left[ U(t, c) + V_t(t, w) + (rw + \theta(\mu - r) - c)
V_w(t, w) + \frac{1}{2} \theta^2 \sigma^2 V_{ww} (t, w)\right],
\end{align}
and we are given the terminal value
\begin{align}
V(T, w) = \varphi(w).
\end{align}
Given functions \eqref{ICRRA}, functions $U$ and $\varphi$, although not
available in closed form, can be found efficiently using binary search.
We therefore give ourselves a grid of time points $0 < t_1 < t_2 < \dots < t_N = T$
and a grid of space points $w_1 < w_2 < \dots < w_M$ and we wish to find $V$
evaluated at their mesh.
At the boundaries, we know that the solution resembles the Merton solutions:
\begin{align}\label{boundaryMerton}
V(t, w_1) = V_1(t, w_1), \qquad V(t, w_M) = V_2(t, w_M).
\end{align}
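As an illustration of the binary search just mentioned, the following sketch (ours; names hypothetical) inverts the decreasing map $y \mapsto I(t,y)$ from \eqref{ICRRA} by bisection, which is the pointwise operation needed to tabulate $U$ and $\varphi$ on the grid.
\begin{verbatim}
# Sketch: invert the decreasing map y -> I(t, y) by bisection on y > 0.
def invert_I(I, t, c, y_lo=1e-10, y_hi=1e10, tol=1e-12):
    while y_hi - y_lo > tol * y_hi:
        y_mid = (y_lo * y_hi) ** 0.5       # geometric midpoint, y > 0
        if I(t, y_mid) > c:
            y_lo = y_mid                   # I decreasing: need larger y
        else:
            y_hi = y_mid
    return (y_lo * y_hi) ** 0.5
\end{verbatim}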
Let $\mathcal{L}(c, \theta)$ be an operator acting on smooth test functions $\psi(t, w)$ as
\begin{align}\label{Ldef}
\mathcal{L}(c, \theta) \psi(t, w) = (rw + \theta(\mu -r) - c) \psi'(t, w) +
\frac{1}{2} \theta^2 \sigma^2 \psi''(t, w),
\end{align}
where primes denote derivatives in $w$.
Noticing that
\begin{align}\label{discritizations}
\psi'(t, w_i) & \approx \frac{\psi(t, w_{i+1}) - \psi(t, w_{i-1})}{\Delta_+ + \Delta_-} \\
\psi''(t, w_i) & \approx \frac{2\left[\Delta_- (\psi(t, w_{i+1}) - \psi(t, w_i))
- \Delta_+(\psi(t, w_i) - \psi(t, w_{i-1}))\right]}{\Delta_+ \Delta_- (\Delta_+ + \Delta_-)},
\end{align}
where $\Delta_+ = w_{i+1} - w_i$ and $\Delta_- = w_i - w_{i-1}$, it is possible
to approximate $\mathcal{L}$ acting on $\psi(t, \cdot)$ by a sparse tridiagonal
matrix $L(c, \theta)$ acting on the column vector $\psi(t, w_i)$, $i = 2, \dots, M-1$,
using approximations \eqref{discritizations} plugged into \eqref{Ldef}\footnote{Where
we consider $w = (w_1, w_2, \dots, w_M)^T$ as a column vector, with the
corresponding controls $(c_1, c_2, \dots, c_M)^T$ and $(\theta_1, \theta_2, \dots, \theta_M)^T$.}.
We now discretize the differential operator appearing in the HJB equation
\eqref{HJBCRRA} on the chosen time and space grid. By letting $V_i^n = V(t_n, w_i)$
and $V^n = (V_i^n)_{i = 1, 2, \dots, M}$, we obtain:
\begin{align}\label{discreteHJB}
0 & = \sup_{c, \theta} \Big[ \frac{V^{n+1} - V^n}{t_{n+1} - t_n}+
\alpha( L(c_n, \theta_n) V^n + U(t_n, c_n)) \notag
\\
& \qquad\qquad + (1-\alpha) (L(c_{n+1}, \theta_{n+1})V^{n+1}
+ U(t_{n+1}, c_{n+1}))\Big].
\end{align}
We took $\alpha = 0.5$, giving the Crank--Nicolson method.
We define $L$ to act on the boundary points $w_1$ and $w_M$ in such a way
that \eqref{discreteHJB} yields boundary solutions given by \eqref{boundaryMerton}.
Given $(c, \theta)$, \eqref{discreteHJB} is then a sparse set of linear
equations which we solve for $V$. We then improve on $(c, \theta)$ by
maximisation in \eqref{discreteHJB}, given the found $V$. We iterate
the process until convergence.
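To fix ideas, a minimal sketch (ours) of how the tridiagonal matrix $L(c,\theta)$ of \eqref{Ldef} could be assembled from the difference quotients \eqref{discritizations}; the boundary rows, which enforce \eqref{boundaryMerton}, are omitted, and all names are hypothetical.
\begin{verbatim}
import numpy as np

# Sketch: assemble the interior rows of the tridiagonal matrix L(c, theta).
def assemble_L(w, c, theta, r, mu, sigma):
    M = len(w)
    L = np.zeros((M, M))
    for i in range(1, M - 1):
        dp, dm = w[i+1] - w[i], w[i] - w[i-1]
        drift = r * w[i] + theta[i] * (mu - r) - c[i]
        diff = 0.5 * theta[i]**2 * sigma**2
        L[i, i-1] = -drift / (dp + dm) + 2.0 * diff / (dm * (dp + dm))
        L[i, i]   = -2.0 * diff / (dp * dm)
        L[i, i+1] =  drift / (dp + dm) + 2.0 * diff / (dp * (dp + dm))
    return L
\end{verbatim}
Given $(c,\theta)$, each Crank--Nicolson step then solves a tridiagonal linear system for $V^n$ in terms of $V^{n+1}$, and the controls are improved by pointwise maximisation.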
Figure \ref{fig:Example2V} shows the results of the policy improvement algorithm for $t=0$.
As we see, we were able to recover the whole value function using the method
described above. It is worth pointing out, though, that the method is tricky
to implement even in one dimension, and higher dimensions are almost certainly
out of the question. However, once $V$ has been found in one dimension, working
out the optimal consumption and investment along a sample path of Brownian
motion is immediate.
\vspace{3mm}
\noindent \textbf{Quantization}. We proceed to a method which builds on
the observations from Section \ref{sec:problem}, but avoids using the
Monte Carlo method for approximating the expectation operator in \eqref{gdef}.
Instead, quantisation proposes approximating the expectation of the
Brownian functional by a deterministic sum. Here we follow the details
from the website \cite{quantization} and related papers \cite{Pages:2003} and \cite{Corlay:2011}.
The idea is to use the Karhunen--Lo\`eve expansion of Brownian motion $(W_t)_{0 \leq t \leq T}$:
\begin{align}\label{KLexpansion}
W_t = \sum_{n=1}^\infty \xi_n e_n(t),
\end{align}
where the $\xi_n \sim N(0, \lambda_n)$, $n \geq 1$, are independent normal
random variables with variances $\lambda_n$.
Here the decomposition functions are
\begin{align}
e_n(t) & = \sqrt{\frac{2}{T}} \sin \left( \frac{\pi t}{T} \left(n - \frac{1}{2}\right)\right), \\
\lambda_n & = \left( \frac{T}{\pi(n - \frac{1}{2})}\right)^2.
\end{align}
The Brownian motion $W$ is first approximated by keeping the first $d$
terms of the sum \eqref{KLexpansion}. We can then think of
$\xi = (\xi_n)_{n = 1, 2, \dots, d}$ and $e(t) = (e_n(t))_{n = 1, 2, \dots, d}$
as $d$-dimensional vectors, and the Brownian motion as being approximated by the dot product
\begin{align}
W_t \approx \xi \cdot e(t).
\end{align}
We then quantise the random $d$-dimensional vector $\xi$ by a random variable
$X$ taking $n$ distinct values $x_1, x_2, \dots, x_n \in \mathbb{R}^d$ with
respective probabilities $p_1, p_2, \dots, p_n$, and giving us the final approximation
\begin{align}
W_t \approx X \cdot e(t).
\end{align}
Now, if we need to calculate an expected value of a functional
\begin{align}\label{quantfunctional}
\mathbb{E}\left[ \int_0^T f(t, W_t) dt + F(W_T) \right],
\end{align}
we can now approximate it by a deterministic sum
\begin{align}
\sum_{k=1}^n p_k \left[ \int_0^T f(t, x_k \cdot e(t)) dt + F(x_k \cdot e(t)) \right].
\end{align}
The effectiveness of this approximation depends on the number of terms $d$
kept in the expansion \eqref{KLexpansion} and the number of quantisation
points $n$, as well as on the placement of the points $x_i$ and weights $p_i$.
Files of the points and weights for many different values of $n$ and for
dimension up to 10 may be freely downloaded from the website
\cite{quantization}. These points and weights are optimal quantizations
of the standard Gaussian distribution, in a sense explained in detail there.
For a chosen value of $n$, we can therefore load the
optimal $x_i$ and $p_i$ from these files. For our runs, we use $n = 10160$.
The important thing is that, for the current problem, the calculations we need
to perform are of the particular form \eqref{quantfunctional}. Indeed, in a
complete market with one asset, we have
\begin{align}
g(t, \zeta) = \mathbb{E} \left[ \int_t^T \tilde{U}(t, \zeta_s) ds +
\tilde{\varphi}(\zeta_T) \Big| \zeta_t = \zeta \right],
\end{align}
compare with \eqref{gdef}. Here $\zeta$ has the closed-form expression
\begin{align}
\zeta_s = \zeta_t \exp \left[-\kappa (W_s - W_t) - (r + \frac{1}{2} \kappa^2)(s-t)
\right] \text{ for } s \geq t,
\end{align}
which is of the required form \eqref{quantfunctional}.
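A minimal sketch (ours) of the quantised approximation of \eqref{quantfunctional}: the quantiser $(x_k, p_k)$ of the standard $d$-dimensional Gaussian is rescaled by $\sqrt{\lambda_n}$ and combined with the truncated Karhunen--Lo\`eve basis. The file format of the grids from \cite{quantization} is not specified here, and $f$, $F$ are assumed vectorised.
\begin{verbatim}
import numpy as np

# Sketch: deterministic quantised approximation of
# E[ int_0^T f(t, W_t) dt + F(W_T) ].
def quantised_expectation(f, F, x, p, T, n_time=100):
    n, d = x.shape                                 # n points in R^d
    ns = np.arange(1, d + 1)
    lam = (T / (np.pi * (ns - 0.5)))**2            # KL eigenvalues
    xs = x * np.sqrt(lam)                          # rescale N(0, I) quantiser
    t = np.linspace(0.0, T, n_time + 1)
    e = np.sqrt(2.0 / T) * np.sin(np.pi * np.outer(t, ns - 0.5) / T)
    W = xs @ e.T                                   # (n, n_time+1) paths
    integrals = np.trapz(f(t, W), t, axis=1)
    return p @ (integrals + F(W[:, -1]))
\end{verbatim}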
Having laid out the problem setup and the accurate numerical methods for solving
the problem, we now show the numerical results of our calculations.
\vspace{3mm}
\noindent \textbf{Comparison of the methods}. Figure \ref{fig:Example2} shows
the results of the simulation runs. It is clear that all the methods proposed
give virtually the same answers; the Monte Carlo estimates lie only
slightly away from the two benchmark methods of policy improvement
and quantisation. The most reassuring message here is that the Monte
Carlo methodology also does a very good job of approximating the
investment and consumption proportions in \eqref{theta1} and
\eqref{copt}. This is the part that the Merton problem example was
unable to reveal due to its special structure.
The time taken to get the answer for policy improvement was approximately
$10$ minutes, most of which was spent on the calculation of the value
function (evaluating the solution along a chosen path is extremely fast).
In comparison, quantisation took roughly $4$ minutes, and Monte Carlo took $8$ minutes.
Of course, each of the methods has its costs and benefits. Policy improvement
requires a time investment at the start to compute the value function, but is
then very fast regardless of how many
sample paths we would like to evaluate. This is not the case for quantisation
and the Monte Carlo method. Quantisation is the overall speed winner here;
however, we must remember that this is mainly due to the preloaded files which
we used to quantise the Brownian motion.
Overall, we conclude that the Monte Carlo method performs very well on the
complete market problems, as it should. After all, as mentioned before, the
only errors we are incurring are numerical: approximating the derivatives and
the expectation operator. With sufficient computational power, these
can be made small.
\subsection{Incomplete markets driven by a diffusion}
Finally, we consider an example where no benchmark methods are available, and
the bounds derived in \eqref{bounds} are the only sensible indicator for how
well our method is doing. We consider an example that is as challenging as
possible: an incomplete market driven by a diffusion.
As a specific example, we consider a market composed of $4$ stocks driven by a
$5$-dimensional Brownian motion. The same Brownian motion drives the $5$-dimensional
factor process $X$, which we assume to be an OU process with mean-reversion
and volatility parameters generated randomly. We take a CRRA utility function,
with the number of Monte Carlo paths equal to $M = 1000$. The results
of the optimisation run are depicted in Figure \ref{fig:K5M10000}; the run
took approximately $23$ hours. The details regarding the
parameters are displayed in the caption.
As we can see from the first two panels on the top, the upper and lower bounds
stay reasonably close during the sample runs, with the error measure defined
in \eqref{error} between $12\%$ and $22\%$, and generally decreasing as we near the end of the investment horizon.
This is a positive result, especially in light of the dimensionality of the problem.
Notice that the market is incomplete, and that the value function for this problem
would need to be $7$-dimensional ($1$ dimension for wealth, $1$ for time, and $5$
for the factor process $X$). Hence, any other method for approaching this problem
would really struggle.
We could of course try to improve on the performance of this algorithm. We
lose efficiency when we use the approximation of $\kappa$ in
\eqref{kappa_pi}, and also when we truncate the expression \eqref{theta_star} for
$\theta$.
However, the main goal of the paper has already been achieved here: we have illustrated
how to use our method on a very difficult problem, and derived satisfactory bounds on the efficiency.
\section{Conclusions}\label{sec:conclusions}
This paper presented an effective methodology for tackling optimal investment problems
in incomplete markets driven by a Brownian diffusion. We derived a generic
methodology for numerically tackling these problems by, first, working along a
particular realisation of the market and, secondly, settling for suboptimal
controls which are close to the optimal one.
These assumptions are not a weakness of the method; they are rather a necessary cost
if we want to get concrete investment advice in a general setup. After all,
they let us derive what we really need in practice: a method for finding a good investment
strategy when faced with a particular realisation of the market!
We have also illustrated the effectiveness of the method in a variety of contexts.
For the problems where other reliable numerical techniques are available, we showed that
our method performs just as well. For the very complex multi-dimensional problem concluding
Section \ref{sec:numerics}, we showed that the investment errors can be kept
satisfactorily low; no other method was able to provide even estimates of the solutions
in this context. This demonstrates the effectiveness of the method and shows that it has
the potential to give what we really need: concrete investment prescriptions
in the face of a particular market environment.
\begin{figure}
\begin{center}
\includegraphics [angle=90,height= 20cm, width=15cm]{Example1.eps}
\caption{\textbf{ Monte Carlo solution to the Merton problem.}
Here we
take $K = 3$, $r = 0.05$, $\rho = 0.03$, $R=3$, $a = 1$, $b = 2$, $N = 100$,
$dt = 0.05$, $M = 1000$, $w_0 = 1$, $\mu = [0.07; 0.25; 0.15]$, $\sigma =
[0.12, 0.01, 0.03; 0.01, 0.50, 0.01; 0.03, 0.01, 0.27]$.}
\label{fig:Example1}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics [angle=90,height= 20cm, width=15cm]{977950ValueFun.eps}
\caption{\textbf{Value function found using policy improvement algorithm}.
Here we take $w_0 = 2$, $\mu = 0.10$, $\sigma = 0.20$, $r = 0.05$,
$\rho = 0.03$, $a_1 = 10$, $a_2 = 20$, $b_1 = 30$, $b_2 = 10$, $R_1 = 3$,
$R_2 = 0.5$, $T = 1$, $N = 100$.}
\label{fig:Example2V}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics [angle=90,height= 20cm, width=15cm]{977950Comps.eps}
\caption{\textbf{Comparison of different methods for the non-constant
relative risk
aversion example}. Here we take $w_0 = 2$, $\mu = 0.10$, $\sigma = 0.20$,
$r = 0.05$, $\rho = 0.03$, $a_1 = 10$, $a_2 = 20$, $b_1 = 30$, $b_2 = 10$,
$R_1 = 3$, $R_2 = 0.5$, $T = 1$, $N = 100$. The number of Monte Carlo
paths we took is $M = 10000$.}
\label{fig:Example2}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics [angle=90,height= 16cm, width=15cm]{plot4.eps}
\caption{\textbf{An incomplete market driven by a stochastic factor}.
Here we take an incomplete market with a $5$-dimensional Brownian motion and $4$
independent assets, and $M=1000$ Monte Carlo paths in the approximation of $g$.
We start with $w_0 = 1$, $\rho = 0.03$, $T = 1$, $N=100$. We take the
CRRA utility function with parameters $a = 1$, $b =2$, $R=3$. $X$ is
taken to be an OU process with randomly-generated volatility matrix
and mean-reverting drift. The market interest rate $r$ and $\mu$ are
constant and randomly generated. Market volatility $\sigma$ is a
random $4 \times 5$ matrix multiplied by a stochastic scaling
factor $1 + \exp(- {\bf 1} \cdot X_t)$. The randomisation is done by drawing the relevant parameters from the $U[-1, 1]$ distribution via Gibbs sampling until the regularity conditions imposed by the paper are met (i.e.\ $\mu \geq r \geq 0$, $\sigma$ has rank $4$ and $\sigma \sigma^T$ is invertible).
The
top panels represent the running upper and lower bounds on the
objective as defined in \eqref{bounds}, error rate as in \eqref{error},
and the corresponding wealth process from \eqref{dual1}. The bottom
panel represents the investment and consumption proportions,
together with the first two components of the factor process $X$.}
\label{fig:K5M10000}
\end{center}
\end{figure}
\pagebreak
\section{Continuous markets driven by a diffusion.}\label{sec:problem}
We shall present the methodology in the context of a
finite-horizon optimal
investment-consumption problem where the volatilities and drifts of the
assets depend on some diffusion factor process. It will become evident that the
general approach is not limited to such examples, but it is easier to
explain in this more concrete setting. We shall also make various assumptions
of boundedness on processes and global Lipschitz properties of coefficients
which could be relaxed, but which simplify the exposition and proof: the
aim is transparency, not maximality.
To begin with, suppose that $X$ is an $\R^k$-valued diffusion process
satisfying
\begin{equation}
dX_t = \sigma_X(X_t)\, dW_t + \mu_X(X_t) \, dt
\equiv \sigma_X\, dW_t + \mu_X \,dt,
\label{Xdyn}
\end{equation}
where $W$ is a $d$-dimensional Brownian motion,
and $\sigma_X: \R^k \rightarrow
\R^{k} \otimes \R^d$ and $\mu_X: \R^k \rightarrow
\R^{k}$ are globally Lipschitz coefficients.
We shall consider an investor who is allowed to invest in a market with
a riskless asset yielding interest at rate $r_t \equiv r(X_t)$, and $n$
stocks having volatility matrix $\sigma_t \equiv \sigma(X_t)$ and drift
$\mu_t \equiv \mu(X_t)$.
Here, $r: \R^k \rightarrow \R$,
$\sigma: \R^k \rightarrow
\R^n \otimes \R^d$, and $\mu: \R^k \rightarrow
\R^n$ are bounded measurable functions.
We assume non-degeneracy of the market, that is, $d \geq n$,
and that the row rank of $\sigma$
equals $n$. When $n = d$, the matrix $\sigma$ is then invertible,
and we have a special case of a complete market.
With these assumptions in place, the investor's wealth $w_t$ at time $t$
evolves\footnote{We use the notations $a\cdot b$ for the
scalar product of two vectors $a$ and $b$, and ${\bf 1}$ for the column
vector of ones.} as
\begin{align}\label{wdyn}
dw_t = r_t w_t\, dt + \theta_t \cdot (\sigma_t \,dW_t + (\mu_t - r_t {\bf 1})
\, dt) - c_t \,dt,
\end{align}
where the $n$-vector process
$\theta_t$ represents the cash holdings in each of the stocks,
and $c_t$ denotes the agent's consumption rate.
The agent's objective at time $t$ is to achieve
\begin{align}\label{Vdef}
\sup_{(c,\theta) \in \sA} E \left[ \int_t^T U(s, c_s) ds +
\varphi(w_T) \Big| w_t = w, X_t = x \right] \equiv V(t, w, x),
\end{align}
where $U$ and $\varphi$ are strictly concave $C^2$ utility functions satisfying
the Inada conditions\footnote{These are the conditions
$ \lim_{c \downarrow 0} U_c(t, c) = \infty =\lim_{w \downarrow -\infty} \varphi'(w)$,
$ \lim_{c \uparrow \infty} U_c(t, c) = 0 = \lim_{w \uparrow \infty} \varphi'(w)$. }, and $\sA$ denotes the set of admissible
consumption-portfolio pairs:
\begin{equation}
\sA = \{ (c,\theta): \hbox{\rm $c$ and $\theta$ are previsible}, \;
c \geq 0, \hbox{\rm and for some $K<\infty$,} \; \| \theta_t \| \leq K \}.
\label{sAdef}
\end{equation}
\medskip\noindent
{\sc Remarks.} (i) Notice
that the function $\varphi$ is defined on the whole of $\R$.
\medskip\noindent
(ii) The above definition of admissibility \eqref{sAdef}
is not the usual one\footnote{One typically imposes a non-negativity
constraint on the wealth process associated with the trading
strategy $\theta$.}.
We do not
expect that the supremum in \eqref{Vdef} will be attained within the
set $\sA$, but as our goal is to come up with good sub-optimal strategies,
this does not matter for our current purposes. Admissibility is imposed to eliminate doubling strategies, where wealth may go arbitrarily negative
before time $T$, but ends up at a high value at time $T$. The assumptions
made here rule this out; if we were to go to large negative wealth at some
time in $(0,T)$, boundedness of $\sigma$, $\mu$ and $\theta$ prevent us
returning to positive wealth with certainty by time $T$, and the penalty
imposed by the concave function $\varphi$ then makes this a bad thing to do.
\medskip\noindent
(iii) If the dimension $k$ of the state space of the factor diffusion
$X$ is not very small, it is not feasible to calculate and store the
value function $V$. The approach we develop in this paper allows us to
determine approximately optimal policies {\em without} the need to calculate
$V$.
\bigbreak
We shall require one technical condition on $U$, which is expressed as
a condition on the inverse marginal utility $I$, defined by
\begin{equation}
U_c(s, I(s,z) ) = z, \qquad\qquad (z>0).
\label{Idef}
\end{equation}
We require
\begin{equation}
\hbox{ {\sc Assumption:} there exist $\alpha, \; A>0$
such that $I(t,z) \leq A(1+z^{-\alpha})$ }.
\label{assumption}
\end{equation}
The inequality has to hold for all $z>0$ and all $t \in [0,T].$
\bigbreak
We are now ready to state the main result of the paper, which allows us
to derive effective Monte Carlo bounds on the value, and to find
good sub-optimal strategies {\em pathwise}. The proof uses
duality arguments similar to those presented in \cite{Cox:1989},
\cite{Karatzas:1987}, and later described in a more general setting in
\cite{Karatzas:1989}.
\begin{theorem}\label{thm1}
Suppose that $\kappa$ is a bounded previsible process such that
\begin{equation}
\mu_t - r_t {\bf 1} - \sigma_t \kappa_t = 0,
\label{kappa}
\end{equation}
and that $\zeta$ solves the linear SDE
\begin{equation}
d\zeta_t = \zeta_t ( -\kappa_t \, dW_t - r_t \, dt).
\label{zeta}
\end{equation}
Define the function $g$ by\footnote{
The functions $\tilde U$, $\tilde \varphi$ are the convex
dual functions,
$\tilde U(t,z) \equiv \sup_x\{ U(t,x) -zx\}$, $\tilde\varphi(z)
\equiv \sup_x \{ \varphi(x) - zx\}$.
}
\begin{equation}
g(t,z,x) = E \Big[ \int_t^T \tilde{U}(s, \zeta_s) ds
+ \tilde{\varphi}(\zeta_T)\; \Big | \; \zeta_t = z, X_t = x \; \Big]
\label{gdef}
\end{equation}
for $ t \in [0,T]$, $z>0$, $x \in \R^k$.
Then for any $t \in [0,T]$, $z>0$, $w\in\R$, $x \in \R^k$,
and bounded previsible $\theta$, we have the inequalities
\setlength{\fboxsep}{15pt}
\begin{equation}\label{bounds}
\framebox{$g(t, z, x) + w z - h(t, w,z, x, \theta )
\leq V(t, w, x) \leq g(t, z, x) + w z,$}
\end{equation}
where
\begin{equation}\label{hdef}
h(t, w,z, x, \theta ) \equiv \mathbb{E} \Big[
\; \tilde{\varphi}(\zeta_T)-
\varphi(w_T^{{\theta}}) + \zeta_T\, w_T^{{\theta}}
\; \Big| \; w_t = w, \zeta_t = z, X_t = x \;\Big] ,
\end{equation}
and the process $w^\theta$ is the solution to the wealth
evolution \eqref{wdyn} with portfolio process $\theta$ and
consumption process
\begin{equation}
c_s = I(s,\zeta_s), \qquad\qquad (s \geq t).
\label{copt}
\end{equation}
\end{theorem}
\bigbreak
\noindent
{\sc Remarks.} (i) In general the matrix $\sigma$ is not even
square, so not invertible, but we could try to find $\kappa$ to
satisfy \eqref{kappa} by taking the pseudo-inverse of $\sigma$:
\begin{equation}
\kappa_t = \sigma_t^T (\sigma_t\sigma_t^T)^{-1}(\mu_t - r_t {\bf 1}).
\label{kappa_pi}
\end{equation}
This can be done if $(\sigma_t\sigma_t^T)^{-1}$ is bounded, in effect
a uniform ellipticity condition of the kind commonly imposed in
such problems.
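In code, this choice of $\kappa$ is one line; a sketch (ours), assuming $\sigma$ is an $n \times d$ array of full row rank:
\begin{verbatim}
import numpy as np

# Sketch: pseudo-inverse choice (kappa_pi) for kappa_t.
def kappa_pseudo_inverse(sigma, mu, r):
    excess = mu - r                                 # mu - r * ones
    return sigma.T @ np.linalg.solve(sigma @ sigma.T, excess)
\end{verbatim}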
\medskip\noindent
(ii) From the definition of the convex dual function $\tilde\varphi$,
it is clear that $h$ is always non-negative. Since $h$ dominates the
gap between the lower and upper bounds, we should aim to make $h$ as
small as we can. Ideally, we would have that $h$ was zero, which would
require us to have
\begin{equation}
\varphi'(w_T) = \zeta_T.
\label{transversality}
\end{equation}
If we demanded that this happens, then the problem becomes a BSDE with
\eqref{transversality} as the terminal condition. As it seems that there
are as yet no efficient numerical methods for solving BSDEs in high
dimensions, this does not help much. What we are attempting to do with
this approach is, in effect, to relax the demand that the solution we construct
hits the terminal condition \eqref{transversality}, and instead to estimate
the error we make when we fail to match the terminal condition.
\bigbreak\noindent
{\sc Proof. (a) The upper bound.}
The process $\zeta$ is determined by \eqref{kappa} and \eqref{zeta};
in what follows, we shall suppose that $c$ is determined from $\zeta$
by \eqref{copt}.
Consider the It\^{o} expansion of $\zeta_T w_T$. We have:
\begin{eqnarray}
0 &=& - \zeta_T w_T + \zeta_t w_t + \int_t^T
(\zeta_s dw_s + w_s d\zeta_s + d[\zeta, w]_s)
\nonumber
\\
&=& - \zeta_T w_T + \zeta_t w_t + \int_t^T \zeta_s(
\theta_s \cdot \sigma_s - w_s\kappa_s) \; dW_s
\nonumber
\\
\nonumber
&&\qquad +\int_t^T \zeta_s( r_s w_s + \theta_s \cdot (\mu_s -r_s {\bf 1}) - c_s
-r_s w_s -\theta_s \cdot\sigma_s \kappa_s
)\; ds
\\
&=&- \zeta_T w_T + \zeta_t w_t + \int_t^T \zeta_s(
\theta_s \cdot \sigma_s - w_s\kappa_s) \; dW_s - \int_t^T \zeta_s c_s \; ds
\end{eqnarray}
using \eqref{kappa} and \eqref{zeta}.
We claim that the stochastic integral has zero mean, and in order
to establish this, it is necessary to control the integrand. The
processes $\kappa$, $\theta$, and $\sigma$ are all bounded
by hypothesis, so we need to have control on $\zeta$ and $w$.
Since $\zeta$ satisfies the linear SDE \eqref{zeta}
with bounded coefficients $\kappa$ and $r$, it is not hard to
establish a bound on $E[ (\zeta^*_t)^p]$ for any $t>0$,
and for any $ p \geq 2$, where
$\zeta^*_t \equiv \sup_{0 \leq s \leq t} |\zeta_s|$; see, for
example, Lemma V.11.5 of \cite{RW2}. Similarly,
we may bound $E[ ((\zeta^{-1})^*_t)^{p}]$ for any $t>0$,
and for any $ p \geq 2$, by considering the linear SDE for
$\zeta^{-1}$.
All that remains is to
establish a similar bound for $w^*_t$, where $w$ is given by
\eqref{wdyn}. The only problematic part of this estimation is
in controlling $c$, but this is where the Assumption \eqref{assumption}
comes in, since $\zeta^{-1}_t$ is controlled as before, and $c$ is
bounded by some power of $\zeta^{-1}$.
We therefore conclude that
\begin{equation}
0 = E \biggl[\;
- \zeta_T w_T + \zeta_t w_t
- \int_t^T \zeta_s c_s \; ds
\;\biggr].
\label{zero}
\end{equation}
We can add this equality to \eqref{Vdef} to find\footnote{We use
\eqref{copt} at the first step.}
\begin{eqnarray}
V(t,w,x) &=& \sup_{(c,\theta) \in \sA} E \biggl[ \int_t^T
\{ U(s, c_s) - \zeta_s c_s \}\; ds +
\varphi(w_T) - \zeta_T w_T +
\nonumber
\\ &&
\qquad\qquad\qquad\qquad\qquad
+ \zeta_t w_t
\Big| w_t = w, X_t = x,\zeta_t = \zeta \biggr]
\label{ub0}
\\
&\leq & E \left[ \int_t^T
\tilde U(s, \zeta_s)\; ds +
\tilde\varphi(\zeta_T) + \zeta_t w_t
\Big| w_t = w, X_t = x,\zeta_t = \zeta \right]
\nonumber
\\
&=& E \left[ \int_t^T
\tilde U(s, \zeta_s)\; ds +
\tilde\varphi(\zeta_T)
\Big| w_t = w, X_t = x,\zeta_t = \zeta \right] + \zeta w
\nonumber
\\
&=& g(t,\zeta,x) + \zeta w.
\label{ub1}
\end{eqnarray}
This is the upper bound in \eqref{bounds}.
\medskip\noindent
{\sc (b) The lower bound.} The argument reuses elements of the proof
of the upper bound. The task this time is to propose some admissible
$(c,\theta)$ and deduce a lower bound from it.
Given the state-price density process $\zeta$
as in \eqref{zeta}, our intention is to use the process $c$ to be defined
from it by \eqref{copt}. Doing this, we see that the integral term
appearing in the right-hand side of
\eqref{ub0} is equal to
\begin{equation}
E \int_t^T \tilde U(s, \zeta_s) \; ds,
\nonumber
\end{equation}
and moreover that \eqref{zero} still holds by the same argument as
before.
For any bounded previsible $\theta$, the pair $(c,\theta)$ is admissible,
so if we use that admissible pair we find as at \eqref{ub0} that
\begin{eqnarray}
V(t,w,x) &\geq& E \biggl[ \int_t^T
\tilde U(s, \zeta_s)\; ds +
\varphi(w^\theta_T) - \zeta_T w^\theta_T + \zeta_t w^\theta_t
\Big| w_t = w, X_t = x,\zeta_t = \zeta \biggr]
\nonumber
\\
&=& g(t,\zeta,x) + w \zeta - h(t,w,\zeta,x,\theta)
\label{lb0}
\end{eqnarray}
when we recall the definitions \eqref{gdef} and \eqref{hdef} of
$g$ and $h$.
\hfill$\square$
\bigbreak\noindent
{\sc Remarks.} (i) For any bounded previsible $\theta$ and $\kappa$
the result \eqref{bounds} of Theorem \ref{thm1} gives two-sided bounds
on the value function. Importantly, the numerical values of $g$ and
$h$ can be estimated {\em by forward simulation from current values}.
It is also worth noting that the methodology does not require any
`simulations within simulations', which would substantially increase the
computation times; we will be evaluating the state-price density and
the portfolio process along {\em just one trajectory.}
All we need to do is to simulate
sufficiently many sample paths to approximate the
expectation operator in \eqref{gdef} and \eqref{hdef}.
\\
(ii) We need to have a measure for comparison
between the bounds in \eqref{bounds}. Since utility
functionals are defined up to affine transformations, our
measure needs to be invariant under those. Thus
the difference between the upper and lower bounds is not
informative.
We can, however, think of giving up a fraction $\alpha$ of the initial wealth $w$
and look for the minimal $\alpha$ such that the upper bound corresponding to
initial wealth $(1-\alpha)w$ is at most as large as the lower bound for
starting wealth $w$. This $\alpha$ is of course:
\begin{equation}
\alpha(t, w, \zeta, X, {\theta}) \equiv
\frac{h(t, w, \zeta, X, {\theta})}{\zeta w},
\label{error}
\end{equation}
which will from now on be our efficiency measure. Notice that \eqref{error} is
a dimensionless quantity.
\\
(iii) The key issue for obtaining good bounds is of course the
choice of the processes $\kappa$ and $\theta$.
The traditional way to approach solving the problem \eqref{Vdef} would
be to write down the HJB equation, derive the corresponding PDEs,
and try to solve them. However, these PDEs are typically highly
non-linear, and we only stand a chance of getting reasonably
stable solutions in dimensions one or two.
Nevertheless, we can deduce some worthwhile information from the HJB equation.
Dropping the $t$ subscript, and remembering the function $V$
takes $(t, w, X)$ as arguments, the HJB equation is
\begin{eqnarray}
0 &=& \sup_{c, \theta} \bigl[\; U(t, c) + V_t + \left( rw + \theta
\cdot(\mu - r{\bf 1})
- c\right)V_w + \mu_X \cdot V_X +
\nonumber
\\
&&\qquad\qquad
+ \frac{1}{2} |\theta^T \sigma|^2 V_{ww}
+ \theta\cdot\sigma\sigma_X^T\, V_{Xw} + \half\hbox{\rm tr} (\sigma_X\sigma_X^T V_{XX})\;\bigr].
\label{hjb1}
\end{eqnarray}
Optimizing over $c$ leads to the conclusion that $c_t = I(t,V_w)$, and
optimizing over $\theta$ tells us that we should have
\begin{equation}
\theta = - (\sigma \sigma^T)^{-1} \bigl\lbrace\;
(\mu - r{\bf 1}) V_w + \sigma\sigma_X^T \,V_{Xw}
\;\bigr\rbrace/V_{ww}.
\label{theta_star}
\end{equation}
Here $\sigma \sigma^T$ is invertible by our non-degeneracy assumptions on the market.
Assuming that $V$ and $g$ are dual (as we would expect from \eqref{bounds}), in that
\begin{equation}
V(t,w,x) = \inf_\zeta\{ g(t,\zeta,x)+w\zeta\},
\qquad g(t,\zeta,x) = \sup_w\{ V(t,w,x)-w\zeta\},
\label{dual0}
\end{equation}
this would lead us to the relations
\begin{equation}
w = -g_z(t,z,x), \qquad \zeta = V_w(t,w,x).
\label{dual1}
\end{equation}
Straightforward calculus then leads to
\begin{align}\label{Vww}
V_{ww}(t, w, x) = -1/g_{\zeta \zeta}(t, \zeta, x).
\end{align}
These relations help us to make choices of $\kappa$ and $\theta$.
We will use \eqref{kappa_pi} to make our (pathwise) choice for $\kappa$,
and then we will use the truncated form
\begin{equation}
\theta = - (\sigma \sigma^T)^{-1}(\mu - r{\bf 1}) V_w /V_{ww}
= (\sigma \sigma^T)^{-1} (\mu - r{\bf 1}) \,\zeta g_{\zeta \zeta}(t, \zeta, X)
\label{theta1}
\end{equation}
for the pathwise choice of $\theta$. We should in principle include the
cross derivative term
from \eqref{theta_star} in the choice of $\theta$, and in some situations it
might well be worth doing this, but the cost is that we have to get hold
of the derivative of $\zeta$ with respect to $X$, and doing this by simulation
is cumbersome. The virtue of the form \eqref{theta1} is that we just need
the second derivative of the convex function $g$ with respect to its scalar
argument $\zeta$, and determining this by simulation is computationally
feasible.
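A sketch (ours) of this pathwise recipe: estimate $g_{\zeta\zeta}$ by a central difference of three Monte Carlo evaluations of $g$ (ideally with common random numbers, so the difference is not swamped by simulation noise) and plug it into \eqref{theta1}; \texttt{g\_mc} is a hypothetical Monte Carlo estimator of $g$.
\begin{verbatim}
import numpy as np

# Sketch: pathwise theta via a finite-difference estimate of g_zeta_zeta.
def theta_pathwise(g_mc, t, zeta, X, sigma, mu, r, eps=1e-3):
    g_zz = (g_mc(t, zeta + eps, X) - 2.0 * g_mc(t, zeta, X)
            + g_mc(t, zeta - eps, X)) / eps**2
    return np.linalg.solve(sigma @ sigma.T, mu - r) * zeta * g_zz  # (theta1)
\end{verbatim}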
\medskip\noindent
(iv) In practice, it will be clumsy to form an estimate of the term
$h$ in \eqref{bounds} if we are determining the portfolio process
$\theta$ according to the recipe just outlined, because if we are to
simulate an evolution of $(X,w)$ we will at each step need to
identify derivatives of $g$, and this is a simulation within a
simulation.
We envisage the
lower bound in \eqref{bounds} being used as a means to {\em assess}
a particular portfolio rule which may be expressed explicitly as
some function of $(t,X,w)$. In a high-dimensional problem,
we do not expect the optimal portfolio rule to be something we can
characterize, but we may well have some heuristic for some `good'
portfolio rule, and \eqref{bounds} gives us a way to tell how
good that heuristic may be.
\vspace{3mm}
\noindent \textbf{Summarising:} Given an initial state $(t, w, \zeta)$, we can
follow the dynamics of $w$, $\zeta$, and $X$, using
\eqref{Xdyn}, \eqref{wdyn}, \eqref{zeta},
$\kappa$ given by \eqref{kappa},
$c$ given by \eqref{copt}, and $\theta$ given by \eqref{theta1} (or perhaps
\eqref{theta_star}).
The key advantage of this formulation is that all we need to do now is to optimise
the bounds \eqref{bounds} for a one-dimensional starting value of the dual
process $\zeta$. This is a quick procedure numerically.
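Since the map $\zeta \mapsto g(t,\zeta,x) + w\zeta$ is convex, this optimisation can be done by golden section search; a sketch (ours), with the initial bracket supplied by the caller:
\begin{verbatim}
import math

# Sketch: golden section search for the minimiser of a convex scalar function.
def golden_section_min(phi, lo, hi, tol=1e-6):
    invphi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = phi(c), phi(d)
    while b - a > tol:
        if fc < fd:
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = phi(c)
        else:
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = phi(d)
    return 0.5 * (a + b)

# usage: zeta0 = golden_section_min(lambda z: g_mc(0.0, z, X0) + w0 * z,
#                                   1e-4, 50.0)
\end{verbatim}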
\section{Algorithms.}\label{algos}
We will now describe an algorithm for simulating the optimal path and controls for
the problem \eqref{Vdef}, given \emph{a particular realisation of the Brownian motion}.
That is, we do not attempt to recover the whole value function, as this is bound to
fail in higher dimensions. Our method, which is effectively local, will follow a
particular realisation of the Brownian motion $W$ and tell us how to invest
and consume in that particular case. After all, one is predominantly interested
in how to invest in the current market conditions, and does not necessarily care
about all possible versions of reality!
\vspace{3mm}
\renewcommand{\tablename}{Algorithm}
\begin{table}
\caption{Computing the optimal path.}
\begin{tabular}{ l p{14cm}}
\hline
\hline
\emph{Step 1:} & \textbf{Initialisation.} Pick starting values $w = w_0$, $X = X_0$
and a grid of time steps $0 = t_0 < t_1 < t_2 < \dots < t_N = T$ along which we want
to know the solution. Simulate a realisation of the Brownian motion $W$ along which
we want to calculate the optimal path. \\
\emph{Step 2:} & \textbf{Finding the optimal $\zeta_0$.} For any $\zeta$, we can
calculate $g(0, \zeta, X_0) + w_0 \zeta$. This function is convex in $\zeta$, so we
can use golden section search to find the minimum in \eqref{dual0}. This gives
us the value of $V(0, w_0, X_0)$ and the optimal starting value of the dual process $\zeta_0$. \\
\emph{Step 3:} & \textbf{Calculating the optimal path.} For each $n = 0, 1, \dots, N-1$,
we have $(t_n, \zeta_{t_n}, X_{t_n})$ available. We use \eqref{copt} to work
out $c_{t_n}$, \eqref{theta1} to work out $\theta_{t_n}$, and \eqref{dual1}
to work out $w_{t_n}$. We then calculate $\kappa_{t_n}$ with \eqref{kappa} and use
the Euler scheme to move to time $t_{n+1}$ using \eqref{Xdyn} and \eqref{zeta}. \\
\hline
\hline
\end{tabular}\label{algo1}
\end{table}
\renewcommand{\tablename}{Table}
\vspace{3mm}
Algorithm \ref{algo1} describes how to compute the best bounds numerically.
The cost of running this algorithm will be $\mathcal{O}(N) \times \mathcal{O}(g)$,
where $\mathcal{O}(g)$ is the average cost of evaluating the functions $g$ and $h$.
In Algorithm \ref{algo1}, we have not yet given the details of how to calculate
the function $g$ numerically (which
will be the business of Algorithm \ref{algo2}).
That is, we want to be able to numerically calculate
the expectation in \eqref{gdef} and \eqref{hdef} for $t = t_n$, being one of
the grid points in the time discretization. We approach the calculation
numerically with Monte Carlo methods, sampling $M$ paths of Brownian
motion $W$ for $t = t_n, t_{n+1}, \dots, t_N$, simulating the values
of the functional in the expectation of \eqref{gdef} and \eqref{hdef},
and finally averaging over the sampled paths.
In practice, we find that it might be necessary to use importance sampling
in order to decrease the volatility of our estimates. In order to do that,
define the change of measure martingale
\begin{align}\label{Zdyn}
dZ^{-1}_s = Z_s^{-1} \sigma^Z_s dW_s \text{ for } t \leq s \leq T, \qquad Z_t = 1,
\end{align}
and set $\frac{d \mathbb{Q}}{d \mathbb{P}} |_{\mathcal{F}_t} = Z_t^{-1}$.
Then we can rewrite \eqref{gdef} and \eqref{hdef} as
\begin{align}\label{greduced}
g(t, \zeta, X) & = \mathbb{E}^\mathbb{Q} \left[\int_t^T Z_s \tilde{U}(s, \zeta_s) ds
+ Z_T \tilde{\varphi}(\zeta_T) \Big| \zeta_t = \zeta, X_t = X \right],
\\
h(t, w, \zeta, X,{\theta}) & = \mathbb{E}^\mathbb{Q} \Big[
Z_T \tilde{\varphi}(\zeta_T) - Z_T \varphi( w_T^{{\theta}}) +
Z_T w_T^{{\theta}} \zeta_T \Big| w_t = w,
\zeta_t = \zeta, X_t = X \Big] ,
\end{align}
with a new Brownian motion $\bar{W}$ under $\mathbb{Q}$ defined by
\begin{align}\label{WQdef}
d \bar{W}_t = dW_t - \sigma^Z_t dt.
\end{align}
The idea now is to choose $\sigma^Z$ in such a way that the It\^{o} expansion of the
term $Z_T \tilde{\varphi}(\zeta_T)$ has no $d\bar{W}$ term.
This has a variance reducing property. Writing $\dot{=}$ whenever two
sides of an equality differ only by integrals with respect to $ds$, we have
\begin{align}\label{variancereduction}
Z_T \tilde{\varphi}(\zeta_T) & = Z_t \tilde{\varphi}(\zeta_t) + \int_t^T
d(Z_s \tilde{\varphi}(\zeta_s)) \dot{=} Z_t \tilde{\varphi}(\zeta_t) + \int_t^T \left( Z_s
\tilde{\varphi}'(\zeta_s) d\zeta_s + \tilde{\varphi}(\zeta_s) dZ_s \right) \\
& = Z_t \tilde{\varphi}(\zeta_t) + \int_t^T Z_s \left( -\kappa_s
\tilde{\varphi}'(\zeta_s) \zeta_s - \sigma^Z_s \tilde{\varphi}(\zeta_s) \right)d\bar{W}_s.
\end{align}
Therefore, we set:
\begin{align}\label{sigmaZdef}
\sigma^Z_s \equiv - \kappa_s \frac{\zeta_s \tilde{\varphi}'(\zeta_s) }
{ \tilde{\varphi}(\zeta_s) },
\end{align}
which cancels the $d\bar{W}$ term in \eqref{variancereduction}, and in turn in \eqref{greduced}.
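In code this drift is a one-liner; a sketch (ours), with \texttt{phi\_tilde} and \texttt{phi\_tilde\_prime} hypothetical callables for $\tilde\varphi$ and $\tilde\varphi'$:
\begin{verbatim}
# Sketch: variance-reducing drift (sigmaZdef), evaluated along the path.
def sigma_Z(kappa, zeta, phi_tilde, phi_tilde_prime):
    return -kappa * zeta * phi_tilde_prime(zeta) / phi_tilde(zeta)
\end{verbatim}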
With this in mind, we now present the numerical algorithm for calculating $g(t, \zeta, X)$.
\vspace{3mm}
\renewcommand{\tablename}{Algorithm}
\begin{table}
\caption{Computing $g(t_n, \zeta, X)$ and $h(t_n, w, \zeta, X, \tilde{\theta})$}
\begin{tabular}{ l p{14cm}}
\hline
\hline
\emph{Step 1:} & \textbf{Initialisation.} Recall $t = t_n$. Generate $M$ paths
of Brownian motion $\bar{W}^i_t$, $i = 1, 2, \dots, M$, with values evaluated
at $t = t_n, t_{n+1}, \dots, t_N$. The corresponding paths for $\zeta$, $X$,
$w$ and $Z$ are denoted by $\zeta^i$, $X^i$, $w^i$ and $Z^i$ with
$\zeta^i_{t_n} = \zeta$, $X^i_{t_n} = X$, $Z^i_{t_n} = 1$ and $w^i_{t_n} = w$. \\
\emph{Step 2:} & \textbf{Simulation.} For $k = n, n+1, \dots, N-1$, update
$\zeta^i_{t_{k+1}}$, $X^i_{t_{k+1}} $, $Z^i_{t_{k+1}}$ and $w^i_{t_{k+1}}$
as follows. Equations \eqref{sigmaZdef} and \eqref{WQdef} give us the
corresponding $dW_{t_k}$. We then use \eqref{zeta}, \eqref{Xdyn},
\eqref{Zdyn} and \eqref{wdyn} to move to the next time point using the Euler scheme.
\\
\emph{Step 3:} & \textbf{Averaging.} Having calculated paths $\zeta^i$, $X^i$, $Z^i$
and $w^i$ corresponding to $M$ paths of $\bar{W}^i$, we return the approximate
values of $g$ and $h$:
\begin{align}
g(t_n, \zeta, X) \approx \frac{1}{M} \sum_{i=1}^M \left( \sum_{k=n}^{N-1}
Z^i_{t_k} \tilde{U}(t_k, \zeta^i_{t_k})\,(t_{k+1} - t_k) + Z_{t_N}^i \tilde{\varphi}(\zeta_{t_N}^i) \right)
\label{gdiscretized}
\\
h(t_n, w, \zeta, X, \tilde{\theta}) \approx \frac{1}{M} \sum_{i = 1}^M Z^i_{t_N}
\left(\tilde{\varphi}(\zeta_{t_N}^i) - \varphi(w_{t_N}^i) + w_{t_N}^i \zeta_{t_N}^i \right).
\end{align} \\
\hline
\hline
\end{tabular} \label{algo2}
\end{table}
\renewcommand{\tablename}{Table}
\vspace{3mm}
The computational complexity of Algorithm \ref{algo2} comes from
\eqref{gdiscretized}, where we clearly see that we need
$\mathcal{O}(N) \times \mathcal{O}(M)$ operations. Therefore,
we deduce that $\mathcal{O}(g) = \mathcal{O}(MN)$.
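To make the structure of Algorithm \ref{algo2} concrete, here is a condensed sketch (ours) for a complete one-asset market with constant coefficients, \emph{without} the importance-sampling twist (so $Z \equiv 1$); \texttt{U\_tilde} and \texttt{phi\_tilde} are hypothetical callables for the convex duals.
\begin{verbatim}
import numpy as np

# Sketch: vectorised Monte Carlo estimate of g(0, zeta0, .) over M paths.
def estimate_g(zeta0, kappa, r, U_tilde, phi_tilde, T, N, M, rng):
    dt = T / N
    zeta = np.full(M, zeta0)
    g_acc = np.zeros(M)
    for k in range(N):
        g_acc += U_tilde(k * dt, zeta) * dt          # integral term
        dW = rng.standard_normal(M) * np.sqrt(dt)
        zeta *= 1.0 - kappa * dW - r * dt            # Euler step of (zeta)
    return np.mean(g_acc + phi_tilde(zeta))

# usage: g0 = estimate_g(1.0, 0.25, 0.05, Ut, pht, 5.0, 100, 10000,
#                        np.random.default_rng(0))
\end{verbatim}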
The key to performance of the method is of course the accuracy of the Monte Carlo
simulation. As we shall see in the following Section, the numerical results are
promising. Even a fairly moderate number of Monte Carlo paths can provide a good
approximation to the true value of $g$ and $h$. With this in mind, we proceed to
examine the numerical results for the performance of the method.
\vspace{3mm}
\section{Introduction.}\label{sec:introduction}
From the early work of Merton and his seminal papers \cite{Merton:1971}
and \cite{Merton:1969}, the optimal investment literature has been
trying to determine how to invest in financial
markets when facing uncertainty. Over the following
twenty years, many general results were proved, and many techniques for
tackling the questions were developed.
Deriving the abstract forms of the solutions is a great
achievement of mathematical finance. However, anyone who wants
to use them as a guide in making investment decisions
will quickly find out that they are typically rather uninformative.
This is because, apart from a couple of highly stylized examples,
concrete numerical answers in optimal investment problems are simply
unobtainable, largely due to the curse of dimensionality.
See \cite{Rogers:2013} for a survey of the traditional methods
and a range of examples where answers can actually be found.
The goal of this paper is to take a pragmatic approach. We take the
point of view of an investor who is facing a particular market and
is interested in knowing a \emph{good} thing to do at a particular
time. Hence, we want to be able to describe what a good investment
strategy is in a particular market environment \emph{without}
computing the whole value function for the problem, and we
want to quantify what we mean by a good investment strategy, in
terms of bounds on the objective.
Taking this standpoint lets us make progress by combining various
optimization techniques that would fail individually when applied
to difficult optimal investment problems. Namely, we shall use
the Pontryagin--Lagrange approach to determine locally optimal
trajectories; the dual formulation of the optimal investment
problem to derive bounds on the optimal trajectory; and
Monte Carlo techniques to approximate the expectation operator.
Combining these related methods lets us handle a surprisingly large
class of problems. We will show how to find approximately optimal
investment paths for any continuous-path incomplete market driven
by a {diffusion} factor process. As an illustration
of the effectiveness of the method, we shall provide a couple of
numerical examples. We shall start with the benchmark Merton
problem, moving on to problems that are increasingly more
difficult to handle numerically and mathematically.
This paper is structured as follows. In Section \ref{sec:problem} we
present the general problem and the methodology for solving it. Section \ref{algos} describes the algorithms used in the method. In
Section \ref{sec:numerics} we give numerical evidence for the
performance of the method, considering examples of the Merton
problem, non-constant relative risk aversion, and finally
a multi-dimensional incomplete market driven by a diffusion.
Section \ref{sec:conclusions} concludes.
| {
"timestamp": "2013-05-16T02:01:15",
"yymm": "1305",
"arxiv_id": "1305.3433",
"language": "en",
"url": "https://arxiv.org/abs/1305.3433",
"abstract": "This paper sets up a methodology for approximately solving optimal investment problems using duality methods combined with Monte Carlo simulations. In particular, we show how to tackle high dimensional problems in incomplete markets, where traditional methods fail due to the curse of dimensionality.",
"subjects": "Computational Finance (q-fin.CP)",
"title": "Monte Carlo approximation to optimal investment",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9808759649262344,
"lm_q2_score": 0.8056321843145404,
"lm_q1q2_score": 0.7902252461651548
} |
https://arxiv.org/abs/1901.01803 | Solving Eigenvalue Problems in a Discontinuous Approximation Space by Patch Reconstruction | We adapt a symmetric interior penalty discontinuous Galerkin method using a patch reconstructed approximation space to solve elliptic eigenvalue problems, including both second and fourth order problems in 2D and 3D. It is a direct extension of the method recently proposed to solve corresponding boundary value problems, and the optimal error estimates of the approximation to eigenfunctions and eigenvalues are instant consequences from existing results. The method enjoys the advantage that it uses only one degree of freedom on each element to achieve very high order accuracy, which is highly preferred for eigenvalue problems as implied by Zhang's recent study [J. Sci. Comput. 65(2), 2015]. By numerical results, we illustrate that higher order methods can provide much more reliable eigenvalues. To justify that our method is the right one for eigenvalue problems, we show that the patch reconstructed approximation space attains the same accuracy with fewer degrees of freedom than classical discontinuous Galerkin methods. With the increasing of the polynomial order, our method can even achieve a better performance than conforming finite element methods, such methods are traditionally the methods of choice to solve problems with high regularities. | \section{Introduction}\label{sec:intro}
In this paper, we consider the numerical method for
solving eigenvalue problems of $2p$-th order elliptic operator for
$p=1$ and $2$. Those problems arise in many important applications.
The Laplace eigenvalue problem occurs naturally in vibrating elastic
membranes, electromagnetic waveguides and acoustic theory, and the
biharmonic eigenvalue problem appears in mechanics and inverse
scattering theory.
The conforming finite element method (FEM) for eigenvalue problems has
been well investigated. We refer to the review papers of Kuttler and
Sigillito \cite{kuttler1984eigenvalues} and Boffi
\cite{boffi2010finite} for the details. For the biharmonic operator,
we have the commonly used $C^1$ Argyris element \cite{argyris1968tuba}
and the $C^0$ interior penalty Galerkin method ($C^0$ IPG)
\cite{engel2002continuous, brenner2011c, brenner2015C0}. A classical but
still active topic for eigenvalue problems is the derivation of upper and
lower bounds, dating back to \cite{forsythe1954asymptotic}. It is well known that the conforming
FEM can easily achieve the upper bound of the eigenvalues. In
\cite{armentano2003mass} and \cite{hu2004lower}, the lower bound was
achieved by mass lumping, see also other methods in
\cite{liu2013verified, boffi2010finite, armentano2004asymptotic}. Hu
et al. \cite{hu2014lower1, hu2014lower2, hu2016guaranteed} proposed a
systematic method to produce lower bounds by nonconforming
approximation spaces. The discontinuous Galerkin (DG) method, see for
example \cite{cockburn2000development, arnold2002unified,
brenner2008locally}, has been applied to the Laplace eigenvalue
problem \cite{antonietti2006discontinuous} and the Maxwell eigenvalue
problem~\cite{hesthaven2003high, warburton2006role}. As a
nonconforming approximation, the DG method admits a totally
discontinuous polynomial space, which leads to great flexibility,
though its efficiency in the number of degrees of freedom (DOF) has been
challenged \cite{hughes2000comparison}.
In a recent work \cite{zhang2015how}, Zhang studied an interesting
issue on the number of ``trusted'' eigenvalues computed by finite element
approximation for elliptic eigenvalue problems. It was pointed out
therein that only eigenvalues lower in the spectrum can
achieve the optimal convergence rate. Furthermore, the percentage of
reliable eigenvalues will decrease on a finer mesh even if we relax
the convergence rate to linear. Typically, the optimal convergence
rate of the elliptic eigenvalue problem is $h^{2(m+1-p)}$, where $m$
is the polynomial degree. This implies that high order
methods are more likely to provide a greater number of reliable
eigenvalues, measured relative to the DOFs used, than lower order
methods.
Motivated by Zhang's result, in this paper we aim to
apply a symmetric interior penalty discontinuous Galerkin method to
elliptic eigenvalue problems. The method adopts a discontinuous
approximation space proposed in \cite{li2016discontinuous}, where it
was applied to solve elliptic boundary value problems. The core of the
method is to construct an approximation space by the patch
reconstruction technique in a way that one DOF is used in each
element. The reconstructed space is a piecewise polynomial space and
is discontinuous across the element face, thus it is a subspace of the
traditional DG space. The idea has been successfully applied to the
biharmonic equation \cite{li2017discontinuous} and the Stokes equation
\cite{li2018finite, li2018discontinuous}. For elliptic eigenvalue
problems, it is a direct extension of the method for boundary value
problems. Consequently, the optimal error estimates of the
approximation to eigenfunctions and eigenvalues can be obtained
instantly from existing results for arbitrary order accuracy.
We present all details on the numerical results to verify that higher
order methods can provide much more reliable eigenvalues, which
perfectly agrees with the theoretical prediction in
\cite{zhang2015how}. In comparison to the classical DG method, one may
see that the patch reconstructed approximation space attains the same
accuracy with far fewer degrees of freedom. When higher
order polynomials are used, the numerical results show that our method
can achieve better efficiency in the number of DOFs even than conforming
finite element methods. We note that for problems with high
regularities, conforming finite element methods
traditionally outperform the other methods in the number of DOFs. This new
observation on efficiency strongly encourages us
to apply our method with high order polynomials to elliptic eigenvalue
problems.
The rest of this paper is organized as follows. To be self-contained,
we describe in section \ref{sec:basis} the detailed process to
construct the approximation space and the approximation properties of
the corresponding space. The symmetric interior penalty method for
elliptic operators is presented in section \ref{sec:weakform}, and
the optimal error estimates are then given for the eigenvalues and
eigenfunctions. In section \ref{sec:examples}, we present the
numerical results to illustrate that the proposed method is efficient
for elliptic eigenvalue problems.
\section{Approximation Space}\label{sec:basis}
Let us consider a convex polygonal domain $\Omega$ in $\mb{R}^D$, $D=2,3$.
$\mathcal{T}_h$ is a polygonal partition of the domain $\Omega$. For each polygon
$K$, $h_K$ and $\abs{K}$ denote its diameter and area, respectively.
Besides, let $h{:}=\max_{K\in\mathcal{T}_h}h_K$. For the optimal convergence
analysis, the partition $\mathcal{T}_h$ is assumed to satisfy some shape
regularity conditions. Those regularity conditions are commonly used
in mimetic finite difference
schemes~\cite{Brezzi:2009,DaVeiga2014,Cangiani2011Convergence} and
discontinuous Galerkin method~\cite{Mu:2014}, which are stated as
follows:
\begin{enumerate}
\item[{\bf A1}\;]Any element $K \in \mathcal{T}_h$ admits a sub-decomposition
$\wt{\mathcal{T}_h}|_K$ that consists of at most $N_s$ triangles, where $N_s$
is an integer independent of $h$;
\item[{\bf A2}\;]All the triangles $T\in\wt{\mathcal{T}_h}$ are
shape-regular in the sense of
Ciarlet-Raviart~\cite{ciarlet2002finite}: there exists a real
positive number $\sigma$ independent of $h$ such that
$h_T/\rho_T\le\sigma$, where $\rho_T$ is the radius of the largest
ball inscribed in $T$; in this case, $\wt{\mathcal{T}_h}$ is called a
compatible sub-decomposition.
\end{enumerate}
The above regularity assumptions lead to some useful estimates, such
as the Agmon inequality, approximation properties and inverse inequalities.
These inequalities are the foundation for deriving the approximation
error estimates for the finite element method. We refer
to~\cite{li2016discontinuous} for the detailed discussion.
The reconstruction operator $\mc{R}$ is constructed from the given
partition $\mathcal{T}_h$. The degrees of freedom of $\mc{R}$ are located at
one point $x_K\in K$ in each element; these points are called the sampling
nodes or collocation points. We usually assign the barycenter of $K$
as the sampling node $x_K$. Furthermore, the reconstruction operator
$\mc{R}$ is defined element-wise. An element patch, denoted by $S(K)$,
is constructed for each element $K$; $S(K)$ is an agglomeration of
elements including $K$ itself and other elements near $K$. Let
$\mc{I}_K$ denote the set of sampling nodes belonging to $S(K)$, and let $\#
S(K)$ and $\# \mc{I}_K$ denote the number of elements belonging to
$S(K)$ and the number of sampling nodes belonging to $\mc{I}_K$,
respectively. Obviously, these two numbers are equal.
define $d_K{:}=\text{diam}\;S(K)$ and $d{:}=\max_{K\in\mathcal{T}_h}d_K$.
Here we specify one way to construct the element patch, although the
construction can be quite flexible;
see~\cite{li2012efficient,li2016discontinuous} for alternative
approaches, and the sketch after this paragraph for an illustration.
First, a constant number $t$, determined by the degree of the
polynomials, is assigned to $\# S(K)$. We then initialize $S(K)$ as
$\{ K \}$ and fill $S(K)$ by adding the nearest Von Neumann neighbor
(adjacent edge-neighboring element) of the current geometry of $S(K)$,
terminating the recursive process once $\# S(K)$ reaches $t$. With
this approach, all element patches contain the same number of
elements, which is convenient for the implementation, and the shape
regularity of the geometry of $S(K)$ is preserved. Since every
sampling node $x_K$ lies inside its element $K$ and every element
patch is a connected set, the stability of the reconstruction is quite
promising. The reconstruction process can be carried out element-wise
once the sampling nodes $\mc{I}_K$ and the element patch $S(K)$ are
specified.
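The construction just described can be sketched in a few lines of
Python. The data structures below (\texttt{neighbors}, a map from an
element to its edge-adjacent elements, and \texttt{centers}, the
element barycenters) are illustrative assumptions of this sketch, not
notation from the paper:
\begin{verbatim}
import numpy as np

def build_patch(K, neighbors, centers, t):
    # Grow S(K): starting from {K}, repeatedly add the Von Neumann
    # (edge-adjacent) neighbor of the current patch whose barycenter
    # is nearest to the sampling node x_K, until #S(K) = t.
    patch = {K}
    while len(patch) < t:
        cand = {n for e in patch for n in neighbors[e]} - patch
        if not cand:      # mesh exhausted before reaching t
            break
        patch.add(min(cand,
                      key=lambda e: np.linalg.norm(centers[e] - centers[K])))
    return sorted(patch)

# Toy chain of elements 0-1-2-3-4 with unit spacing:
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
centers = {e: np.array([float(e)]) for e in neighbors}
print(build_patch(2, neighbors, centers, t=3))   # -> [1, 2, 3]
\end{verbatim}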
Let $U_h$ be the piecewise constant space associated with $\mathcal{T}_h$,
i.e.,
\[
U_h{:}=\set{v\in L^2(\Omega)}{v|_K\in\mb{P}^0(K), \ \forall K \in \mathcal{T}_h}.
\]
For a piecewise constant function $v\in U_h$ and an element $K$, a
high-order approximation polynomial $\mc{R}_{K} v$ of degree $m$ can
be obtained by solving the following discrete local least-squares problem:
\begin{equation}\label{eq:leastsquares}
\mc{R}_{K} v=\arg
\min_{p\in\mb{P}^m(S(K))}\sum_{x\in\mc{I}_K}\abs{v(x)-p(x)}^2.
\end{equation}
We assume the problem \eqref{eq:leastsquares} has a unique solution
\cite{li2016discontinuous}. Now, we concentrate on the reconstruction
operator and the corresponding finite element space. Although
$\mc{R}_K v$ gives an approximation polynomial on element patch
$S(K)$, we only use it on element $K$. The global reconstruction
operator $\mc{R}$ is defined as:
\[
(\mc{R} v)|_K:= (\mc{R}_{K} v)|_K, \quad \forall K \in \mathcal{T}_h.
\]
The reconstruction operator $\mc{R}$ is thus a linear
operator mapping $U_h$ into a piecewise polynomial space,
denoted by
\[
V_h:=\mc{R}(U_h).
\]
Here, $V_h$ is the reconstructed finite element space, which is
spanned by the basis functions $\{\psi_K\}$ defined through the
reconstruction operator,
\[
\psi_K:= \mc{R} e_K,
\]
where $e_K\in U_h$ is the characteristic function corresponding to
$K$,
\begin{displaymath}
e_{K}(x)=\begin{cases} 1,\ x \in K,\\ 0,\ x \notin K.\\
\end{cases}
\end{displaymath}
Thereafter, the reconstruction operator can be expressed explicitly as
\[
\mc{R} g =\sum_{K \in \mathcal{T}_h} g(x_K) \psi_K(x) , \quad \forall g \in
U_h.
\]
We present a 3D example below to illustrate the implementation of the
reconstruction process; the details of the 1D and 2D implementations
can be found in \cite{li2017discontinuous} and \cite{li2018finite},
respectively. We consider a linear reconstruction on the cubic domain
$[0,1]^3$. The domain is partitioned into quasi-uniform tetrahedral
elements using {\it Gmsh}~\cite{geuzaine2009gmsh}, as shown in Figure
\ref{tetra_mesh}. We take element $K_0$ as an instance (see Figure
\ref{tetra_mesh}). The number of degrees of freedom demanded by a
linear reconstruction is $4$; therefore, $\# S(K_0)$ can be taken as
$5$. In this case, the element patch happens to consist of the element
itself and its 4 Von Neumann neighbors. Figure \ref{tetra_patch}
shows the geometry of the element patch and the corresponding sampling
nodes. The element patch $S(K_0)$ is chosen as
\[
S(K_0)=\left\{K_0,K_1,K_2,K_3,K_4\right\},
\]
and the sampling nodes are as follows,
\[
\mc I_{K_0} = \left\{ (x_{K_{i}}, y_{K_{i}}, z_{K_{i}}),\quad
i=0,1,2,3,4 \right\}.
\]
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{tetra_mesh.pdf}
\includegraphics[width=0.48\textwidth]{tetra_mesh_ele.pdf}
\caption{The tetrahedral mesh (left) and the element
$K_0$ (right).}
\label{tetra_mesh}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{tetra_mesh_patch.pdf}
\includegraphics[width=0.48\textwidth]{tetra_patch.pdf}
\caption{The shape of element patch (left) and the perspective
view of element patch and sampling nodes (right).}
\label{tetra_patch}
\end{center}
\end{figure}
For any continuous function $g$, we consider the linear
approximation as an illustration. For polynomial degree $m =1$,
the least-squares problem \eqref{eq:leastsquares} is specified as
\begin{displaymath}
\mc R_{K_0} g = \mathop{\arg \min}_{ (a, b, c, d) \in \mathbb R^4}
\sum_{i=0}^{4} |g(x_{K_{i}},y_{K_{i}},z_{K_{i}}) - (a + bx_{K_{i}} +
cy_{K_{i}}+ d z_{K_{i}})|^2.
\end{displaymath}
The solution of this problem is given by the Moore--Penrose
pseudo-inverse of the matrix $A$,
\begin{displaymath}
[a,b,c,d]^{T}=(A^TA)^{-1}A^T q,
\end{displaymath}
where $A$ and $q$ are
\begin{displaymath}
A = \begin{bmatrix} 1 & x_{K_{0}} & y_{K_{0}} & z_{K_{0}} \\ 1 &
x_{K_{1}} & y_{K_{1}} & z_{K_{1}} \\ 1 & x_{K_{2}} & y_{K_{2}} &
z_{K_{2}} \\ 1 & x_{K_{3}} & y_{K_{3}} & z_{K_{3}} \\ 1 &
x_{K_{4}} & y_{K_{4}} & z_{K_{4}}
\end{bmatrix}, \quad
q = \begin{bmatrix} g(x_{K_{0}},y_{K_{0}},z_{K_{0}})
\\ g(x_{K_{1}},y_{K_{1}},z_{K_{1}})
\\ g(x_{K_{2}},y_{K_{2}},z_{K_{2}})
\\ g(x_{K_{3}},y_{K_{3}},z_{K_{3}})
\\ g(x_{K_{4}},y_{K_{4}},z_{K_{4}})
\end{bmatrix}.
\end{displaymath}
A direct observation is that the matrix $(A^T A)^{-1}A^T$ does not
depend on the function $g$ being interpolated. Moreover, the columns
of $(A^T A)^{-1}A^T$ store the polynomial coefficients of the basis
functions $\psi_{K_i}$, $i=0, \cdots, 4$, on the element $K_0$. All
the basis functions, and hence the finite element space $V_h$, are
determined once the reconstruction process has been carried out on
every element $K\in \mathcal{T}_h$. Clearly, the basis functions are
discontinuous across element interfaces.
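As a minimal numerical companion to this example (the coordinates
below are made up; only the algebra mirrors the text), the
reconstruction can be computed with NumPy's pseudo-inverse. Since the
sampled function is itself linear and $m=1$, the reconstruction
reproduces it exactly, in accordance with the invariance property
stated in Lemma \ref{theorem:localapp} below:
\begin{verbatim}
import numpy as np

# Made-up sampling nodes of the patch S(K_0), one per element.
nodes = np.array([[0.30, 0.40, 0.50],    # x_{K_0}
                  [0.20, 0.35, 0.55],
                  [0.35, 0.50, 0.45],
                  [0.40, 0.30, 0.60],
                  [0.25, 0.45, 0.35]])
g = lambda x, y, z: 1.0 + 2.0 * x - y + 0.5 * z   # sampled function

A = np.column_stack([np.ones(len(nodes)), nodes])  # rows (1, x, y, z)
q = g(*nodes.T)

M = np.linalg.pinv(A)     # equals (A^T A)^{-1} A^T for full-rank A
coeffs = M @ q            # coefficients (a, b, c, d) of R_{K_0} g
assert np.allclose(coeffs, [1.0, 2.0, -1.0, 0.5])  # linear g reproduced
# Column i of M holds the coefficients of psi_{K_i}: take q = e_i.
\end{verbatim}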
Next, for completeness, we report the results on the
properties of the reconstruction operator. Following
\cite{li2012efficient}, we first make the following assumption.
{\bf Assumption A}\; For any $K\in\mathcal{T}_h$ and $g\in\mb{P}^m(S(K))$,
\begin{equation}\label{assumption:uniqueness}
g|_{\mc{I}_K}=0\quad\text{implies}\quad g|_{S(K)}\equiv 0.
\end{equation}
This assumption implies the uniqueness of the solution to the
least-squares problem \eqref{eq:leastsquares}. A necessary condition
for {\bf Assumption A} is that the number $\# \mc{I}_K$ be no less
than $\text{dim}(\mb{P}^m)$, which equals $m+1$, $(m+1)(m+2)/2$ and
$(m+1)(m+2)(m+3)/6$ in 1D, 2D and 3D, respectively. A
constant $\Lambda(m,\mathcal{I}_K)$ is defined as in
\cite{li2012efficient}:
\begin{equation}\label{eq:cons}
\Lambda(m, \mathcal{I}_K){:}=\max_{p\in \mathbb{P}^m(S(K))}
\dfrac{\nm{p}{L^\infty(S(K))}}{\nm{p|_{\mc{I}_K}}{\ell_\infty}}.
\end{equation}
A uniform upper bound on this constant can be obtained by adding some
constraints on the element patches and the partition; see \cite{
li2016discontinuous} for the details. We have the following
properties of the reconstruction operator $\mc{R}_K$.
\begin{lemma}\label{theorem:localapp}
\cite[Theorem 3.3]{li2012efficient} If {\em Assumption A} holds, then
there exists a unique solution to~\eqref{eq:leastsquares}. Moreover
$\mc{R}_{K}$ satisfies
\begin{equation}\label{eq:invariance}
\mc{R}_Kg=g\quad\text{for all\quad}g\in\mb{P}^m(S(K)).
\end{equation}
The stability property holds true for any $K\in\mathcal{T}_h$ and $g\in
C^0(S(K))$ as
\begin{equation}\label{eq:continuous}
\nm{\mc{R}_K g}{L^{\infty}(K)}\le\Lambda(m , \mc{I}_K) \sqrt{\#
\mc{I}_K}\nm{g|_{\mc{I}_K}}{\ell_\infty},
\end{equation}
and the quasi-optimal approximation property is valid in the sense
\begin{equation}\label{eq:approximation}
\nm{g -\mc{R}_K g}{L^{\infty}(K)}\le\Lambda_m
\inf_{p\in\mb{P}^m(S(K))} \nm{g - p}{L^{\infty}(S(K))}, \quad
\forall K \in \mathcal{T}_h,
\end{equation}
where $\Lambda_m{:}=\max_{K\in \mathcal{T}_h}
\{1+\Lambda(m,\mathcal{I}_K)\sqrt{\# \mathcal{I}_K}\}$.
\end{lemma}
With Lemma \ref{theorem:localapp} and the interpolation result in
\cite{DupontScott:1980}, the local estimates on element $K$ can be
obtained.
\begin{lemma} \label{theorem:normapp}
\cite[Lemma 2.4]{li2016discontinuous} Let $g \in C^0
\left(\Omega \right) \cap H^{m+1}(\Omega)$. Then there exists a
constant $C$ that depends on $N_s$ and $\sigma$, but is independent of
$h$, such that
\begin{equation}\label{eq:l2app}
\nm{g-\mc{R} g}{L^{2}(K)}\le C\Lambda_m h_Kd_K^m\snm{g}{H^{m+1}(K)},
\end{equation}
and
\begin{equation}\label{eq:h1app}
\nm{\nabla(g-\mathcal{R} g)}{L^2(K)} \le
C\Lr{h_K^m+\Lambda_{m}d_K^m}\snm{g}{H^{m+1}(K)}.
\end{equation}
\end{lemma}
\section{Elliptic Eigenvalue Problems}\label{sec:weakform}
Let us consider the $2p$-th order $(p=1,2)$ elliptic eigenvalue
problems. For $p=1$, the second order elliptic eigenvalue problem
reads:
\begin{equation}\label{eq:elliptic}
\left\{
\begin{array}{ll}
- \Delta u = \lambda u, & \text{in } \Omega,\\ [1.5ex] u =0, &
\text{on } \partial \Omega,
\end{array}\right.
\end{equation}
and the corresponding weak form is: find $\lambda \in \mathbb{R}$ and
$ u\in V=H^{1}_{0}(\Omega)$, with $ u \neq 0$, such that
\begin{equation*}
a(u,v)=\lambda(u,v),\quad \forall v\in V,
\end{equation*}
where $a(u,v)=\int_{\Omega} \nabla u \cdot \nabla v\mathrm{d}x$ and $(u, v) :=
\int_\Omega u v \mathrm{d}x$.
For $p=2$, the biharmonic eigenvalue problem reads:
\begin{equation}\label{eq:harmonic}
\left\{
\begin{array}{ll}
\Delta^2 u = \lambda u, & \text{in } \ \Omega,\\[1.5ex] u
=\dfrac{\partial u}{\partial \boldsymbol{\mathrm{n}}}=0, & \text{on } \partial \Omega,
\end{array}\right.
\end{equation}
and the corresponding weak form is: find $\lambda \in \mathbb{R}$ and
$ u\in V=H^{2}_{0}(\Omega)$, with $ u \neq 0$, such that
\begin{equation*}
a(u,v)=\lambda(u,v),\quad \forall v\in V,
\end{equation*}
where $a(u,v)=\int_{\Omega} \Delta u \Delta v\mathrm{d}x$.
The discretized variational problem for equations \eqref{eq:elliptic}
and \eqref{eq:harmonic} reads: find $\lambda_h \in \mathbb{R}$ and $
u_h\in U_h$, with $ u_h \neq 0$, such that
\begin{equation}\label{eq:weak_form}
a_{h}(\mc{R} u_h,\mc{R}v_h)=\lambda_h(\mc{R} u_h, \mc{R} v_h),\quad
\forall v_h \in U_h.
\end{equation}
Here we use the notations $a$ and $a_h$ for unification. In the rest of
the paper, we will specify the meaning of the notation whenever a
particular equation is considered.
The symmetric interior penalty method is employed to discretize the
elliptic operators. For the second order elliptic operator,
$a_h(\cdot, \cdot)$ is
\begin{equation}\label{elliptic_operator}
\begin{aligned}
a_h(v,w){:}=&\sum_{K\in \mathcal{T}_h}\int_{K}\nabla v\cdot\nabla
w\mathrm{d}x\\ &-\sum_{e\in \mc{E}_h}\int_{e}\Lr{\aver{\nabla
v}\cdot\jump{w}+\aver{\nabla w}\cdot\jump{v}}\mathrm{d}s\\ &+\sum_{e\in
\mc{E}_h}\int_{e}\eta_eh_e^{-1}\jump{v}\cdot\jump{w}\mathrm{d}s,
\end{aligned}
\end{equation}
and for the biharmonic operator, $a_h(\cdot, \cdot)$ is
\begin{equation}\label{biharmonic_operator}
\begin{aligned}
a_h(v, w){:}=&\sum_{K\in \mathcal{T}_h}\int_{K}\Delta v\Delta
w\mathrm{d}x\\ &+\sum_{e\in \mc{E}_h}\int_{e}\Lr{ \jump{v}
\aver{\nabla\Delta w} +\jump{w} \aver{\nabla\Delta v} } \mathrm{d}s
\\ &-\sum_{e\in \mc{E}_h}\int_{e}\Lr{ \aver{\Delta w}\jump{\nabla
v}+\aver{\Delta v} \jump{\nabla w} } \mathrm{d}s\\ &+\sum_{e\in
\mc{E}_h}\int_{e}\Lr{ \alpha_e h_e^{-3 }\jump{v} \cdot \jump{w}
+\beta_e h_e^{-1} \jump{\nabla v} \jump{\nabla w} } \mathrm{d}s,
\end{aligned}
\end{equation}
where $\eta_e,\alpha_e,\beta_e$ are positive constants. Here we let
$\mc{E}_h$ denote the collection of all the faces of $\mathcal{T}_h$,
$\mc{E}_h^i$ denote the collection of the interior faces. The set of
boundary faces is denoted as $\mc{E}_h^b$, and then
$\mc{E}_h=\mc{E}_h^i\cup \mc{E}_h^b$. Let $e$ be an interior face
shared by two neighbouring elements $K^+, K^-$, and let $\boldsymbol{\mathrm n}^+$
and $\boldsymbol{\mathrm n}^-$ denote the corresponding outward unit normals.
For the scalar-valued function $q$ and the vector-valued function
$\boldsymbol{v}$, the \emph{average} operator $\aver{\cdot}$ and the
\emph{jump} operator $\jump{\cdot}$ are defined as
\begin{displaymath}
\{q\}=\frac12(q^++q^-), \quad\{\boldsymbol v\}=\frac12(\boldsymbol v^++\boldsymbol v^-),
\end{displaymath}
and
\begin{displaymath}
[ \hspace{-2pt} [ q] \hspace{-2pt} ]=\boldsymbol{\mathrm n^+}q^++\boldsymbol{\mathrm n^-}q^-, \quad[ \hspace{-2pt} [\boldsymbol v] \hspace{-2pt} ]=
\boldsymbol{\mathrm n^+}\cdot\boldsymbol v^++\boldsymbol{\mathrm n^-}\cdot\boldsymbol v^-.
\end{displaymath}
Here $q^+=q|_{K^+}$, $\boldsymbol v^+=\boldsymbol v|_{K^+}$ and $q^-=q|_{K^-}$, $\boldsymbol
v^-=\boldsymbol v|_{K^-}$. For $e\in\mc{E}_h^b$, we set
\begin{displaymath}
\{q\}=q|_{K},\quad [ \hspace{-2pt} [ q] \hspace{-2pt} ]=\boldsymbol{\mathrm n}q|_{K},
\end{displaymath}
and
\begin{displaymath}
\{\boldsymbol v\}=\boldsymbol v|_{K},\quad [ \hspace{-2pt} [\boldsymbol v] \hspace{-2pt} ]=\boldsymbol{\mathrm n}\cdot\boldsymbol
v|_{K}.
\end{displaymath}
We note that the problem \eqref{eq:weak_form} is equivalent to the
following problem: find $\lambda_h \in \mathbb{R}$ and $\varphi_h\in
V_h$, with $ \varphi_h \neq 0$, such that
\[
a_h(\varphi_h,\psi_h)=\lambda_h(\varphi_h,\psi_h),\quad \forall
\psi_h\in V_h.
\]
This is a more standard formulation for finite element methods. The
formulation \eqref{eq:weak_form} emphasizes that the number of
DOFs of the approximation space is always $\text{dim}(U_h)$.
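In practice, once the stiffness matrix and the mass matrix have been
assembled in the basis $\{\psi_K\}$ (one DOF per element), problem
\eqref{eq:weak_form} is a generalized symmetric eigenvalue problem.
The following is a minimal sketch of the algebraic solve with SciPy;
the assembly routines, the matrix names, and the shift-invert strategy
are assumptions of the sketch, not prescriptions of the paper:
\begin{verbatim}
from scipy.sparse.linalg import eigsh

def smallest_eigenpairs(A_h, M_h, k=20):
    # A_h[i, j] = a_h(psi_j, psi_i) and M_h[i, j] = (psi_j, psi_i) are
    # assumed to be assembled elsewhere as sparse symmetric matrices.
    # Shift-invert about sigma = 0 targets the smallest eigenvalues of
    # A_h x = lambda M_h x, suitable since the spectrum is unbounded.
    vals, vecs = eigsh(A_h, k=k, M=M_h, sigma=0.0, which="LM")
    return vals, vecs
\end{verbatim}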
We define the energy norms $\|\cdot\|_{h}$ and $\enernm{\cdot}_{h}$
for any $v\in V_h=\mc{R}(U_h)$ as:
\begin{equation}\label{eq:energy_norms}
\begin{aligned}
\|v\|_{h}^2&=\sum_{K\in \mathcal{T}_h}\nm{\nabla v}{L^2(K)}^{2} + \sum_{e\in
\mc{E}_h}h_e^{-1}\nm{\jump{v}}{L^2(e)}^2,\\
\enernm{v}_{h}^2&=\sum_{K\in \mathcal{T}_h}\nm{\Delta v}{L^2(K)}^{2} +
\sum_{e\in \mc{E}_h}h_e^{-3}\nm{\jump{v}}{L^2(e)}^2 + \sum_{e\in
\mc{E}_h}h_e^{-1}\nm{\jump{\nabla v}}{L^2(e)}^2.
\end{aligned}
\end{equation}
From Lemma \ref{theorem:normapp} and the Agmon inequality, the
following interpolation estimates for the reconstruction operator in
the energy norms follow directly.
\begin{lemma}\label{lemma:approximate_energy_error}
\cite[Equation 3.4]{li2016discontinuous} \cite[Theorem
2.1]{li2017discontinuous} Let $u\in H^{m+1}(\Omega)$, and let
$\mc{R}u\in V_h$ be the interpolation polynomial of $u$. Then there
exists a constant $C$ that depends on $N_s$, $\sigma$ and $m$,
but is independent of $h$, such that
\begin{equation}\label{eq:approx_energy_error}
\begin{split}
\|u-\mc{R}u \|_h \leq & C (h^{m}+\Lambda_{m}d^{m}) | u
|_{H^{m+1}(\Omega)} ,\\ \enernm{u-\mc{R}u}_{h}\leq & C
(h^{m-1}+\Lambda_{m}d^{m-1}) | u |_{H^{m+1}(\Omega)}.
\end{split}
\end{equation}
\end{lemma}
Next, we state the boundedness and coercivity of the bilinear operator
$a_h(\cdot,\cdot)$ in \eqref{elliptic_operator} and
\eqref{biharmonic_operator}.
\begin{lemma}\label{lemma:bilinear_operator_property}
\cite[Equations 4.4,4.10]{arnold2002unified} If the penalty
constant $\eta_e$ is sufficiently large, then the bilinear operator
\eqref{elliptic_operator} is bounded and coercive, indeed there exist constants $C_b$
and $C_s$, such that
\begin{equation}\label{eq:elliptic_property}
\begin{split}
a_h(\mc{R}v_h,\mc{R}v_h)&\geq C_b \|\mc{R}v_h\|_{h}^2, \quad \forall
v_h \in U_h,\\ a_h(\mc{R}u_h,\mc{R}v_h)&\leq C_s \|\mc{R}u_h\|_{h}
\|\mc{R}v_h\|_{h}, \quad \forall u_h, v_h \in U_h.
\end{split}
\end{equation}
\cite[Lemmata 3.1, 3.2]{li2017discontinuous} If the penalty constants
$\alpha_e,\beta_e$ are sufficiently large, then there exist constants
$C_b$ and $C_s$, such that the bilinear operator
\eqref{biharmonic_operator} satisfies
\begin{equation}\label{eq:biharmonic_property}
\begin{split}
a_h(\mc{R}v_h,\mc{R}v_h)&\geq C_b \enernm{\mc{R}v_h}_{h}^2, \quad
\forall v_h \in U_h,\\ a_h(\mc{R}u_h,\mc{R}v_h)&\leq C_s
\enernm{\mc{R}u_h}_{h} \enernm{\mc{R}v_h}_{h}, \quad \forall u_h, v_h
\in U_h.
\end{split}
\end{equation}
\end{lemma}
We refer to \cite{arnold2002unified,li2017discontinuous} for the
proof.
To derive the error estimates, we introduce the sum space
$V(h)=V+\mc{R}(U_h)$ and endow it with the energy norm
\eqref{eq:energy_norms}, denoted by $\|\cdot\|_{V(h)}$ for
unification,
\begin{displaymath}
\|\cdot\|_{V(h)}=
\begin{cases}
\|\cdot\|_{h},\ p=1,\\ \enernm{\cdot}_{h},\ p=2.\\
\end{cases}
\end{displaymath}
Let $\lambda^{(i)}$, $i\in \mathbb{N}$, denote the sequence of
eigenvalues of \eqref{eq:elliptic} and \eqref{eq:harmonic} with the
natural numbering
\begin{displaymath}
\lambda^{(1)}\leq \lambda^{(2)} \leq \cdots \leq \lambda^{(i)}\leq
\cdots,
\end{displaymath}
and the corresponding eigenfunctions with the standard normalization
$\|u^{(i)}\|=1$
\begin{displaymath}
u^{(1)},u^{(2)},\cdots,u^{(i)},\cdots,
\end{displaymath}
which are orthogonal to each other
\begin{displaymath}
(u^{(i)},u^{(j)})=0, \quad \text{if}\ i\neq j.
\end{displaymath}
Let $N=\text{dim}(V_h)$; then the discrete eigenvalues of
\eqref{eq:weak_form} can be ordered as follows:
\begin{displaymath}
\lambda^{(1)}_{h}\leq \lambda^{(2)}_{h} \leq \cdots \leq
\lambda^{(N)}_{h},
\end{displaymath}
and the discrete eigenfunctions with the normalization
$\|\mc{R}u^{(i)}_{h}\|=1$,
\begin{displaymath}
\mc{R}u^{(1)}_{h},\mc{R}u^{(2)}_h,\cdots,\mc{R}u^{(N)}_{h},
\end{displaymath}
which satisfy the same orthogonality relations
\begin{displaymath}
(\mc{R}u^{(i)}_{h},\mc{R}u^{(j)}_{h})=0, \quad \text{if}\ i\neq j.
\end{displaymath}
The convergence analysis for the eigenvalue problem
\eqref{eq:weak_form} can be obtained by the Babu\v{s}ka-Osborn
theory~\cite{Osborn1991Eigenvalue}. We define the following continuous
and discrete solution operators:
\begin{equation}\label{eq:solution_operator}
\begin{split}
T&:L^{2}(\Omega) \rightarrow V \quad a(Tf,v)=(f,v), \ \forall v\in
V,\\ T_h&:L^{2}(\Omega) \rightarrow \mc{R}(U_h) \quad a_{h}(T_h
f,\mc{R}v)=(f,\mc{R}v), \ \forall v\in U_h.
\end{split}
\end{equation}
Obviously the operators $T$ and $T_h$ are self-adjoint, and by
elliptic regularity there exists $\epsilon>0 $ such that
\[
\|Tf - T_hf\|_{V(h)}\leq C h^{\epsilon} \|f\|_{L^{2}(\Omega)}.
\]
Consequently, the discrete solution operators converge in norm,
\begin{equation}\label{solution_operator_approx}
\lim_{h\rightarrow 0} \|T - T_h\|_{\mc{L}(V(h))}=0.
\end{equation}
Let $\sigma(T),\sigma(T_h)$ and $\rho(T),\rho(T_h)$ denote the
spectrum and the resolvent set of the solution operator $T$ and $T_h$,
respectively. Define the resolvent operators as follows
\begin{equation*}
\begin{split}
R_z(T):=&(z-T)^{-1},\ \forall z\in \rho(T),\quad V\rightarrow
V,\\ R_z(T_h):=& (z-T_h)^{-1},\ \forall z\in \rho(T),\quad
\mc{R}(U_h)\rightarrow \mc{R}(U_h).
\end{split}
\end{equation*}
Then the first result of convergence is that there is no pollution of
the spectrum.
\begin{theorem}\label{Non-pollution of the spectrum}
\cite[Theorem 9.1]{boffi2010finite} Assume that the norm convergence
\eqref{solution_operator_approx} holds. Then for any compact set
$K \subset \rho(T)$, there exists $h_0>0$ such that, for all
$h<h_0$, we have
\[
K \subset \rho(T_h).
\]
If $\mu \in\sigma(T)$ is a non-zero eigenvalue with algebraic
multiplicity $k$, then exactly $k$ discrete eigenvalues of $T_h$
converge to $\mu$ as $h$ tends to zero.
\end{theorem}
Let $\Gamma \subset \rho(T)$ be an arbitrary closed smooth curve
which encloses $\mu\in\sigma(T)$ and no other elements of
$\sigma(T)$. We define the Riesz spectral projection operators $E$, $E_h$ by:
\begin{equation*}\label{eq:spectral_operator}
\begin{split}
E&:L^{2}(\Omega) \rightarrow V \quad E(\lambda)=\frac{1}{2\pi
i}\int_{\Gamma} R_z(T)\,dz, \\ E_h&:L^{2}(\Omega) \rightarrow
\mc{R}(U_h) \quad E_{h}(\lambda)=\frac{1}{2\pi i}\int_{\Gamma}
R_z(T_h)\,dz.
\end{split}
\end{equation*}
When $h$ is sufficiently small, we have $\Gamma \subset \rho(T_h)$ and
$\Gamma$ encloses exactly $k$ eigenvalues of $T_h$. More precisely,
the dimensions of $E(\mu)V$ and $E_{h}(\mu)\mc{R}(U_h)$ are both equal
to $k$. Further we have
\begin{equation}\label{eigenspace_approx}
\lim_{h\rightarrow 0} \|E-E_h\|_{\mc{L}(L^2(\Omega),V(h))}=0,
\end{equation}
which establishes the convergence of the generalized eigenvectors.
The gap between the eigenspaces is defined as follows,
\begin{align*}
\delta(E,F)&=\sup_{u\in E,\|u\|=1} \inf_{v\in F} \|u-v\|
,\\ \hat{\delta}(E,F)&= \max(\delta(E,F),\delta(F,E)) .
\end{align*}
\begin{lemma}\label{non-pollution of eigenspace}
\cite[Theorem 9.3]{boffi2010finite} Let $\mu$ be a non-zero eigenvalue
of $T$, let $E =E(\mu) V$ be its generalized eigenspace, and let
$E_h=E_h(\mu) \mc{R}(U_h)$. Then
\[
\hat{\delta}(E,E_h)\leq C \|(T-T_h)_{|E}\|_{\mc{L}(V(h))}.
\]
\end{lemma}
\begin{lemma}\label{corollary_non-pollution}
\cite[Corollary 9.4]{boffi2010finite} Let $\lambda$ be a non-zero
eigenvalue of \eqref{eq:elliptic} and \eqref{eq:harmonic},
respectively, and $E =E(\lambda^{-1}) V$ be its generalized
eigenspace, and let $E_h=E_h(\lambda^{-1}) \mc{R}(U_h)$. Then
\[
\hat{\delta}(E,E_h)\leq C\sup_{u\in E,\|u\|_{V(h)}=1} \inf_{v\in U_h}
\|u-\mc{R}v\|_{V(h)}.
\]
\end{lemma}
We now state the approximation estimate for the solution operator; we
refer to~\cite{li2016discontinuous,li2017discontinuous} for more
details.
\begin{lemma}\label{lemma:appro_eigenvector}
Let $\lambda$ be a non-zero eigenvalue of
\eqref{eq:elliptic} or \eqref{eq:harmonic}, respectively, let $E$
be the eigenspace associated with $\lambda$, and assume the regularity
$E\subset H^{m+1}(\Omega)$ with $m \geq 2p-1$. Then
\[
\|(T-T_h)_{|E}\|_{\mc{L}(V(h))}\leq C \Lr{h^{\tau}+\Lambda_md^{\tau}},
\]
where $\tau=m+1-p$.
\end{lemma}
\begin{proof}
The source problem corresponding to~\eqref{eq:elliptic} takes the form
\[
- \Delta u_s = f \quad \text{in }\Omega,\quad
u_s =0 \quad \text{on }\partial \Omega,
\]
and the source problem corresponding to~\eqref{eq:harmonic} takes the
form
\[
\Delta^2 u_s = f \quad \text{in } \Omega,\quad u_s=\dfrac{\partial
u_s}{\partial \boldsymbol{\mathrm{n}}}=0 \quad \text{on } \partial \Omega.
\]
The discrete variational problem for the
source problem reads: find $u_h\in U_h$ such that
\begin{equation}
a_h(\mc{R} u_h, \mc{R} v_h) = (f, \mc{R} v_h), \quad \forall v_h \in
U_h.
\label{eq:dsource}
\end{equation}
From~\cite[Theorem 3.1]{li2016discontinuous} \cite[Theorem
3.1]{li2017discontinuous}, we conclude that there exists a unique
solution to \eqref{eq:dsource}. Furthermore, if $u_s \in
H^{m+1}(\Omega)$, we have the following estimate:
\[
\|u_s-\mc{R}u_h\|_{V(h)} \leq C (h^{\tau}+\Lambda_m
d^{\tau})|u_s|_{H^{m+1}(\Omega)},
\]
where $\tau=m+1-p$. This estimate directly implies
\[
\|(T-T_h)_{|E}\|_{\mc{L}(V(h))}\leq C \Lr{h^{\tau}+\Lambda_md^{\tau}},
\]
which completes the proof.
\end{proof}
Then, the error estimates for the eigenfunctions can be directly derived.
\begin{theorem}\label{approx_eigenvector}
Let $u^{(i)}$ be a unit
eigenfunction associated with an eigenvalue $\lambda^{(i)}$ of
multiplicity $k$, such that $\lambda^{(i)}=\cdots=\lambda^{(i+k-1)}$,
and $\mc{R}u^{(i)}_h,\cdots, \mc{R}u^{(i+k-1)}_h$ denote the discrete
eigenfunctions associated with the $k$ discrete eigenvalues converging
to $\lambda^{(i)}$. Then there exists
\begin{equation}\label{eq:discrete_eigenspace}
\mc{R}w^{(i)}_h\in \text{span} \{\mc{R}u^{(i)}_h,\cdots,
\mc{R}u^{(i+k-1)}_h\},
\end{equation}
such that
\begin{equation}\label{eq:best_appr_eigenspace}
\|u^{(i)}-\mc{R}w^{(i)}_h\|_{V(h)}\leq C \sup_{u\in E,\|u\|_{V(h)}=1}
\inf_{v\in U_h} \|u-\mc{R}v\|_{V(h)}.
\end{equation}
Moreover, if the regularity of eigenspace is $E\subset
H^{m+1}(\Omega)$, $m\geq 2p-1$, then
\begin{equation}\label{eq:appr_eigenspace}
\|u^{(i)}-\mc{R}w^{(i)}_h\|_{V(h)}\leq C \Lr{h^{\tau}+\Lambda_md^{\tau}}
|u^{(i)}|_{H^{m+1}(\Omega)},
\end{equation}
where $\tau=m+1-p$.
\end{theorem}
\begin{proof}
The results~\eqref{eq:discrete_eigenspace}
and~\eqref{eq:best_appr_eigenspace} are direct consequences of
Lemma~\ref{corollary_non-pollution}, and the
estimate~\eqref{eq:appr_eigenspace} follows directly from
Lemma~\ref{lemma:appro_eigenvector}.
\end{proof}
Finally, the error estimates for the eigenvalues of
\eqref{eq:elliptic} and \eqref{eq:harmonic} are the following.
\begin{theorem}\label{approx_eigenvalue}
Let $\lambda^{(i)}$ denote the
eigenvalue of \eqref{eq:elliptic} and \eqref{eq:harmonic} with
multiplicity $k$, $\lambda^{(i)}_h$ be the discrete eigenvalues and $E$
denote the eigenspace associated with $\lambda^{(i)}$, then we have
\begin{equation}\label{eq:best_appro_eigenvalue}
|\lambda^{(i)}-\lambda^{(i)}_h|\leq C \sup_{u\in E,\|u\|_{V(h)}=1}
\inf_{v\in U_h} \|u-\mc{R}v\|_{V(h)}^2.
\end{equation}
Moreover, if eigenspace $E\subset H^{m+1}(\Omega)$, $m\geq 2p-1$, then
the following optimal double order of convergence holds
\begin{equation}\label{eq:eigenvalue_estimate}
|\lambda^{(i)}-\lambda^{(i)}_h|\leq C \Lr{h^{2\tau}+\Lambda_md^{2\tau}},
\end{equation}
where $\tau = m + 1 -p$.
\end{theorem}
\begin{proof}
Since the operator $a_h(\cdot, \cdot)$ is symmetric, i.e. $a_h(T_h f,
\mc{R}v) = a_h(\mc{R}v, T_h f)$, the estimate
\eqref{eq:best_appro_eigenvalue} is a direct application of
~\cite[Theorem 9.13]{boffi2010finite}. The
estimate~\eqref{eq:eigenvalue_estimate} is the combination
of the inequality~\eqref{eq:best_appro_eigenvalue} and
Lemma~\ref{lemma:appro_eigenvector}.
\end{proof}
\section{Numerical Results}\label{sec:examples}
In this section, we present some
numerical results to show that our method is efficient for eigenvalue
problems when higher order approximation is used. We would like to
emphasize two points: \begin{itemize} \item Fewer DOFs are used by our
method for high order approximation compared to the classical DG
method and conforming finite element methods; \item More reliable
eigenvalues can be obtained by increasing the order of approximation.
\end{itemize} Besides, we compute the numerical order of
convergence to verify the theoretical error estimates, and give results
on different domains and meshes to demonstrate the flexibility of the
implementation of our method.
\subsection{Examples setup} First, let us list the setup of the
examples to be investigated.
\paragraph{\bf Example 1} We consider the two-dimensional square
domain $\Omega=[0,\pi]^2$, the eigenpairs of problem
\eqref{eq:elliptic} are given by \begin{displaymath} \begin{aligned}
\lambda_{i,j}=& i^2+j^2, \ \text{for} \ i,j>0 \ \text{and} \ i,j
\in \mb{N},\\ u_{i,j}=& \sin(ix)\sin(jy), \end{aligned}
\end{displaymath} and for the problem \eqref{eq:harmonic} with the
boundary condition $u|_{\partial \Omega}=\Delta u|_{\partial
\Omega}=0$, which is related to the bending of a simply supported
plate \cite{brenner2015C0}, the eigenpairs are given by
\begin{displaymath} \begin{aligned} \lambda_{i,j}=& (i^2+j^2)^2,
\ \text{for} \ i,j>0 \ \text{and} \ i,j \in \mb{N},\\ u_{i,j}=&
\sin(ix)\sin(jy). \end{aligned} \end{displaymath} In this example,
the computation involves a series of regular unstructured triangular
meshes which are generated by {\it Gmsh}~\cite{geuzaine2009gmsh}.
For the second order elliptic problem we take $m=1,2,3,4,5$, and for
the biharmonic problem $m$ is taken as $2,3,4,5$.
\paragraph{\bf Example 2} We consider the L-shaped domain $[-1,1]^2
\backslash \left((0,1]\times[-1,0)\right)$. The domain is partitioned
into polygonal meshes by {\it PolyMesher}
\cite{talischi2012polymesher}. Figure \ref{polygonal_mesh} shows
the initial mesh and the refined mesh. The meshes contain
elements with various geometries, such as quadrilaterals,
pentagons, hexagons, and so on. The first eigenfunction on the
L-shaped domain has a singularity at the reentrant corner and has
no analytical expression. We note that the third eigenpair is
smooth on the L-shaped domain. For the second order elliptic
equation, the third eigenvalue is $2\pi^2$ and the corresponding
eigenfunction is $\sin(\pi x) \sin(\pi y)$, and we take
$m=1,2,3$ to solve the eigenvalue problem. For the biharmonic
equation, the third eigenpair is $4\pi^4$ and $\sin(\pi x)
\sin(\pi y)$, and we choose $m=2,3$ to solve it.
\begin{figure} \begin{center}
\includegraphics[width=0.48\textwidth]{L_polygon_mesh-crop.pdf}
\includegraphics[width=0.48\textwidth]{L_polygon_mesh_refine-crop.pdf}
\caption{The polygonal mesh (left) / refined polygonal mesh
(right) for Example 2.} \label{polygonal_mesh} \end{center}
\end{figure}
\paragraph{\bf Example 3} We solve the eigenvalue problem in three
dimensions in this example. The computational domain is the unit cube
$\Omega=[0,1]^3$, which is partitioned into tetrahedral meshes by {\it
Gmsh}. The eigenpairs of problem \eqref{eq:elliptic} are as follows:
\begin{displaymath} \begin{aligned} \lambda_{i,j,k}=&
(i^2+j^2+k^2)\pi^2, \ \text{for} \ i,j,k>0 \ \text{and} \ i,j,k
\in \mb{N},\\ u_{i,j,k}=& \sin(i\pi x)\sin(j\pi y)\sin(k\pi z),
\end{aligned} \end{displaymath} and for problem \eqref{eq:harmonic}
with the simply supported plate boundary condition, the eigenpairs are
given by \begin{displaymath} \begin{aligned} \lambda_{i,j,k}=&
(i^2+j^2+k^2)^2\pi^4, \ \text{for} \ i,j,k>0 \ \text{and} \ i,j,k
\in \mb{N},\\ u_{i,j,k}=& \sin(i\pi x)\sin(j\pi y)\sin(k\pi z).
\end{aligned} \end{displaymath}
\subsection{Convergence order study} First, we show that the
numerical results verify the optimal convergence order predicted by
the theory.
For Example 1, Figure \ref{tri_elliptic_error} shows the convergence
rates of the eigenvalue and eigenfunction. The exact $20$-th
eigenvalue is $32$ and the corresponding eigenfunction is
$\sin(4x)\sin(4y)$. The eigenvalue converges to the exact one at rate
$h^{2m}$, and for the eigenfunction the convergence rate is $h^m$.
Figure \ref{tri_biharmonic_error} shows the convergence rates of the
eigenvalue and eigenfunction of the biharmonic equation. The exact
$20$-th eigenvalue is $1024$ and the corresponding eigenfunction is
$\sin(4x)\sin(4y)$. The eigenvalue converges to the exact one at rate
$h^{2(m-1)}$, and for the eigenfunction the convergence rate is
$h^{m-1}$. The numerical results agree with Theorems
\ref{approx_eigenvector} and \ref{approx_eigenvalue} perfectly.
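The reference values used above are easy to reproduce: sorting the
exact eigenvalues of Example 1 with multiplicity confirms that the
$20$-th Laplace eigenvalue is $32$, and since squaring preserves the
ordering, the $20$-th biharmonic eigenvalue is $32^2=1024$. A quick
check in Python:
\begin{verbatim}
# Exact eigenvalues on [0, pi]^2, sorted with multiplicity.
n = 30   # enumeration bound; ample for the first few dozen eigenvalues
lap = sorted(i * i + j * j for i in range(1, n) for j in range(1, n))
assert lap[19] == 32             # 20-th Laplace eigenvalue
assert lap[19] ** 2 == 1024      # 20-th biharmonic eigenvalue
\end{verbatim}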
\begin{figure} \begin{center}
\includegraphics[width=0.48\textwidth]{elliptic_square_eig_val-crop.pdf}
\includegraphics[width=0.48\textwidth]{elliptic_square_eig_func-crop.pdf}
\caption{The convergence rates of the $20$-th eigenvalue (left) / eigenfunction (right)
of the second order problem for different orders $m$ on triangle
meshes for Example 1.}
\label{tri_elliptic_error}
\end{center}
\end{figure}
\begin{figure} \begin{center}
\includegraphics[width=0.48\textwidth]{biharmonic_square_eig_val-crop.pdf}
\includegraphics[width=0.48\textwidth]{biharmonic_square_eig_func-crop.pdf}
\caption{The convergence rates of the $20$-th eigenvalue (left) / eigenfunction (right)
of the biharmonic problem for different orders $m$ on triangle
meshes for Example 1.}
\label{tri_biharmonic_error}
\end{center}
\end{figure}
Example 2 shows that the proposed method can handle these
polygonal elements easily. First, we calculate the third, smooth
eigenpair to verify the analysis of the proposed method. Figures
\ref{polygonal_elliptic_error} and \ref{polygonal_biharmonic_error}
show numerical results that agree with the theoretical prediction.
The values of the first eigenvalue of the Laplace/biharmonic equation
are shown in Table \ref{table_L_shape}. It is clear that the
eigenvalues converge to the true eigenvalue as $h$ approaches $0$. The
eigenfunctions corresponding to the first and third eigenvalues are
presented in Figures \ref{polygonal_elliptic_eigfun}
and \ref{polygonal_biharmonic_eigfun}.
\begin{figure} \begin{center}
\includegraphics[width=0.48\textwidth]{elliptic_L_eig_val-crop.pdf}
\includegraphics[width=0.48\textwidth]{elliptic_L_eig_func-crop.pdf}
\caption{The convergence rates of the 3rd eigenvalue (left) /
eigenfunction (right) of the second order problem for different
orders $m$ on polygonal meshes for Example 2.}
\label{polygonal_elliptic_error}
\end{center}
\end{figure}
\begin{figure} \begin{center}
\includegraphics[width=0.48\textwidth]{L_polygon_elliptic_matlab-crop.pdf}
\includegraphics[width=0.48\textwidth]{L_polygon_elliptic_function-crop.pdf}
\caption{The 1st eigenfunction (left) and the 3rd eigenfunction (right) of the
second order problem for Example 2.}
\label{polygonal_elliptic_eigfun}
\end{center}
\end{figure}
\begin{figure} \begin{center}
\includegraphics[width=0.48\textwidth]{biharmonic_L_eig_val-crop.pdf}
\includegraphics[width=0.48\textwidth]{biharmonic_L_eig_func-crop.pdf}
\caption{The convergence rates of 3rd eigenvalue (left) /
eigenfunction (right) of the biharmonic problem for different
orders $m$ on polygonal meshes for Example 2.}
\label{polygonal_biharmonic_error}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{L_polygon_biharmonic_matlab-crop.pdf}
\includegraphics[width=0.48\textwidth]{L_polygon_biharmonic_function-crop.pdf}
\caption{The 1st eigenfunction (left) and the 3rd eigenfunction (right) of the
biharmonic problem for Example 2.}
\label{polygonal_biharmonic_eigfun}
\end{center}
\end{figure}
\begin{table}
\begin{center}
\scalebox{0.9}{
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Order & Problem & N=2.00E+2 &
N=8.00E+2 & N=3.20E+3 & N=1.28E+4& N=5.12E+4 \\ \hline $m=1$ &
\multirow{3}{*}{Laplace} &10.786 & 9.9562 & 9.7396 & 9.6867 & 9.6733
\\ \cline{1-1} \cline{3-7} $m=2$ & &10.403 & 9.7422 & 9.6780& 9.6707 &
9.6692 \\ \cline{1-1} \cline{3-7} $m=3$ & &9.8128 & 9.6811 & 9.6724 &
9.6700 & 9.6691 \\ \hline \hline $m=2$ & \multirow{2}{*}{Biharmonic}
&179.98 & 171.55 & 167.78 & 166.75 & 165.43 \\ \cline{1-1} \cline{3-7}
$m=3$ & &168.82 & 166.78 & 165.77 & 165.12 & 164.68 \\ \hline
\end{tabular}}
\caption{The first eigenvalues of the second order and
biharmonic equations on the L-shaped domain, computed with $N$
DOFs.} \label{table_L_shape}
\end{center}
\end{table}
For Example 3, the numerical results are presented in Tables
~\ref{table_elliptic_3D} and ~\ref{table_biharmonic_3D} for the second
order and biharmonic equation, respectively. The convergence order of
the second order equation is $h^{2m}$, and of the biharmonic equation
is $h^{2(m-1)}$. Obviously, the computational results agree with the
error estimates.
\begin{table} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline Order
& Mesh Size & $h=$2.500E-1 & $h=$1.250E-1 & $h=$6.250E-2 &
$h=$3.125E-2 \\ \hline \multirow{3}{*}{$m=1$} & Value & 45.43 &
34.99 & 31.03 & 29.97 \\ \cline{2-6} & Error & 5.33E-1 & 1.81E-1 &
4.82E-2 & 1.24E-2 \\ \cline{2-6} & Order & - & 1.55 & 1.91 & 1.96 \\
\hline \multirow{3}{*}{$m=2$} & Value & 35.57 & 29.96 & 29.63 &
29.61 \\ \cline{2-6} & Error & 2.01E-1 & 1.21E-2 & 7.70E-4 & 4.98E-5
\\ \cline{2-6} & Order & - & 4.10 & 3.96 & 3.95 \\ \hline
\multirow{3}{*}{$m=3$} & Value & 31.34 & 29.63 & 29.61 & 29.61 \\
\cline{2-6} & Error & 5.85E-2 & 6.79E-4 & 1.07E-5 & 1.64E-7 \\
\cline{2-6} & Order & - & 6.42 & 5.99 & 6.02 \\ \hline
\multirow{3}{*}{$m=4$} & Value & 30.23 & 29.61 & 29.61 & 29.61 \\
\cline{2-6} & Error & 2.12E-2 & 8.24E-5 & 3.23E-7 & 1.23E-9 \\
\cline{2-6} & Order & - & 8.03 & 7.99 & 7.93 \\ \hline \end{tabular}
\caption{The first eigenvalue of the Laplace problem in 3D,
$\lambda_{1}=3\pi^2\approx 29.61$.} \label{table_elliptic_3D} \end{table}
\begin{table} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline Order
& Mesh Size & h=2.500E-1 & h=1.250E-1 & h=6.250E-2 & h=3.125E-2 \\
\hline \multirow{3}{*}{m=2} & Value & 1000.54 & 906.41 & 883.20 &
878.20 \\ \cline{2-6} & Error & 1.41E-1 & 3.39E-2 & 7.44E-3 &
1.73E-3 \\ \cline{2-6} & Order & - & 2.05 & 2.18 & 2.09 \\ \hline
\multirow{3}{*}{m=3} & Value & 942.74 & 879.49 & 876.84 & 876.69 \\
\cline{2-6} & Error & 7.54E-2 & 3.21E-3 & 1.88E-4 & 1.07E-5 \\
\cline{2-6} & Order & - & 4.55 & 4.09 & 4.12 \\ \hline
\multirow{3}{*}{m=4} & Value & 897.55 & 876.85 & 876.68 & 876.68 \\
\cline{2-6} & Error & 2.38E-2 & 2.00E-4 & 2.91E-6 & 4.33E-8 \\
\cline{2-6} & Order & - & 6.89 & 6.10 & 6.07 \\ \hline \end{tabular}
\caption{The first eigenvalue of the biharmonic problem in 3D,
$\lambda_{1}=9\pi^4\approx 876.68$.} \label{table_biharmonic_3D} \end{table}
\begin{remark} We note that all the eigenvalues obtained by the
proposed method are greater than the exact eigenvalues. This
behavior is expected when a conforming finite element method is used
to solve the eigenvalue problem, since then the discrete space is a
subspace of $V$; however, our approximation space $V_h$ is not a
subspace of the space $V=H^{1}_{0}$ or $H^{2}_{0}$. In the DG
framework, this phenomenon is related to the penalty parameter.
Warburton and Embree studied the role of the penalty in the LDG method
for Maxwell's eigenvalue problem in~\cite{warburton2006role}. Giani et
al.~\cite{Giani2018posteriori} used asymptotic perturbation theory
to analyze the dependence of eigenvalues and eigenspaces on the
penalty parameter. We hope to clarify why this happens in our
method in a future study. \end{remark}
\subsection{Efficiency in terms of number of DOFs} Next, we compare
the number of DOFs required by different methods. For the
second order elliptic problem, we consider the conforming FEM, the
standard SIPDG method~\cite{antonietti2006discontinuous} and our
method. For the biharmonic problem, we consider the $C^0$ IPG method,
the standard SIPDG method and our method. Here we study the numerical
behavior for higher order approximation. We restrict ourselves to
Example 1, since in this case the solution has enough regularity.
We calculate the first eigenvalue and eigenfunction on successively
refined meshes. The eigenvalue errors are measured in relative
error, and the eigenfunction errors are measured in the
$|\cdot|_{1,h}$ and $|\cdot|_{2,h}$ semi-norms, respectively.
For the Laplace problem, Figure \ref{tri_elliptic_compare} shows the
performance of the conforming FEM, the SIPDG method and our method. The
approximation order $m$ is taken from $1$ to $4$. The convergence
rate for the eigenvalue is $h^{2m}$ and for the eigenfunction it is
$h^{m}$, which meets the theoretical predictions. The horizontal
axis is the number of DOFs. The number of DOFs employed by our
method is fixed while the approximation order increases. In all cases,
the SIPDG method uses the largest number of DOFs. As expected, the
figure shows that the FEM is the most efficient for low order
approximation; as the approximation order increases, our method
becomes the most efficient of the three.
\begin{figure} \begin{center}
\includegraphics[width=0.48\textwidth
]{elliptic_square_eig_val_compare-crop.pdf}
\includegraphics[width=0.48\textwidth
]{elliptic_square_eig_func_compare-crop.pdf}
\caption{The
convergence rates of the 1st eigenvalue (left) / eigenfunction
(right) of the second order problem for three methods on triangle
meshes for Example 1.}
\label{tri_elliptic_compare}
\end{center}
\end{figure}
For the biharmonic problem, Figure \ref{tri_biharmonic_compare} shows
the error in terms of the number of DOFs for the $C^0$ IPG method, the
standard SIPDG method and our method. The approximation order $m$ is
taken as $2$, $3$, and $4$. The convergence rate for the eigenvalue is
$h^{2(m-1)}$ and that for the eigenfunction is $h^{m-1}$, in perfect
agreement with the error estimates. The experiments show that
our method performs better than the other methods in all cases; the
advantage of our method in efficiency is more remarkable for higher
order approximation.
\begin{figure} \begin{center}
\includegraphics[width=0.48\textwidth
]{biharmonic_square_eig_val_compare-crop.pdf}
\includegraphics[width=0.48\textwidth
]{biharmonic_square_eig_func_compare-crop.pdf}
\caption{The
convergence rates of the 1st eigenvalue (left) / eigenfunction
(right) of the biharmonic problem for three methods on triangle
meshes for Example 1.}
\label{tri_biharmonic_compare}
\end{center}
\end{figure}
\subsection{Number of reliable eigenvalues} Zhang studied the number
of reliable eigenvalues of the finite element method in
\cite{zhang2015how}; his main result is as follows:
\begin{theorem}\label{thm:quantity} Suppose that we solve a $2p$-order
elliptic equation on a domain $\Omega \subset \mb{R}^D$ by the finite
element method (conforming or non-conforming) of polynomial degree
$m$ under a shape regular and quasi-uniform mesh with mesh-parameter
$h$. Assume that the exact eigenvalues grow as
$\lambda_{j}=O(j^\frac{2p}{D})$ and that the relative error can be
estimated by $\frac{\lambda_i^{h}-\lambda_i}{\lambda_i}=
O\big(h^{2(m+1-p)}\lambda_{i}^{\frac{m+1}{p}-1}\big)$. Then there are about \[
j_N=N^{\frac{m+1-p-\alpha/2}{m+1-p}}
m^{-D\frac{m+1-p-\alpha/2}{m+1-p}} \] reliable numerical
eigenvalues, i.e., eigenvalues $\lambda_j$ with $j\le j_N$ whose
relative errors converge at rate $h^{\alpha}$, for $\alpha \in
(0,2(m+1-p)]$. Here $N$ is the total number of degrees of
freedom. \end{theorem}
Theorem \ref{thm:quantity} implies that the number of reliable
numerical eigenvalues that attain the optimal convergence rate
$\alpha=2(m+1-p)$ is $O(1)$, which means only eigenvalues low in the
spectrum can achieve the optimal convergence rate. Therefore, for the
eigenvalue problem, the number of eigenvalues with the optimal
convergence rate is very small. Here we relax the required convergence
rate to linear, i.e., we take $\alpha=1$, to decide whether a numerical
eigenvalue is reliable. For the lowest order approximation of the
eigenvalue problem, i.e., linear elements for the Laplace operator and
quadratic elements for the biharmonic operator, the predicted
number of reliable numerical eigenvalues from Theorem
\ref{thm:quantity} is $O(N^{1/2})$, which implies that the percentage
of reliable numerical eigenvalues decreases rapidly as the number of
DOFs of the system increases. For higher order approximation, the
percentage of reliable numerical eigenvalues decreases much more
slowly than for low order approximation.
To identify numerically whether an eigenvalue is reliable, we define
the relative error by $\frac{|\lambda - \lambda_h|}{|\lambda|}$, and
the convergence rate by $\log_{2}\left( \frac{|\lambda -
\lambda_{2h}|}{|\lambda - \lambda_h|} \right)$. If the convergence
rate is not less than $1$, the eigenvalue is identified as reliable.
We carried out a series of numerical experiments with various $m$;
the results are quite robust, with almost the same efficiency.
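A minimal sketch of this counting procedure is given below; the array
names and the assumed matched ordering of exact and discrete
eigenvalues are conventions of the sketch:
\begin{verbatim}
import numpy as np

def count_reliable(exact, lam_h, lam_2h, threshold=1.0):
    # exact, lam_h, lam_2h: matched, sorted arrays of the exact
    # eigenvalues and the discrete ones on meshes of size h and 2h.
    err_h = np.abs(exact - lam_h)
    err_2h = np.abs(exact - lam_2h)
    rate = np.log2(err_2h / err_h)         # observed convergence rate
    return int(np.sum(rate >= threshold))  # reliable: at least linear
\end{verbatim}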
\begin{table} \begin{center} \scalebox{1.0}{
\begin{tabular}{|c|c|c|c|c|} \hline Order & $N$(\#DOF) & 242 & 1,046 &
4,278 \\ \hline $m=1$ & \multirow{4}{*}{Laplace} &8 (3.3\%) & 17
(1.6\%) & 39 (0.9\%) \\ \cline{1-1} \cline{3-5}$ m=2$ & &32 (13.2\%) &
92 (8.8\%) & 270 (6.3\%) \\ \cline{1-1} \cline{3-5} $m=3$ & &38
(15.7\%) & 147 (14.0\%) & 553 (12.9\%) \\ \cline{1-1} \cline{3-5}
$m=4$ & & 96 (39.6\%) & 355 (33.9\%) & 1417 (33.1\%) \\ \hline \hline
$m=2$ & \multirow{3}{*}{Biharmonic} &24(9.9\%) & 53(5.0\%) & 94(2.2\%)
\\ \cline{1-1} \cline{3-5} $m=3$ & &45(18.6\%) & 204(19.5\%) &
691(16.1\%) \\ \cline{1-1} \cline{3-5} $m=4$ & &170 (70.2\%) & 705
(67.3\%) & 2798 (65.4\%) \\ \hline \end{tabular}} \caption{The number
$j_N$ of linear converged eigenvalues.} \label{table_quantity}
\end{center} \end{table}
Again we restrict ourselves to the setup of Example 1, since reference
solutions are needed. We calculate the $j_{N}$ eigenvalues whose
relative errors are of order $O(h)$; precisely, we count the
eigenvalues that are at least linearly convergent, with the result
given in Table \ref{table_quantity}. For the Laplace problem, there
are $O(N^{1/2})$ reliable numerical eigenvalues; in the table, the
percentage decreases rapidly as the computational scale $N$ increases.
The number of at least linearly convergent eigenvalues increases
substantially when higher order approximation is applied, as implied
by Zhang's result that a higher order method can produce more reliable
numerical eigenvalues for the same $N$. Moreover, for the higher order
method, the percentage of reliable numerical eigenvalues decreases
much more slowly than for the lower order method. The behavior of the
number of reliable eigenvalues is similar for the biharmonic equation,
as shown in Table \ref{table_quantity}. The numerical results confirm
the prediction of Theorem \ref{thm:quantity} and emphasize that higher
order approximations are more robust and preferable for the eigenvalue
problem.
\section{Conclusion}
We applied the symmetric interior penalty discontinuous Galerkin
method based on a patch reconstructed approximation space to solve
elliptic eigenvalue problems. Compared to other existing approximation
methods, the proposed method can be implemented more flexibly and its
approximation properties are easier to analyse. Numerical results
confirm the optimal convergence rates and emphasize the great
efficiency of our method in the number of DOFs. The efficiency and
convenience of implementation are especially remarkable for higher
order approximation. Since high order approximation is preferred for
elliptic eigenvalue problems, our method is well suited to them.
\section*{Acknowledgment}
The authors would like to thank the anonymous referees, whose
constructive comments helped improve the original version of this
paper. The research is supported by the National Natural Science
Foundation of China (Grants No. 91630310, 11421110001 and 11421101)
and the Science Challenge Project, No. TZ2016002.
| {
"timestamp": "2019-08-26T02:07:16",
"yymm": "1901",
"arxiv_id": "1901.01803",
"language": "en",
"url": "https://arxiv.org/abs/1901.01803",
"abstract": "We adapt a symmetric interior penalty discontinuous Galerkin method using a patch reconstructed approximation space to solve elliptic eigenvalue problems, including both second and fourth order problems in 2D and 3D. It is a direct extension of the method recently proposed to solve corresponding boundary value problems, and the optimal error estimates of the approximation to eigenfunctions and eigenvalues are instant consequences from existing results. The method enjoys the advantage that it uses only one degree of freedom on each element to achieve very high order accuracy, which is highly preferred for eigenvalue problems as implied by Zhang's recent study [J. Sci. Comput. 65(2), 2015]. By numerical results, we illustrate that higher order methods can provide much more reliable eigenvalues. To justify that our method is the right one for eigenvalue problems, we show that the patch reconstructed approximation space attains the same accuracy with fewer degrees of freedom than classical discontinuous Galerkin methods. With the increasing of the polynomial order, our method can even achieve a better performance than conforming finite element methods, such methods are traditionally the methods of choice to solve problems with high regularities.",
"subjects": "Numerical Analysis (math.NA)",
"title": "Solving Eigenvalue Problems in a Discontinuous Approximation Space by Patch Reconstruction",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9808759649262345,
"lm_q2_score": 0.805632181981183,
"lm_q1q2_score": 0.7902252438764207
} |
https://arxiv.org/abs/2107.06380 | Bivariate Lagrange interpolation at the checkerboard nodes | In this paper, we derive an explicit formula for the bivariate Lagrange basis polynomials of a general set of checkerboard nodes. This formula generalizes existing results of bivariate Lagrange basis polynomials at the Padua nodes, Chebyshev nodes, Morrow-Patterson nodes, and Geronimus nodes. We also construct a subspace spanned by linearly independent bivariate vanishing polynomials that vanish at the checkerboard nodes and prove the uniqueness of the set of bivariate Lagrange basis polynomials in the quotient space defined as the space of bivariate polynomials with a certain degree by the subspace of bivariate vanishing polynomials. | \section{Introduction}
Given $x_0>x_1>\cdots>x_n$ and $y_0>y_1>\cdots>y_{n+\sigma}$ where $n$ and $\sigma$ are nonnegative integers, we define
a rectangular set of nodes:
\begin{equation}
S=\{(x_r,y_u):~0\le r\le n,~0\le u\le n+\sigma\},
\end{equation}
which consists of $(n+1)(n+\sigma+1)$ distinct points in $\mathbf R^2$.
The set $S$ can be divided into two checkerboard sets $S_0$ and $S_1$ such that $(x_r,y_u)\in S_0$ if and only if $r+u$ is even while $(x_r,y_u)\in S_1$ if and only if $r+u$ is odd.
Our objective is to develop existence and uniqueness theory of bivariate Lagrange basis polynomials for the checkerboard set $S_\tau$ with $\tau=0$ or $\tau=1$. The special case $\sigma=0$ was considered in \cite{Harris18JCAM}. The special case $\sigma=1$ was studied in \cite{Harris15JAT,Harris21PAMS}.
If $\sigma=1$, $x_r=\cos(r\pi/n)$ and $y_u=\cos[u\pi/(n+1)]$, then $S_\tau$ is the set of Padua points and the corresponding set of bivariate Lagrange basis polynomials is unique in $\mathbf P_n(x,y)$; see \cite{Bos06JAT,Bos07NM}.
In \cite{Xu96JAT}, Xu derived bivariate Lagrange basis polynomials when $\sigma=0$ and $x_k=y_k=\cos[(2k-1)\pi/(2n)]$ are the zeros of the Chebyshev polynomial of the first kind $T_n$; see also \cite{Bojanov97JCAM,Harris10PAMS}.
The checkerboard nodes $S_\tau$ also generalize the Morrow-Patterson nodes \cite{Morrow78SINA} and Geronimus nodes \cite{Harris15JAT}.
In \cite{Harris21PAMS}, several formulas of bivariate Lagrange basis polynomials for the cases $\sigma=2,3,4,5$ were proposed as conjectures.
In this paper, we will derive a general formula of bivariate Lagrange basis polynomials for any nonnegative integer $\sigma$.
This formula generalizes the aforementioned results of bivariate Lagrange basis polynomials at the Padua nodes, Chebyshev nodes, Morrow-Patterson nodes and Geronimus nodes. In particular, the conjectures in \cite{Harris21PAMS} for the cases $\sigma=2,3,4,5$ are proved.
Moreover, we will prove that the set of bivariate Lagrange basis polynomials is unique in a certain quotient space of bivariate polynomials.
Let $\mathbf P_d(x,y)$ be the linear space of bivariate polynomials of degree no more than $d$, which can be generated by the monomials $x^jy^k$ with $j+k\le d$.
It is easily seen that the dimension of $\mathbf P_d(x,y)$ is $(d+1)(d+2)/2$.
If $f_1(x,y),\cdots,f_M(x,y)$ are linearly independent polynomials in $\mathbf P_d(x,y)$; namely, $c_1f_1(x,y)+\cdots+c_Mf_M(x,y)=0$ for all $(x,y)\in\mathbf R^2$ implies $c_1=\cdots=c_M=0$, then we can define the quotient space $\mathbf P_d(x,y)/\{f_1(x,y),\cdots,f_M(x,y)\}$ in the sense that two polynomials in this quotient space are identical if and only if their difference can be expressed as a linear combination of $f_1(x,y),\cdots,f_M(x,y)$. Clearly, the dimension of $\mathbf P_d(x,y)/\{f_1(x,y),\cdots,f_M(x,y)\}$ is $(d+1)(d+2)/2-M$.
Given a set of nodes $(x_1,y_1),\cdots,(x_N,y_N)\in\mathbf R^2$, we say $\{L_1(x,y),\cdots,L_N(x,y)\}\subset\mathbf P_d(x,y)$ is
a set of bivariate Lagrange basis polynomials if $L_j(x_k,y_k)=0$ for $1\le j\neq k\le N$ and $L_k(x_k,y_k)=1$ for $1\le k\le N$.
For convenience, we also define the {\it bivariate vanishing polynomial} as a bivariate polynomial $f(x,y)\in\mathbf P_d(x,y)$ that vanishes at all of the given nodes; namely, $f(x_k,y_k)=0$ for all $1\le k\le N$.
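These definitions can be explored numerically. The following Python
sketch builds a checkerboard set $S_\tau$ for a Padua-like choice of
nodes and a trial degree $d$; both choices are illustrative
assumptions of the sketch rather than this paper's constructions, and
the existence of Lagrange basis polynomials in $\mathbf P_d(x,y)$ is
tested by a rank computation:
\begin{verbatim}
import numpy as np

n, sigma, tau = 3, 2, 0
x = np.cos(np.pi * np.arange(n + 1) / n)           # x_0 > ... > x_n
y = np.cos(np.pi * np.arange(n + sigma + 1) / (n + sigma))
S_tau = np.array([(x[r], y[u]) for r in range(n + 1)
                  for u in range(n + sigma + 1) if (r + u) % 2 == tau])

d = n + sigma - 1   # trial degree with dim P_d >= #S_tau in this example
V = np.column_stack([S_tau[:, 0] ** j * S_tau[:, 1] ** k
                     for j in range(d + 1) for k in range(d + 1 - j)])
# Lagrange basis polynomials exist in P_d iff V has full row rank; the
# columns of pinv(V) are then coefficient vectors with
# L_k(x_m, y_m) = delta_{km}.
if np.linalg.matrix_rank(V) == len(S_tau):
    C = np.linalg.pinv(V)
    assert np.allclose(V @ C, np.eye(len(S_tau)))
\end{verbatim}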
The rest of this paper is organized as follows. In Section 2, we state some preliminary results on one-to-one map between a sequence of univariate nodes and a sequence of difference equations. We also give a necessary and sufficient condition for the uniqueness of bivariate Lagrange basis polynomials.
In Section 3, we construct the bivariate Lagrange basis polynomials for $S_\tau$ with general $\sigma$.
In Section 4, we prove the uniqueness of the bivariate Lagrange basis polynomials for $S_\tau$ in a certain quotient space of bivariate polynomials.
\section{Preliminary results}
We first rephrase the results in \cite[Lemma 2]{Harris18JCAM} and \cite[Theorem A.1 \& Lemma A.2]{Harris21PAMS} as the following lemma.
\begin{lem}\label{lem-existence}
Given any $x_0>x_1>\cdots>x_n$, there exists a sequence of orthogonal polynomials $\{p_k(x)\}_{k=0}^n$ determined by two sequences $\{a_k\}_{k=0}^{n-1}$ and $\{b_k\}_{k=0}^{n-1}$ such that $p_0(x)=1$, $p_1(x)=a_0x+b_0$, and
\begin{equation}
p_{k+1}(x)+p_{k-1}(x)=(a_kx+b_k)p_k(x),
\end{equation}
for $1\le k\le n-1$, and the following properties hold.
\begin{enumerate}
\item (positivity) $a_k>0$ for all $0\le k\le n-1$.
\item (reflection) $a_k=a_{n-k}$ and $b_k=b_{n-k}$ for all $1\le k\le n-1$.
\item (alternation) $p_{n-k}(x_j)=(-1)^jp_k(x_j)$ for all $0\le j,k\le n$.
\end{enumerate}
\end{lem}
It is natural to ask whether the map from the set of distinct nodes $X=\{x_k\}_{k=0}^n$ to the set of coefficients $(A,B)=\{(a_k,b_k)\}_{k=0}^{n-1}$ satisfying the positivity and reflection conditions is invertible and whether it is unique. It is easy to show that the alternation condition implies the invertibility of the map.
When $n$ is odd, the number of free parameters in $(A,B)$ is $n+1$, which suggests that the map might be unique.
However, when $n$ is even, the number of free parameters in $(A,B)$ becomes $n+2$, indicating one extra degree of freedom in $(A,B)$.
We shall prove that this additional degree of freedom can be removed by the normalization condition $a_0=1$.
\begin{thm}\label{thm-inv}
Given any sequences $\{a_k\}_{k=0}^{n-1}$ and $\{b_k\}_{k=0}^{n-1}$ satisfying the positivity and reflection conditions; namely, $a_k>0$ for all $0\le k\le n-1$ and $a_k-a_{n-k}=b_k-b_{n-k}=0$ for all $1\le k\le n-1$, there exists a unique set of nodes $x_0>x_1>\cdots>x_n$ satisfying the alternation condition $p_{n-k}(x_j)=(-1)^jp_k(x_j)$ for all $0\le j,k\le n$, where $\{p_k(x)\}_{k=0}^n$ is the sequence of orthogonal polynomials determined by the difference equation $p_{k+1}(x)+p_{k-1}(x)=(a_kx+b_k)p_k(x)$ for $1\le k\le n-1$ with initial conditions $p_0(x)=1$ and $p_1(x)=a_0x+b_0$.
\end{thm}
\begin{proof}
When $n=2m-1$ is odd, we denote by $u_1>\cdots>u_m$ the zeros of $p_m(x)$ and by $v_1>v_2>\cdots>v_{m-1}$ the zeros of $p_{m-1}(x)$.
By the alternation property of the zeros of orthogonal polynomials, we have $u_1>v_1>u_2>v_2>\cdots>v_{m-1}>u_m$.
Since both $p_m(x)$ and $p_{m-1}(x)$ have positive leading coefficients, the difference function $p_m(x)-p_{m-1}(x)$ has at least one zero in each of the intervals $(u_1,\infty),(u_2,v_1),\cdots,(u_m,v_{m-1})$, because the difference takes opposite signs as the variable $x$ approaches the two ends of each interval. Similarly, the sum function $p_m(x)+p_{m-1}(x)$ has at least one zero in each of the intervals $(v_1,u_1),\cdots,(v_{m-1},u_{m-1}),(-\infty,u_m)$.
Thus, we can order the zeros of $p_m(x)-p_{m-1}(x)$ and $p_m(x)+p_{m-1}(x)$ as $x_0>x_1>\cdots>x_n$ such that $p_m(x_j)-(-1)^jp_{m-1}(x_j)=0$ for all $0\le j\le n$.
Actually, we have $x_0>u_1>x_1>v_1>x_2>v_2>\cdots>v_{m-1}>x_{n-1}>u_m>x_n$.
It then follows from the difference equation and the reflection condition that $p_{n-k}(x_j)=(-1)^jp_k(x_j)$ for all $0\le j,k\le n$.
When $n=2m$ is even, the zeros of $p_m(x)$ divide the real line into $m+1$ intervals, each of which contains at least one zero of $p_{m+1}(x)-p_{m-1}(x)$, thanks to the alternation property of orthogonal polynomials.
Hence, we can order the zeros of $p_m(x)$ and $p_{m+1}(x)-p_{m-1}(x)$ as $x_0>x_1>\cdots>x_n$ such that $p_m(x_j)=0$ for all odd $j=1,3,\cdots,n-1$ and $p_{m+1}(x_j)=p_{m-1}(x_j)$ for all even $j=0,2,\cdots,n$.
It then follows from the difference equation and the reflection condition that $p_{n-k}(x_j)=(-1)^jp_k(x_j)$ for all $0\le j,k\le n$.
This completes the proof.
\end{proof}
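The construction in this proof is easy to mirror numerically. For the
Chebyshev-like data $a_k=2$, $b_k=0$ with $n=3$ (an illustrative
choice of ours satisfying positivity and reflection), $m=2$ and the
recovered nodes turn out to be $x_j=\cos[(j+1)\pi/5]$; the alternation
condition can then be verified directly:
\begin{verbatim}
import numpy as np
from numpy.polynomial import polynomial as P

def coeff_p(a, b):
    # coefficient arrays (ascending powers) of p_0, ..., p_n
    p = [np.array([1.0]), np.array([b[0], a[0]])]
    for k in range(1, len(a)):
        p.append(P.polysub(P.polymul([b[k], a[k]], p[k]), p[k - 1]))
    return p

n = 3                          # odd, so m = 2 and the nodes are the
a, b = [2.0] * n, [0.0] * n    # zeros of p_m - p_{m-1} and p_m + p_{m-1}
p = coeff_p(a, b)
m = (n + 1) // 2
roots = np.concatenate([P.polyroots(P.polysub(p[m], p[m - 1])),
                        P.polyroots(P.polyadd(p[m], p[m - 1]))])
x = np.sort(roots)[::-1]       # x_0 > x_1 > ... > x_n

vals = [P.polyval(x, c) for c in p]
signs = (-1.0) ** np.arange(n + 1)
for k in range(n + 1):   # alternation: p_{n-k}(x_j) = (-1)^j p_k(x_j)
    assert np.allclose(vals[n - k], signs * vals[k])
\end{verbatim}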
\begin{thm}
Given any $x_0>x_1>\cdots>x_n$, if there are two sets of orthogonal polynomials $\{p_k(x)\}_{k=0}^n$ and $\{\tilde p_k(x)\}_{k=0}^n$, which are determined by two sets of coefficients $(A,B)=\{(a_k,b_k)\}_{k=0}^{n-1}$ and $(\tilde A,\tilde B)=\{(\tilde a_k,\tilde b_k)\}_{k=0}^{n-1}$, such that the difference equation and three properties (positivity, reflection and alternation) in Lemma \ref{lem-existence} are satisfied, then we have the following results depending on whether $n$ is odd or even.
\begin{enumerate}
\item If $n$ is odd, then $\tilde a_k=a_k$ and $\tilde b_k=b_k$ for $0\le k\le n-1$.
\item If $n$ is even, then there exists a positive constant $\gamma$ such that $\tilde a_k=\gamma a_k$ and $\tilde b_k=\gamma b_k$ for even $k$, while $a_k=\gamma\tilde a_k$ and $b_k=\gamma\tilde b_k$ for odd $k$, $0\le k\le n-1$.
\end{enumerate}
\end{thm}
\begin{proof}
When $n=2m-1$ is odd, it follows from the alternation condition that the zeros of $p_m(x)+p_{m-1}(x)$ and $\tilde p_m(x)+\tilde p_{m-1}(x)$ are the same while the zeros of $p_m(x)-p_{m-1}(x)$ and $\tilde p_m(x)-\tilde p_{m-1}(x)$ are the same. The positivity condition implies that there exist two positive constants $\gamma_1$ and $\gamma_2$ such that $\tilde p_m(x)+\tilde p_{m-1}(x)=\gamma_1[p_m(x)+p_{m-1}(x)]$ and $\tilde p_m(x)-\tilde p_{m-1}(x)=\gamma_2[p_m(x)-p_{m-1}(x)]$.
Comparing the leading coefficients yields $\gamma_1=\gamma_2$. A simple combination of these two equations then gives $\tilde p_m(x)=\gamma_1 p_m(x)$ and $\tilde p_{m-1}(x)=\gamma_1 p_{m-1}(x)$. In view of the difference equations $\tilde p_m(x)+\tilde p_{m-2}(x)=(\tilde a_{m-1}x+\tilde b_{m-1})\tilde p_{m-1}(x)$ and $p_m(x)+p_{m-2}(x)=(a_{m-1}x+b_{m-1})p_{m-1}(x)$, we obtain $\tilde a_{m-1}=a_{m-1}$, $\tilde b_{m-1}=b_{m-1}$ and $\tilde p_{m-2}(x)=\gamma_1 p_{m-2}(x)$. Repeating this argument implies $\tilde a_k=a_k$ and $\tilde b_k=b_k$ for $0\le k\le m-1$. Moreover, $\tilde p_0(x)=\gamma_1p_0(x)$, which yields $\gamma_1=1$. The reflection condition then gives $\tilde a_k=a_k$ and $\tilde b_k=b_k$ for $0\le k\le n-1$.
When $n=2m$ is even, it follows from the alternation condition that the zeros of $p_m(x)$ and $\tilde p_m(x)$ are the same while the zeros of $p_{m+1}(x)-p_{m-1}(x)$ and $\tilde p_{m+1}(x)-\tilde p_{m-1}(x)$ are the same. The positivity condition implies that there exist two positive constants $\gamma_1$ and $\gamma_2$ such that $\tilde p_m(x)=\gamma_1 p_m(x)$ and $\tilde p_{m+1}(x)-\tilde p_{m-1}(x)=\gamma_2[p_{m+1}(x)-p_{m-1}(x)]$.
On account of the difference equations $\tilde p_{m+1}(x)+\tilde p_{m-1}(x)=(\tilde a_mx+\tilde b_m)\tilde p_m(x)$ and $p_{m+1}(x)+p_{m-1}(x)=(a_mx+b_m)p_m(x)$, we obtain
$\gamma_2a_m=\gamma_1\tilde a_m$, $\gamma_2b_m=\gamma_1\tilde b_m$, and $\tilde p_{m+1}(x)+\tilde p_{m-1}(x)=\gamma_2[p_{m+1}(x)+p_{m-1}(x)]$.
Consequently, $\tilde p_{m+1}(x)=\gamma_2p_{m+1}(x)$ and $\tilde p_{m-1}(x)=\gamma_2p_{m-1}(x)$.
It then follows from the difference equations $\tilde p_m(x)+\tilde p_{m-2}(x)=(\tilde a_{m-1}x+\tilde b_{m-1})\tilde p_{m-1}(x)$ and $p_m(x)+p_{m-2}(x)=(a_{m-1}x+b_{m-1})p_{m-1}(x)$ that $\gamma_1a_{m-1}=\gamma_2\tilde a_{m-1}$, $\gamma_1b_{m-1}=\gamma_2\tilde b_{m-1}$ and $\tilde p_{m-2}(x)=\gamma_1p_{m-2}(x)$.
Repeating this argument gives
$$\gamma_1a_{m-j}=\gamma_2\tilde a_{m-j},~\gamma_1b_{m-j}=\gamma_2\tilde b_{m-j},~\tilde p_{m-j}(x)=\gamma_2p_{m-j}(x)$$
for odd $j\le m$, and
$$\gamma_2a_{m-j}=\gamma_1\tilde a_{m-j},~\gamma_2b_{m-j}=\gamma_1\tilde b_{m-j},~\tilde p_{m-j}(x)=\gamma_1p_{m-j}(x)$$
for even $j\le m$. Since $\tilde p_0(x)=p_0(x)=1$, either $\gamma_1=1$ (when $m$ is even) or $\gamma_2=1$ (when $m$ is odd).
We denote $\gamma=\gamma_2$ if $m$ is even and $\gamma=\gamma_1$ if $m$ is odd. It then follows that
$$\tilde a_{2j}=\gamma a_{2j},~~\tilde b_{2j}=\gamma b_{2j}~~(0\le j\le m-1),\qquad\tilde p_{2j}(x)=p_{2j}(x)~~(0\le j\le m),$$
and
$$\gamma\tilde a_{2j+1}=a_{2j+1},~~\gamma\tilde b_{2j+1}=b_{2j+1},\qquad\tilde p_{2j+1}(x)=\gamma p_{2j+1}(x),$$
for $j=0,\cdots,m-1$. This completes the proof.
\end{proof}
For the univariate case, the set of Lagrange basis polynomials for any set of distinct points exists and is uniquely determined because the corresponding Vandermonde matrix is invertible. The following theorem gives criteria for the existence and uniqueness of bivariate Lagrange basis polynomials.
\begin{thm}\label{thm-uniqueness}
Given any distinct points $(x_1,y_1),\cdots,(x_N,y_N)\in\mathbf R^2$ and any positive integer $d$ such that $(d+1)(d+2)/2\ge N$, there exist at least $M=(d+1)(d+2)/2-N$ linearly independent bivariate vanishing polynomials, denoted by $f_1(x,y),\cdots,f_M(x,y)$, in $\mathbf P_d(x,y)$.
Let
\begin{align}
V=\begin{pmatrix}
1&x_1&\cdots&x_1^d&y_1&x_1y_1&\cdots&x_1^{d-1}y_1&\cdots&y_1^d\\
\vdots&\vdots&&\vdots&\vdots&\vdots&&\vdots&&\vdots\\
1&x_N&\cdots&x_N^d&y_N&x_Ny_N&\cdots&x_N^{d-1}y_N&\cdots&y_N^d\\
\end{pmatrix}
\end{align}
be the bivariate Vandermonde matrix of dimension $N$ by $(d+1)(d+2)/2$.
The following statements are equivalent.
\begin{enumerate}[(i)]
\item There exists a unique set of bivariate Lagrange interpolation polynomials in the quotient space $\mathbf P_d(x,y)/\spa\{f_1(x,y),\cdots,f_M(x,y)\}$.
\item There exists a set of bivariate Lagrange interpolation polynomials in $\mathbf P_d(x,y)$.
\item Any bivariate vanishing polynomial in $\mathbf P_d(x,y)$ can be expressed as a linear combination of $f_1(x,y),\cdots,f_M(x,y)$.
\item The rank of $V$ is $N$.
\end{enumerate}
\end{thm}
\begin{proof}
The coefficients of any bivariate vanishing polynomial in $\mathbf P_d(x,y)$ correspond to a vector $z\in\mathbf R^{(d+1)(d+2)/2}$ satisfying $Vz=0$.
The existence of $f_1(x,y),\cdots, f_M(x,y)$ follows from the fact that the rank of $V$ is no more than $N$.
Moreover, we have (iii) $\Leftrightarrow$ (iv).
Any set of bivariate Lagrange interpolation polynomials in $\mathbf P_d(x,y)$ can be represented by a matrix $L$ of dimension $(d+1)(d+2)/2$ by $N$ such that $VL$ is the identity matrix in $\mathbf R^{N\times N}$. Hence, (ii) $\Leftrightarrow$ (iv).
It is obvious that (i) $\Rightarrow$ (ii).
Finally, coupling (ii) and (iii) gives (i): by (ii) a set of Lagrange interpolation polynomials exists, and by (iii) any two such sets differ by vanishing polynomials, i.e., by linear combinations of $f_1(x,y),\cdots,f_M(x,y)$, so the set is unique in the quotient space.
The proof is complete.
\end{proof}
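A small numerical sketch of these equivalences in Python (the point set below is an illustrative assumption): we build $V$ in the monomial order displayed above, check statement (iv), and produce the coefficients of one Lagrange set via a right inverse of $V$.
\begin{verbatim}
import numpy as np

# Five illustrative points and degree d = 2, so (d+1)(d+2)/2 = 6 >= N = 5.
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.5, 2.0)]
N, d = len(pts), 2

# Monomials 1, x, ..., x^d, y, x y, ..., x^{d-1} y, ..., y^d.
monos = [(i, j) for j in range(d + 1) for i in range(d + 1 - j)]
V = np.array([[x ** i * y ** j for (i, j) in monos] for (x, y) in pts])

assert np.linalg.matrix_rank(V) == N      # statement (iv)
L = np.linalg.pinv(V)                     # coefficients of one Lagrange set
print(np.allclose(V @ L, np.eye(N)))      # statement (ii): V L = I
\end{verbatim}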
\section{Existence of bivariate Lagrange basis polynomials}
Given $x_0>x_1>\cdots>x_n$ and $y_0>y_1>\cdots>y_{n+\sigma}$ where $n$ and $\sigma$ are nonnegative integers, we define two sets of checkerboard nodes
\begin{align}
S_0&=\{(x_r,y_u):~0\le r\le n,~0\le u\le n+\sigma,~r+u~\text{even}\},\label{S0}\\
S_1&=\{(x_r,y_u):~0\le r\le n,~0\le u\le n+\sigma,~r+u~\text{odd}\},\label{S1}
\end{align}
which consist of $N_0$ and $N_1$ nodes, respectively.
It is easily seen that $N_0+N_1=(n+1)(n+\sigma+1)$.
Moreover, we have $N_0-N_1=1$ and $N_\tau=[(n+1)(n+\sigma+1)+1]/2-\tau$ if both $n$ and $\sigma$ are even, while $N_0=N_1=(n+1)(n+\sigma+1)/2$ if either $n$ or $\sigma$ is odd.
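These node counts are easy to confirm programmatically; the following plain Python sketch checks the formulas for a few assumed values of $n$ and $\sigma$.
\begin{verbatim}
# Count the checkerboard nodes in S_0 and S_1 for assumed n and sigma.
def checkerboard_counts(n, sigma):
    N = [0, 0]
    for r in range(n + 1):
        for u in range(n + sigma + 1):
            N[(r + u) % 2] += 1
    return N  # [N_0, N_1]

for n, sigma in [(2, 2), (3, 2), (2, 3), (4, 0)]:
    N0, N1 = checkerboard_counts(n, sigma)
    assert N0 + N1 == (n + 1) * (n + sigma + 1)
    print(n, sigma, N0, N1)
\end{verbatim}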
We need to find a set of bivariate Lagrange basis polynomials for $S_\tau$ with $\tau=0$ or $\tau=1$.
According to Lemma \ref{lem-existence}, there exist orthogonal polynomials $\{p_j(x)\}_{j=0}^n$ and $\{q_k(y)\}_{k=0}^{n+\sigma}$ such that
$p_0(x)=1$, $p_1(x)=a_0x+b_0$, $q_0(y)=1$, $q_1(y)=c_0y+d_0$, and
\begin{align}
p_{j+1}(x)+p_{j-1}(x)&=(a_jx+b_j)p_j(x),~~1\le j\le n-1,\\
q_{k+1}(y)+q_{k-1}(y)&=(c_ky+d_k)q_k(y),~~1\le k\le n+\sigma-1,
\end{align}
where $a_j>0$ for $0\le j\le n-1$, and $c_k>0$ for $0\le k\le n+\sigma-1$, and
\begin{align}
a_j=a_{n-j}, b_j=b_{n-j},&~~1\le j\le n-1,\label{a}\\
c_k=c_{n+\sigma-k}, d_k=d_{n+\sigma-k},&~~1\le k\le n+\sigma-1,\label{c}\\
p_{n-j}(x_r)=(-1)^rp_j(x_r),&~~0\le j,r\le n,\label{p}\\
q_{n+\sigma-k}(y_u)=(-1)^uq_k(y_u),&~~0\le k,u\le n+\sigma.\label{q}
\end{align}
For any $(x_r,y_u)\in S_\tau$ and $(x_s,y_v)\in S_\tau$, where $\tau$ is either $0$ or $1$, it is easily seen that $r+u+s+v$ is even.
Hence, we have from \eqref{p} and \eqref{q}
\begin{align}\label{pq}
p_j(x_r)p_i(x_s)q_k(y_u)q_l(y_v)=p_{n-j}(x_r)p_{n-i}(x_s)q_{n+\sigma-k}(y_u)q_{n+\sigma-l}(y_v),
\end{align}
for all $0\le j,i\le n$ and $0\le k,l\le n+\sigma$.
Moreover, we have the following Christoffel-Darboux formulas:
\begin{align}
(x_r-x_s)\sum_{j=0}^ia_jp_j(x_r)p_j(x_s)&=p_{i+1}(x_r)p_i(x_s)-p_i(x_r)p_{i+1}(x_s),\label{CD-x}\\
(y_u-y_v)\sum_{k=0}^lc_kq_k(y_u)q_k(y_v)&=q_{l+1}(y_u)q_l(y_v)-q_l(y_u)q_{l+1}(y_v),\label{CD-y}
\end{align}
for $0\le i\le n-1$ and $0\le l\le n+\sigma-1$.
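These are the classical Christoffel-Darboux identities, restated for the normalization used here; the short Python lines below spot-check \eqref{CD-x} at two assumed evaluation points for randomly drawn positive coefficients.
\begin{verbatim}
import numpy as np

# Random positive a_k (and arbitrary b_k) define p_k via the recurrence;
# the Christoffel-Darboux identity holds for any such family.
rng = np.random.default_rng(0)
a, b = rng.uniform(0.5, 2.0, 7), rng.uniform(-0.5, 0.5, 7)
p = [np.poly1d([1.0]), np.poly1d([a[0], b[0]])]
for k in range(1, 6):
    p.append(np.poly1d([a[k], b[k]]) * p[k] - p[k - 1])

x, y, i = 0.37, -1.21, 4
lhs = (x - y) * sum(a[j] * p[j](x) * p[j](y) for j in range(i + 1))
rhs = p[i + 1](x) * p[i](y) - p[i](x) * p[i + 1](y)
print(np.isclose(lhs, rhs))
\end{verbatim}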
Given any integers $0\le\delta\le\sigma-1$, $0\le s\le n$ and $0\le v\le n+\sigma$, we define the bivariate polynomial
\begin{equation}\label{Kd}
K_\delta(x,y;x_s,y_v)=\sum_{j=0}^{n-1}a_jp_j(x)p_j(x_s)\sum_{k=0}^{n-j+\delta}c_kq_k(y)q_k(y_v)\in\mathbf P_{n+\delta}(x,y).
\end{equation}
It is readily seen from \eqref{pq} and \eqref{CD-y} that
\begin{align*}
&(y_u-y_v)K_\delta(x_r,y_u;x_s,y_v)
\\=&\sum_{j=0}^{n-1}a_jp_j(x_r)p_j(x_s)[q_{n-j+\delta+1}(y_u)q_{n-j+\delta}(y_v)-q_{n-j+\delta}(y_u)q_{n-j+\delta+1}(y_v)]
\\=&\sum_{j=0}^{n-1}a_jp_{n-j}(x_r)p_{n-j}(x_s)[q_{j+\sigma-\delta-1}(y_u)q_{j+\sigma-\delta}(y_v)-q_{j+\sigma-\delta}(y_u)q_{j+\sigma-\delta-1}(y_v)]
\\=&\sum_{j=1}^na_{n-j}p_j(x_r)p_j(x_s)[q_{n-j+\sigma-\delta-1}(y_u)q_{n-j+\sigma-\delta}(y_v)-q_{n-j+\sigma-\delta}(y_u)q_{n-j+\sigma-\delta-1}(y_v)],
\end{align*}
and
\begin{align*}
&(y_u-y_v)K_{\sigma-\delta-1}(x_r,y_u;x_s,y_v)
\\=&\sum_{j=0}^{n-1}a_jp_j(x_r)p_j(x_s)[q_{n-j+\sigma-\delta}(y_u)q_{n-j+\sigma-\delta-1}(y_v)-q_{n-j+\sigma-\delta-1}(y_u)q_{n-j+\sigma-\delta}(y_v)].
\end{align*}
Adding the above two equations and making use of \eqref{a}, \eqref{pq} and \eqref{CD-y} yield
\begin{align*}
&(y_u-y_v)[K_\delta(x_r,y_u;x_s,y_v)+K_{\sigma-\delta-1}(x_r,y_u;x_s,y_v)]
\\=&a_0p_n(x_r)p_n(x_s)[q_{\sigma-\delta-1}(y_u)q_{\sigma-\delta}(y_v)-q_{\sigma-\delta}(y_u)q_{\sigma-\delta-1}(y_v)]
\\&~+a_0p_0(x_r)p_0(x_s)[q_{n+\sigma-\delta}(y_u)q_{n+\sigma-\delta-1}(y_v)-q_{n+\sigma-\delta-1}(y_u)q_{n+\sigma-\delta}(y_v)]
\\=&a_0p_n(x_r)p_n(x_s)[q_{\sigma-\delta-1}(y_u)q_{\sigma-\delta}(y_v)-q_{\sigma-\delta}(y_u)q_{\sigma-\delta-1}(y_v)]
\\&~+a_0p_n(x_r)p_n(x_s)[q_{\delta}(y_u)q_{\delta+1}(y_v)-q_{\delta+1}(y_u)q_{\delta}(y_v)]
\\=&-(y_u-y_v)a_0p_n(x_r)p_n(x_s)[\sum_{k=0}^{\sigma-\delta-1}c_kq_k(y_u)q_k(y_v)+\sum_{k=0}^{\delta}c_kq_k(y_u)q_k(y_v)],
\end{align*}
which can be written as
\begin{equation}\label{Gy}
(y_u-y_v)[K_\delta(x_r,y_u;x_s,y_v)+K_{\sigma-\delta-1}(x_r,y_u;x_s,y_v)+J(x_r,y_u;x_s,y_v)]=0,
\end{equation}
where
\begin{equation}\label{J}
J(x,y;x_s,y_v)=a_0p_n(x)p_n(x_s)[\sum_{k=0}^{\sigma-\delta-1}c_kq_k(y)q_k(y_v)+\sum_{k=0}^{\delta}c_kq_k(y)q_k(y_v)].
\end{equation}
Interchanging the double sum in \eqref{Kd} gives another expression
\begin{equation*}
K_\delta(x,y;x_s,y_v)=\sum_{k=0}^{n+\delta}c_kq_k(y)q_k(y_v)\sum_{j=0}^{\min\{n-1,n+\delta-k\}}a_jp_j(x)p_j(x_s).
\end{equation*}
It is readily seen from \eqref{c}, \eqref{pq} and \eqref{CD-x} that
\begin{align*}
&(x_r-x_s)K_\delta(x_r,y_u;x_s,y_v)
\\=&\sum_{k=1+\delta}^{n+\delta}c_kq_k(y_u)q_k(y_v)[p_{n-k+\delta+1}(x_r)p_{n-k+\delta}(x_s)-p_{n-k+\delta}(x_r)p_{n-k+\delta+1}(x_s)]
\\&+\sum_{k=0}^{\delta}c_kq_k(y_u)q_k(y_v)[p_{n}(x_r)p_{n-1}(x_s)-p_{n-1}(x_r)p_{n}(x_s)]
\\=:&A_\delta+B_\delta,
\end{align*}
where
\begin{align*}
A_\delta=&\sum_{k=1+\delta}^{n+\delta}c_{n+\sigma-k}q_{n+\sigma-k}(y_u)q_{n+\sigma-k}(y_v)[p_{k-\delta-1}(x_r)p_{k-\delta}(x_s)-p_{k-\delta}(x_r)p_{k-\delta-1}(x_s)]
\\=&\sum_{k=\sigma-\delta}^{n+\sigma-\delta-1}c_kq_k(y_u)q_k(y_v)[p_{n-k+\sigma-\delta-1}(x_r)p_{n-k+\sigma-\delta}(x_s)-p_{n-k+\sigma-\delta}(x_r)p_{n-k+\sigma-\delta-1}(x_s)]
\\=&-A_{\sigma-\delta-1},
\end{align*}
and
\begin{align*}
B_\delta=&\sum_{k=0}^{\delta}c_kq_{n-k+\sigma}(y_u)q_{n-k+\sigma}(y_v)[p_0(x_r)p_1(x_s)-p_1(x_r)p_0(x_s)]
\\=&-(x_r-x_s)\sum_{k=0}^{\delta}c_kq_{n-k+\sigma}(y_u)q_{n-k+\sigma}(y_v)a_0p_0(x_r)p_0(x_s)
\\=&-(x_r-x_s)\sum_{k=0}^{\delta}c_kq_k(y_u)q_k(y_v)a_0p_n(x_r)p_n(x_s).
\end{align*}
Hence,
$$(x_r-x_s)[K_\delta(x_r,y_u;x_s,y_v)+K_{\sigma-\delta-1}(x_r,y_u;x_s,y_v)]=B_\delta+B_{\sigma-\delta-1}.$$
Recalling the definition of $J$ in \eqref{J}, we obtain
\begin{equation}\label{Gx}
(x_r-x_s)[K_\delta(x_r,y_u;x_s,y_v)+K_{\sigma-\delta-1}(x_r,y_u;x_s,y_v)+J(x_r,y_u;x_s,y_v)]=0.
\end{equation}
Finally, we choose $\delta=\lfloor\sigma/2\rfloor$ and define the bivariate polynomial
\begin{align}\label{G}
G(x,y;x_s,y_v)=&K_\delta(x,y;x_s,y_v)+K_{\sigma-\delta-1}(x,y;x_s,y_v)+J(x,y;x_s,y_v)
\notag\\=&\sum_{j=0}^{n-1}a_jp_j(x)p_j(x_s)[\sum_{k=0}^{n-j+\delta}c_kq_k(y)q_k(y_v)
+\sum_{k=0}^{n-j+\sigma-\delta-1}c_kq_k(y)q_k(y_v)]
\notag\\&+a_0p_n(x)p_n(x_s)[\sum_{k=0}^{\sigma-\delta-1}c_kq_k(y)q_k(y_v)+\sum_{k=0}^{\delta}c_kq_k(y)q_k(y_v)].
\end{align}
It is obvious that $G\in\mathbf P_{n+\delta}(x,y)$ and $G(x_s,y_v;x_s,y_v)>0$. Coupling \eqref{Gy} and \eqref{Gx} gives $G(x_r,y_u;x_s,y_v)=0$ if either $x_r\neq x_s$ or $y_u\neq y_v$.
We summarize our results in the following theorem.
\begin{thm}\label{thm-existence}
Let $x_0>x_1>\cdots>x_n$ and $y_0>y_1>\cdots>y_{n+\sigma}$ be given, where $n$ and $\sigma$ are nonnegative integers.
Let $S_\tau$ with either $\tau=0$ or $\tau=1$ be defined in \eqref{S0} or \eqref{S1}.
Set $\delta=\lfloor\sigma/2\rfloor$.
There exists a set of bivariate Lagrange basis polynomials in $\mathbf P_{n+\delta}(x,y)$ for $S_\tau$ which can be defined as
\begin{equation}\label{L}
L(x,y;x_s,y_v)=G(x,y;x_s,y_v)/G(x_s,y_v;x_s,y_v),
\end{equation}
for each $(x_s,y_v)\in S_\tau$,
where $G$ is defined in \eqref{G}.
\end{thm}
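As a concrete check of Theorem \ref{thm-existence}, one may take $p_j$ and $q_k$ to be Chebyshev polynomials of the first kind, for which all of the above conditions hold with $a_0=c_0=1$, all other $a_j=c_k=2$, $b_j=d_k=0$, $x_r=\cos(r\pi/n)$ and $y_u=\cos(u\pi/(n+\sigma))$. The Python sketch below (the values of $n$, $\sigma$ and $\tau$ are arbitrary assumptions) evaluates $G$ from \eqref{G} and verifies the Lagrange property of \eqref{L} on $S_\tau$.
\begin{verbatim}
import numpy as np

n, sigma, tau = 3, 2, 0
delta = sigma // 2
T = lambda k, t: np.cos(k * np.arccos(np.clip(t, -1.0, 1.0)))
a = lambda j: 1.0 if j == 0 else 2.0   # c_k coincides with a_j here

x = np.cos(np.arange(n + 1) * np.pi / n)
y = np.cos(np.arange(n + sigma + 1) * np.pi / (n + sigma))
S = [(r, u) for r in range(n + 1) for u in range(n + sigma + 1)
     if (r + u) % 2 == tau]

def G(xx, yy, xs, yv):  # the polynomial G of equation (G)
    inner = lambda top: sum(a(k) * T(k, yy) * T(k, yv)
                            for k in range(top + 1))
    val = sum(a(j) * T(j, xx) * T(j, xs)
              * (inner(n - j + delta) + inner(n - j + sigma - delta - 1))
              for j in range(n))
    return val + a(0) * T(n, xx) * T(n, xs) * (inner(sigma - delta - 1)
                                               + inner(delta))

for (s, v) in S:  # L(., .; x_s, y_v) should be 1 at (s, v), 0 elsewhere
    row = [G(x[r], y[u], x[s], y[v]) / G(x[s], y[v], x[s], y[v])
           for (r, u) in S]
    assert np.allclose(row, [float((r, u) == (s, v)) for (r, u) in S])
print("Lagrange property verified on", len(S), "checkerboard nodes")
\end{verbatim}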
\section{Uniqueness of bivariate Lagrange basis polynomials}
We use the same notations as in the previous section.
To prove that the set of bivariate Lagrange basis polynomials constructed in \eqref{L} is unique in a certain quotient space of $\mathbf P_{n+\delta}(x,y)$,
we only need to find $M=(n+\delta+1)(n+\delta+2)/2-N_\tau$ linearly independent bivariate vanishing polynomials in $\mathbf P_{n+\delta}(x,y)$, where $\delta=\lfloor\sigma/2\rfloor$ and $N_\tau$ is the number of nodes in $S_\tau$ with $\tau=0$ or $\tau=1$. In other words, we shall construct a linear subspace $Q$ of bivariate vanishing polynomials in $\mathbf P_{n+\delta}(x,y)$ with dimension $M$ and then apply Theorem \ref{thm-uniqueness}.
First, we introduce the linear subspace of bivariate vanishing polynomials:
\begin{equation}\label{V}
V=\spa\{(x-x_0)\cdots(x-x_n)x^jy^k,~~j\ge0,~k\ge0,~j+k\le \delta-1\}.
\end{equation}
It is obvious that $V$ is a subspace of $\mathbf P_{n+\delta}(x,y)$ with dimension $\delta(\delta+1)/2$.
We shall consider the following three cases respectively.
\begin{enumerate}[{Case} I.]
\item $\sigma=2\delta+1$ is odd.
We have $N_0=N_1=(n+1)(n+\sigma+1)/2$ and
$$M={(n+\delta+1)(n+\delta+2)\over2}-{(n+1)(n+\sigma+1)\over2}={\delta(\delta+1)\over2}$$
for either $\tau=0$ or $\tau=1$.
We simply set
\begin{equation}\label{Q1}
Q=V.
\end{equation}
\item $\sigma=2\delta$ is even and $n=2m-1$ is odd.
We have $N_0=N_1=(n+1)(n+\sigma+1)/2$ and
$$M={(n+\delta+1)(n+\delta+2)\over2}-{(n+1)(n+\sigma+1)\over2}={\delta(\delta+1)\over2}+m$$
for either $\tau=0$ or $\tau=1$.
Note from \eqref{p} and \eqref{q} that
$$p_{n-j}(x_r)q_{j+\delta}(y_u)=(-1)^{r+u}p_j(x_r)q_{n+\delta-j}(y_u)=(-1)^\tau p_j(x_r)q_{n+\delta-j}(y_u),$$
for any $(x_r,y_u)\in S_\tau$.
We define
\begin{equation}\label{Q2}
Q=V+\spa\{p_{n-j}(x)q_{j+\delta}(y)-(-1)^\tau p_j(x)q_{n+\delta-j}(y),~~0\le j\le m-1\},
\end{equation}
which is a subspace of bivariate vanishing polynomials in $\mathbf P_{n+\delta}(x,y)$ with dimension $M=\delta(\delta+1)/2+m$.
\item $\sigma=2\delta$ is even and $n=2m$ is even.
We have $N_\tau=[(n+1)(n+\sigma+1)+1]/2-\tau$ and
\begin{align*}
M&={(n+\delta+1)(n+\delta+2)\over2}-{(n+1)(n+\sigma+1)+1\over2}+\tau
\\&={\delta(\delta+1)\over2}+m+\tau
\end{align*}
for $\tau=0,1$.
We define
\begin{equation}\label{Q3}
Q=V+\spa\{p_{n-j}(x)q_{j+\delta}(y)-(-1)^\tau p_j(x)q_{n+\delta-j}(y),~~0\le j\le m-1+\tau\},
\end{equation}
which is a subspace of bivariate vanishing polynomials in $\mathbf P_{n+\delta}(x,y)$ with dimension $M=\delta(\delta+1)/2+m+\tau$.
\end{enumerate}
On account of Theorem \ref{thm-uniqueness}, we have the following uniqueness property of bivariate Lagrange basis polynomials for $S_\tau$ with $\tau=0$ or $\tau=1$.
\begin{thm}
Let $x_0>x_1>\cdots>x_n$ and $y_0>y_1>\cdots>y_{n+\sigma}$ be given, where $n$ and $\sigma$ are nonnegative integers.
Let $S_\tau$ with either $\tau=0$ or $\tau=1$ be defined in \eqref{S0} or \eqref{S1}.
Set $\delta=\lfloor\sigma/2\rfloor$.
The set of bivariate Lagrange basis polynomials for $S_\tau$ defined in \eqref{L} is unique in the quotient space $\mathbf P_{n+\delta}(x,y)/Q$, where $Q$ is defined in \eqref{Q1}-\eqref{Q3} depending on the parities of $\sigma$ and $n$.
\end{thm}
\section*{Acknowledgment}
LC is partially supported by National Natural Science Foundation of China (No. 11571375), the Natural Science Funding of Shenzhen University (No. 2018073), and the Shenzhen Scientific Research and Development Funding Program (No. JCYJ20170302144002028).
| {
"timestamp": "2021-07-15T02:03:59",
"yymm": "2107",
"arxiv_id": "2107.06380",
"language": "en",
"url": "https://arxiv.org/abs/2107.06380",
"abstract": "In this paper, we derive an explicit formula for the bivariate Lagrange basis polynomials of a general set of checkerboard nodes. This formula generalizes existing results of bivariate Lagrange basis polynomials at the Padua nodes, Chebyshev nodes, Morrow-Patterson nodes, and Geronimus nodes. We also construct a subspace spanned by linearly independent bivariate vanishing polynomials that vanish at the checkerboard nodes and prove the uniqueness of the set of bivariate Lagrange basis polynomials in the quotient space defined as the space of bivariate polynomials with a certain degree by the subspace of bivariate vanishing polynomials.",
"subjects": "Classical Analysis and ODEs (math.CA)",
"title": "Bivariate Lagrange interpolation at the checkerboard nodes",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9808759632491111,
"lm_q2_score": 0.8056321796478255,
"lm_q1q2_score": 0.7902252402365417
} |
https://arxiv.org/abs/2011.10763 | Measuring Quadrangle Formation in Complex Networks | The classic clustering coefficient and the lately proposed closure coefficient quantify the formation of triangles from two different perspectives, with the focal node at the centre or at the end in an open triad respectively. As many networks are naturally rich in triangles, they become standard metrics to describe and analyse networks. However, the advantages of applying them can be limited in networks, where there are relatively few triangles but which are rich in quadrangles, such as the protein-protein interaction networks, the neural networks and the food webs. This yields for other approaches that would leverage quadrangles in our journey to better understand local structures and their meaning in different types of networks. Here we propose two quadrangle coefficients, i.e., the i-quad coefficient and the o-quad coefficient, to quantify quadrangle formation in networks, and we further extend them to weighted networks. Through experiments on 16 networks from six different domains, we first reveal the density distribution of the two quadrangle coefficients, and then analyse their correlations with node degree. Finally, we demonstrate that at network-level, adding the average i-quad coefficient and the average o-quad coefficient leads to significant improvement in network classification, while at node-level, the i-quad and o-quad coefficients are useful features to improve link prediction. | \section{Introduction}\label{sec:introduction}}
\input{1.intro}
\input{2.background}
\input{3.quadrangle-co}
\input{4.experiments}
\input{5.related_works}
\input{6.conclusion}
\bibliographystyle{IEEEtran}
\section{Background and Motivating Example} \label{sec: background}
This section first introduces the basic concepts such as the classic clustering coefficient and the recently proposed closure coefficient. We then illustrate how these coefficients are calculated in the case of a small-scale network that serves as an example.
\subsection{Clustering Coefficient} \label{sec:2.1}
The clustering coefficient, or more specifically the local clustering coefficient, was originally proposed in order to measure the cliquishness of a neighbourhood in networks \cite{watts1998collective}. It has since become one of the most commonly used metrics for network structure, together with such measures as degree distribution, path length, connected components, etc.
Let $G = (V,E)$ be an undirected graph on a node set $V$ (the number of nodes is $|V|$) and an edge set $E$ (the number of edges is $m$), without self-loops and multiple edges. We denote the set of neighbours of node $i$ as $N(i)$, and thus the degree of node $i$, denoted as $d_{i}$, equals $|N(i)|$. An open triad is a directionless length-2 path. For example, in an open triad $ijk$, where an edge connects node $i$ and $j$, and another edge connects node $j$ and $k$, we do not distinguish between path $i\rightarrow j \rightarrow k$ and path $k\rightarrow j \rightarrow i$.
For any node $i \in V$, its \textit{local clustering coefficient}, denoted $C(i)$, is defined as the number of triangles containing node $i$ (denoted $T(i)$), divided by the number of open triads with $i$ as the centre node (denoted $OTC(i)$):
\begin{equation}
C(i) =\frac{T(i)}{O T C(i)}=\frac{\frac{1}{2} \sum_{j \in N(i)}|N(i) \cap N(j)|}{\frac{1}{2} d_{i}\left(d_{i}-1\right)}.
\end{equation}
In other words, it is the fraction of open triads, where the focal node serves as the centre node, that actually form triangles. By definition, $C(i) \in [0,1]$.
In order to get a network-level measurement, the \textit{average clustering coefficient} is introduced by averaging the local clustering coefficient over all nodes (an undefined local clustering coefficient is treated as zero):
\begin{equation} \label{eq: avg_clu}
\overline{C}=\frac{1}{|V|} \sum_{i \in V} C(i).
\end{equation}
An alternative way to measure clustering at the network-level is the \textit{global clustering coefficient} \cite{newman2001random}, which is defined as the fraction of open triads that form triangles in the entire network:
\begin{equation}
\label{eqn_gcc}
C=\frac{\sum_{i \in V} \sum_{j \in N(i)}|N(i) \cap N(j)|}{\sum_{i \in V}d_{i}\left(d_{i}-1\right)}.
\end{equation}
Note that the global clustering coefficient is not equivalent to the average clustering coefficient. In Equation~\ref{eqn_gcc}, we count the triangles in the entire network and then divide by the number of open triads across the network. Since a node with high degree forms more open triads and also tends to form more triangles, the global clustering coefficient thus puts more weight on hub nodes. On the contrary, in Equation~\ref{eq: avg_clu}, we first sum the local clustering coefficients of all nodes and then divide by the number of nodes, which gives equal weight to each node.
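The difference matters in practice. The short snippet below (Python with \texttt{networkx}, an assumed setup) computes both quantities on a graph in which clique nodes and path nodes pull the two measures apart.
\begin{verbatim}
import networkx as nx

# Two 5-cliques joined by a 2-node path: the node-weighted average and
# the triad-weighted global coefficient differ on this graph.
G = nx.barbell_graph(5, 2)
print("average clustering:", nx.average_clustering(G))  # Equation (2)
print("global clustering: ", nx.transitivity(G))        # Equation (3)
\end{verbatim}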
\subsection{Closure Coefficient}
Different from the ordinary centre-node-based perspective of the clustering coefficient, another interesting measure of triangle formation, i.e., the local closure coefficient, has recently been proposed \cite{yin2019local}. The focal node in the closure coefficient serves as the end node of an open triad. As Yin et al.\ \cite{yin2019local} have revealed, this subtle difference in measurement leads to very different properties from those of the clustering coefficient.
Adopting the notations of Section~\ref{sec:2.1}, the local closure coefficient of node $i$, denoted $E(i)$, is defined as twice the number of triangles formed with $i$, divided by the number of open triads with $i$ as the end node (denoted $OTE(i)$):
\begin{equation} \label{eqn_lcc}
E(i)=\frac{2T(i)}{OTE(i)}=\frac{\sum_{j \in N(i)}|N(i) \cap N(j)|}{\sum_{j \in N(i)}(d_j-1)}.
\end{equation}
In other words, it is the fraction of open triads, where the focal node serves as the end node, that actually form triangles. $T(i)$ is multiplied by two because each triangle contains two open triads with $i$ as the end node. When a triangle is actually formed, the focal node can be viewed as the centre node in one open triad or as the end node in two open triads (Figure~\ref{fig:open_triad}). Obviously, $E(i) \in [0,1]$.
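Since, to our knowledge, \texttt{networkx} ships no built-in closure coefficient, a direct transcription of Equation~\ref{eqn_lcc} can serve as a reference sketch (assuming an undirected simple graph):
\begin{verbatim}
import networkx as nx

def closure_coefficient(G, i):
    # numerator: sum_j |N(i) & N(j)| = 2 T(i); denominator: OTE(i)
    two_triangles = sum(len(set(G[i]) & set(G[j])) for j in G[i])
    open_triads = sum(G.degree(j) - 1 for j in G[i])
    return two_triangles / open_triads if open_triads > 0 else 0.0

G = nx.karate_club_graph()
print(closure_coefficient(G, 0))
\end{verbatim}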
\begin{figure}[t]
\centerline{\includegraphics[scale = 1]{Figures/open_triads.pdf}}
\caption{Two types of open triads in triangle formation. Among three nodes $i$, $j$ and $k$, node $i$, painted in green, is the focal node.}
\label{fig:open_triad}
\vspace{-0mm}
\end{figure}
At the network-level, the \textit{average closure coefficient} is then defined as the mean of the local closure coefficient over all nodes (an undefined local closure coefficient is treated as zero):
\begin{equation}
\overline{E}=\frac{1}{|V|} \sum_{i \in V} E(i).
\end{equation}
Analogous to the global clustering coefficient (Equation~\ref{eqn_gcc}), the \textit{global closure coefficient}, denoted $E$, is defined as:
\begin{equation}
\label{eqn_gce}
E=\frac{\sum_{i \in V} \sum_{j \in N(i)}|N(i) \cap N(j)|}{{\sum_{i \in V} \sum_{j \in N(i)}(d_j-1)}}.
\end{equation}
The global closure coefficient (Equation~\ref{eqn_gce}) is actually equivalent to the global clustering coefficient (Equation~\ref{eqn_gcc}), since at the global level the distinction between the two positions of the focal node disappears.
\subsection{A motivating example}
\begin{figure*}[t]
\vspace{1mm}
\centerline{\includegraphics[scale = 1]{Figures/fig_example.pdf}}
\caption{A motivating example.}
\label{fig:ex}
\vspace{-0mm}
\end{figure*}
We illustrate how the two coefficients of triangle formation are calculated via a small yet real network.
Figure~\ref{fig:ex}a shows a simplified food web of the backwaters of Kerala, India \cite{qasim1970some}. It is composed of 9 nodes and 18 edges. Each node represents a species and each edge represents the flow of food energy from one species to another.
Figure~\ref{fig:ex}b gives a detailed table of the number of triangles $T(i)$, the number of centre-node-based open triads $OTC(i)$, the number of end-node-based open triads $OTE(i)$, the local clustering coefficient $C(i)$ and the local closure coefficient $E(i)$ for each node. Also, the last row gives the average clustering coefficient, the average closure coefficient and the global clustering/closure coefficient, all of which are around $0.20$.
Different from some triangle-rich networks, we find many more quadrangles than triangles in this food web (23 versus 4), which motivates us to propose measuring quadrangle formation instead. In the next section, new measures to quantify information about quadrangles in complex networks are proposed, and we show how we can leverage the fact that some networks are quadrangle-rich rather than triangle-rich.
\section{Two Quadrangle Coefficients} \label{sec: quadrangle_coefs}
The clustering coefficient and the closure coefficient provide us two ways of measuring triangle formation. In some networks, however, we care more about the formation of quadrangles. Also, triangles do not exist in bipartite networks, where the most basic enclosed structure is the quadrangle. In this section, we first propose two coefficients measuring quadrangle formation, based on two different positions of the focal node in an open quadriad. Then, we further extend them to weighted networks.
\subsection{I-quad coefficient}
Recall that an open quadriad is a directionless length-3 path (Figure~\ref{fig:one}d). In an open quadriad $ijkl$, for instance, where three edges exist between node pairs $(i, j)$, $(j, k)$ and $(k, l)$, we name nodes $j$ and $k$ the inner nodes. In contrast, nodes $i$ and $l$ are the outer nodes. Within the open quadriad, an inner node has a degree of two, and an outer node has a degree of one. Further, an open quadriad with the focal node acting as the inner node is called an inner-node-based open quadriad of that node, and an open quadriad with the focal node acting as the outer node is called an outer-node-based open quadriad of that node.
In comparison to the definition of clustering coefficient in measuring triangle formation, we propose the i-quad coefficient for measuring quadrangle formation. It is quantified as the fraction of inner-node-based open quadriads that actually form quadrangles. Concretely, the \textbf{\textit{i-quad coefficient}} of node $i$, denoted $I(i)$, is defined as twice the number of quadrangles formed with $i$ (denoted as $Q(i)$), divided by the number of open quadriads with $i$ as the inner node (denoted as $OQI(i)$):
\begin{equation} \label{eqn: i-quad}
\begin{split}
I (i) &
= \frac{2 Q(i)}{OQI(i)} \\
& = \frac{\sum_{j \in N(i)} \sum_{k \in(N(j)-i)}|N(k) \cap N(i)-j|}{\sum_{j \in N(i)} \sum_{k \in (N(j)-i)}|N(i)-j-k|}.
\end{split}
\end{equation}
In the above equation, $j$ is in $i$'s neighbour set, and $k$ is in $j$'s neighbour set excluding $i$. $Q(i)$ is multiplied by two because each quadrangle can be viewed as constructed from two open quadriads with $i$ as the inner node. By definition, it is obvious that $I(i) \in [0,1]$.
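A literal transcription of the above definition into Python (a minimal sketch, assuming an undirected simple \texttt{networkx} graph) reads:
\begin{verbatim}
def i_quad(G, i):
    num = den = 0
    for j in G[i]:
        for k in set(G[j]) - {i}:
            num += len((set(G[k]) & set(G[i])) - {j})  # |N(k) & N(i) - j|
            den += len(set(G[i]) - {j, k})             # |N(i) - j - k|
    return num / den if den > 0 else 0.0
\end{verbatim}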
Then, we define the \textbf{\textit{average i-quad coefficient}} at the network-level, as the mean of the i-quad coefficient over all nodes (undefined ones are treated as zeros):
\begin{equation}
\overline{I}=\frac{1}{|V|} \sum_{i \in V} I(i).
\end{equation}
In the case of a random network where each pair of nodes is connected with a probability $p$, the expected value of the average i-quad coefficient is also $p$, i.e., $\mathop{\mathbb{E}}[\overline{I}] = p$.
An alternative way of measuring quadrangle formation at the network-level is the \textbf{\textit{global i-quad coefficient}}, which is defined as the fraction of inner-node-based open quadriads that form quadrangles in the entire network:
\begin{equation}
\label{eqn_giq}
I = \frac{\sum_{i \in V} \sum_{j \in N(i)} \sum_{k \in(N(j)-i)}|N(k) \cap N(i)-j|}{{\sum_{i \in V} \sum_{j \in N(i)} \sum_{k \in (N(j)-i)}|N(i)-j-k|}}.
\end{equation}
The numerator of the above equation equals eight times the number of quadrangles in the entire network (each of the four nodes of a quadrangle contributes two counts), while the denominator equals twice the total number of open quadriads (each open quadriad has two inner nodes).
Although both the average i-quad coefficient and the global i-quad coefficient can be used as metrics to describe quadrangle formation in the entire network, they are calculated differently. The average i-quad coefficient adds up the i-quad coefficient of every node and then divides by the number of nodes, giving each node equal weight. In contrast, the global i-quad coefficient gives more weight to nodes that form numerous quadrangles, by first summing the numerators of the i-quad coefficients over all nodes and then dividing by the sum of the denominators.
\subsection{O-quad coefficient}
\begin{figure}[t]
\centerline{\includegraphics[scale = 0.95]{Figures/open_quadriads.pdf}}
\caption{Two types of open quadriads in a quadrangle. Node $i$, depicted in green, is the focal node, among four nodes $i$, $j$, $k$ and $l$.}
\label{fig:four}
\vspace{-0mm}
\end{figure}
Inspired by the closure coefficient for triangle formation, we move the focal node from the inner node to the outer node of an open quadriad, thus proposing the o-quad coefficient in order to measure quadrangle formation from a different perspective.
The significance of introducing the o-quad coefficient is twofold.
First, the o-quad coefficient takes into account length-$3$ paths emanating from the focal node, and therefore captures a wider view of the network structure.
Second, when a quadrangle is formed, the closing edge (the edge that closes the outer-node-based open quadriad) is incident to the focal node. This leads to some special properties, compared to the i-quad coefficient, where the closing edge is not incident to the focal node. We show in Section~\ref{sec: evaluation} that the cumulative distribution curve of the o-quad coefficient is above that of the i-quad coefficient, and that the o-quad coefficient tends to increase with node degree.
In a similar way, the \textbf{\textit{o-quad coefficient}} of node $i$, denoted as $O(i)$, is defined as the fraction of open quadriads with $i$ as the outer node that are closed:
\begin{equation} \label{eqn: o-quad}
\begin{split}
O (i) &
= \frac{2 Q(i)}{OQO(i)} \\
& = \frac{\sum_{j \in N(i)} \sum_{k \in(N(j)-i)}|N(k) \cap N(i)-j|}{\sum_{j \in N(i)} \sum_{k \in (N(j)-i)}|N(k)-j-i|},
\end{split}
\end{equation}
where $OQO(i)$ is the number of outer-node-based open quadriads of node $i$, and $Q(i)$ is the number of quadrangles containing $i$. $Q(i)$ is multiplied by two because each quadrangle contains two open quadriads with $i$ as the outer node. In a quadrangle, the focal node can serve as the inner node in two open quadriads or as the outer node in another two open quadriads (Figure~\ref{fig:four}). Obviously, $O(i) \in [0,1]$.
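The corresponding sketch for the o-quad coefficient only changes the denominator (same assumptions as for the \texttt{i\_quad} sketch above):
\begin{verbatim}
def o_quad(G, i):
    num = den = 0
    for j in G[i]:
        for k in set(G[j]) - {i}:
            num += len((set(G[k]) & set(G[i])) - {j})  # closed quadriads
            den += len(set(G[k]) - {j, i})             # |N(k) - j - i|
    return num / den if den > 0 else 0.0
\end{verbatim}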
In order to measure at the network level, the \textbf{\textit{average o-quad coefficient}} is defined by averaging the o-quad coefficient over all nodes (an undefined o-quad coefficient is treated as zero):
\begin{equation}
\overline{O}=\frac{1}{|V|} \sum_{i \in V} O(i).
\end{equation}
Analogous to the global i-quad coefficient, the \textbf{\textit{global o-quad coefficient}} can be defined as the fraction of outer-node-based open quadriads that form quadrangles in the entire network:
\begin{equation}
\label{eqn_goq}
O = \frac{\sum_{i \in V} \sum_{j \in N(i)} \sum_{k \in(N(j)-i)}|N(k) \cap N(i)-j|}{{\sum_{i \in V} \sum_{j \in N(i)} \sum_{k \in (N(j)-i)}|N(k)-j-i|}}.
\end{equation}
Just as the global closure coefficient is equivalent to the global clustering coefficient, this definition of the global o-quad coefficient is actually identical to the global i-quad coefficient (Equation~\ref{eqn_giq}), since at the global level the distinction between the two positions of the focal node disappears.
Revisiting the motivating example, Figure~\ref{fig:ex}c gives a detailed table of the number of quadrangles $Q(i)$, the number of inner-node-based open quadriads $OQI(i)$ and the number of outer-node-based open quadriads $OQO(i)$ of each node, based on which the i-quad coefficient $I(i)$ and the o-quad coefficient $O(i)$ are calculated. Also, the last row of this table gives the three network-level measures, i.e., the average i-quad coefficient, the average o-quad coefficient and the global i-quad/o-quad coefficient, all of which are more than $2.5$ times larger than the corresponding metrics measuring triangle formation.
\subsection{Quadrangle coefficients in weighted networks}
Until now, the discussion has been focused on binary networks, where the value of each link is either one or zero. In many networks, however, we need a more accurate representation of the relationships between nodes, such as the frequency of contact in a communication network, or the rating of a product given by a consumer in a recommender network, etc. This kind of information is usually expressed as a strength of the relationship and we use weighted networks to represent it. Therefore, we are interested in extending the two quadrangle coefficients to networks that allow for weights of the relationships.
Several versions of the weighted clustering coefficient have been proposed in order to measure triangle formation in weighted networks \cite{barrat2004architecture, onnela2005intensity, zhang2005general, saramaki2007generalizations}. For example, Onnela et al.\ \cite{onnela2005intensity} proposed to sum the geometric averages of the three weights of formed triangles, divided by the number of potential triangles. Alternatively, Zhang and Horvath \cite{zhang2005general} chose to simply sum the products of the three weights of formed triangles, divided by the sum over all open triads of the products of their two weights, which implicitly assigns the maximum weight to each triadic closing edge.
Adopting a strategy similar to the one proposed by Zhang and Horvath \cite{zhang2005general}, we introduce the weighted i-quad coefficient and the weighted o-quad coefficient to measure quadrangle formation in weighted networks. Let $G^\mathcal{W} = (V, E)$ be a weighted graph without self-loops and multiple edges. The weight of a link between any node $i$ and $j$ is denoted $w_{i j}$ ($w_{i j} \in [0,1]$ after normalisation by the maximum weight). For any node $i \in V$, the \textit{\textbf{weighted i-quad coefficient}}, denoted as $I^\mathcal{W}(i)$, and the \textbf{\textit{weighted o-quad coefficient}}, denoted as $O^\mathcal{W}(i)$, are defined as:
\begin{equation}
I^\mathcal{W}(i) = \frac{\sum\limits_{j \in N(i)} \sum\limits_{k \in(N(j)-i)} \sum\limits_{l \in(N(i) \cap N(k)-j)} w_{i j} w_{j k} w_{i l} w_{l k}}{\sum\limits_{j \in N(i)} \sum\limits_{k \in(N(j)-i)} \sum\limits_{l \in(N(i)-j-k)} w_{i j} w_{j k} w_{i l}},
\end{equation}
\begin{equation}
O^\mathcal{W}(i) = \frac{\sum\limits_{j \in N(i)} \sum\limits_{k \in(N(j)-i)} \sum\limits_{l \in(N(i) \cap N(k)-j)} w_{i j} w_{j k} w_{i l} w_{l k}}{\sum\limits_{j \in N(i)} \sum\limits_{k \in(N(j)-i)} \sum\limits_{l \in(N(k)-j-i)} w_{i j} w_{j k} w_{k l}}.
\end{equation}
When the graph becomes binary (unweighted), i.e., $w_{i j} = 1$, the above two weighted quadrangle coefficients degrade to their unweighted versions (Equation~\ref{eqn: i-quad} and Equation~\ref{eqn: o-quad}). The average weighted i-quad coefficient and the average weighted o-quad coefficient are then defined respectively as: $\overline{I^\mathcal{W}}=\frac{1}{|V|} \sum_{i \in V} I^\mathcal{W}(i)$, $\overline{O^\mathcal{W}}=\frac{1}{|V|} \sum_{i \in V} O^\mathcal{W}(i)$.
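For completeness, here is a sketch of the weighted i-quad coefficient (assuming weights stored under the \texttt{networkx} edge key \texttt{'weight'} and already normalised to $[0,1]$); the weighted o-quad coefficient differs only in its denominator:
\begin{verbatim}
def i_quad_weighted(G, i):
    w = lambda u, v: G[u][v]['weight']
    num = den = 0.0
    for j in G[i]:
        for k in set(G[j]) - {i}:
            for l in (set(G[i]) & set(G[k])) - {j}:   # closed quadriads
                num += w(i, j) * w(j, k) * w(i, l) * w(l, k)
            for l in set(G[i]) - {j, k}:              # open quadriads
                den += w(i, j) * w(j, k) * w(i, l)
    return num / den if den > 0 else 0.0
\end{verbatim}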
We can see from Figure~\ref{fig:weighted_quad} that the correlation between the i-quad coefficient and the weighted i-quad coefficient (and between the o-quad coefficient and the weighted o-quad coefficient) varies across weighted networks. In other words, when weights are considered in calculating quadrangle coefficients, the weighted i-quad coefficient and the weighted o-quad coefficient capture different information compared to their unweighted counterparts.
\begin{figure}[t]
\centerline{\includegraphics[scale = 0.24]{Figures/weighted_quads.pdf}}
\caption{Correlation of quadrangle coefficients and weighted quadrangle coefficients in three different networks. First row is the correlation of i-quad coefficient $I(i)$ and weighted i-quad coefficient $I^\mathcal{W}(i)$, second row is the correlation of o-quad coefficient $O(i)$ and weighted o-quad coefficient $O^\mathcal{W}(i)$. The weighted networks are: (1) the neural network of the Caenorhabditis elegans worm \cite{watts1998collective}; (2) the network of the $500$ busiest commercial airports in the United States\cite{colizza2007reaction}; (3) the social network of online community for students at University of California, Irvine\cite{opsahl2009clustering}.}
\label{fig:weighted_quad}
\vspace{-0mm}
\end{figure}
\subsection{Computational cost}
At the end of this section, we give a brief discussion of the computational efficiency of the above-mentioned metrics. From Equation~\ref{eqn: i-quad} and Equation~\ref{eqn: o-quad}, we can see that computing the i-quad coefficient or the o-quad coefficient of a single node costs $O({\langle k\rangle}^3)$, where $\langle k\rangle$ is the average degree of the network. Therefore, the cost of computing the two coefficients for every node in a network is $O(|V| \cdot {\langle k\rangle}^3)$. This might seem expensive. Fortunately, in most real-world networks, $\langle k\rangle$ is small, and therefore the computation of these proposed metrics is relatively fast even in large networks.
\section{Experiments and Analysis} \label{sec: evaluation}
In this section, we analyse the proposed quadrangle coefficients on different types of real-world networks and demonstrate their usage in some common applications\footnote{Our code is available at \url{https://github.com/MingshanJia/explore-local-structure}.}.
\subsection{Quadrangle coefficients in real-world networks}
\textbf{\textit{Datasets}.} We run experiments on 16 networks of six categories: \begin{enumerate}
\item Food webs. \textsc{FloridaDry}\cite{ulanowicz1999network, kunegis2013konect} and \textsc{LittleRock} \cite{martinez1991artifacts}: energy transfer relationships collected from the cypress wetlands of South Florida and the Little Rock Lake of Wisconsin. Nodes represent species and an edge denotes that one species feeds on another (edge direction and weight are ignored).
\item Social networks. \textsc{EmailEu}\cite{paranjape2017motifs, leskovec2016snap}: a temporal email network from a European research institution (a temporal edge denotes that an email is exchanged between two persons at a certain time); \textsc{ClgMsg}\cite{panzarasa2009patterns}: temporal online message interactions between UCIrvine college students (a temporal edge means that a message is exchanged between two students at a certain time); \textsc{BTCAlpha} \cite{kumar2018rev2}: a temporal who-trusts-whom network of users on a Bitcoin trading platform Bitcoin Alpha (edge direction and weight are ignored); \textsc{TwitchFr} \cite{rozemberczki2019multi}: a network of gamers who stream in French, where nodes are the users and edges are mutual friendships between them.
\item Protein-protein interaction networks. \textsc{Stelzl}\cite{stelzl2005human}, \textsc{Figeys}\cite{ewing2007large}, \textsc{Vidal}\cite{rual2005towards} and \textsc{IntAct}\cite{orchard2014mintact}: four networks of interactions between proteins in Homo sapiens. Nodes represent proteins and an edge denotes the physical contact between two proteins in the cell.
\item Citation networks. \textsc{DBLP}\cite{ley2002dblp} and \textsc{Cora}\cite{vsubelj2013model}: two academic publication citation networks. \textsc{DBLP} contains temporal information on edges. Nodes represent papers, and an edge means that one paper cites another paper (direction is ignored).
\item Infrastructure networks. \textsc{Rd-NewYork} and \textsc{Rd-BayArea}\cite{kunegis2013konect}: two road networks for New York City and San Francisco Bay Area. Nodes represent intersections and endpoints, and the roads connecting them are represented by edges.
\item Q$\&$A networks. \textsc{MathOvfl.} and \textsc{AskUbuntu}\cite{paranjape2017motifs}: two temporal Q$\&$A networks derived from Stack Exchange. Nodes represent users, and a temporal edge means that one user answers another user's question at a certain time (edge direction is ignored).
\end{enumerate}
\noindent\textbf{\textit{Observations}.}
Table~\ref{tab:dataset} lists some key statistics, including the proposed coefficients, of these networks. We observe that in most types of networks (except road networks), the average o-quad coefficient is smaller than the average i-quad coefficient. That is to say, for the majority of nodes in these types of networks, a smaller fraction of outer-node-based open quadriads are closed into quadrangles, compared with the fraction of inner-node-based open quadriads. This phenomenon is better revealed through the cumulative distribution function (Figure~\ref{fig:cdf}): the CDF curve of the o-quad coefficient is above that of the i-quad coefficient when the coefficient value is small (except in \textsc{Rd-NewYork}).
We can also observe that in all food webs, two PPI networks (\textsc{PPI-Stelzl} and \textsc{PPI-Figeys}) and all road networks, the average i-quad coefficient is larger than the average clustering coefficient ($\overline{I}>\overline{C}$); and the average o-quad coefficient is larger than the average closure coefficient ($\overline{O}>\overline{E}$). In other words, these networks are more inclined to form quadrangles than to form triangles, which leads us to the following experiments.
\begin{figure}[t]
\centerline{\includegraphics[scale = 0.47]{Figures/CDF.pdf}}
\caption{Cumulative distribution curve of the i-quad coefficient $I(i)$ (in green colour) and the o-quad coefficient $O(i)$ (in purple colour) in six real-world networks of different types.}
\label{fig:cdf}
\vspace{-0mm}
\end{figure}
\subsection{Correlation with node degree}
Since node degree is one of the most important and widely used concepts in network science, we study how the two quadrangle coefficients vary with it. We start by conducting an empirical analysis in real networks, followed by a theoretical justification under the degree-preserving random graph model.
We choose one network in each category and plot the correlation of quadrangle coefficients and degree (Figure~\ref{fig:corr_degree}). We observe a strong positive correlation between the o-quad coefficient and the node degree:
the average o-quad coefficient is small among nodes with small degree and becomes larger as the average node degree increases.
In contrast, the correlation between the i-quad coefficient and the degree is weak: the average i-quad coefficient is large (compared to the average o-quad coefficient) when the average node degree is small and does not change much as the average degree increases. Since most real-world networks are scale-free and exhibit a heavy-tailed degree distribution, this also explains why the average i-quad coefficient is bigger than the average o-quad coefficient in most networks studied in our work (Table~\ref{tab:dataset}).
To better understand the correlation between the quadrangle coefficients and the node degree, we give a theoretical explanation under the configuration model \cite{fosdick2018configuring}. Constrained by a given degree sequence, the configuration model generates a network by placing edges between nodes uniformly at random. This can be achieved through a stub-matching process, in which the probability of forming an edge between node $i$ and node $j$ equals $d_i \cdot d_j /(2m)$ (assuming $d_{i}^{2} \leqslant 2 m$ for all $i$). Now we give the following proposition.
\begin{figure}[t]
\centerline{\includegraphics[scale = 0.47]{Figures/Corr_with_degree.pdf}}
\caption{Correlation of two quadrangle coefficients with node degree in six real-world networks. Nodes are grouped into logarithmic bins in ascending order by degree, then average i-quad and o-quad coefficients are calculated in each bin.}
\label{fig:corr_degree}
\vspace{-0mm}
\end{figure}
\begin{prop} \label{prop}
Let $V$ be a set of $n$ nodes with specific degrees $d_1, d_2, ..., d_n$, on which graph $G$ is generated from the configuration model. Let $m=\frac{1}{2} \sum_{i=1}^{n} d_{i}$ denote the number of edges and $\bar{k}=(\sum_{i} d_{i}^{2}) /(\sum_{i} d_{i})$ be the expected degree when a node is chosen with probability proportional to its degree. As $n \rightarrow \infty$, for any node $i \in V$, its local i-quad coefficient satisfies:
\begin{equation*}
\mathbb{E}[I(i)]=\frac{(\bar{k}-1)^2}{2 m},
\end{equation*}
and its local o-quad coefficient satisfies:
\begin{equation*}
\mathbb{E}[O(i)]=\frac{(d_{i}-1) \cdot (\bar{k}-1)}{2 m} .
\end{equation*}
\end{prop}
\begin{proof}
For any open quadriad with node $i$ as an inner node, we denote one outer node by $k$ and another outer node by $l$ (Figure~\ref{fig:proof}a). The probability that this open quadriad is closed equals the probability of having an edge between node $k$ and $l$, which is $\left(d_{k}-1\right)\left(d_{l}-1\right) / 2 m$ in the configuration model. The reason for subtracting $1$ from $d_k$ and $d_l$ is that one stub of node $k$ (and node $l$) has already been used in forming the open quadriad.
Now, we show that as $n \rightarrow \infty$, $\mathbb{E}\left[d_{k}\right]=\mathbb{E}\left[d_{l}\right]=\bar{k}$. Via stub matching, any node, other than node $i$ and $j$, can form an edge with node $j$ and thus become one outer node of the open quadriad. The probability of node $k$ being this node is proportional to its degree, which is $\frac{d_{k}}{\sum_{k\in V, k \neq i, j} d_{k}}$. Therefore, we have $\mathbb{E}\left[d_{k}\right]=\sum_{k \in V, k \neq i, j} d_{k} \cdot \frac{d_{k}}{\sum_{k\in V, k \neq i, j} d_{k}}$. When $n \rightarrow \infty$, $\mathbb{E}\left[d_{k}\right]=\sum_{k \in V} d_{k} \cdot \frac{d_{k}}{\sum_{k\in V} d_{k}}=\bar{k}$. Similarly, we have $\mathbb{E}\left[d_{l}\right]=\bar{k}$.
In short, we have:
\begin{equation*}\begin{aligned}
\mathbb{E}[I(i)] &=\mathbb{E}\left[\left(d_{k}-1\right)\left(d_{l}-1\right) /(2 m)\right] \\
& = \frac{(\mathbb{E}\left[d_{k}\right]-1) \cdot (\mathbb{E}\left[d_{l}\right]-1)}{2 m} =\frac{(\bar{k}-1) ^ 2}{2 m}.
\end{aligned}\end{equation*}
Likewise, for any open quadriad with node $i$ as an outer node, we denote the other outer node by $l$ (Figure~\ref{fig:proof}b).
And we have:
\begin{equation*}\begin{aligned}
\mathbb{E}[O(i)] &=\mathbb{E}\left[\left(d_{i}-1\right)\left(d_{l}-1\right) /(2 m)\right] \\
& = \frac{(d_{i}-1) \cdot (\mathbb{E}\left[d_{l}\right]-1)}{2 m} =\frac{(d_{i}-1) \cdot (\bar{k}-1)}{2 m}.
\end{aligned}\end{equation*}
\end{proof}
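Proposition~\ref{prop} is easy to probe numerically. The sketch below (assuming the \texttt{i\_quad} function given earlier, and a degree sequence chosen purely for illustration) builds a configuration-model graph with \texttt{networkx}; removing self-loops and parallel edges introduces a small bias, so only approximate agreement should be expected.
\begin{verbatim}
import numpy as np
import networkx as nx

deg = [3] * 200 + [10] * 50          # assumed degree sequence (even sum)
G = nx.Graph(nx.configuration_model(deg, seed=1))  # collapse multi-edges
G.remove_edges_from(nx.selfloop_edges(G))

m = G.number_of_edges()
kbar = sum(d * d for _, d in G.degree()) / (2 * m)
print("predicted E[I(i)]: ", (kbar - 1) ** 2 / (2 * m))
print("measured mean I(i):", np.mean([i_quad(G, v) for v in G]))
\end{verbatim}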
\begin{figure}[t]
\centerline{\includegraphics[scale = 0.75]{Figures/proof.pdf}}
\caption{Two types of quadrangle formation via stub matching. (a) Quadrangle is potentially formed with the focal node $i$ acting as the inner node. The closing edge is between node $k$ and $l$. (b) Quadrangle is potentially formed with the focal node $i$ acting as the outer node. The closing edge is between node $i$ and $l$.}
\label{fig:proof}
\vspace{-0mm}
\end{figure}
Although Proposition~\ref{prop} is given under the configuration model, we see from Figure~\ref{fig:corr_degree} that this property is well preserved in most real-world networks. The exception is road networks, i.e., \textsc{Rd-NewYork} and \textsc{Rd-BayArea}, where the average i-quad coefficient and the average o-quad coefficient are very similar (Table~\ref{tab:dataset}) and exhibit similar correlations with node degree. This is because the variance of node degree is extremely small (less than one) in this type of network, resulting in $d_i$ close to $\bar{k}$, and thus $\mathbb{E}[O(i)]$ close to $\mathbb{E}[I(i)]$.
\subsection{Network classification}
In this section, we exhibit how useful the proposed quadrangle coefficients are in classifying different types of networks. Previous works have shown that the normalized numbers of triads and triangles (the triad significance profile \cite{milo2004superfamilies} and clustering signatures \cite{ahnert2008clustering}) are effective attributes in a network classification task. This motivated us to use the two quadrangle coefficients in network classification, as they represent a normalized number of quadrangles.
We can see in Table~\ref{tab:dataset} that the quotient of the average i-quad coefficient and the average clustering coefficient ($\overline{I}/\overline{C}$), and the quotient of the average o-quad coefficient and the average closure coefficient ($\overline{O}/\overline{E}$), contrast strongly across different types of networks. It is thus intuitive to expect that the two quadrangle coefficients can add useful discriminative information, on top of the average clustering coefficient and the average closure coefficient, for improving the network classification accuracy.
\begin{figure*}[t]
\centerline{\includegraphics[scale = 0.42]{Figures/network_classification.pdf}}
\caption{Two-dimensional visualisation of K-means clustering on PCA-reduced data, with and without quadrangle coefficients (left figure and right figure respectively). Six clusters are labelled from $1$ to $6$, and painted in sequential colours. Centroids of clusters are marked as black crosses. Data points are plotted in different shapes and colours representing their ground truth categories, as shown in the legend.}
\label{fig:classification}
\vspace{-0mm}
\end{figure*}
\noindent\textbf{\textit{Setup}.}
We first prepare the data by choosing five features of the networks, i.e., the average node degree $\langle k\rangle$, the average clustering coefficient $\overline{C}$, the average closure coefficient $\overline{E}$, the average i-quad coefficient $\overline{I}$, and the average o-quad coefficient $\overline{O}$. Then we employ the K-means clustering algorithm to partition all $16$ networks of our dataset into $6$ clusters. The initial centroids are chosen randomly, and we repeat the algorithm $1000$ times with different sets of initial centroids, returning the best result in terms of V-measure score \cite{rosenberg2007v}. The maximum number of iterations for a single run is set to $300$. To compare, we run the experiment with the same setting but with only three features, i.e., without the two quadrangle coefficients.
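A schematic of this setup in Python with \texttt{scikit-learn} (variable names are hypothetical; note that \texttt{KMeans} selects the best of its \texttt{n\_init} runs by inertia, whereas we report the best run in terms of V-measure, so this is only a simplification):
\begin{verbatim}
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.metrics import homogeneity_completeness_v_measure

def cluster_networks(X, y_true):
    # X: 16 x 5 feature matrix; y_true: ground-truth category labels
    km = KMeans(n_clusters=6, init='random',
                n_init=1000, max_iter=300).fit(X)
    scores = homogeneity_completeness_v_measure(y_true, km.labels_)
    X2d = PCA(n_components=2).fit_transform(X)  # 2-D visualisation
    return km.labels_, scores, X2d
\end{verbatim}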
\begin{table}[ht]
\setlength{\tabcolsep}{7pt}
\centering
\caption{Homogeneity (Homo.), completeness (Compl.) and V-measure score of the K-means clustering on $16$ real-world networks, with and without the quadrangle coefficients (first row and second row respectively).}
\label{tab:network_classification}
\begin{tabular}{lccc}
\toprule
Features & Homo. & Compl. & V-measure \\
\midrule
with quadrangle coefs. & 0.810 & 0.879 & 0.826 \Tstrut \\
without quadrangle coefs. & 0.731 & 0.766 & 0.745 \Tstrut \\
\bottomrule
\end{tabular}
\end{table}
\noindent\textbf{\textit{Results and discussion}.}
The classification results measured in homogeneity, completeness and V-measure score are given in Table~\ref{tab:network_classification}. Homogeneity measures whether the samples of a single class belong to a single cluster; completeness measures whether all members of a class are assigned to the same cluster; the V-measure score is the harmonic mean of homogeneity and completeness. We observe a significant improvement (more than $10\%$ in homogeneity and V-measure score, and nearly $15\%$ in completeness) after adding the two quadrangle coefficients. In order to better analyse the results, we adopt the Principal Component Analysis algorithm to compress the data to a two-dimensional space, and thus visualise the classification results (Figure~\ref{fig:classification}).
We can see from Figure~\ref{fig:classification}(a) that with the two quadrangle coefficients, the labellings of food webs (cluster $1$), PPI networks (cluster $3$), road networks (cluster $2$) and QA networks (cluster $6$) are perfect. The only shortcoming is that the model cannot properly separate social networks from citation networks (cluster $5$ contains three social networks and two citation networks, while cluster $4$ has only one social network --- \textsc{Soc-EmailEu}). In contrast, when the quadrangle coefficients are excluded from the model, the majority of data points are congregated at the left part of the space, resulting in a worse classification result, as shown in Figure~\ref{fig:classification}(b). Only two types of networks are labelled perfectly (food webs in cluster $2$ and QA networks in cluster $5$). The remaining four types of networks are poorly clustered, especially in cluster $3$, which contains data points of all four categories. This experiment shows that adding the quadrangle coefficients significantly improves the ability to tell apart different types of real-world networks, especially those rich in quadrangles.
\subsection{Link prediction}
As two new metrics measuring quadrangle formation, the i-quad coefficient and the o-quad coefficient provide additional topological features for node-level network analysis and inference. As an example, we show their utility in missing-link prediction, where adding them brings a significant improvement.
Many studies have shown that the common-neighbours index and its variants, such as the Adamic-Adar index and the resource allocation index, perform well in the link prediction problem \cite{liben2007link, adamic2003friends, zhou2009predicting}. Besides, the clustering coefficient and the closure coefficient have proven to be useful features for improving the performance \cite{al2006link, yin2019local}. Therefore, we use these five features as the baseline features in our prediction model, and then test the performance of adding the proposed i-quad and o-quad coefficients. XGBoost, a gradient-boosted-trees model, is used as the prediction model due to its speed and performance.
\noindent\textbf{\textit{Setup}.}
We model a network as a graph $G = (V,E)$. For networks having timestamps on edges, we order the edges according to their appearing times and select the first $70\%$ of edges and the related nodes to form an \say{old graph}, denoted $G_{old} = (V^*, E_{old})$. For networks without timestamps, we randomly shuffle the edges before the partition, and we repeat this $100$ times in order to assess the variance and reduce the impact of a single partition on the possible conclusions. The remaining $30\%$ of edges, filtered by the node set $V^*$, form a \say{new graph}, denoted $G_{new} = (V^*, E_{new})$. The test set is built from node pairs that appear in the old graph but are not linked there. Each such pair of nodes constitutes a positive or a negative example depending on whether a link between them appears in the new graph.
The training set is built on the old graph, on which we fit four XGBoost models with four sets of features: 1) the baseline feature set, which includes common neighbours, Adamic-Adar, resource allocation, the clustering coefficient and the closure coefficient; 2) the baseline features plus the i-quad coefficient; 3) the baseline features plus the o-quad coefficient; 4) the baseline features plus both the i-quad and o-quad coefficients. Then we evaluate their prediction performances on the test set. For large networks ($|V| > 10K$), we perform randomised breadth-first-search sampling \cite{doerr2013metric} of $3K$ nodes on the original graph and repeat this 10 times.
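A schematic of the per-pair feature construction and model fitting (assuming the \texttt{closure\_coefficient}, \texttt{i\_quad} and \texttt{o\_quad} sketches above; the pairing of node features into sums is our illustrative choice, not necessarily the exact encoding used in the experiments):
\begin{verbatim}
import numpy as np
import networkx as nx
from xgboost import XGBClassifier

def pair_features(G, u, v):
    cn = set(G[u]) & set(G[v])                       # common neighbours
    aa = sum(1.0 / np.log(G.degree(w)) for w in cn
             if G.degree(w) > 1)                     # Adamic-Adar
    ra = sum(1.0 / G.degree(w) for w in cn)          # resource allocation
    return [len(cn), aa, ra,
            nx.clustering(G, u) + nx.clustering(G, v),
            closure_coefficient(G, u) + closure_coefficient(G, v),
            i_quad(G, u) + i_quad(G, v),             # added features
            o_quad(G, u) + o_quad(G, v)]

# model = XGBClassifier().fit(np.array(X_train), np.array(y_train))
\end{verbatim}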
\begin{table}[t!]
\vspace{0mm}
\centering
\caption{Test set performance comparison measured in ROC-AUC score of four XGBoost models with different features. Second column lists the scores with baseline features (BL) , third column adds i-quad coefficient to baseline features, fourth column adds o-quad coefficient to baseline features, and fifth column adds both i-quad and o-quad coefficients to baseline features. An improvement of more than $2\%$ is put in bold type, and an improvement of more than $5\%$ is indicated by dagger. Last row gives the average (over the datasets) ranking of the four models for comparison, where smaller is better. A model receives rank $1$ if it has the highest ROC-AUC score, rank $2$ if it has the second highest, and so on. If two models share the best score, they both get rank $1.5$, and so on. The best ranking is put in bold italic.}
\label{tab:link_pred_roc}
\setlength{\tabcolsep}{4.5pt}
\def\arraystretch{1.2}
\begin{tabular}{lcccc}
\toprule
Network & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}w/ baseline\\ features (BL) \end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}add I(i)\\ to BL\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}add O(i)\\ to BL\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}add I(i) \& \\ O(i) to BL\end{tabular}} \\
\midrule
\textsc{FW-FloridaDry} & 0.6703 & 0.6779 & 0.6834 & \textbf{0.6886} \\
\textsc{FW-LittleRock} & 0.8077 & \textbf{0.8357} & \textbf{0.8421} & \textbf{0.8521$^\dagger$} \\
\midrule
\textsc{Soc-EmailEu$^{\tau}$} & 0.9076 & 0.9070 & 0.9090 & 0.9084 \\
\textsc{Soc-ClgMsg$^{\tau}$} & 0.7831 & 0.7873 & 0.7879 & 0.7920 \\
\textsc{Soc-BTCAlpha$^{\tau}$} & 0.8588 & 0.8601 & 0.8679 & 0.8697 \\
\textsc{Soc-TwitchFr} & 0.9160 & 0.9176 & 0.9192 & 0.9202 \\
\midrule
\textsc{PPI-Stelzl} & 0.6565 & \textbf{0.7778$^\dagger$} & \textbf{0.7809$^\dagger$} & \textbf{0.7764$^\dagger$} \\
\textsc{PPI-Figeys} & 0.8171 & \textbf{0.8644$^\dagger$} & \textbf{0.8668$^\dagger$} & \textbf{0.8650$^\dagger$}\\
\textsc{PPI-Vidal} & 0.7566 & \textbf{0.7973$^\dagger$} & \textbf{0.8009$^\dagger$} & \textbf{0.7992$^\dagger$}\\
\textsc{PPI-IntAct} & 0.8524 & \textbf{0.8808} & \textbf{0.8839} & \textbf{0.8842}\\
\midrule
\textsc{Cit-DBLP$^{\tau}$} & 0.7294 & 0.7261 & 0.7336 & 0.7310 \\
\textsc{Cit-Cora} & 0.8700 & 0.8705 & 0.8726 & 0.8734 \\
\midrule
\textsc{Rd-NewYork} & 0.5268 & \textbf{0.5529} & \textbf{0.5538$^\dagger$} & \textbf{0.5538$^\dagger$} \\
\textsc{Rd-BayAera} & 0.5218 & \textbf{0.5353} & \textbf{0.5353} & \textbf{0.5356} \\
\midrule
\textsc{QA-MathOvfl.$^{\tau}$} & 0.8546 & 0.8554 & 0.8541 & 0.8551 \\
\textsc{QA-AskUbuntu$^{\tau}$} & 0.8746 & 0.8791 & 0.8765 & 0.8777 \\
\midrule
\midrule
\textbf{Avg. ranking} & 3.8 & 2.8 & 1.9 & \textit{\textbf{1.5}} \\
\bottomrule
\end{tabular}
\vspace{-1mm}
\end{table}
\noindent\textbf{\textit{Results and discussion}.} Since network link prediction is a highly unbalanced task, we choose the ROC-AUC score as the metric and report the prediction results on the test set in Table~\ref{tab:link_pred_roc}. First, we find that adding only the i-quad coefficient (\nth{3} column) or only the o-quad coefficient (\nth{4} column) leads to improvement in most networks (except \textsc{Soc-EmailEu} and \textsc{Cit-DBLP} when the i-quad coefficient is added, and \textsc{QA-MathOvfl.} when the o-quad coefficient is added). When both quadrangle coefficients are added to the baseline features (\nth{5} column), performance improves in all networks. The average ranking (last row) also shows that adding both the i-quad and o-quad coefficients at the same time gives the best performance overall.
Second, we find that the improvement is particularly significant in food webs, protein-protein interaction networks and road networks (more than $2\%$ in all eight networks of these three types, and more than $5\%$ in five of them when both quadrangle coefficients are added). The common characteristic of these types of networks is that they tend to have larger quadrangle coefficients than clustering and closure coefficients. We also notice that adding only the o-quad coefficient performs better than adding only the i-quad coefficient in most networks (except the two Q\&A networks), an interesting phenomenon for further study.
\section{Additional Related Work} \label{sec: related_works}
\begin{figure}[t]
\centerline{\includegraphics[scale = 0.75]{Figures/related_work.pdf}}
\caption{An example of all five coefficients measuring quadrangle formation for node $i$. $H(i)$ is the higher-order clustering coefficient proposed by Fronczak et al.\cite{fronczak2002higher}; $S^L(i)$ is the square clustering coefficient proposed by Lind et al.\cite{lind2005cycles}; $S^Z(i)$ is another square clustering coefficient proposed by Zhang et al.\cite{zhang2008clustering}; $I(i)$ and $O(i)$ are the two quadrangle coefficients proposed by us.}
\label{fig:related_work}
\vspace{-0mm}
\end{figure}
We now recapitulate some additional related work that proposed other metrics to measure quadrangle formation in networks. Fronczak et al.\cite{fronczak2002higher} proposed a higher-order clustering coefficient for random networks. It is defined as $C_{i}(x)=\frac{2 E_{i}(x)}{k_{i}\left(k_{i}-1\right)}$, where $i$ is the focal node and $x$ is the path length. $E_{i}(x)$ denotes the number of $x$-length paths between the neighbours of $i$. When $x$ equals $2$, this definition deals with the formation of quadrangles. The limitation of this definition is that the normalisation only takes the degree of the focal node $i$ into account while neglecting the degrees of $i$'s neighbours. Since each pair of neighbours can have multiple length-$2$ paths between them, the clustering value can be larger than one. A sketch of this coefficient is given below.
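For concreteness, the following minimal Python sketch computes $C_i(2)$ on a \textsc{networkx} graph. Whether length-$2$ paths passing through the focal node itself are counted is not fixed by the definition above; excluding them is our interpretation.
\begin{verbatim}
# Sketch of Fronczak et al.'s C_i(2); paths through the focal node i
# are excluded here (an interpretation, not fixed by the definition).
import itertools
import networkx as nx

def higher_order_clustering(G, i):
    nbrs = list(G[i])
    k = len(nbrs)
    if k < 2:
        return 0.0
    paths2 = sum(len((set(G[m]) & set(G[n])) - {i})
                 for m, n in itertools.combinations(nbrs, 2))
    return 2 * paths2 / (k * (k - 1))   # may exceed 1 (see text)
\end{verbatim}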
Lind et al.\cite{lind2005cycles} later proposed a square clustering coefficient in the context of bipartite networks that takes the degrees of the neighbours into consideration, in other words, the length-$2$ paths starting from the focal node. It is defined as $C_{4, m n}(i)=\frac{q_{i m n}}{\left(k_{m}-\eta_{i m n}\right)\left(k_{n}-\eta_{i m n}\right)+q_{i m n}}$, where $m$ and $n$ are a pair of neighbours of the focal node $i$, and $q_{i m n}$ denotes the number of squares containing the three nodes. What is uncommon about this definition is that it deems squares to be formed via node overlapping, which is not a standard approach. Zhang et al.\cite{zhang2008clustering} then modified the equation and proposed another, more standard square clustering coefficient for bipartite networks: $C_{4, m n}(i)=\frac{q_{i m n}}{\left(k_{m}-\eta_{i m n}\right)+\left(k_{n}-\eta_{i m n}\right)+q_{i m n}}$. However, neither of these definitions introduces a notion of open quadriad, and the normalisation is thus based on the number of squares.
Our proposed i-quad and o-quad coefficients differ from the previous works in that 1) the scope of the o-quad coefficient is larger, since it takes into account length-$3$ paths emanating from the focal node, whereas the square clustering coefficients only count length-$2$ paths in the normalisation; 2) our two coefficients view a formed quadrangle as being built from two open quadriads, which conforms with the classic clustering and closure coefficients (in their definitions a formed triangle is viewed as being built from open triads); 3) the two quadrangle coefficients are proposed for general unipartite networks, on which multiple experiments are conducted. In Figure~\ref{fig:related_work}, we provide a small example illustrating the three coefficients proposed in previous works and the two quadrangle coefficients proposed by us.
\section{Conclusion} \label{sec: conclusion}
In this paper, we introduced the i-quad coefficient and the o-quad coefficient to measure quadrangle formation in networks, according to the location of the focal node in an open quadriad. We also extended them to weighted networks. Through experiments on $16$ real-world networks from six domains, we revealed that 1) in most types of networks, the average o-quad coefficient is smaller than the average i-quad coefficient; 2) in food webs, protein-protein interaction networks and road networks, the i-quad and o-quad coefficients are larger than the clustering and closure coefficients, respectively; 3) the o-quad coefficient tends to increase with node degree, while the i-quad coefficient does not change much as the node degree increases.
We also demonstrated that including the two coefficients leads to improvement in both network-level and node-level analysis tasks, such as network classification and link prediction. The improvement is especially significant in food webs, protein-protein interaction networks and road networks in the link prediction task.
In the future, we plan to further consider the dynamics of time-varying networks and the link directions of directed networks when measuring quadrangle formation. Due to the simplicity and interpretability of their definitions, we anticipate that the i-quad and o-quad coefficients will become standard descriptive features and be incorporated into other network mining tasks.
\ifCLASSOPTIONcompsoc
\section*{Acknowledgments}
This work was supported by the Australian Research Council, grant No. DP190101087: \say{Dynamics and Control of Complex Social Networks}.
\else
\section*{Acknowledgment}
\fi
| {
"timestamp": "2020-11-24T02:08:51",
"yymm": "2011",
"arxiv_id": "2011.10763",
"language": "en",
"url": "https://arxiv.org/abs/2011.10763",
"abstract": "The classic clustering coefficient and the lately proposed closure coefficient quantify the formation of triangles from two different perspectives, with the focal node at the centre or at the end in an open triad respectively. As many networks are naturally rich in triangles, they become standard metrics to describe and analyse networks. However, the advantages of applying them can be limited in networks, where there are relatively few triangles but which are rich in quadrangles, such as the protein-protein interaction networks, the neural networks and the food webs. This yields for other approaches that would leverage quadrangles in our journey to better understand local structures and their meaning in different types of networks. Here we propose two quadrangle coefficients, i.e., the i-quad coefficient and the o-quad coefficient, to quantify quadrangle formation in networks, and we further extend them to weighted networks. Through experiments on 16 networks from six different domains, we first reveal the density distribution of the two quadrangle coefficients, and then analyse their correlations with node degree. Finally, we demonstrate that at network-level, adding the average i-quad coefficient and the average o-quad coefficient leads to significant improvement in network classification, while at node-level, the i-quad and o-quad coefficients are useful features to improve link prediction.",
"subjects": "Social and Information Networks (cs.SI)",
"title": "Measuring Quadrangle Formation in Complex Networks",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9693241974031599,
"lm_q2_score": 0.8152324938410783,
"lm_q1q2_score": 0.7902245827894797
} |
https://arxiv.org/abs/2205.12316 | Upper bounds for the product of element orders of finite groups | Let $G$ be a finite group of order $n$, and denote by $\rho(G)$ the product of element orders of $G$. The aim of this work is to provide some upper bounds for $\rho(G)$ depending only on $n$ and on its least prime divisor, when $G$ belongs to some classes of non-cyclic groups. | \section{Introduction}
\noindent It is well-known that the behaviour of element orders strongly affects the structure of a periodic group. For instance, in the finite case, a group can be characterized by looking at its element orders, as happens for the group \mbox{PSL}$(2,q)$ with $q \neq 9$; see \cite{BW}.
Therefore, it is natural to study the set $\omega(G)=\{o(x) \ | \ x \in G \}$ of a given periodic group, where $o(x)$ denotes the order of $x \in G$. To give an example, if $\omega(G)\subseteq\{1,2,3,4\}$, the group $G$ is locally finite \cite{sanov}. For a survey on this topic see \cite{HLM1}.
Furthermore, one can impose some arithmetic conditions on element orders of a subset $S$ of a group $G$ to obtain information about the subgroup generated by $S$, see for instance \cites{BM,BMS, BS, MT}. Moreover, some recent criteria for solvability, nilpotency and other properties of finite groups $G$, based either on the orders of the elements of $G$ or on the orders of the subgroups of $G$ have been described in \cite{HLM5}.
Another direction is to consider functions depending on the element orders. In \cite{AJI}, H. Amiri et al. introduced the function $\psi(G)$, which denotes the sum of element orders of a finite group $G$. They proved that if $G$ is a non-cyclic group of order $n$, then $\psi(G) < \psi(C_n)$, where $C_n$ denotes the cyclic group of order $n$ (see also \cite{AJ} and \cite{Jafarian}). Recently, this last result has been improved by M. Herzog et al., who proved in \cite{HLM} that if $G$ is a finite non-cyclic group of order $n$ then $\psi(G) \leq \frac{7}{11}\psi(C_n)$, showing also that this bound is the best possible; see also \cites{HLM2, HLM3, HLM4}.
In the present work, we denote by $\rho(G)$ the product of element orders of a finite group $G$. Some finite groups can be recognized by the product of their element orders, like the groups $PSL(2,7)$ and $PSL(2,11)$, while the groups $PSL(2,5)$ and $PSL(2,13)$ are uniquely determined by their orders and the product of element orders, see \cite{AK2}. However, one can easily see that in general the knowledge of $\rho(G)$ is not sufficient to recognize the group $G$, even when its order is known. Indeed, if we denote by $S_4$ the symmetric group of degree $4$, and by $D_{12}$ the dihedral group of order $12$, then $S_4$ and $C_2 \times D_{12}$ have the same order and $\rho(S_4)=\rho(C_2 \times D_{12})$. This motivates the study of conditions under which $\rho(G)$ determines the structure of the group $G$.
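The stated equality is easy to verify by direct computation: the element orders of $S_4$ give $\rho(S_4)=2^9\cdot 3^8\cdot 4^6=2^{21}\cdot 3^8$, and the same value is obtained for $C_2\times D_{12}$. The following \textsc{SymPy} sketch (an illustration, not part of the argument) performs the check.
\begin{verbatim}
# Direct check that rho(S_4) = rho(C_2 x D_12) (a sketch, using SymPy).
from sympy.combinatorics.named_groups import (SymmetricGroup,
                                              CyclicGroup, DihedralGroup)
from sympy.combinatorics.group_constructs import DirectProduct

def rho(G):
    """Product of the orders of all elements of G."""
    r = 1
    for x in G.elements:
        r *= x.order()
    return r

S4 = SymmetricGroup(4)
G  = DirectProduct(CyclicGroup(2), DihedralGroup(6))  # D_12 has order 12
print(S4.order() == G.order() == 24)   # True
print(rho(S4) == rho(G))               # True: both equal 2**21 * 3**8
\end{verbatim}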
The function $\rho$ was studied by M. Garonzi and M. Patassini in \cite{GP}, where they proved that $\rho(G) \leq \rho(C_n)$ for every finite group $G$ of order $n$, and that $\rho(G) = \rho(C_n)$ if and only if $G\simeq C_n$. Later, M. T\u{a}rn\u{a}uceanu \cite{tarnauceanu} studied $\rho(G)$ for finite abelian groups $G$, showing that two finite abelian groups of the same order are isomorphic if and only if they have the same product of element orders.
The aim of this paper is to provide more information about the function $\rho$. We consider some classes of non-cyclic finite groups, and we determine upper bounds for the product of element orders depending only on the order of the group and on the smallest prime dividing it.
Our first result reads as follows.
\begin{theorem}\label{mainthm}
Let $G$ be a non-cyclic supersoluble group of order $n$. If either $G$ is nilpotent or $G$ is not metacyclic, then
\[
\rho(G) \leq q^{-\frac{n}{q} (q-1)}\rho(C_{n}),
\]
where $q$ is the smallest prime dividing $n$.
\end{theorem}
We recall that a group $G$ is said to admit a {\it Sylow tower} if there exists a normal series
\[
1 = G_0 \leq G_1 \leq \cdots \leq G_n=G
\]
such that each $G_{i+1}/G_i$ is isomorphic to a Sylow subgroup of $G$ for every $i \in \{0, \ldots, n-1\}$.
As a consequence of Theorem~\ref{mainthm} we obtain the following bound for groups admitting a Sylow tower.
\begin{corollary}
Let $G$ be a non-cyclic group of order $n$ admitting a Sylow tower.
If $q$ is the smallest prime dividing $n$, then
\[
\rho(G) \leq q^{-q} \rho(C_n).
\]
\end{corollary}
Going further, we show that such a bound holds for other classes of groups. Indeed, if $G$ is a non-cyclic group of order $n$ and $q$ is the smallest prime dividing $n$, we show that
\[
\rho(G) \leq q^{-q} \rho(C_n),
\]
when either $n=p^{\alpha}q^{\beta}$, where $p > q$ are primes (Theorem \ref{thm:pq}), or $G$ is a Frobenius group (Proposition \ref{frobenius}).
\section{Preliminary results}
In this section we recall some results concerning the function $\rho$, then we compute a bound for the product of element orders of a group with a normal Sylow $p$-subgroup, and finally we show the main result in the case of non-cyclic nilpotent groups.
The following results from \cite{tarnauceanu} will be useful in what follows.
\begin{lem}\cite[Prop.~1.1] {tarnauceanu}\label{rhocoprime}
Let $n \geq 1$, and let $H_1,\ldots,H_n$ be finite groups with pairwise coprime orders. Then
\[
\rho\left( H_1 \times \cdots \times H_n \right)=\prod_{i=1}^{n}\rho(H_i)^{\prod_{j\neq i}|H_j|}.
\]
\end{lem}
As a consequence, and by using \cite[Theorem~1.1] {tarnauceanu}, one can readily see that the following corollary holds.
\begin{cor}\cite[Ex.~1.1] {tarnauceanu}\label{summ}
Let $C_n$ be a cyclic group of order $n$.
\begin{enumerate}[label=(\roman*)]
\item If $n=p^{\alpha}$ for some prime $p$, then
\[
\rho(C_n)= p^{\displaystyle \frac{\alpha p^{\alpha+1}-(\alpha+1)p^\alpha+1}{p-1}}.
\]
\item If $n=p_1^{\alpha_1}p_2^{\alpha_2}\cdots p_s^{\alpha_s}$, where $p_i$'s are distinct primes and $\alpha_i$'s are positive integers, then
\[
\rho(C_n)=\prod_{i=1}^{s}p_i^{\left(\displaystyle \frac{\alpha_i p_i^{\alpha_i+1}-(\alpha_i+1)p_i^{\alpha_i}+1}{p_i-1}\right){\displaystyle \frac{n}{p_i^{\alpha_i}}}}.
\]
\end{enumerate}
\end{cor}
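Both formulas follow from the fact that the order of $k$ in $C_n$ is $n/\gcd(n,k)$. The sketch below (our numerical check, not part of the paper) compares the closed form in (ii) against the direct product over all elements.
\begin{verbatim}
# Check of the closed formula for rho(C_n) above (a sketch).
from math import gcd, prod
from sympy import factorint

def rho_cyclic_direct(n):
    # the order of k in C_n is n / gcd(n, k); gcd(n, 0) = n gives order 1
    return prod(n // gcd(n, k) for k in range(n))

def rho_cyclic_formula(n):
    r = 1
    for p, a in factorint(n).items():
        e = (a * p**(a + 1) - (a + 1) * p**a + 1) // (p - 1)
        r *= p**(e * (n // p**a))
    return r

assert all(rho_cyclic_direct(n) == rho_cyclic_formula(n)
           for n in range(2, 200))
\end{verbatim}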
The following remarks will be useful in what follows.
\begin{rem}\label{rem:p}
Let $p$ be a prime and $\alpha \geq 1$. Then
\[
(p^{\alpha -1})^{p^{\alpha}} \leq \rho(C_{p^{\alpha}}) p^{-1}.
\]
If, additionally, $p$ is odd and $\alpha \geq 2$, we have
\[
(p^{\alpha -1})^{p^{\alpha}} \leq \rho(C_{p^{\alpha}})p^{-p}.
\]
\end{rem}
\begin{comment}
\begin{proof}
By Corollary~\ref{summ}(i), we only need to check that
\[
(p-1)(\alpha -1)p^{\alpha} \leq \alpha p^{\alpha+1} - (\alpha +1)p^{\alpha} +1 - p +1,
\]
which easily follows.
Now let $p$ be odd, and $\alpha > 1$. Again, by Corollary~\ref{summ}(i), we only need to check that
\[
(p - 1)(\alpha - 1)p^{\alpha} \leq \alpha p^{\alpha+1} - (\alpha +1)p^{\alpha} +1 - p^2+p.
\]
However, this is equivalent to
\[
p^{\alpha}(p -2) \geq p^2 - p - 1,
\]
which is true as $p >2$ and $\alpha >1$.
\end{proof}
\end{comment}
\begin{rem}\label{rem: q<p}
Let $p \geq q$ be positive integers. Then
\[
p^{-(p-1)/p} \leq q^{-(q-1)/q}.
\]
\end{rem}
\begin{comment}
\begin{proof}
We observe that the thesis is equivalent to show that
\[
p(q-1)\log_p(q)\leq q(p-1).
\]
Since $q\leq p$ then $p(q-1)\leq q(p-1)$. Thus we have
\[
p(q-1)\log_p(q)\leq p(q-1)\leq q(p-1).
\]
This completes the proof.
\end{proof}
\end{comment}
The next lemma collects results on the product of element orders of groups with a normal cyclic Sylow subgroup, that follow from Lemma~2.4 and Lemma~2.6 in \cite{AK}.
\begin{lem}\label{lem:mercede}
Let $p$ be a prime and let $G$ be a finite group satisfying $G = P \rtimes F$, where $P$ is a cyclic Sylow $p$-subgroup and $(p, |F|) = 1$. Write $Z=C_F(P)$. Then
\begin{itemize}
\item [(i)] $\rho(G) = \rho(P)^{|Z|} \rho(F)^{|P|}.$
\item [(ii)] $\rho(G) \mid \rho(P)^{|F|} \rho(F)^{|P|},$ with equality if and only if $C_F(P)=F$.
\end{itemize}
\end{lem}
When $C_F(P) \neq F$ we have the following result.
\begin{cor}\label{cor:mercede}
Let $p$ be a prime and let $G$ be a finite group satisfying $G = P \rtimes F$, where $P$ is a cyclic Sylow $p$-subgroup and $(p, |F|) = 1$. If $C_F(P) \neq F$, we have $$\rho(G) \leq q^{-q} \rho(C_{|G|}),$$ where $q$ is the smallest prime dividing $|G|$.
\end{cor}
\begin{proof}
Let $Z=C_F(P)$; by assumption, $Z \neq F$. Applying Lemma~\ref{lem:mercede} to $G$ and \cite{GP}*{Theorem 3} to $F$, we have
\begin{align*}
\rho(G) &= \rho(P)^{|Z|} \rho(F)^{|P|}\leq \rho(P)^{|Z|} \rho(C_{|F|})^{|P|} = \frac{1}{\rho(P)^{|F\setminus Z|}} \rho(C_{|G|}).
\end{align*}
Therefore we only need to check that $$q^q \leq \rho(P)^{|F\setminus Z|},$$ which is obviously true since $q \leq p$.
\end{proof}
It will also be useful to have information about $\rho(G)$ when $G$ has a non-cyclic normal Sylow subgroup.
\begin{pr}\label{nonciclico}
Let $p$ be a prime and let $G$ be a finite group satisfying $G = P \rtimes F$, where $P$ is a non-cyclic Sylow $p$-subgroup and $(p, |F|) = 1$. Then
$$\rho(G) \leq \left(\frac{|P|}{p}\right)^{|G|} \rho(F)^{|P|}.$$
In particular, $\rho(G) \leq q^{-q} \rho(C_{|G|}),$ where $q$ is the smallest prime dividing $|G|$.
\end{pr}
\begin{proof}
Since $P$ is not cyclic, $o(y) \leq \frac{|P|}{p}$ for every $y \in P$. Moreover, $G= \bigcup_{x \in F} Px$; hence, if $x \in F$ and $y \in P$, we have $(yx)^{o(x)} \in P$, so that $o(yx)$ divides $\frac{|P|}{p}o(x)$. It follows that
\[
\rho(G) = \prod_{x\in F}\prod_{y\in P} o(yx) \leq \prod_{x\in F}\prod_{y\in P} \frac{|P|}{p}\,o(x) = \left(\frac{|P|}{p}\right)^{|G|} \rho(F)^{|P|}.
\]
Finally, from Remark~\ref{rem:p} and \cite{GP}*{Theorem 3} we have
\[
\rho(G) \leq \left(\rho(C_{|P|})p^{-p}\right)^{|F|} \rho(C_{|F|})^{|P|} = p^{-p |F|} \rho(C_{|G|}) \leq p^{-p} \rho(C_{|G|}) \leq q^{-q} \rho(C_{|G|}).
\]
\end{proof}
For a non-cyclic nilpotent group we have the following result.
\begin{pr}\label{pr: nilpotent}
Let $G$ be a non-cyclic nilpotent group of order $n$ and let $q$ be the smallest prime dividing $n$. Then
\[
\rho(G) \leq q^{-\frac{n}{q} (q-1)}\rho(C_{n}).
\]
\end{pr}
\begin{proof}
We proceed by induction on the number of prime divisors of $n$. Assume first that there exists $\alpha\geq 1$ such that $n=q^{\alpha}$. As $G$ is non-cyclic, we have $o(x) \leq q^{\alpha -1}$ for every $x \in G$. Let $M$ be a maximal subgroup of $G$. Applying \cite{GP}*{Theorem 3} we have
\begin{align*}
\rho(G) = \rho(M)\prod_{x\in G\setminus M}o(x)\leq \rho(C_{q^{\alpha-1}})(q^{\alpha-1})^{q^{\alpha}-q^{\alpha-1}}.
\end{align*}
Since $\rho(C_{q^{\alpha}})=\rho(C_{q^{\alpha-1}})q^{\alpha(q^{\alpha}-q^{\alpha-1})}$, it follows that
\begin{align*}
\rho(G)\leq \rho(C_{q^{\alpha}})q^{-(q^{\alpha}-q^{\alpha-1})}=q^{-\frac{n}{q} (q-1)}\rho(C_{n}),
\end{align*}
and in this case the result follows.
Assume now that at least two primes divide $n$.
Since $G$ is not cyclic, there exists a non-cyclic Sylow $p$-subgroup $P$ such that $G$ can be written as $G=P\times H$ with $(|P|,|H|)=1$. By induction and applying \cite{GP}*{Theorem 3} to $H$, we have
\begin{align*}
\rho(G) &= \rho(P)^{|H|} \rho(H)^{|P|}\leq \left(p^{-\frac{|P|}{p}(p-1)}\rho(C_{|P|})\right)^{|H|}\rho(H)^{|P|}\\
&\leq p^{-\frac{|G|}{p}(p-1)}\rho(C_{|P|})^{|H|}\rho(C_{|H|})^{|P|}=p^{-\frac{n}{p}(p-1)}\rho(C_{|G|}),
\end{align*}
and the result follows from Remark~\ref{rem: q<p} as $q \leq p$.
\end{proof}
\section{Proof of Theorem~\ref{mainthm} and some applications}\label{main}
In this section we prove the main result of the paper. We recall that, for $p$ a prime, a finite group $G$ is \textit{$p$-nilpotent} if $G$ has a normal $p$-complement, i.e. there exist a normal subgroup $H$ and a Sylow $p$-subgroup $P$ of $G$ such that $HP = G$ and $H \cap P$ is trivial.
\begin{proof}[Proof of Theorem \ref{mainthm}]
By Proposition~\ref{pr: nilpotent}, we can assume that $G$ is a supersoluble non-metacyclic group of order $n$, and we proceed by induction on $n$. Let $p$ be the greatest prime dividing $n$; then there exist a Sylow $p$-subgroup $P$ of $G$ and a subgroup $H \leq G$ with $(|H|,p)=1$ such that $G=P \rtimes H$.
If $G= P \times H$, from Lemma~\ref{rhocoprime} it follows that $\rho(G)=\rho(P)^{|H|} \rho(H)^{|P|}$.
Since $G$ is non-cyclic, either $P$ or $H$ is not cyclic.
The result then follows by induction and Remark~\ref{rem: q<p}.
Therefore we can assume that $C_{H}(P) < H$. Since $G$ is not metacyclic, either $P$ or $H$ is not cyclic. If $P$ is cyclic, then $q\mid |H|$; otherwise $G$ would be $q$-nilpotent by \cite[10.1.9]{robinson}, which would imply $G=P\times H$.
As $H$ is not cyclic and $H$ is supersoluble, by induction and from Lemma~\ref{lem:mercede} we have
\begin{align*}
\rho(G)& \leq \rho(P)^{|H|} \rho(H)^{|P|}\leq \rho(C_{|P|})^{|H|} \left(q^{-\frac{|H|}{q}(q-1)} \rho(C_{|H|}) \right)^{|P|} = q^{-\frac{n}{q}(q-1)}\rho(C_{n}).
\end{align*}
Now suppose that $P$ is not cyclic. By Proposition~\ref{nonciclico}
\[
\rho(G) \leq \rho(H)^{|P|}\left(\frac{|P|}{p}\right)^{|G|}.
\]
We claim that
\[
\rho(H)^{|P|}\left(\frac{|P|}{p}\right)^{|G|} \leq q^{-\frac{|G|}{q} (q-1)}\rho(C_{|H|})^{|P|}\rho(C_{|P|})^{|H|}.
\]
If $H$ is not cyclic and $r \geq q$ is the least prime dividing $|H|$, then
\begin{align*}
\rho(H)^{|P|} \leq \left(r^{-\frac{|H|}{r}(r-1)}\right)^{|P|} \rho(C_{|H|})^{|P|} \leq q^{-\frac{n}{q}(q-1)} \rho(C_{|H|})^{|P|}
\end{align*}
by Remark~\ref{rem: q<p}.
Thus in order to prove the claim, we have to prove that
\[
\left(\frac{|P|}{p}\right)^{|G|} \leq \rho(C_{|P|})^{|H|}
\]
that is
\[
\left(\frac{|P|}{p}\right)^{|P|} \leq \rho(C_{|P|}).
\]
Expanding the values of $|P|$ and $ \rho(C_{|P|})$, we have to check the following inequality
\[
p^{(\alpha-1)p^{\alpha}} \leq p^{\frac{\alpha p^{\alpha +1}-(\alpha+1)p^{\alpha}+1}{p-1}}.
\]
Since $p \geq 2$, $p^{\alpha}(p-2)+1 \geq 0$ and we are done.
Suppose now that $P$ is non-cyclic and $H$ is cyclic.
In this case we have
\[
\rho(G) \leq \rho(H)^{|P|}\left(\frac{|P|}{p}\right)^{|G|} = \rho(C_{|H|})^{|P|}\left(\frac{|P|}{p}\right)^{|G|}.
\]
Thus we need to check that
\[
\rho(C_{|H|})^{|P|}\left(\frac{|P|}{p}\right)^{|G|} \leq q^{-\frac{|G|}{q} (q-1)}\rho(C_{|H|})^{|P|}\rho(C_{|P|})^{|H|},
\]
which is equivalent to proving that
\[
\left(\frac{|P|}{p}\right)^{|G|} \leq q^{-\frac{|G|}{q} (q-1)}\rho(C_{|P|})^{|H|}.
\]
Expanding all the values above, we have
\[
q^{\frac{p^{\alpha}(q-1)}{q}} \leq p^{\frac{p^{\alpha+1}-2p^{\alpha}+1}{p-1}},
\]
which reduces to proving that $p^{\alpha}(p-q-1)+q \geq 0$, which is true as $p>q\geq 2$. This completes the proof.
\end{proof}
\begin{proof}[Proof of Corollary B]
Since $G$ has a Sylow tower, there exist a prime $p$ and a Sylow $p$-subgroup $P$ of $G$ such that $G=P \rtimes H$, where $H \leq G$ with $(p, |H|)=1$. If $G= P \times H$, then $\rho(G)=\rho(P)^{|H|} \rho(H)^{|P|}$ and the result follows by induction on $|G|$. Therefore assume that $C_H(P) < H$. If $P$ is cyclic, the result follows from Corollary~\ref{cor:mercede}. If instead $P$ is not cyclic, the bound is obtained by Proposition~\ref{nonciclico}, and we are done.
\end{proof}
We finish the section dealing with groups of order $p^{\alpha}q^{\beta}$, where $p, q$ are primes with $p>q$.
\begin{thm}\label{thm:pq}
Let $G$ be a non-cyclic group of order $n=p^{\alpha}q^{\beta}$, where $p > q$ are primes. Then
\[
\rho(G) \leq q^{-q} \rho(C_n).
\]
\end{thm}
\begin{proof}
Assume by way of contradiction that $\rho(G) > q^{-q} \rho(C_n)$. Firstly we show that there exists $x \in G$ such that $o(x) > p^{\alpha -1}q^{\beta -1}$. Assume that $o(x) \leq p^{\alpha -1}q^{\beta -1}$ for every $x \in G$. Then we have
\[
\rho(G) \leq (p^{\alpha -1}q^{\beta -1})^{p^{\alpha} q^{\beta}}=((p^{\alpha -1})^{p^{\alpha}})^{q^{\beta}} ((q^{\beta -1})^{q^{\beta}})^{p^{\alpha}}.
\]
From Remark~\ref{rem:p} it follows that
\[
\rho(G) \leq (\rho(C_{p^{\alpha}})p^{-1})^{q^{\beta}} (\rho(C_{q^{\beta}})q^{-1})^{p^{\alpha}} \leq \rho(C_n) p^{-q^{\beta}}q^{-p^{\alpha}} \leq \rho(C_n) q^{-q},
\]
which yields a contradiction.
Thus there exists $x \in G$ such that $o(x) > p^{\alpha-1}q^{\beta-1}$, so that $|G: \langle x \rangle| < pq$. We distinguish two cases. If $p$ divides $|G: \langle x \rangle|$, then the only possibility is $|G: \langle x \rangle|=p$. As a consequence, $G$ has cyclic Sylow $q$-subgroups, and so $G$ is $q$-nilpotent by \cite[10.1.9]{robinson}. Therefore $G$ admits a Sylow tower, and the result follows from Corollary~B.
Suppose then that $p$ does not divide $|G: \langle x \rangle|$. In this case $|G: \langle x \rangle|=q^{\gamma}$, where $q^{\gamma-1}<p$ as $q^{\gamma}<pq$.
It follows that there exists a Sylow $p$-subgroup $P$ of $G$ such that $P \leq \langle x \rangle$. Then $\langle x \rangle \leq N_G(P)$ and $|G : N_G(P)|$ divides $q^{\gamma}$. On the other hand, $|G : N_G(P)| = 1 + kp$ for some $k \geq 0$. If $k = 0$, then $P$ is normal in $G$ and the result follows from Corollary~\ref{cor:mercede}. If $k > 0$, since $|G : N_G(P)| > p$ and $q^{\gamma-1}<p$, we have $|G : N_G(P)| =q^{\gamma}$ and $N_G(P) = \langle x \rangle$. Therefore $P \leq Z(N_G(P))$, and $G$ is $p$-nilpotent by \cite[10.1.8]{robinson}.
In this case $G = Q \rtimes P$, where $Q$ is a Sylow $q$-subgroup of $G$. Therefore $G$ has a Sylow tower, and applying Corollary~B again we are done. This completes the proof.
\end{proof}
\section{Product of element orders of a Frobenius group}
In the following, we estimate the product of element orders of a Frobenius group. We recall that a finite group $G$ is said to be a \textit{Frobenius group} if $G$ has a subgroup $H$ such that $H \cap H^x=1$ for all $x \in G \setminus H$. Frobenius proved that if $G$ is such a group, then
\[
N= G \setminus \bigcup_{x\in G} (H \setminus \{1\})^x
\]
is a normal subgroup of $G$, and $G=NH$ with $N \cap H=1$. In this case $H$ is called a \textit{Frobenius complement} and $N$ the \textit{Frobenius kernel}.
As a consequence, in a Frobenius group $G=NH$, we have $(|N|,|H|)=1$.
\begin{pr}\label{frobenius}
Let $G$ be a Frobenius group with Frobenius kernel $N$ and Frobenius complement $H$. Then
\[
\rho(G)=\rho(N)\rho(H)^{|N|}.
\]
In particular, if $G$ has order $n$ and $q$ is the smallest prime dividing $n$, we have
\[
\rho(G) \leq q^{-q}\rho(C_n).
\]
\end{pr}
\begin{proof}
As $G$ is a Frobenius group, it can be covered by its Frobenius kernel and all its Frobenius complements. Therefore
\[
\rho(G)=\rho(N)\rho(H)^{|N|}
\]
as $H$ has $|G:H|$ distinct conjugates in $G$. It follows that
$$
\rho(G)= \rho(N)\rho(H)^{|N|} \leq \rho\left(C_{|N|}\right)\rho\left(C_{|H|}\right)^{|N|}=\frac{\rho(C_n)}{\rho\left(C_{|N|}\right)^{|H|-1}}.
$$
Hence it suffices to show that
$$
q^q \leq \rho\left(C_{|N|}\right)^{|H|-1}.
$$
Since $(|N|,|H|)=1$, we distinguish cases: $q$ divides $|N|$, or $q$ divides $|H|$. Assume the former holds. Then we have
$$
q \leq |N| \leq \rho(N) \leq \rho(C_{|N|}).
$$
Since $q$ is the smallest prime dividing $|G|$, we have $q<|H|$. In other words, $q \leq |H|-1$, and the result follows.
Assume now that $q$ divides $|H|$. Then
\begin{equation}\label{case2}
q < \rho(N) \leq \rho(C_{|N|}).
\end{equation}
If $|H|>q$, then $|H|-1 \geq q$ and the inequality follows from Equation \eqref{case2}. If $|H|=q$, then every prime $p$ dividing $|N|$ is greater than $q$, and so $\rho(N)\geq p^2 >q^2$. Since $|H|-1=q-1$, we have
$$
\rho(C_{|N|})^{|H|-1}\geq \rho(N)^{q-1} \geq q^{2(q-1)} \geq q^q.
$$
This completes the proof.
\end{proof}
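As a quick sanity check of the factorisation $\rho(G)=\rho(N)\rho(H)^{|N|}$, consider the smallest Frobenius group $S_3=C_3\rtimes C_2$: here $\rho(N)=\rho(C_3)=9$, $\rho(H)=\rho(C_2)=2$, and indeed $\rho(S_3)=9\cdot 2^3=72$. The following \textsc{SymPy} sketch (our illustration) verifies this.
\begin{verbatim}
# Check rho(S_3) = rho(C_3) * rho(C_2)**3 for the Frobenius group S_3.
from sympy.combinatorics.named_groups import SymmetricGroup, CyclicGroup

def rho(G):
    r = 1
    for x in G.elements:
        r *= x.order()
    return r

S3, N, H = SymmetricGroup(3), CyclicGroup(3), CyclicGroup(2)
print(rho(S3) == rho(N) * rho(H)**N.order())   # True: 72 = 9 * 2**3
\end{verbatim}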
We conclude this section with an example in which we compute the product of element orders of a group obtained as the direct product of a Frobenius group and a cyclic group of coprime order.
\begin{example}
Let $n>5$, and let $G=F \times C$ be the direct product of a Frobenius group $F$, with cyclic kernel $N$ and cyclic complement $H$, and a cyclic group $C$ such that $(|F|, |C|)=1$ and $|F||C|=n$. Then
\[
\rho(G)=\frac{\rho(C_n)}{\rho(C_{|N|})^{|C|(|H|-1)}}.
\]
Indeed, by Lemma~\ref{rhocoprime}, and Proposition~\ref{frobenius}, we have
\begin{equation*}
\rho(G) =\rho(F)^{|C|}\rho(C)^{|F|}=\rho(N)^{|C|}\rho(H)^{|N||C|}\rho(C)^{|F|}.
\end{equation*}
Since $(|F|, |C|)=1$, applying Lemma~\ref{rhocoprime} it follows that
\begin{align*}
\rho(G) &=\rho(N)^{|C|} \rho(H)^{|N||C|}\rho(C)^{|F|} \\[2mm]
&=\frac{\rho(N)^{|C||H|} \rho(H)^{|N||C|}\rho(C)^{|F|}}{\rho(N)^{|C|(|H|-1)}}=\frac{\rho(C_{|F|})^{|C|} \rho(C)^{|F|}}{\rho(N)^{|C|(|H|-1)}} \\[2mm]
&=\frac{\rho(C_n)}{\rho(N)^{|C|(|H|-1)}},
\end{align*}
and we are done.
\end{example}
\subsection*{Acknowledgments}
The authors are grateful to professors G. A. Fern\'andez-Alcober, P. Longobardi, and M. Maj for interesting conversations.
\medskip
| {
"timestamp": "2022-05-26T02:01:40",
"yymm": "2205",
"arxiv_id": "2205.12316",
"language": "en",
"url": "https://arxiv.org/abs/2205.12316",
"abstract": "Let $G$ be a finite group of order $n$, and denote by $\\rho(G)$ the product of element orders of $G$. The aim of this work is to provide some upper bounds for $\\rho(G)$ depending only on $n$ and on its least prime divisor, when $G$ belongs to some classes of non-cyclic groups.",
"subjects": "Group Theory (math.GR)",
"title": "Upper bounds for the product of element orders of finite groups",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9899864308540234,
"lm_q2_score": 0.7981867801399695,
"lm_q1q2_score": 0.7901940816256334
} |
https://arxiv.org/abs/2012.02414 | Universal Approximation Property of Neural Ordinary Differential Equations | Neural ordinary differential equations (NODEs) is an invertible neural network architecture promising for its free-form Jacobian and the availability of a tractable Jacobian determinant estimator. Recently, the representation power of NODEs has been partly uncovered: they form an $L^p$-universal approximator for continuous maps under certain conditions. However, the $L^p$-universality may fail to guarantee an approximation for the entire input domain as it may still hold even if the approximator largely differs from the target function on a small region of the input space. To further uncover the potential of NODEs, we show their stronger approximation property, namely the $\sup$-universality for approximating a large class of diffeomorphisms. It is shown by leveraging a structure theorem of the diffeomorphism group, and the result complements the existing literature by establishing a fairly large set of mappings that NODEs can approximate with a stronger guarantee. | \section{Introduction}
\emph{Neural ordinary differential equations} (NODEs) \cite{ChenNeural2018a} are a family of deep neural networks that indirectly model functions by transforming an input vector through an ordinary differential equation (ODE).
When viewed as an invertible neural network (INN) architecture, NODEs have the advantage of having free-form Jacobian, i.e., it is invertible without restricting the Jacobian's structure, unlike other INN architectures \cite{PapamakariosNormalizing2019}.
For the out-of-box invertibility and the availability of a tractable unbiased estimator of the Jacobian determinant \cite{GrathwohlFFJORD2018}, NODEs have been used for constructing \emph{continuous normalizing flows} for generative modeling and density estimation \cite{ChenNeural2018a,GrathwohlFFJORD2018,FinlayHow2020}.
Recently, the representation power of NODEs has been partly uncovered in \citet{LiDeep2020}, namely, a sufficient condition for a family of NODEs to be an \emph{\(L^p\)-universal approximator} (see Definition~\ref{def: sup universality}) for continuous maps has been established.
However, the universal approximation property with respect to the $L^p$-norm can be insufficient as it does not guarantee an approximation for the entire input domain: $L^p$ approximation may still hold even if the approximator largely differs from the target function on a small region of the input space.
In this work, we elucidate that the NODEs are a $\sup$-universal approximator (Definition~\ref{def: sup universality}) for a fairly large class of \emph{diffeomorphisms}, i.e., smooth invertible maps with smooth inverse.
Our result establishes a function class that can be approximated using NODEs with a stronger guarantee than in the existing literature \cite{LiDeep2020}.
We prove the result by using a structure theorem of \emph{differential geometry} to represent a diffeomorphism as a finite composition of \emph{flow endpoints}, i.e., diffeomorphisms that are smooth transformations of the identity map.
The NODEs are themselves examples of flow endpoints, and we derive the main result by approximating the flow endpoints by the NODEs.
\section{Preliminaries and goal}
In this section, we define the family of NODEs considered in the present paper as well as the notion of universality.
\subsection{Neural ordinary differential equations (NODEs)}
Let $\mathbb{R}$ (resp. $\mathbb{N}$) denote the set of all real values (resp. all positive integers).
Throughout the paper, we fix $d \in \mathbb{N}$.
Let $\operatorname{Lip}(\mathbb{R}^d){}:=\{ f\colon \mathbb{R}^d\to \mathbb{R}^d\ |\ f \text{ is Lipschitz continuous} \}$.
It is known that any \emph{autonomous} ODE (i.e., one that is defined by a time-invariant vector field) with a Lipschitz continuous vector field has a solution and that the solution is unique:
\begin{fact}[Existence and uniqueness of a global solution to an ODE \cite{Derrickglobal1976}]\label{fact:ODE solution exists for Lip}
Let $f \in \operatorname{Lip}(\mathbb{R}^d){}$.
Then, a solution $z\colon \mathbb{R}\to \mathbb{R}^d$ to the following ordinary differential equation exists and it is unique:
\begin{equation}\label{eq:initial value problem}
z(0) = \mbox{\boldmath $x$}, \quad \dot{z}(t)= f(z(t)), \quad t \in \mathbb{R},
\end{equation}
where $\mbox{\boldmath $x$}\in \mathbb{R}^d$, and $\dot{z}$ denotes the derivative of $z$.
\end{fact}
In view of Fact~\ref{fact:ODE solution exists for Lip}, we use the following notation.
\begin{definition}
\label{def:ivp}
For $f \in \operatorname{Lip}(\mathbb{R}^d){}$, $\mbox{\boldmath $x$} \in \mathbb{R}^d$, and $t \in \mathbb{R}$, we define
\[
\IVP{f}{\mbox{\boldmath $x$}}{t} := z(t),
\]
where $z: \mathbb{R} \to \mathbb{R}^d$ is the unique solution to Equation~\eqref{eq:initial value problem}.
\end{definition}
\begin{definition}[Autonomous-ODE flow endpoints; \citet{LiDeep2020}]\label{def: autonomous ODE flow endpoints}
For $\mathcal{F}\subset \operatorname{Lip}(\mathbb{R}^d){}$, we define
\[
\ODEFlowEnds{\mathcal{F}}:= \{\IVP{f}{\cdot}{1} \ |\ f\in \mathcal{F}\}.
\]
\end{definition}
\begin{definition}[\(\INN{\HFNODE}\)]
Let $\mathrm{Aff}$ denote the group of all invertible affine maps on $\mathbb{R}^d$,
and let \(\mathcal{H} \subset \operatorname{Lip}(\mathbb{R}^d){}\). Define the invertible neural network architecture based on NODEs as
\[
\INN{\HFNODE} := \{W \circ \psi_{k} \circ \cdots \circ \psi_{1} \ |\ \psi_1, \ldots, \psi_k \in \ODEFlowEnds{\mathcal{H}{}}, W \in \mathrm{Aff}{}, k \in \mathbb{N}\}.
\]
\end{definition}
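For intuition, the following sketch (our illustration, not part of the formal development) realises a single element of $\ODEFlowEnds{\mathcal{H}}$ numerically with an off-the-shelf ODE solver, for the Lipschitz vector field $f=\tanh$ applied coordinate-wise; integrating the reversed field recovers the input, illustrating the out-of-box invertibility.
\begin{verbatim}
# Numerical sketch of the flow endpoint x -> Phi_f(x, 1) (an illustration).
import numpy as np
from scipy.integrate import solve_ivp

def node_endpoint(f, t1=1.0):
    """Return the map x -> z(t1), where z' = f(z), z(0) = x."""
    def phi(x):
        sol = solve_ivp(lambda t, z: f(z), (0.0, t1), x,
                        rtol=1e-9, atol=1e-9)
        return sol.y[:, -1]
    return phi

f   = np.tanh                       # a Lipschitz vector field on R^d
phi = node_endpoint(f)              # an element of Phi_ODE({f})
x   = np.array([0.5, -1.0])
y   = phi(x)
x_back = node_endpoint(lambda z: -f(z))(y)   # inverse: flow of -f, time 1
print(np.allclose(x_back, x, atol=1e-5))     # True
\end{verbatim}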
\subsection{Goal: the notions of universality and their relations}
Here, we define the notions of universality.
Let \(m, n \in \mathbb{N}\).
For a subset $K\subset\mathbb{R}^m$ and a map $f: K \to \mathbb{R}^n$, we define $\inftyKnorm{f}:=\sup_{x\in K}\Euclideannorm{f(x)}$, where $\|\cdot\|$ denotes the Euclidean norm.
Also, for a measurable map $f:\mathbb{R}^m\to\mathbb{R}^n$, a subset $K\subset\mathbb{R}^m$, and $p\in [1, \infty)$, we define \(\LpKnorm{f} := \left(\int_K \Euclideannorm{f(x)}^p dx\right)^{1/p}\).
\begin{definition}[\(\sup\)-universality and \(L^p\)-universality]
\label{def: sup universality}
Let $\mathcal{M}$ be a model, which is a set of measurable mappings from $\mathbb{R}^m$ to $\mathbb{R}^n$.
Let $\mathcal{F}$ be a set of measurable mappings $f:U_f\rightarrow\mathbb{R}^n$, where $U_f$ is a measurable subset of $\mathbb{R}^m$, which may depend on $f$.
We say that $\mathcal{M}$ is a \emph{$\sup$-universal approximator} or \emph{has the $\sup$-universal approximation property} for $\mathcal{F}$ if for any $f\in \mathcal{F}$, any $\varepsilon>0$, and any compact subset $K\subset U_f$, there exists $g\in \mathcal{M}$ such that $\supKnorm{f - g}<\varepsilon$.
The \(L^p\)-universal approximation property is defined by replacing \(\supKnorm{\cdot}\) with \(\LpKnorm{\cdot}\) in the above.
\end{definition}
\paragraph{Our goal.}
Our goal is to elucidate the representation power of INNs{} composed of NODEs{} by proving the \(\sup\)-universality of $\INN{\HFNODE}$ for a fairly large class of \emph{diffeomorphisms}, i.e., smooth invertible functions with smooth inverse.
\section{Main result}
In this section, we present our main result, Theorem~\ref{thm: NODE is sup-universal}.
First, we define the following class of invertible maps, which will be our target to be approximated.
\begin{definition}[\(C^2\)-diffeomorphisms: ${\mathcal{D}^2}$]
\label{def: D}
We define ${\mathcal{D}^2}$ as the set of all $C^2$-diffeomorphisms $f:U_f\rightarrow {\rm Im}(f)\subset\mathbb{R}^d$, where $U_f \subset \mathbb{R}^d$ is open and $C^2$-diffeomorphic to $\mathbb{R}^d$ (the domain $U_f$ may depend on $f$).
\end{definition}
The set ${\mathcal{D}^2}$ is a fairly large class: it contains any \(C^2\)-diffeomorphism defined on the entire \(\mathbb{R}^d\), an open convex set, or more generally, a star-shaped open set.
Now, we state our main result to establish a class that the invertible neural networks based on NODEs can approximate with respect to the \(\sup\)-norm.
\begin{theorem}[Universality of NODEs]\label{thm: NODE is sup-universal}
Assume \(\mathcal{H} \subset \operatorname{Lip}(\mathbb{R}^d){}\) is a $\sup$-universal approximator for $\operatorname{Lip}(\mathbb{R}^d){}$.
Then, \(\INN{\HFNODE}\) is a \(\sup\)-universal approximator for \({\mathcal{D}^2}\).
\end{theorem}
Examples of $\mathcal{H}$ include the multi-layer perceptron with finite weights and Lipschitz-continuous activation functions such as rectified linear unit (ReLU) activation \cite{LeCunDeep2015,ChenNeural2018a}, as well as the \emph{Lipschitz Networks} \citep[Theorem~3]{AnilSorting2019}.
\paragraph{Proof outline.}
To prove Theorem~\ref{thm: NODE is sup-universal}, we take a similar strategy to that of Theorem~1 of~\cite{TeshimaCouplingbased2020} but with a major modification to adapt to our problem.
First, the approximation target is reduced from \({\mathcal{D}^2}\) to the set of compactly-supported diffeomorphisms from $\mathbb{R}^d$ to $\mathbb{R}^d$, denoted by \(\DcRDCmd{2}\), by applying Fact~\ref{red to comp. supp. diff} in Appendix~\ref{sec:appendix:universality-proof-prep}.
Then, it is shown that we can represent each $f\in\DcRDCmd{2}$ as a finite composition of \emph{flow endpoints} (Definition~\ref{def: appendix flow endpoints} in Appendix~\ref{sec:appendix:universality-proof-prep}), each of which can be approximated by a NODE.
The decomposition of $f$ into flow endpoints is realized by relying on a structure theorem of $\DcRDCmd{2}$ (Fact~\ref{fact: simplicity} in Appendix~\ref{sec:appendix:universality-proof-prep}) attributed to Herman, Thurston \cite{ThurstonFoliations1974}, Epstein \cite{Epsteinsimplicity1970}, and Mather \cite{MatherCommutators1974, MatherCommutators1975}.
Note that we require a different definition of flow endpoints (Definition~\ref{def: appendix flow endpoints} in Appendix~\ref{sec:appendix:universality-proof-prep}) from that employed in~\citep[Corollary~2]{TeshimaCouplingbased2020} in order to incorporate sufficient smoothness of the underlying flows.
\section{Related work and Discussion}
In this section, we overview the existing literature on the representation power of NODEs to provide the context of the present paper.
\paragraph{\(L^p\)-universal approximation property of NODEs.}
\citet{LiDeep2020} considered NODEs capped with a \emph{terminal family} to map the output of NODEs to a vector of the desired output dimension, and Proposition~3.8 therein shows that the model class has the \(L^p\)-universality for the set of all continuous maps from \(\mathbb{R}^d\) to \(\mathbb{R}^n\) (\(n \in \mathbb{N}\)), under a certain sufficient condition.
In comparison to our result here, the result of \citet{LiDeep2020} established the universality of NODEs for a larger target function class (namely continuous maps) with a weaker notion of approximation (namely \(L^p\)-universality).
\paragraph{Limitations on the representation power of NODEs.}
\citet{ZhangApproximation2020} showed in Theorem~1 therein that NODEs are not universal approximators, by presenting a function that a NODE cannot approximate.
The existence of this counterexample does not contradict our result because our approximation target \({\mathcal{D}^2}\) is different from the function class considered in \citet{ZhangApproximation2020}: the class in \citet{ZhangApproximation2020} can contain discontinuous maps whereas the elements of \({\mathcal{D}^2}\) are smooth and invertible.
\paragraph{Universality of augmented NODEs.}
As a device to enhance the representation power of NODEs, increasing the dimensionality and padding zeros to the inputs/outputs has been explored \citep{DupontAugmented2019a,ZhangApproximation2020}.
\citet{ZhangApproximation2020} showed that the augmented NODEs (ANODEs) are universal approximators for homeomorphisms.
The approach has a limitation that it can undermine the invertibility of the model: unless the model is ideally trained so that it always outputs zeros in the zero-padded dimensions, the model can no longer represent an invertible map operating on the original dimensionality.
On the other hand, the present work explores the universal approximation property of NODEs that is achieved without introducing the complication arising from the dimensionality augmentation.
\paragraph{Relation between \(\INN{\HFNODE}\) and time-dependent NODEs.}
Our result can be readily extended to the design choice of NODEs that includes the time index as an argument of $f$. This can be done by restricting attention to the subset of the considered class of vector fields consisting of the time-invariant ones, as follows.
Let \(a \in (0, \infty]\) and let \(\tilde f : \mathbb{R}^d\times(-a, a) \to \mathbb{R}^d\) be such that there exists a continuous function \(\ell: (-a, a) \to \mathbb{R}_{\geq 0}\) satisfying
\begin{align*}
\|\tilde f(\mbox{\boldmath $x$}_1, t) - \tilde f(\mbox{\boldmath $x$}_2, t)\| \leq \ell(t)\|\mbox{\boldmath $x$}_1 - \mbox{\boldmath $x$}_2\|.
\end{align*}
Then, the initial value problem
\[z(0) = \mbox{\boldmath $x$}, \quad \dot{z}(t)= \tilde f(z(t), t), \quad t \in (-a, a)\]
has a solution $z: (-a, a) \to \mathbb{R}^d$ and it is unique \cite{Derrickglobal1976}, analogously to Fact~\ref{fact:ODE solution exists for Lip}.
Then, given a set \(\tilde{\mathcal{H}{}}\) of such mappings $\tilde f$, we can consider its subset \(\mathcal{H}{}\) that contains only the time-invariant elements, i.e., \(\mathcal{H}{} \subset \tilde{\mathcal{H}{}}\) such that for any \(f \in \mathcal{H}{}\) and any \(\mbox{\boldmath $x$} \in \mathbb{R}^d\), \(f(\mbox{\boldmath $x$}, \cdot)\) is a constant mapping. Such an \(f\) is an element of \(\operatorname{Lip}(\mathbb{R}^d){}\) with \(\inf_{t \in (-a, a)} \ell(t)\) as a Lipschitz constant.
Then, we can apply Theorem~\ref{thm: NODE is sup-universal} to \(\mathcal{H}{}\) and its induced \(\INN{\HFNODE}{}\).
\section{Conclusion}
In this paper, we uncovered the $\sup$-universality of the INNs composed of NODEs for approximating a large class of diffeomorphisms.
This result complements the existing literature that showed the weaker approximation property of NODEs, namely $L^p$-universality, for general continuous maps.
Whether the $\sup$-universality holds for a larger class of maps than \({\mathcal{D}^2}\) is an important research question for future work.
Also, it is important for future work to quantitatively evaluate how many layers of NODEs are required to approximate a given diffeomorphism with a specified smoothness, such as a bi-Lipschitz constant, in order to assess the efficiency of the approximation.
\begin{ack}
\acknowledgmentContent{}
\end{ack}
\printbibliography
\clearpage
\begin{appendices}
\global\csname @topnum\endcsname 0
\global\csname @botnum\endcsname 0
This is the Supplementary~Material for ``Universal approximation property of neural ordinary differential equations.{}''
Table~\ref{tbl:notation-table} summarizes the abbreviations and the symbols used in the paper.
\begin{table}[tbph]
\caption{Abbreviation and notation table.}
\label{tbl:notation-table}
\centering
\begin{tabular}{ll}
\toprule
Abbreviation/Notation & Meaning \\
\midrule
INN & Invertible neural networks\\
NODE & Neural ordinary differential equations\\
\midrule
$\FLin$ & Set of invertible affine transformations\\
\(\IVP{f}{\mbox{\boldmath $x$}}{t}\) & The (unique) solution to an initial value problem evaluated at $t$ \\
\(\ODEFlowEnds{\mathcal{F}}\) & Set of NODEs obtained from the Lipschitz continuous vector fields $\mathcal{F}$ \\
\(\operatorname{Lip}(\mathbb{R}^d)\) & The set of all Lipschitz continuous maps from $\mathbb{R}^d$ to $\mathbb{R}^d$ \\
\(\INN{\HFNODE}{}\) & INNs composed of $\FLin$ and NODEs parametrized by \(\mathcal{H} \subset \operatorname{Lip}(\mathbb{R}^d){}\)\\
\midrule
$d \in \mathbb{N}$ & Dimensionality of the Euclidean space under consideration\\
${\mathcal{D}^2}$ & Set of all $C^2$-diffeomorphisms with \(C^2\)-diffeomorphic domains\\
$\DcRDCmd{r}$ & Group of compactly-supported $C^r$-diffeomorphisms on \(\mathbb{R}^d\) ($1 \leq r \leq \infty$)\\
\midrule
$\Euclideannorm{\cdot}$ & Euclidean norm\\
$\opnorm{\cdot}$ & Operator norm\\
$\inftyKnorm{\cdot}$ & Supremum norm on a subset $K\subset \mathbb{R}^d$\\
$\LpKnorm{\cdot}$ & \(L^p\)-norm on a subset $K\subset \mathbb{R}^d$\\
\(\mathrm{Id}\) & Identity map \\
\(\supp{}\) & Support of a map \\
\bottomrule
\end{tabular}
\end{table}
\section{Proof of Theorem~\ref{thm: NODE is sup-universal}}
\label{sec:appendix:universality-proof}
Here, we provide a proof of Theorem~\ref{thm: NODE is sup-universal}.
In Section~\ref{sec:appendix:universality-proof-prep}, we display the known facts and show the lemmas used for the proof.
In Section~\ref{sec:appendix:universality-proof-content}, we prove Theorem~\ref{thm: NODE is sup-universal}.
\subsection{Lemmas and known facts}
\label{sec:appendix:universality-proof-prep}
We use the following definition and facts from \citet{TeshimaCouplingbased2020}.
\begin{definition}[Compactly supported diffeomorphism]
We use $\DcRDCmd{r}$ to denote the set of all compactly supported \(C^r\)-diffeomorphisms ($1 \leq r \leq \infty$) from $\mathbb{R}^d$ to $\mathbb{R}^d$.
Here, we say a diffeomorphism $f$ on $\mathbb{R}^d$ is {\em compactly supported} if
there exists a compact subset $K\subset \mathbb{R}^d$ such that for any $x\notin K$, $f(x)=x$.
We regard $\DcRDCmd{r}$ as a group whose group operation is function composition.
\end{definition}
The following fact enables us to reduce the approximation problem for ${\mathcal{D}^2}$ to that for $\DcRDCmd{2}$.
\begin{fact}[Lemma~5 of \citet{TeshimaCouplingbased2020}]
\label{red to comp. supp. diff}
Let $f \colon U \to \mathbb{R}^d$ be an element of ${\mathcal{D}^2}$, and let $K\subset U$ be a compact set. Then, there exist $h \in \DcRDCmd{2}$ and an affine transform $W \in \FLin$ such that \[\Restrict{W\circ h}{K}=\Restrict{f}{K}.\]
\end{fact}
The following fact enables the component-wise approximation, i.e., given a transformation that is represented by a composition of some transformations, we can approximate it by approximating each constituent and composing them.
\begin{fact}[Compatibility of composition and approximation;
Proposition~6 of \citet{TeshimaCouplingbased2020}]\label{lemma:composition}
Let \(\mathcal{M}\) be a set of locally bounded maps from $\mathbb{R}^d$ to $\mathbb{R}^d$,
and $F_1,\dots,F_k$ be continuous maps from $\mathbb{R}^d$ to $\mathbb{R}^d$.
Assume for any $\varepsilon>0$ and any compact set $K\subset\mathbb{R}^d$, there exist $\widetilde{G}_1,\dots, \widetilde{G}_k\in \mathcal{M}$ such that, for $1 \leq i \leq k$, $\big\Vert F_i-\widetilde{G}_i\big\Vert_{\sup, K}<\varepsilon$.
Then for any $\varepsilon>0$ and any compact set $K\subset\mathbb{R}^d$, there exist $G_1, \dots, G_k\in \mathcal{M}$ such that
\[\supKnorm{F_k\circ\cdots \circ F_1-G_k\circ\cdots\circ G_1} < \varepsilon.\]
\end{fact}
The following fact is attributed to Herman, Thurston \cite{ThurstonFoliations1974}, Epstein \cite{Epsteinsimplicity1970}, and Mather \cite{MatherCommutators1974, MatherCommutators1975}. See Fact~2 of \citet{TeshimaCouplingbased2020} and the remarks therein for details.
Let $\mathrm{Id}{}$ denote the identity map.
\begin{fact}[Fact~2 of \citet{TeshimaCouplingbased2020}]
\label{fact: simplicity}
If \(r \neq d + 1\), the group $\DcRDCmd{r}$ is simple, i.e., any normal subgroup $H \subset \DcRDCmd{r}$ is either $\{\mathrm{Id}\}$ or $\DcRDCmd{r}$.
\end{fact}
Next, we define a subset of $\DcRDCmd{r}$ called the \emph{flow endpoints}.
In Lemma~\ref{lem: diffc2 is generated by flow endpoints}, it is shown that the set of flow endpoints generates a non-trivial normal subgroup of $\DcRDCmd{r}$.
Therefore, by combining it with Fact~\ref{lemma:composition}, we can represent any element of $\DcRDCmd{r}$ as a finite composition of flow endpoints, each of which can be approximated by the NODEs.
While Corollary~2 of \citet{TeshimaCouplingbased2020} also defined a set of flow endpoints in $\DcRDCmd{2}$, it differs from the one defined here which is tailored for our purpose.
The two definitions can be interpreted as describing two different generators of the same group $\DcRDCmd{2}$.
Let $\mathrm{supp}$ denote the support of a map.
\begin{definition}[Flow endpoints $\FlowEnds{r}$ in $\DcRDCmd{r}$]
\label{def: appendix flow endpoints}
Let $1 \leq r \leq \infty$. Let $\FlowEnds{r}\subset\DcRDCmd{r}$ be the set of diffeomorphisms $g$ of the form $g(\bm{x})=\Phi(\bm{x},1)$ for some map $\Phi:\mathbb{R}^d\times U\rightarrow\mathbb{R}^d$ such that
\begin{itemize}
\item $U \subset \mathbb{R}$ is an open interval containing $[0, 1]$,
\item $\Phi(\bm{x},0)=\bm{x}$,
\item $\Phi(\cdot,t)\in\DcRDCmd{r}$ for any $t\in U$,
\item $\Phi(\bm{x},s+t)=\Phi(\Phi(\bm{x},s),t)$ for any $s,t\in U$ with $s+t\in U$,
\item $\Phi$ is $C^r$ on $\mathbb{R}^d \times U$,
\item There exists a compact subset $K_\Phi \subset \mathbb{R}^d$ such that $\cup_{t \in U} \mathrm{supp}{\Phi(\cdot, t)} \subset K_\Phi$.
\end{itemize}
\end{definition}
The difference between Definition~\ref{def: appendix flow endpoints} and the one in Corollary~2 of \citet{TeshimaCouplingbased2020} mainly lies in the last two conditions.
Technically, these two conditions are used in Section~\ref{sec:appendix:universality-proof-content} for showing that the partial derivative of $\Phi$ in $t$ at $t=0$ is Lipschitz continuous.
\begin{lemma}[Modified Corollary~2 of \citet{TeshimaCouplingbased2020}]
\label{lem: diffc2 is generated by flow endpoints}
Let $1 \leq r \leq \infty$ and $\FlowEnds{r}\subset\DcRDCmd{r}$ be the set of all flow endpoints.
Then, the subset $H^r$ of $\DcRDCmd{r}$ defined by
\[H^r:=\{ g_1\circ \cdots \circ g_n \ |\ n\ge1, g_1,\dots,g_n \in \FlowEnds{r}\}\]
forms a subgroup of $\DcRDCmd{r}$ and it is a non-trivial normal subgroup.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lem: diffc2 is generated by flow endpoints}]
First, we prove that $H^r$ forms a subgroup of $\DcRDCmd{r}$.
By definition, for any $g, h \in H^r$, it holds that $g \circ h \in H^r$.
Also, $H^r$ is closed under inversion; to see this, it suffices to show that $\FlowEnds{r}$ is closed under inversion.
Let $g= \Phi(\cdot, 1) \in \FlowEnds{r}$. Consider the map $\phi:\mathbb{R}^d\times U\rightarrow\mathbb{R}^d$ defined by $\phi(\cdot, t) := \Phi^{-1}(\cdot, t)$.
It is easy to confirm that $\phi$ satisfies the conditions of Definition~\ref{def: appendix flow endpoints}, hence $g^{-1} = \phi(\cdot, 1)$ is an element of $\FlowEnds{r}$. Note that $\phi$ is confirmed to be $C^r$ on $\mathbb{R}^d \times U$ by applying the inverse function theorem to $(t, \mbox{\boldmath $x$}) \mapsto (t, \Phi(\mbox{\boldmath $x$}, t))$.
Next, we prove that $H^r$ is normal.
To show that the subgroup generated by $\FlowEnds{r}$ is normal, it suffices to show that $\FlowEnds{r}$ is closed under conjugation.
Take any $g\in \FlowEnds{r}$ and $h\in \DcRDCmd{r}$, and let $\Phi$ be a flow associated with $g$.
Then, the function $\Phi': \mathbb{R}^d\times U \to \mathbb{R}^d$ defined by $\Phi'(\cdot, s) := h^{-1} \circ \Phi(\cdot, s) \circ h$ is a flow associated with $h^{-1}\circ g \circ h$ satisfying the conditions in Definition~\ref{def: appendix flow endpoints}, which implies $h^{-1}\circ g \circ h\in \FlowEnds{r}$, i.e., $\FlowEnds{r}$ is closed under conjugation.
Next, we prove that $H^r$ is non-trivial by constructing an element of $\FlowEnds{r}$ that is not the identity element.
First, consider the case $d = 1$.
Let $\tilde v: \mathbb{R} \to \mathbb{R}_{\geq 0}$ be a non-constant $C^\infty$-function such that $\supp{\tilde v} \subset [0, 1]$ and $\tilde v^{(k)}(0) = 0$ for any \(k \in \mathbb{N}\).
Then define \(v : \mathbb{R} \to \mathbb{R}\) by
\[v(x) = \begin{cases}\tilde v(|x|)\frac{x}{|x|} & \text{ if } x \neq 0, \\ 0 & \text{ if } x = 0,\end{cases}\]
which is a \(C^\infty\)-function on \(\mathbb{R}\) with a compact support.
Since $v$ is Lipschitz continuous and $C^\infty$, there exists $\IVPFunc{v}$ that is a $C^\infty$-function over $\mathbb{R} \times \mathbb{R}$; see Fact~\ref{fact:ODE solution exists for Lip} and \citep[Chapter~V, Corollary~4.1]{HartmanOrdinary2002}.
Let $K_v \subset \mathbb{R}$ be a compact subset that contains $\supp{v}$. Then, by considering the ordinary differential equation by which $\IVPFunc{v}$ is defined, we see that $\bigcup_{t \in \mathbb{R}} \supp\IVP{v}{\cdot}{t} \subset K_v$ and also that $\IVP{v}{x}{0} = x$.
We also have $\IVP{v}{x}{s+t} = \IVP{v}{\IVP{v}{x}{s}}{t}$ for any $s, t \in \mathbb{R}$. In particular, we have $\IVP{v}{\cdot}{s}^{-1} = \IVP{v}{\cdot}{-s}$ for any $s \in \mathbb{R}$. Therefore, we have $\IVP{v}{\cdot}{1} \in \FlowEnds{r}$. Since $v \not \equiv 0$, $\IVP{v}{\cdot}{1}$ is not an identity map and thus $\FlowEnds{r}$ is not trivial.
Next, we consider the case $d \geq 2$.
Take a $C^\infty$-function $\phi\colon \mathbb{R}\to \mathbb{R}$ with $\supp{\phi}= [1,2]$ and a nonzero skew-symmetric matrix $A$ (i.e. $A^\top=-A$) of size $d$, and
let $X(x):=\phi(\|x\|)A$.
We define a $C^\infty$-map $\Phi\colon \mathbb{R}^d\times \mathbb{R}\to \mathbb{R}^d$ by
\[ \Phi(x,t):= \exp(t X(x))x. \]
Since $\exp( tX(x))$ is an orthogonal matrix for any $t\in \mathbb{R}$ and $x\in \mathbb{R}^d$, $\Phi$ is a $C^\infty$-flow on $\mathbb{R}^d$.
Now, it is enough to show that there exists a compact set $K_\Phi\subset \mathbb{R}^d$ satisfying $\cup_{t\in \mathbb{R}}\supp{\Phi(\cdot, t)}\subset K_\Phi$.
Let $K_\Phi:=\{x\in \mathbb{R}^d\ |\ \|x\|\leq 2\}$.
Then the inclusion $\supp{\Phi(\cdot, t)}\subset K_{\Phi}$ holds for any $t\in\mathbb{R}$ since $X(x)=0$ for $x\in \mathbb{R}^d\setminus K_\Phi$.
\end{proof}
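For readers who wish to experiment, the construction for $d \geq 2$ can be checked numerically. The following Python sketch is illustrative only; the bump function and the skew-symmetric matrix below are one arbitrary choice, not prescribed by the proof. It verifies the group law and the compact-support condition for $\Phi(x,t) = \exp(tX(x))x$.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, -1.0], [1.0, 0.0]])    # skew-symmetric, d = 2

def phi(s):
    # smooth bump with support exactly [1, 2]
    return np.exp(-1.0 / ((s - 1.0) * (2.0 - s))) if 1.0 < s < 2.0 else 0.0

def Phi(x, t):
    # Phi(x, t) = exp(t * phi(||x||) * A) x ; an orthogonal matrix times x
    return expm(t * phi(np.linalg.norm(x)) * A) @ x

rng = np.random.default_rng(0)
x = rng.normal(size=2)
x *= 1.5 / np.linalg.norm(x)        # place x in the annulus 1 < ||x|| < 2
s, t = 0.3, 0.5
# group law: ||Phi(x, s)|| = ||x||, so X is constant along each orbit
assert np.allclose(Phi(Phi(x, s), t), Phi(x, s + t))
# compact support: the flow is the identity outside the ball of radius 2
y = np.array([3.0, 0.0])
assert np.allclose(Phi(y, 1.0), y)
print("group law and compact support verified")
\end{verbatim}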
The following lemma allows us to approximate an autonomous ODE flow endpoint by approximating the differential equation. See Definition~\ref{def: autonomous ODE flow endpoints} for the definition of $\ODEFlowEnds{\cdot}$.
\begin{lemma}[Approximation of Autonomous-ODE flow endpoints]
\label{appendix:lem:ODE flow endpoint approximation}
Assume \(\mathcal{H} \subset \operatorname{Lip}(\mathbb{R}^d){}\) is a $\sup$-universal approximator for $\operatorname{Lip}(\mathbb{R}^d){}$.
Then, \(\ODEFlowEnds{\mathcal{H}{}}\) is a \(\sup\)-universal approximator for \(\ODEFlowEnds{\operatorname{Lip}(\mathbb{R}^d){}}\).
\end{lemma}
\begin{proof}
Let $\phi \in \ODEFlowEnds{\operatorname{Lip}(\mathbb{R}^d){}}$. Then, by definition, there exists $F \in \operatorname{Lip}(\mathbb{R}^d){}$ such that $\phi = \IVP{F}{\cdot}{1}$.
Let $\LipConst{F}$ denote the Lipschitz constant of $F$.
In the following, we approximate $\IVP{F}{\cdot}{1}$ by approximating $F$ using an element of $\mathcal{H}{}$.
Let $\varepsilon > 0$, and let $K \subset \mathbb{R}^d$ be a compact subset of $\mathbb{R}^d$.
We show that there exists $f \in \mathcal{H}{}$ such that $\supKnorm{\IVP{F}{\cdot}{1} - \IVP{f}{\cdot}{1}} < \varepsilon$. Note that $\IVP{f}{\cdot}{\cdot}$ is well-defined because $\mathcal{H}{} \subset \operatorname{Lip}(\mathbb{R}^d){}$.
Define
\[
K' := \left\{\mbox{\boldmath $x$} \in \mathbb{R}^d \ \bigg|\ \inf_{\mbox{\boldmath $y$} \in \IVP{F}{K}{[0, 1]}} \|\mbox{\boldmath $x$} - \mbox{\boldmath $y$}\| \leq 2 e^{\LipConst{F}}\right\}.
\]
Then, $K'$ is compact. This follows from the compactness of $\IVP{F}{K}{[0, 1]}$: (i) $K'$ is bounded since $\IVP{F}{K}{[0, 1]}$ is bounded, and (ii) it is closed since the function $\min_{\mbox{\boldmath $y$} \in \IVP{F}{K}{[0, 1]}} \|\mbox{\boldmath $x$} - \mbox{\boldmath $y$}\|$ is continuous and hence $K'$ is the inverse image of a closed interval $[0, 2e^{\LipConst{F}}]$ by a continuous map.
Since $\mathcal{H}{}$ is assumed to be a \(\sup\)-universal approximator for $\operatorname{Lip}(\mathbb{R}^d){}$, for any $\delta > 0$, we can take $f \in \mathcal{H}{}$ such that $\supRangenorm{K'}{f - F} < \delta$.
Let $\delta$ be such that $0 < \delta < \min\{\varepsilon / (2e^{\LipConst{F}}), 1\}$, and take such an $f$.
Fix $\mbox{\boldmath $x$}_0 \in K$ and define $\targetError{t} := \|\IVP{F}{\mbox{\boldmath $x$}_0}{t} - \IVP{f}{\mbox{\boldmath $x$}_0}{t}\|$.
Let $\Bound{} := \delta e^{\LipConst{F}}{}$ and we show that
\[
\targetError{t} < 2\Bound{}
\]
holds for all $t \in [0, 1]$.
We prove this by contradiction. Suppose that there exists $t'$ for which the inequality does not hold. Then, the set $\mathcal{T} := \{t \in [0, 1] | \targetError{t} \geq 2 \Bound{}\}$ is not empty and
thus $\tau := \inf \mathcal{T} \in [0, 1]$.
For this $\tau$, we show both $\targetError{\tau} \leq \Bound{}$ and $\targetError{\tau} \geq 2\Bound{}$.
First, we have
\begin{align*}
\targetError{\tau} &= \left\|\IVP{F}{\mbox{\boldmath $x$}_0}{\tau} - \IVP{f}{\mbox{\boldmath $x$}_0}{\tau}\right\| \\
&= \left\|\mbox{\boldmath $x$}_0 + \int_0^\tau F(\IVP{F}{\mbox{\boldmath $x$}_0}{t}) dt - \mbox{\boldmath $x$}_0 - \int_0^\tau f(\IVP{f}{\mbox{\boldmath $x$}_0}{t}) dt\right\| \\
&\leq \left\|\int_0^\tau (F(\IVP{F}{\mbox{\boldmath $x$}_0}{t}) - F(\IVP{f}{\mbox{\boldmath $x$}_0}{t})) dt\right\| \\
&\qquad + \left\|\int_0^\tau (F(\IVP{f}{\mbox{\boldmath $x$}_0}{t}) - f(\IVP{f}{\mbox{\boldmath $x$}_0}{t})) dt\right\|.
\end{align*}
The last term can be bounded as
\[
\left\|\int_0^\tau (F(\IVP{f}{\mbox{\boldmath $x$}_0}{t}) - f(\IVP{f}{\mbox{\boldmath $x$}_0}{t})) dt\right\| \leq \int_0^\tau \delta dt
\]
because of the following argument.
If $\tau = 0$, then both sides equal zero, so the bound holds with equality.
If $\tau > 0$, then for any $t < \tau$, we have $\IVP{f}{\mbox{\boldmath $x$}_0}{t} \in K'$: indeed, $t < \tau$ implies $\targetError{t} < 2 \Bound{} \leq 2e^{\LipConst{F}}$ (recall $\delta < 1$), so $\IVP{f}{\mbox{\boldmath $x$}_0}{t}$ lies within distance $2e^{\LipConst{F}}$ of $\IVP{F}{\mbox{\boldmath $x$}_0}{t} \in \IVP{F}{K}{[0, 1]}$.
In this case, $\supRangenorm{K'}{F - f} < \delta$ implies the inequality.
Therefore, we have
\[
\targetError{\tau} \leq \LipConst{F}\int_0^\tau \targetError{t} dt + \int_0^\tau \delta dt.
\]
Now, by applying Gr\"{o}nwall{}'s inequality \cite{GronwallNote1919}, we obtain
\[
\targetError{\tau} \leq \delta \tau e^{\LipConst{F} \tau} \leq \Bound{}.
\]
On the other hand, by the definition of $\mathcal{T}$ and the continuity of $\targetError{\cdot}$, we have $\targetError{\tau} \geq 2 \Bound{}$.
These two inequalities contradict each other, so no such $t'$ exists.
Therefore, $\supKnorm{\IVP{F}{\cdot}{1} - \IVP{f}{\cdot}{1}} = \sup_{\mbox{\boldmath $x$}_0 \in K} \targetError{1} \leq 2 \Bound{} = 2\delta e^{\LipConst{F}}{}$ holds.
Since $\delta < \varepsilon / (2e^{\LipConst{F}})$, the right-hand side is smaller than $\varepsilon$.
\end{proof}
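The quantitative content of this proof -- a sup-error of $\delta$ on the vector field inflates to at most $2\delta e^{\LipConst{F}}$ at time $1$ -- can be illustrated numerically. The following Python sketch is only a sanity check under one hypothetical choice of $F$ and perturbation $f$; it is not part of the argument.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

L = 1.0                       # Lipschitz constant of F(x) = sin(x)
delta = 1e-3                  # sup-norm size of the perturbation
F = lambda t, x: np.sin(x)
f = lambda t, x: np.sin(x) + delta * np.cos(5.0 * x)  # ||F - f||_sup <= delta

def flow_at_1(field, x0):
    # endpoint of the initial value problem at t = 1
    sol = solve_ivp(field, (0.0, 1.0), [x0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

K = np.linspace(-2.0, 2.0, 41)            # a compact set of initial points
err = max(abs(flow_at_1(F, x0) - flow_at_1(f, x0)) for x0 in K)
print(err, "<=", 2 * delta * np.exp(L))   # the Gronwall-type bound above
assert err <= 2 * delta * np.exp(L)
\end{verbatim}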
Finally, we state a fact that is useful in the case $d = 1$. It is proved by convolving with a smooth bump-like function.
\begin{fact}[Lemma~11 of \citet{TeshimaCouplingbased2020}]
\label{fact: d=1 smoothing}
Let $\tau: \mathbb{R} \to \mathbb{R}$ be a strictly increasing continuous function. Then, for any compact subset $K \subset \mathbb{R}$ and any $\varepsilon > 0$, there exists a strictly increasing $C^\infty$-function $\tilde \tau$ such that \[\supKnorm{\tau - \tilde \tau} < \varepsilon.\]
\end{fact}
\subsection{Proof of Theorem~\ref{thm: NODE is sup-universal}}
\label{sec:appendix:universality-proof-content}
\begin{proof}[Proof of Theorem~\ref{thm: NODE is sup-universal}]
Let $F \colon U \to \mathbb{R}^d$ be an element of ${\mathcal{D}^2}$.
Take any compact set $K\subset U$ and $\varepsilon>0$.
First, thanks to Fact~\ref{red to comp. supp. diff}, there exist $G \in \DcRDCmd{2}$ and an affine transform $W \in \FLin$ such that \[\Restrict{W\circ G}{K}=\Restrict{F}{K}.\]
Now, if $d \geq 2$, then $2 \neq d + 1$, hence we can immediately use Fact~\ref{fact: simplicity} and Lemma~\ref{lem: diffc2 is generated by flow endpoints} to show that there exists a finite set of flow endpoints (Definition~\ref{def: appendix flow endpoints}) $g_1, \ldots, g_k \in \FlowEnds{2}$ such that
\[
G = g_k \circ \cdots \circ g_1.
\]
On the other hand, if $d = 1$, by Fact~\ref{fact: d=1 smoothing}, for any $\delta > 0$, we can find $\tilde G$ that is a $C^\infty$-diffeomorphism on $\mathbb{R}$ such that $\supKnorm{G - \tilde G} < \delta$. Without loss of generality, we may assume that $\tilde G$ is compactly supported so that $\tilde G \in \DcRDCmd{\infty}$.
Then, we can use Fact~\ref{fact: simplicity} and Lemma~\ref{lem: diffc2 is generated by flow endpoints} to show that there exists a finite set of flow endpoints (Definition~\ref{def: appendix flow endpoints}) $g_1, \ldots, g_k \in \FlowEnds{\infty}$ such that
\[
\tilde G = g_k \circ \cdots \circ g_1.
\]
We now construct $f_j \in \operatorname{Lip}(\mathbb{R}^d){}$ such that $g_j = \IVP{f_j}{\cdot}{1}$.
By Definition~\ref{def: appendix flow endpoints}, for each $g_j$ ($1 \leq j \leq k$), there exists an associated flow $\Phi_j$.
Now, define
\[f_j(\cdot):=\left.\frac{\partial \Phi_j(\cdot,t)}{\partial t}\right|_{t=0}.
\]
Then, $f_j \in \operatorname{Lip}(\mathbb{R}^d){}$ because it is a compactly-supported $C^1$-map:
it is compactly supported since there exists a compact subset $K_j \subset \mathbb{R}^d$ containing the support of $\Phi_j(\cdot, t)$ for all $t$, and hence $\Phi_j(\cdot, t) - \Phi_j(\cdot, 0)$ vanishes on the complement of $K_j$.
Now, $\Phi_j(\mbox{\boldmath $x$}, t) = \IVP{f_j}{\mbox{\boldmath $x$}}{t}$
since, by additivity of the flows,
\begin{align*}
\frac{\partial \Phi_j}{\partial t}(\mbox{\boldmath $x$},t)&=\lim_{s\rightarrow 0}\frac{\Phi_j(\mbox{\boldmath $x$},t+s)-\Phi_j(\mbox{\boldmath $x$},t)}{s}
= \lim_{s\rightarrow 0}\frac{\Phi_j(\Phi_j(\mbox{\boldmath $x$},t),s)-\Phi_j(\Phi_j(\mbox{\boldmath $x$},t), 0)}{s}\\
&= \left.\frac{\partial \Phi_j(\Phi_j(\mbox{\boldmath $x$},t),s)}{\partial s}\right|_{s=0}
= f_j(\Phi_j(\mbox{\boldmath $x$},t)),
\end{align*}
and hence $\Phi_j(\mbox{\boldmath $x$}, \cdot)$ solves the initial value problem defining $\IVP{f_j}{\mbox{\boldmath $x$}}{\cdot}$; the solution is unique since $f_j$ is Lipschitz continuous.
As a result, we have $g_j = \Phi_j(\cdot, 1) = \IVP{f_j}{\cdot}{1}$.
By combining Fact~\ref{lemma:composition} and Lemma~\ref{appendix:lem:ODE flow endpoint approximation}, there exist $\phi_1, \ldots, \phi_k \in \Psi(\mathcal{H}{})$ such that
\[
\supKnorm{g_k \circ \cdots \circ g_1 - \phi_k \circ \cdots \circ \phi_1} < \frac{\varepsilon}{\opnorm{W}},
\]
where $\opnorm{\cdot}$ denotes the operator norm.
Therefore, we have that $W \circ \phi_k \circ \cdots \circ \phi_1 \in \INN{\HFNODE}$ satisfies \begin{align*}
\supKnorm{F - W \circ \phi_k \circ \cdots \circ \phi_1}
&= \supKnorm{W \circ G - W \circ \phi_k \circ \cdots \circ \phi_1} \\
&\leq \opnorm{W}\supKnorm{g_k \circ \cdots \circ g_1 - \phi_k \circ \cdots \circ \phi_1} \\
&< \varepsilon
\end{align*}
if $d \geq 2$.
For $d = 1$, it can be shown that there exists $W \circ \phi_k \circ \cdots \circ \phi_1 \in \INN{\HFNODE}$ that satisfies $\supKnorm{F - W \circ \phi_k \circ \cdots \circ \phi_1} < \varepsilon$ in a similar manner.
\end{proof}
\section{Terminal time of autonomous-ODE flow endpoints}
\label{sec:appendix:remark-terminal-time}
In Definition~\ref{def: autonomous ODE flow endpoints}, the choice of the terminal value $t = 1$ of the time variable is merely a normalization.
To see this, let $T > 0$.
If we consider $w: \mathbb{R} \to \mathbb{R}^d$ that is the solution of the initial value problem
$w(0) = \mbox{\boldmath $x$}, \dot{w}(t) = (T f)(w(t)) \ (t \in \mathbb{R})$
as well as $z: \mathbb{R} \to \mathbb{R}^d$ that is the unique solution to
$z(0) = \mbox{\boldmath $x$}, \dot{z}(t) = f(z(t)) \ (t \in \mathbb{R})$,
then $w(t) = z(Tt)$ holds.
Therefore, $\IVP{f}{\mbox{\boldmath $x$}}{Tt} = \IVP{Tf}{\mbox{\boldmath $x$}}{t}$.
As a result, $\IVP{f}{\mbox{\boldmath $x$}}{T} = \IVP{Tf}{\mbox{\boldmath $x$}}{1}$ holds.
Therefore, it holds that
\[
\{\IVP{f}{\cdot}{T} \ |\ f\in \mathcal{F}\}
= \{\IVP{Tf}{\cdot}{1} \ |\ f\in \mathcal{F}\}
= \ODEFlowEnds{T\mathcal{F}}.
\]
Thus, even if we consider $T \neq 1$, if the set $\mathcal{F}$ is a cone, the set of the autonomous-ODE flow endpoints remains the same.
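The rescaling identity $\IVP{f}{\mbox{\boldmath $x$}}{T} = \IVP{Tf}{\mbox{\boldmath $x$}}{1}$ is easy to confirm numerically; the field below is an arbitrary Lipschitz test function chosen only for this check.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

f = lambda t, x: np.sin(x) - 0.5 * x      # any Lipschitz field works
T, x0 = 2.5, 0.7

def endpoint(field, t_end, x0):
    return solve_ivp(field, (0.0, t_end), [x0], rtol=1e-10).y[0, -1]

lhs = endpoint(f, T, x0)                            # flow of f at time T
rhs = endpoint(lambda t, x: T * f(t, x), 1.0, x0)   # flow of T*f at time 1
assert np.isclose(lhs, rhs)
print(lhs, rhs)
\end{verbatim}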
\section{Comparison between \(L^p\)-universality and \(\sup\)-universality}
In this section, we discuss the advantage of having a representation power guarantee in terms of the \(\sup\)-norm instead of the \(L^p\)-norm in function approximation tasks.
Roughly speaking, function approximation should be robust under a slight change of norms, but the $L^p$-universal approximation property can be sensitive to the choice of $p$.
To make this point, we construct an example: a model $g$ that approximates a target $f$ arbitrarily well in the norm $\|\cdot\|_{1,K}$ may still fail to approximate $f$ in $\|\cdot\|_{p,K}$ for every $p > 1$, even for $p$ arbitrarily close to $1$.
Let $h: (0,1) \rightarrow \mathbb{R}$ be a strictly increasing function such that
\[
\begin{cases}
\|h\|_{p',[0,1]} &< \infty \text{ if } p'=1, \\
\|h\|_{p',[0,1]} &= \infty \text{ if } p'>1.
\end{cases}
\]
For example, $h(x)=-\sum_{k=1}^\infty x^{1/k-1}/k^3$ satisfies this condition.
Then, we define
\[g_n(x)=x+\frac{h(x)}{n}.\]
Now, the sequence $\{g_n\}_{n = 1}^\infty$ approximates $\mathrm{Id}$ in $L^1$-norm in the sense that for any small $\varepsilon >0$ and all sufficiently large $N$, it holds that
\begin{align}
\|g_N - {\rm Id}\|_{1,[0,1]} &<\varepsilon. \label{siki for L1}
\end{align}
However, the same $g_N$ fails to approximate $\mathrm{Id}$ in $L^p$-norm ($p > 1$) since it always holds that, for sufficiently small $\delta \in (0, 1/2)$ (depending on $N$ and $p$),
\begin{align}
\|g_N - {\rm Id}\|_{p,[\delta,1-\delta]} &\ge 1. \label{siki for Lp}
\end{align}
This example highlights that fixing $p$ first and guaranteeing approximation in $L^p$-norm may not suffice for guaranteeing the approximation in $L^{p'}$-norm ($p' > p$). On the other hand, having a guarantee in $\sup$-norm suffices for providing an approximation guarantee in $L^p$-norm for $p \ge 1$ simultaneously.
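The two defining properties of the function $h$ above can be verified term by term: since every summand of $-h$ is nonnegative, $\|h\|_{1,[0,1]} = \sum_k 1/k^2 < \infty$, while for $p' > 1$ any single summand $x^{1/k-1}/k^3$ with $k \geq p'/(p'-1)$ already has infinite $L^{p'}$-norm. A short Python sketch of this bookkeeping (illustrative only):
\begin{verbatim}
import numpy as np

pp = 1.1                          # any p' > 1
ks = np.arange(1, 51)
# the integral of x**a over (0, 1) equals 1/(a + 1), finite iff a > -1
a1 = (1.0 / ks - 1.0) * 1.0       # exponents for p' = 1: always > -1
ap = (1.0 / ks - 1.0) * pp        # exponents for p' = 1.1
L1_partial = np.sum(1.0 / (a1 + 1.0) / ks**3)   # = partial sum of 1/k**2
print("partial L1 norm of h:", L1_partial)      # bounded by pi**2/6
bad = ks[ap <= -1.0]
print("summands with divergent L^p' integral: k >=", bad.min())
# k >= p'/(p'-1) = 11 gives exponent <= -1, hence ||h||_{p'} = infinity
\end{verbatim}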
\end{appendices}
\end{document} | {
"timestamp": "2020-12-07T02:12:22",
"yymm": "2012",
"arxiv_id": "2012.02414",
"language": "en",
"url": "https://arxiv.org/abs/2012.02414",
"abstract": "Neural ordinary differential equations (NODEs) is an invertible neural network architecture promising for its free-form Jacobian and the availability of a tractable Jacobian determinant estimator. Recently, the representation power of NODEs has been partly uncovered: they form an $L^p$-universal approximator for continuous maps under certain conditions. However, the $L^p$-universality may fail to guarantee an approximation for the entire input domain as it may still hold even if the approximator largely differs from the target function on a small region of the input space. To further uncover the potential of NODEs, we show their stronger approximation property, namely the $\\sup$-universality for approximating a large class of diffeomorphisms. It is shown by leveraging a structure theorem of the diffeomorphism group, and the result complements the existing literature by establishing a fairly large set of mappings that NODEs can approximate with a stronger guarantee.",
"subjects": "Machine Learning (cs.LG); Differential Geometry (math.DG); Machine Learning (stat.ML)",
"title": "Universal Approximation Property of Neural Ordinary Differential Equations",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9899864287859481,
"lm_q2_score": 0.7981867801399695,
"lm_q1q2_score": 0.7901940799749231
} |
https://arxiv.org/abs/1805.01380 | Resistors in dual networks | Let $G$ be a finite plane multigraph and $G'$ its dual. Each edge $e$ of $G$ is interpreted as a resistor of resistance $R_e$, and the dual edge $e'$ is assigned the dual resistance $R_{e'}:=1/R_e$. Then the equivalent resistance $r_e$ over $e$ and the equivalent resistance $r_{e'}$ over $e'$ satisfy $r_e/R_e+r_{e'}/R_{e'}=1$. We provide a graph theoretic proof of this relation by expressing the resistances in terms of sums of weights of spanning trees in $G$ and $G'$ respectively. | \section{Introduction}
The systematic study of electrical resistor networks goes back to
the German physicist Gustav Robert Kirchhoff in the middle of the
19th century. In particular, Kirchhoff's two circuit laws and Ohm's law allow one to fully
describe the electric current and potential in a given static network
of resistors and voltage sources. In the course of his investigations, Kirchhoff discovered
the Matrix Tree Theorem, which states that the number of spanning
trees in a graph $G$ is equal to any cofactor of the Laplacian matrix of $G$.
Surprisingly, this purely graph theoretic fact has a deep connection
to the physical question of the equivalent resistance between two
vertices of an electric network.
In the simplest situation, a finite simple graph $G$ can be interpreted as an electrical network
by considering each edge as a resistor of 1 Ohm. Then, one of Kirchhoff's results states that the
equivalent resistance over an edge $e$ in this graph
is given by the quotient of the number of spanning trees containing
the edge $e$ divided by the total number of spanning trees in $G$.
More generally, we may consider a finite multigraph $G$ and assign to each edge $e$ of $G$ a weight $R_e>0$
interpreted as resistance of $e$. Also in this case, the equivalent resistance
between two vertices can be expressed in terms of sums of weights of spanning trees (see
Section~\ref{sec-preliminaries}).
Consider a cube as a graph with unit resistance on each edge
and the dual polyhedron, the octahedron, in the same way.
The equivalent resistance over an edge of the cube turns out to be
$7/12$ Ohm, the equivalent resistance over an edge of the octahedron
is $5/12$ Ohm. Observe, that these values add up to $1$!
The same phenomenon occurs for the dodecahedron with equivalent resistance of $19/30$ Ohm over each edge
and the dual graph, the icosahedron, with $11/30$ Ohm, or
the rhombic dodecahedron with equivalent resistance of $13/24$ Ohm over each edge
and the dual graph, the cuboctahedron, with $11/24$ Ohm.
This is not just a coincidence:
Suppose that a planar graph and its dual are both interpreted as electrical
networks with unit resistance for all edges. Now, if $r_e$ is the equivalent
resistance over an edge $e$ and $r'_e$ is the equivalent
resistance over the dual edge $e'$, then $r_e+r'_e=1$ (see~\cite[Exercise 7, Section 9.5]{bapat}, \cite[Theorem 2.3]{yang2013}).
The aim of this article is to generalize this formula to plane networks with
arbitrary resistors (see Theorem~\ref{thm-main})
and to give a graph theoretic proof.
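Before turning to the proofs, the polyhedral examples above are easy to confirm on a computer. The following Python sketch computes effective resistances via the standard pseudoinverse formula $r_{ij} = (e_i - e_j)^\top L^+ (e_i - e_j)$ for the graph Laplacian $L$; this is only a numerical sanity check and is not used in the sequel.
\begin{verbatim}
import numpy as np
from itertools import combinations

def effective_resistance(edges, n, i, j):
    # unit resistances; L is the graph Laplacian, L^+ its pseudoinverse
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    Lp = np.linalg.pinv(L)
    return Lp[i, i] + Lp[j, j] - 2 * Lp[i, j]

# cube: vertices are 3-bit numbers, edges join numbers differing in one bit
cube = [(u, v) for u, v in combinations(range(8), 2)
        if bin(u ^ v).count("1") == 1]
print(effective_resistance(cube, 8, 0, 1))   # 7/12 = 0.58333...

# octahedron: all pairs except the antipodal ones (0,1), (2,3), (4,5)
octa = [(u, v) for u, v in combinations(range(6), 2)
        if v != u + 1 or u % 2 == 1]
print(effective_resistance(octa, 6, 0, 2))   # 5/12 = 0.41666...
\end{verbatim}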
\section{Preliminaries}\label{sec-preliminaries}
Let $G$ be a finite connected graph with $n\ge 3$ vertices and
without loops. Multiple edges are allowed. Each edge $e$ is
considered as a resistor of resistance $R_e>0$. Then, we consider the
weighted Laplace matrix $L=(\ell_{ij})$ of $G$ defined as
$$
\ell_{ij}:=\begin{cases}
-\sum \frac1{R_e} &\text{if $i\neq j$, where the sum runs over all edges $e$}\\[-1.2mm]
&\text{between the vertices $i$ and $j$ (an empty sum is $0$),}\\
a_{ii} &\text{if $i=j$,}
\end{cases}
$$
where the diagonal values $a_{ii}$ are chosen such that the sum of all rows of $L$ vanishes.
The weight of a subgraph $H$ of $G$ is defined as
$$
\Pi(H):=\prod_{\text{$e$ an edge of $H$}}\frac1{R_e}\,.
$$
Then we recall the following:
\begin{proposition}\label{prop-matrix-tree}
\begin{enumerate}
\item The value of each cofactor $L_{ij}$ of $L$ is the sum of the weights of all spanning trees of $G$.\label{qp}
\item If $L_{ij,ij}$ denotes the determinant of the matrix $L$ with rows $i,j$ and columns $i,j$ deleted,
then the quotient $L_{ij,ij}/R_e$ equals the sum of the weights of all spanning trees of $G$ which
contain the edge $e$ between the vertices $i$ and $j$.
\end{enumerate}
\end{proposition}
{\em Proof.}
The first part of the proposition follows directly from a general version of
the Matrix Tree Theorem (see, e.g., \cite[Theorem VI.27]{tutte}).
For the second part we proceed as follows: Let $S(G)$ denote the set of all spanning trees of $G$.
Observe that, by using the edge $e$ between the vertices $i$ and $j$, we can split up the sum of the weights of all spanning trees of $G$ as follows:
\begin{equation}\label{eq-spanning}
\sum_{T \in S(G)}\Pi(T) = \sum_{T \in S(G) \atop e \in T} \Pi(T) + \sum_{T \in S(G) \atop e \notin T} \Pi(T).
\end{equation}
Furthermore, the second sum on the right-hand side of~(\ref{eq-spanning}) corresponds to the sum of the weights of all spanning trees of $G-e$, i.e. $G$
with edge $e$ removed. Using the first part of the proposition, we get from~(\ref{eq-spanning}) that
$$
\sum_{T \in S(G) \atop e \in T} \Pi(T) = L_{ii} - L^e_{ii},
$$
where $L^e=(\ell^e_{hk})$ is the weighted Laplace matrix of $G-e$.
We have
$$
\ell^e_{ij}=\ell^e_{ji}=\ell_{ij}+\frac1{R_e},\quad \ell^e_{ii}=\ell_{ii}-\frac1{R_e},\quad \ell^e_{jj}=\ell_{jj}-\frac1{R_e}
$$
and $\ell^e_{hk}=\ell_{hk}$ for all other $h,k$.
The term $L_{ii} - L^e_{ii}$ can be computed using Laplace's cofactor expansion. Expanding both $L_{ii}$ and $L^e_{ii}$ along the $j$-th row yields
$$
\sum_{T \in S(G) \atop e \in T} \Pi(T) = \sum_{k\neq i}\ell_{jk}(L_{ii})_{jk} - \sum_{k\neq i}\ell^e_{jk}(L^e_{ii})_{jk}.
$$
Since the cofactors $(L_{ii})_{jk}$ and $(L^e_{ii})_{jk}$ are equal for all $k$ we are left with
\begin{equation}
\sum_{T \in S(G) \atop e \in T} \Pi(T) =(\ell_{jj}-\ell^e_{jj}) (L_{ii})_{jj}= \frac{L_{ij,ij}}{R_{e}}.\tag*{$\Box$}
\end{equation}
{\bf Remark.} The second part of Proposition~\ref{prop-matrix-tree} follows also quite
easily from the All Minors Matrix Tree Theorem (see~\cite{chaiken}).
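Proposition~\ref{prop-matrix-tree} can also be confirmed by brute force on a small example. The following Python sketch (illustrative only; the multigraph and resistances are arbitrary) enumerates spanning trees directly and compares the sum of their weights with a cofactor of the weighted Laplace matrix.
\begin{verbatim}
import numpy as np
from itertools import combinations

# multigraph on vertices {0,1,2} with a doubled edge between 0 and 2
edges = [(0, 1), (1, 2), (0, 2), (0, 2)]
R = [1.0, 2.0, 3.0, 5.0]
n = 3

L = np.zeros((n, n))
for (u, v), r in zip(edges, R):
    L[u, u] += 1 / r; L[v, v] += 1 / r
    L[u, v] -= 1 / r; L[v, u] -= 1 / r

def spanning_trees():
    # a spanning tree uses n - 1 edges and creates no cycle
    for T in combinations(range(len(edges)), n - 1):
        parent = list(range(n))
        def find(a):
            while parent[a] != a:
                a = parent[a]
            return a
        ok = True
        for idx in T:
            ru, rv = find(edges[idx][0]), find(edges[idx][1])
            if ru == rv:
                ok = False; break            # cycle detected
            parent[ru] = rv
        if ok:
            yield T

tree_sum = sum(np.prod([1 / R[i] for i in T]) for T in spanning_trees())
cofactor = np.linalg.det(np.delete(np.delete(L, 0, 0), 0, 1))   # L_{11}
print(tree_sum, cofactor)    # equal, by the weighted Matrix Tree Theorem
\end{verbatim}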
The connection to the equivalent resistance is given by the following
\begin{proposition}\label{prop-resistance}
The equivalent resistance $r_e$ over the edge $e$ connecting the vertices $i$ and $j$
is given by
\begin{equation}\label{eq-res}
r_e=\frac{L_{ij,ij}}{L_{11}}\,.
\end{equation}
\end{proposition}
{\em Remark.} Recall that, by Proposition~\ref{prop-matrix-tree}(\ref{qp}), the denominator in~(\ref{eq-res}) can be replaced by
any other cofactor $L_{hk}$.
\begin{proof}
Observe that the Laplace matrix of the weighted multigraph $G$ corresponds
to the Laplace matrix of a weighted simple graph $H$ where the multiple
edges $e_1,\ldots, e_k$ between any two vertices $i$ and $j$ of $G$ are collapsed to a single edge $e$ with weight
$$R_e=\frac1{\frac1{R_{e_1}}+ \ldots+\frac1{R_{e_k}}}.$$
This value is exactly the equivalent resistance
of the parallel resistors $R_{e_1},\ldots,R_{e_k}$. Thus,
the equivalent resistance over the vertices $i$ and $j$ in $G$
equals the equivalent resistance over the vertices $i$ and $j$ in $H$
and the claim follows from~\cite[Lemma 2]{gupta} and Proposition~\ref{prop-matrix-tree}.
\end{proof}
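In computational terms, Proposition~\ref{prop-resistance} gives a determinant recipe for the equivalent resistance. A minimal Python sketch (the network below is an arbitrary test case) evaluates $r_e = L_{ij,ij}/L_{11}$ and cross-checks it against the pseudoinverse formula for effective resistance.
\begin{verbatim}
import numpy as np

def laplacian(edges, R, n):
    L = np.zeros((n, n))
    for (u, v), r in zip(edges, R):
        L[u, u] += 1 / r; L[v, v] += 1 / r
        L[u, v] -= 1 / r; L[v, u] -= 1 / r
    return L

def r_cofactor(edges, R, n, i, j):
    # r_e = L_{ij,ij} / L_{11}: delete rows/columns {i,j}, resp. {0}
    L = laplacian(edges, R, n)
    num = np.linalg.det(np.delete(np.delete(L, (i, j), 0), (i, j), 1))
    den = np.linalg.det(np.delete(np.delete(L, 0, 0), 0, 1))
    return num / den

def r_pinv(edges, R, n, i, j):
    Lp = np.linalg.pinv(laplacian(edges, R, n))
    return Lp[i, i] + Lp[j, j] - 2 * Lp[i, j]

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (3, 0)]  # square, one edge doubled
R = [1.0, 2.0, 0.5, 3.0, 4.0]
print(r_cofactor(edges, R, 4, 3, 0), r_pinv(edges, R, 4, 3, 0))  # agree
\end{verbatim}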
From now on we assume that $G$ is a finite plane multigraph with dual graph $G'$.
Recall that in general $G'$ depends on the embedding of $G$ in the plane.
\begin{definition}\label{def-dual}
Let $G'$ be the dual of a planar embedded multigraph $G$, and
let $G$ be interpreted as an electrical network by associating
to each edge $e$ a resistance $R_e>0$.
For each edge $e$ of $G$, we define
the electrical resistance $R_{e'}$ of the dual edge $e'$
to be the conductance of $e$, i.e. $R_{e'}:=1/R_e$.
Then, $G'$ equipped with these resistances is called the
{\em dual electrical network\/} of $G$.
\end{definition}
Observe that the Laplace matrix $L'=(\ell'_{ij})$ of the dual electrical network $G'$ is given by
$$
\ell'_{ij}:=\begin{cases}
-\sum \frac1{R_{e'}}=-\sum R_{e} &\text{if $i\neq j$, where the sum runs over all edges $e'$}\\[-1.2mm]
&\text{between the vertices $i$ and $j$ of $G'$,}\\
a_{ii} &\text{if $i=j$,}
\end{cases}
$$
where the diagonal values $a_{ii}$ are chosen such that the sum of all rows of $L'$ vanishes.
Similarly the weight of a subgraph $H'$ of $G'$ is
$$
\Pi(H'):=\prod_{\text{$e'$ an edge of $H'$}}\frac1{R_{e'}} =\prod_{\text{$e'$ an edge of $H'$}}{R_{e}} \,.
$$
Then we have:
\begin{proposition}\label{prop-dual}
\begin{enumerate}
\item The value of an arbitrary cofactor $L'_{ij}$ of $L'$ is equal to the
sum of the weights of all spanning trees in $G'$ and there holds:
\begin{equation}\label{eq-weights}
L'_{ij}=L_{ij}\Pi(G')
\end{equation}
where $\Pi(G')=\prod R_k$ is the total weight of $G'$.
\item The product $R_e L'_{ij,ij}$ equals the sum of the weights of the spanning trees in $G'$
which contain the dual edge $e'$ of edge $e$.\label{prop-dual-ii}
\end{enumerate}
\end{proposition}
\begin{proof}
We only have to show equation~(\ref{eq-weights}), since all other statements
follow from Proposition~\ref{prop-matrix-tree}. Let $S(G)$ and $S(G')$ denote the set of all spanning trees of $G$ and its dual $G'$ respectively. Furthermore, let $\Psi$ denote the canonical bijection from $S(G')$ to $S(G)$ given by $\Psi(T') = \{e \in G \mid e' \in G' - T' \}$. Observe that the weight of a spanning tree $T'$ of $G'$ can be expressed as
\begin{equation}\label{eq-pit}
\Pi(T') = \frac{\Pi(G')}{\Pi(G'-T')}.
\end{equation}
Using the bijection $\Psi$ and Definition~\ref{def-dual} in~(\ref{eq-pit}), namely the fact that the
electrical resistance of an edge in $G'$ is equal to the conductance of the dual
edge in $G$, we get
\begin{equation}\label{eq-pitt}
\Pi(T') = \Pi(G')\Pi(\Psi(T')).
\end{equation}
The characterization of $L_{ij}$ and $L'_{ij}$ as the sum of the weights of all
spanning trees of $G$ and $G'$ respectively leads, together with~(\ref{eq-pitt}), to
\begin{align*}
\qquad L'_{ij} = \sum_{T' \in S(G')} \Pi(T') &= \sum_{T' \in S(G')} \Pi(G')\Pi(\Psi(T')) =\\&= \Pi(G')\sum_{T \in S(G)} \Pi(T) = \Pi(G')L_{ij}, \qquad
\end{align*}
where we have used the bijectivity of $\Psi$ in the penultimate equality. This completes the proof.
\end{proof}
\section{The sum formula in dual networks}
The main result is now the following:
\begin{theorem}\label{thm-main}
Let $R_e$ be the resistance of an edge $e$ and $R_{e'}=1/R_e$ the resistance
of the dual edge $e'$ in the dual electrical network. Let $r_e$ denote the equivalent resistance over edge $e$
and $r_{e'}$ denote the equivalent resistance over edge $e'$. Then
$$
\frac{r_e}{R_e}+\frac{r_{e'}}{R_{e'}}=1.
$$
\end{theorem}
For a proof of this formula based upon physical arguments see~\cite{furrer}.
Here, we provide a purely graph theoretic proof.
\begin{proof}
Let $e$ be an edge between the vertices $i$ and $j$ in $G$, and we may
assume that the vertices are numbered such that $e'$ also runs between
the vertices $i$ and $j$ in $G'$.
In a first step, we are going to derive a new expression for $L'_{ij,ij}$.
Let $S(G)$ and $S(G')$ denote the set of all spanning trees of $G$ and its dual $G'$ respectively, and let $\Psi$ be the canonical bijection from $S(G')$ to $S(G)$ as defined in the proof of Proposition~\ref{prop-dual}. By part~\ref{prop-dual-ii} of Proposition~\ref{prop-dual} we have
\begin{equation}\label{eq-o}
L'_{ij,ij} = \frac{1}{R_{e}}\sum_{T'\in S(G') \atop e'\in T'}\Pi(T').
\end{equation}
The identity~(\ref{eq-pitt}) and the fact that $\Psi(T')$ does not contain the edge $e$ if $T'$ contains the dual edge $e'$ allow us
to rewrite the right-hand side of~(\ref{eq-o}) as follows:
\begin{equation}\label{eq-oo}
\frac{1}{R_{e}}\sum_{T'\in S(G') \atop e'\in T'}\Pi(T') =
\frac{\Pi(G')}{R_{e}}\sum_{T \in S(G) \atop e \notin T}\Pi(T).
\end{equation}
Furthermore, the sum on the right-hand side of~(\ref{eq-oo}) can be expressed as the difference between the sum of the weights of all spanning trees of $G$ and the sum of the weights of the spanning trees that contain the edge $e$. Therefore, it holds that
\begin{equation}\label{eq-ooo}
L'_{ij,ij} = \frac{\Pi(G')}{R_{e}}\Bigl(\sum_{T \in S(G)}\Pi(T) -
\sum_{T \in S(G) \atop e \in T}\Pi(T)\Bigr).
\end{equation}
Using Proposition~\ref{prop-matrix-tree} in~(\ref{eq-ooo}) yields the following identity:
\begin{equation}\label{eq-oooo}
L'_{ij,ij} = \frac{\Pi(G')}{R_{e}}\Bigl(L_{ii} - \frac{L_{ij,ij}}{R_{e}}\Bigr).
\end{equation}
Now, in a second step, it follows from Proposition~\ref{prop-resistance} and Definition~\ref{def-dual}, that
\begin{equation}\label{eq-ooooo}
\frac{r_e}{R_e}+\frac{r_{e'}}{R_{e'}} = \frac{L_{ij,ij}}{L_{ii}R_{e}} +
R_{e}\frac{L'_{ij,ij}}{L'_{ii}}\,.
\end{equation}
Using the first part of Proposition~\ref{prop-dual} and~(\ref{eq-oooo}), we can rewrite the right-hand side of~(\ref{eq-ooooo})
and simplify the resulting expression to arrive at
$$
\frac{r_e}{R_e}+\frac{r_{e'}}{R_{e'}} = \frac{L_{ij,ij}}{L_{ii}R_{e}} +
R_{e}\frac{\frac{\Pi(G')}{R_{e}}(L_{ii} - \frac{L_{ij,ij}}{R_{e}})}{L_{ii}\Pi(G')}=1,
$$
as claimed.
\end{proof}
\begin{example}
Let us consider the following electrical network:
\begin{center}
\begin{tikzpicture}[x=80,y=80]
\draw[fill=black] (0,1) circle (3pt);
\draw[fill=black] (0,0) circle (3pt);
\draw[fill=black] (1,0) circle (3pt);
\draw[fill=black] (1,1) circle (3pt);
\draw (0,1) node[anchor=south east] {$1$};
\draw (0,0) node[anchor=north east] {$2$};
\draw (1,0) node[anchor=north west] {$3$};
\draw (1,1) node[anchor=south west] {$4$};
\draw [line width=.8pt] (1,1) -- node[above] {$R_1$} (0,1)-- node[left] {$R_2$} (0,0) -- node[below] {$R_3$} (1,0);
\draw [line width=.8pt] (1,0) to[out=45,in=-45] node[right] {$R_5$} (1,1);
\draw [line width=.8pt] (1,0) to[out=135,in=-135] node[left] {$R_4$}(1,1);
\end{tikzpicture}
\end{center}
The corresponding Laplace matrix $L$ is
$$
L=\begin{pmatrix}
\frac1{R_1}+\frac1{R_2} & -\frac1{R_2} & 0 &-\frac1{R_1}\\
-\frac1{R_2} & \frac1{R_2}+\frac1{R_3} & -\frac1{R_3} & 0\\
0 & -\frac1{R_3} &\frac1{R_3}+\frac1{R_4}+\frac1{R_5}& -\frac1{R_4}-\frac1{R_5}\\
-\frac1{R_1}&0&-\frac1{R_4}-\frac1{R_5}& \frac1{R_1}+\frac1{R_4}+\frac1{R_5}
\end{pmatrix}
$$
The cofactor
$$
L_{11}=\frac1{R_1R_2R_3R_4R_5}\bigl(R_4(R_1+R_2+R_3) +R_5(R_1+R_2+R_3+R_4)\bigr)
$$
corresponds indeed to the total weight of all spanning trees of $G$ as one easily checks directly.
For the edge $e$ between the vertices $3$ and $4$ with resistance $R_4$,
we get
$$
L_{34,34}=\frac1{R_1R_2}+\frac1{R_1R_3}+\frac1{R_2R_3}
$$
which, divided by $R_4$, gives the sum of the weights of the trees which contain $e$,
as stated in Proposition~\ref{prop-matrix-tree}.
The dual network looks as follows:
\begin{center}
\definecolor{gray}{rgb}{.8,.8,.8}
\begin{tikzpicture}[x=100,y=100]
\clip(-.5,-.4) rectangle (2.45,1.65);
\draw[color=gray, fill=gray] (0,1) circle (3pt);
\draw[color=gray, fill=gray] (0,0) circle (3pt);
\draw[color=gray, fill=gray] (1,0) circle (3pt);
\draw[color=gray, fill=gray] (1,1) circle (3pt);
\draw[color=gray] (0,1) node[anchor=south east] {$1$};
\draw[color=gray] (0,0) node[anchor=north east] {$2$};
\draw[color=gray] (1,0) node[anchor=north west] {$3$};
\draw[color=gray] (1,1) node[anchor=south west] {$4$};
\draw [color=gray, line width=.8pt] (1,1) -- node[above, style={pos=.75}] {$R_1$} (0,1)-- node[left, style={pos=.75}] {$R_2$} (0,0) -- node[below, style={pos=.25}] {$R_3$} (1,0);
\draw [color=gray, line width=.8pt] (1,0) to[out=45,in=-45] node[right, style={pos=.25}] {$R_5$} (1,1);
\draw [color=gray, line width=.8pt] (1,0) to[out=135,in=-135] node[left, style={pos=.25}] {$R_4$}(1,1);
\draw[color=black, fill=black] (1,.5) circle (3pt);
\draw[color=black, fill=black] (.3,.5) circle (3pt);
\draw[color=black, fill=black] (1.8,.5) circle (3pt);
\draw [color=black, line width=.8pt] (.3,.5) -- node[above] {$1/R_4$} (1,.5) -- node[above] {$1/R_5$} (1.8,.5);
\draw [color=black, line width=.8pt] (.3,.5) to[out=90,in=90,distance=100] node[above, style={pos=.5}] {$1/R_1$}(1.8,.5);
\draw [color=black, line width=.8pt] (.3,.5) to[out=-90,in=-90,distance=100] node[below, style={pos=.5}] {$1/R_3$}(1.8,.5);
\draw [color=black, line width=.8pt] (.3,.5) to[out=-180,in=180,distance=120] node[left, style={pos=.5},xshift=-2] {$1/R_2$}(1.05,1.6) to[out=0,in=0,distance=120] (1.8,.5);
\draw[color=black] (.3,.5) node[anchor=north east] {$1$};
\draw[color=black] (1,.5) node[anchor=north,yshift=-2] {$2$};
\draw[color=black] (1.8,.5) node[anchor=north west] {$3$};
\end{tikzpicture}
\end{center}
and the corresponding Laplace matrix is
$$
L'=\begin{pmatrix}
R_1+R_2+R_3+R_4 & -R_4 & -(R_1+R_2+R_3)\\
-R_4&R_4+R_5 & -R_5\\
-(R_1+R_2+R_3) & -R_5 &R_1+R_2+R_3+R_5
\end{pmatrix}.
$$
The cofactor
$$
L'_{11}=R_4(R_1+R_2+R_3) +R_5(R_1+R_2+R_3+R_4)
$$
is the total weight of the spanning trees of $G'$. And indeed, we have
$$
L'_{11}=L_{11}R_1R_2R_3R_4R_5
$$
as predicted by Proposition~\ref{prop-dual}. Furthermore, we get
$$
L'_{12,12}=R_1+R_2+R_3+R_5
$$
which gives according to Proposition~\ref{prop-matrix-tree}, after multiplication
by $R_4$, the total weight of the trees in $G'$ which contain the dual edge $e'$.
Now, the equivalent resistances over edge $e$ and $e'$ respectively are, according to Proposition~\ref{prop-resistance},
$$\arraycolsep=2pt\def\arraystretch{2.2}
\begin{array}{lll}
r_4&=\displaystyle{\frac{L_{34,34}}{L_{11}} }&=\displaystyle{ \frac{R_4R_5(R_1+R_2+R_3)}{R_4(R_1+R_2+R_3) +R_5(R_1+R_2+R_3+R_4)}}\\
r'_4&=\displaystyle{\frac{L'_{12,12}}{L'_{11}}}&= \displaystyle{\frac{R_1+R_2+R_3+R_5}{R_4(R_1+R_2+R_3) +R_5(R_1+R_2+R_3+R_4)}}
\end{array}
$$
and finally indeed, with $R'_4=1/R_4$,
$$
\frac{r_4}{R_4}+\frac{r'_4}{R'_4}=1.
$$
\end{example}
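As a final sanity check, Theorem~\ref{thm-main} can be verified numerically on this example for random resistances. The Python sketch below (illustrative only) builds the two weighted Laplacians, computes both equivalent resistances via the pseudoinverse formula, and confirms that $r_4/R_4 + r'_4/R'_4 = 1$.
\begin{verbatim}
import numpy as np

def r_eff(edges, R, n, i, j):
    L = np.zeros((n, n))
    for (u, v), r in zip(edges, R):
        L[u, u] += 1 / r; L[v, v] += 1 / r
        L[u, v] -= 1 / r; L[v, u] -= 1 / r
    Lp = np.linalg.pinv(L)
    return Lp[i, i] + Lp[j, j] - 2 * Lp[i, j]

rng = np.random.default_rng(1)
R = rng.uniform(0.1, 10.0, size=5)        # random R_1, ..., R_5

# primal network, vertices 1..4 relabelled 0..3; edges R_1, ..., R_5
G_edges = [(3, 0), (0, 1), (1, 2), (2, 3), (2, 3)]
r4 = r_eff(G_edges, list(R), 4, 2, 3)     # over the edge with resistance R_4

# dual network, vertices 1..3 relabelled 0..2; dual resistances 1/R_e
Gd_edges = [(0, 1), (1, 2), (0, 2), (0, 2), (0, 2)]
Rd = [1 / R[3], 1 / R[4], 1 / R[0], 1 / R[1], 1 / R[2]]
rd4 = r_eff(Gd_edges, Rd, 3, 0, 1)        # over the dual edge of R_4

print(r4 / R[3] + rd4 * R[3])             # r_4/R_4 + r'_4/R'_4 = 1
\end{verbatim}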
\bibliographystyle{plain}
| {
"timestamp": "2018-05-04T02:11:44",
"yymm": "1805",
"arxiv_id": "1805.01380",
"language": "en",
"url": "https://arxiv.org/abs/1805.01380",
"abstract": "Let $G$ be a finite plane multigraph and $G'$ its dual. Each edge $e$ of $G$ is interpreted as a resistor of resistance $R_e$, and the dual edge $e'$ is assigned the dual resistance $R_{e'}:=1/R_e$. Then the equivalent resistance $r_e$ over $e$ and the equivalent resistance $r_{e'}$ over $e'$ satisfy $r_e/R_e+r_{e'}/R_{e'}=1$. We provide a graph theoretic proof of this relation by expressing the resistances in terms of sums of weights of spanning trees in $G$ and $G'$ respectively.",
"subjects": "Combinatorics (math.CO)",
"title": "Resistors in dual networks",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9899864299677056,
"lm_q2_score": 0.7981867705385762,
"lm_q1q2_score": 0.7901940714129373
} |
https://arxiv.org/abs/0904.1024 | Symmetric products, duality and homological dimension of configuration spaces | We discuss various aspects of `braid spaces' or configuration spaces of unordered points on manifolds. First we describe how the homology of these spaces is affected by puncturing the underlying manifold, hence extending some results of Fred Cohen, Goryunov and Napolitano. Next we obtain a precise bound for the cohomological dimension of braid spaces. This is related to some sharp and useful connectivity bounds that we establish for the reduced symmetric products of any simplicial complex. Our methods are geometric and exploit a dual version of configuration spaces given in terms of truncated symmetric products. We finally refine and then apply a theorem of McDuff on the homological connectivity of a map from braid spaces to some spaces of `vector fields'. | \section{Introduction}
Braid spaces or configuration spaces of \textit{unordered pairwise
distinct} points on manifolds have important applications to a number
of areas of mathematics and physics. They were of crucial use in the
seventies in the work of Arnold on singularities and then later in the
eighties in work of Atiyah and Jones on instanton spaces in gauge
theory. In the nineties they entered in many works on the homological
stability of holomorphic mapping spaces. No more important perhaps had
been their use than in stable homotopy theory in the sixties and early
seventies through the work of Milgram, May, Segal and Fred Cohen who
worked out the precise connection with loop space theory. This work
has led in particular to the proof of Nishida's nilpotence theorem and
to Mahowald's infinite family in the stable homotopy groups of spheres
to name a few.
Given a space $M$, define $B(M,n)$ to be the space of finite subsets
of $M$ of cardinality $n$. This is usually referred to as the $n^{\rm th}$
``braid space'' of $M$ and in the literature it is often denoted by
$C_n(M)$ (Atiyah and Jones \cite{atiyah}, B{\"o}digheimer, Cohen and
Taylor \cite{bct}, Cohen \cite{cohen}). Its fundamental group written
$Br_n(M)$ is the ``braid group'' of $M$. The object of this paper is
to study the homology of braid spaces and the main approach we adopt
is that of duality with the symmetric products. In so doing we take
the opportunity to refine and elaborate on some classical material.
Next is a brief content summary.
\fullref{braids} describes the homotopy type of braid spaces of some
familiar spaces and discusses orientation issues. \fullref{tp}
introduces truncated products, as in B{\"o}digheimer, Cohen and
Milgram \cite{bcm} and Milgram and L{\"o}ffler \cite{lm}, states the
duality with braid spaces and then proves our first main result on the
cohomological dimension of braid spaces. \fullref{punctured} uses
truncated product constructions to split in an elementary fashion the
homology of braid spaces for punctured manifolds. In \fullref{bounds}
we prove our sharp connectivity result for \textit{reduced} symmetric
products of CW complexes which seems to be new and a significant
improvement on work of Nakaoka and Welcher \cite{welcher}. In
\fullref{spec} we make the link between the homology of symmetric and
truncated products by discussing a spectral sequence introduced by
B\"odigheimer, Cohen and Milgram and exploited by them
to study ``braid homology'' $H_*(B(M,n))$. Finally \fullref{stability}
completes a left out piece from McDuff and Segal's work on
configuration spaces \cite{dusa}. In that paper, $H_*(B(M,n))$, for
closed manifolds $M$, is compared to the homology of some spaces of
``compactly supported vector fields'' on $M$ and the main theorem there
states that these homologies are isomorphic up to a range that
increases with $n$. We make this range more explicit and use it for
example to determine the abelianization of the braid groups of a
closed Riemann surface. A final appendix collects some homotopy
theoretic properties of section spaces that we use throughout.
Below are precise statements of our main results which we have divided
up into three main parts. Unless explicitly stated, all spaces are
assumed to be connected. The $n^{\rm th}$ symmetric group
is written $\mathfrak{S}_n$.
\subsection{Connectivity and cohomological dimension}
For $M$ a manifold, we write $H^*(M,\pm {\mathbb Z} )$ for the cohomology of $M$
with coefficients in the orientation sheaf $\pm{\mathbb Z}$; in other words
$H^*(M,\pm{\mathbb Z})$ is the homology of\break $\Hom_{{\mathbb Z}[\pi_1(M)]}(C_*(\tilde
M), {\mathbb Z})$, where $C_*(\tilde M)$ is the singular chain complex of the
universal cover $\tilde M$ of $M$, and where the action of (the class
of) a loop on the integers ${\mathbb Z}$ is multiplication by $\pm 1$
according to whether this loop preserves or reverses orientation.
Similarly one defines $H_*(M,\pm{\mathbb Z} ):=
H_*(C_*(\tilde M)\otimes_{{\mathbb Z}[\pi_1(M)]}{\mathbb Z})$.
\begin{rem}\label{twisted} (see \fullref{folklore})\qua When $M$ is simply connected
and $\dim M:=d > 2$, $\pi_1(B(M,k))=\mathfrak{S}_k$ and $\tilde B(M,k) =
F(M,k)\subset M^k$ is the subspace of $k$ \textit{ordered} pairwise distinct
points in $M$ (\fullref{braids}). It follows that
$H^*(B(M,k);\pm{\mathbb Z})$ is the homology of the chain complex
$\Hom_{{\mathbb Z}[\mathfrak{S}_k]}(C_*(F(M,k)),{\mathbb Z})$ where $\mathfrak{S}_k$ acts on
${\mathbb Z}$ via $\sigma (1) = (-1)^{sg( \sigma)\cdot d }$ and $sg (\sigma
)$ is the sign of the permutation $\sigma\in\mathfrak{S}_k$. \end{rem}
We denote by $\hbox{cohdim}_{\pm{\mathbb Z}}(M)$ (cohomological dimension) the smallest
integer with the property that
$$H^i(M;\pm{\mathbb Z} ) = 0\ ,\ \ \forall i > \hbox{cohdim}_{\pm{\mathbb Z} }(M)\ .$$
If $M$ is orientable, then $H^*(M,\pm{\mathbb Z} ) = H^*(M,{\mathbb Z} )$ and
$\hbox{cohdim}_{\pm{\mathbb Z}}(M) =$\break $\hbox{cohdim} (M)$, the
cohomological dimension of $M$.
A space $X$ is $r$--connected if $\pi_i(X)=0$ for $0\leq i\leq r$. The
connectivity of $X$, written $\conn(X)$, is the largest integer with such a
property. This connectivity is infinite if $X$ is contractible. The
following is our first main result.
\begin{thm}\label{main3} Let $M$ be a compact manifold of dimension $d\geq 1$,
with boundary $\partial M$, and let $U\subset M$ be a closed subset such
that $U\cap\partial M= \emptyset$ and $M-U$ is connected.
We denote by $r$ the connectivity of $M$
if $U\cup\partial M=\emptyset$, or the connectivity of the quotient
$M/(U\cup\partial M)$ if $U\cup\partial M\neq \emptyset$. We assume $0\leq
r<\infty$ and $k\geq 2$. Then
$$
\hbox{cohdim}_{\pm{\mathbb Z}}(B(M-U,k)) \leq
\begin{cases} (d-1)k-r+1, &\hbox{if}\ U\cup\partial M=\emptyset,\\
(d-1)k-r, &\hbox{if}\ U\cup\partial M\neq \emptyset.
\end{cases}
$$ When $M$ is even dimensional orientable, then replace
$\hbox{cohdim}_{\pm{\mathbb Z}}$ by $\hbox{cohdim}$.
\end{thm}
\begin{rem}\label{numbered}
We check this theorem against some known examples:
\begin{enumerate}
\item $B(S^d-\{p\},2)=B({\mathbb R}^d,2)\simeq{\mathbb R} P^{d-1}$
(see \fullref{tp}) and
$\hbox{cohdim}_{\pm{\mathbb Z}}(B({\mathbb R}^d,2))$
$= 2(d-1)-r = d-1 =\hbox{cohdim}_{\pm{\mathbb Z}}({\mathbb R} P^{d-1})$ indeed, where
$r=d-1= \conn(S^d)$.
\item
$B(S^d,2)\simeq{\mathbb R} P^d$ (see \fullref{tp}) and
$\hbox{cohdim}_{\pm{\mathbb Z}} (B(S^d,2)) = d$\ in agreement with our formula.
\item
It is known that for odd primes $p$ and $d\geq 2$,
$H^{(d-1)(p-1)}(B({\mathbb R}^d,p);{\mathbb F}_p)$ is non-trivial and an isomorphic
image of $H^{(d-1)(p-1)}(\mathfrak{S}_p;{\mathbb F}_p)$ (Ossa \cite{ossa} and
Vassiliev \cite{vassiliev}). Our result states that, at least for
even $d$, no higher homology can occur. The cohomological dimension
of $B({\mathbb R}^d,k)$ when using ${\mathbb F}_2$ coefficients is known to be
$(k-\alpha (k))\cdot (d-1)$ where $\alpha (k)$ is the number of 1's in
the dyadic decomposition of $k$ (see Roth \cite{frido}). In the case
$d=2$, $B({\mathbb R}^2,k)$ is the classifying space of the Artin braid group
$B_k:= Br_k({\mathbb R}^2)$ and is homotopy equivalent to a
$(k-1)$--dimensional CW complex so that $\hbox{cohdim} (B({\mathbb R}^2,k))\leq k-1$
in agreement with our calculation.
\end{enumerate}
\end{rem}
\begin{rem} The theorem applies when $M=S^1$ and $U$ is either empty
or a single point. In that case $M-U\cong S^1$ or ${\mathbb R}$. But one knows that
for $k\geq 1$, $B(S^1,k)\simeq S^1$ (\fullref{cns1})
and $B({\mathbb R},k)$ is contractible.
\end{rem}
\begin{cor}\label{twocomplexes} Let $S$ be a Riemann surface and
$Q\subset S$ a finite subset. Then $H^i(B(S-Q,k)) = 0$ if $i\geq k+1$ and
$Q\cup\partial S\neq\emptyset$; or if $i> k+1$ and $Q\cup\partial
S=\emptyset$. \end{cor}
This corollary gives an extension of the ``finiteness'' result of
Napolitano \cite{nap1}. When $S$ is an open surface, then $B(S,k)$ is a Stein
variety and hence its homology vanishes above the complex dimension;
ie, $H_i(B(S,k)) = 0$ for $i> k$. This also agrees with the above computed bounds.
The proof of \fullref{main3} relies on a useful connectivity result of
Nakaoka (\fullref{nakak}). We also use this result to produce sharp
connectivity bounds for the \textit{reduced} symmetric products
\fullref{bounds}. Recall that $\sp{n}(X)$, the $n^{\rm th}$ symmetric product of
$X$, is the quotient of $X^n$ by the permutation action of the symmetric group
$\mathfrak{S}_n$ so that $B(X,n)\subset\sp{n}(X)$ is the subset of
configurations of distinct points. We always assume $X$ is based so there is
an embedding $\sp{n-1}(X)\hookrightarrow\sp{n}(X)$ given by adjoining the
basepoint, with cofiber $\bsp{n}(X)$ the ``$n^{\rm th}$ reduced symmetric'' product of
$X$. The following result expresses the connectivity of $\bsp{n}X$ in terms of
the connectivity of~$X$.
\begin{thm}\label{connectivity} Suppose $X$ is a based $r$--connected simplicial
complex with $r\geq 1$. Then $\bsp{n}(X)$ is $(2n+r-2)$--connected.
\end{thm}
In particular the embedding $\sp{n-1}(X){\ra{1.5}}\sp{n}(X)$ induces
homology isomorphisms in degrees up to $(2n+r-3)$. The proof of this
theorem is totally inspired from Kallel and Karoui \cite{kk} where
similar connectivity results are stated, and it uses the fact that the
homology of symmetric products only depends on the homology of the
underlying complex (Dold \cite{dold}). Note that the bound $2n+r-2$
is sharp as is illustrated by the case $X=S^2$, $r=1$ and
$\bsp{n}(S^2)=S^{2n}$. A slightly weaker connectivity bound than ours
can be found in Welcher \cite[Corollary 4.9]{welcher}.
Note that \fullref{connectivity} is stated for simply connected
spaces. To get connectivity results for reduced symmetric products of
a compact Riemann surface for example we use geometric input from
Kallel and Salvatore \cite{ks2}. This applies to any two dimensional
complex.
\begin{prop}\label{conntwo} Let $X = \bigvee^wS^1\cup
(D^2_1\cup\cdots\cup D^2_r)$ be a two dimensional CW complex with
one-skeleton a bouquet of $w$ circles. Then $\bsp{n}X$ is
$(2n-\min(w,n)-1)$--connected.
\end{prop}
\subsection{Puncturing manifolds}
We give generalizations and a proof simplification of results of
Napolitano \cite{nap1,nap2}. For $S$ a two dimensional topological
surface, $p$ and the $p_i$ points in $S$, it was shown in \cite{nap1}
that, for field coefficients ${\mathbb F}$,
\begin{equation}\label{secondsplit}
H^j(B(S -\{p_1,p_2\},n);{\mathbb F} )\cong
\bigoplus_{t=0}^nH^{j-t}(B(S -\{p\},n-t);{\mathbb F} )\ .
\end{equation}
Here and throughout, $H^* = 0$ when $*<0$ and $B(X,0)$ is a basepoint.
When $S$ is a closed orientable surface and
${\mathbb F}={\mathbb F}_2$, \cite{nap1} establishes furthermore a splitting:
\begin{equation}\label{firstsplit}
H^j(B(S,n);{\mathbb F}_2)\cong
H^{j}(B(S -\{p\},n);{\mathbb F}_2)\oplus H^{j-2}(B(S -\{p\},n-1);{\mathbb F}_2)
\end{equation}
Similar splittings occur in
Cohen \cite{cohen2} and Gorjunov \cite{goryunov}. These splittings as we show extend
to any closed topological manifold $M$ and to any number of
punctures. If $V$ is a vector space, write
$V^{\oplus k}:= V\oplus\cdots\oplus V$ ($k$--times). Given
positive integers $r$ and $s$, we write $p(r,s)$ for the number of ways we can
partition $s$ into a sum of $r$ \textit{ordered} nonnegative integers.
For instance $p(1,s)=1$, $p(2,s)=s+1$ and $p(r,1)=r$.
\begin{thm}\label{main} Let $M$ be a closed connected manifold of
dimension $d$ and $p\in M$. Then:
\begin{equation}\label{main11}
H^j(B(M,n);{\mathbb F}_2)\cong
H^{j}(B(M-\{p\},n);{\mathbb F}_2)\oplus H^{j-d}(B(M-\{p\},n-1);{\mathbb F}_2)
\end{equation}
If moreover $M$ is oriented and even dimensional, then:
\begin{align}\label{main22}
H^j(B(M-&\{p_1,\cdots, p_k\},n);{\mathbb F} )\\
&\cong \bigoplus_{0\leq r\leq n}
H^{j - (n-r)(d-1)}
(B(M-\{p\},r);{\mathbb F} )^{\oplus p(k-1,n-r)}\notag
\end{align}
For an arbitrary closed manifold, \eqref{main22}
is still true with ${\mathbb F}_2$--coefficients.
\end{thm}
\begin{rem}
As an example we can set $M=S^2, k=2=d$ and obtain the
additive splitting $H^j(B({\mathbb C}^*,n);{\mathbb F} )\cong
\bigoplus_{0\leq r\leq n} H^{j - (n-r)} (B({\mathbb C} ,r);{\mathbb F} )$\
as in \eqref{secondsplit}, where
${\mathbb C}^*$ is the punctured disk (this isomorphism holds integrally
according to \cite{goryunov}). Note that the left hand side is the
homology of the hyperplane arrangement of `Coxeter type'' $B_n$;
that is $B({\mathbb C}^*,n)$ is an Eilenberg--MacLane space
$K(Br_n({\mathbb C}^*),1)$ with fundamental group isomorphic to the subgroup of
Artin's braids $Br_{n+1}({\mathbb C} )$ consisting of those braids which leave
the last strand fixed. It can be checked that the abelianization
of this group for $n\geq 2$ is ${\mathbb Z}^2$, which is consistent with the calculation
of $H^1$ obtained from the above splitting.
\end{rem}
Napolitano's approach to \eqref{secondsplit} is through spectral
sequence arguments and ``resolution of singularities'' as in Vassiliev
theory. Our approach relies on a simple geometric manipulation of the
truncated symmetric products as discussed earlier (see \fullref{punctured}). \fullref{main} is a consequence of combining a
Poincar\'e--Lefschetz duality statement, the identification of truncated
products of the circle with real projective space, Mostovoy
\cite{mostovoy}, and a homological splitting result due to Steenrod
(\fullref{tp}). Note that the splitting in \eqref{main11} is no longer true
with coefficients other than ${\mathbb F}_2$ and is replaced in general by a
long exact sequence (\fullref{longexact}).
\subsection{Homological stability}\label{homstab}
This is the third and last part of the paper.
For $M$ a closed smooth manifold of dimension
$\dim M = d$, let $\tau^+M$ be the fiberwise one-point
compactification of the tangent bundle $\tau M$ of $M$ with fiber $S^d$.
We write $\Gamma (\tau^+M)$ for the space of sections of $\tau^+M$. Note that this
space has a preferred section (given by the points at infinity).
There are now so-called ``scanning'' maps for any $k\in{\mathbb N}$
(McDuff \cite{dusa}, B{\"o}digheimer, Cohen and Taylor \cite{bct},
Kallel \cite{quarterly})
\begin{equation}\label{firstscan}
S_k \co B(M,k){\ra{1.5}} \Gamma_k (\tau^+M )
\end{equation}
where $\Gamma_k(\tau^+M)$ is the component of degree $k$ sections (see
\fullref{scan}). In important work, McDuff shows that $S_k$
induces a homology isomorphism through a range that increases with
$k$. In many special cases, this range needs to be made explicit and
this is what we do next.
We say that a map $f\co X\rightarrow Y$ is homologically $k$--connected (or a
homology equivalence up to degree $k$) if $f_*$ in homology is an isomorphism
up to and including degree $k$.
\begin{prop}\label{main4}
Let $M$ be a closed manifold of dimension $d\geq 2$ and $k\geq 2$.
Assume the map $+\co B(M-p,k){\ra{1.5}} B(M-p,k+1)$, which consists of adding a point
near $p\in M$ (see \fullref{stability}), is
homologically $s(k)$--connected. Then
scanning $S_k$ is homologically $s(k-1)$--connected. Moreover
$s(k)\geq [k/2]$ (Arnold).
\end{prop}
When $k=1$, we give some information about $S_1 \co
M{\ra{1.5}}\Gamma_1(\tau^+M)$ in \fullref{s1}. Note that $s(k)$ is an
increasing function of $k$. Arnold's inequality $s(k)\geq [k/2]$ is
proven by Segal in \cite{segal1}. This bound is far from being
optimal in some cases since for instance, for $M$ a compact Riemann
surface, $s(k) = k-1$ (Kallel and Salvatore \cite{ks}). Note that the
actual connectivity of the map $+\co B(M-p,k){\ra{1.5}} B(M-p,k+1)$ is often
$0$ since if $\dim M > 2$, this map is never trivial on $\pi_1$ (see
\fullref{folklore}).
The utility of \fullref{main4} is that in some particular cases, knowledge
of the homology of braid spaces in a certain range informs on the homology of
some mapping spaces. Here's an interesting application to computing the
abelianization of the braid group of a surface (this was an open problem for
some time).
\begin{cor} \label{cor1} For $S$ a compact Riemann surface of genus $g\geq 1$,
and $k\geq 3$, we have the isomorphism:
$H_1(B(S,k);{\mathbb Z} ) = {\mathbb Z}_2\oplus{\mathbb Z}^{2g}$.
\end{cor}
\begin{proof} $\tau^+S$ is trivial since $S$ is stably parallelizable
and $\Gamma (\tau^+S) \simeq \Map (S, S^2)$. Suppose $S$ has odd genus.
Then $S_k \co H_1(B(S,k)){\ra{1.5}} H_1(\Map_k(S,S^2))$ is degree preserving
(where degree is $k$)
and according to \fullref{main4} it is an isomorphism when
$k\geq 3$ using the bound provided by Arnold.
But $\pi:=\pi_1(\Map_k(S,S^2))$ was
computed in \cite{contemp} and it is some extension
$$0{\ra{1.5}}{\mathbb Z}_{2|k|}{\ra{1.5}}\pi{\ra{1.5}} {\mathbb Z}^{2g}{\ra{1.5}} 0$$
with a generator $\tau$ and torsion free generators $e_1,\ldots, e_{2g}$
with non-zero commutators $[e_i, e_{g+i}] = \tau^2$ and with $\tau^{2|k|} = 1$.
Its abelianization $H_1$ is
${\mathbb Z}^{2g}\oplus{\mathbb Z}_2$ as desired. When $g$ is even,
$S_k \co B(S,k){\ra{1.5}} \Map_{k-1}(S,S^2)$
decreases degree by one (see \fullref{sectionspace})
but the argument and the conclusion are still the same.
\end{proof}
\begin{rem} The above corollary is also a recent calculation of
Bellingeri, Gervais and Guaschi \cite{bellingo} which is more
algebraic in nature and relies on the full presentation of the braid
group $\pi_1(B(S,k))$ for a positive genus Riemann surface
$S$.\end{rem}
\begin{exam} We can also apply \fullref{main4} to the case when $M$ is a
sphere $S^n$. Write $\Map(S^n,S^n) = \coprod_{k\in{\mathbb Z}}
\map{k}(S^n,S^n)$ for the space of self-maps of $S^n$;
$\map{k}(S^n,S^n)$ being the component of degree $k$ maps. Since
$\tau^+S^n$ is trivial there is a homeomorphism $\Gamma (\tau^+S^n
)\cong \Map(S^n,S^n)$. However and as pointed out by Salvatore in
\cite{paolo}, one has to pay extra attention to components:
$\Gamma_k(\tau^+S^n)\cong \map{k}(S^n,S^n)$ if $n$ is odd and
$\Gamma_k(\tau^+S^n)\cong \map{k-1}(S^n,S^n)$ if $n$ is even (see
\fullref{sectionspace}). Let $p(n)=1$ if $n$ is even and $0$ if
$n$ is odd. Vassiliev \cite{vassiliev} checks that
$H_*(B({\mathbb R}^n,k);{\mathbb F}_2){\ra{1.5}}$ $ H_*(B({\mathbb R}^n,k+1);{\mathbb F}_2)$ is an
isomorphism up to degree $k$ and so we get that the map of the $k^{\rm th}$
braid space of the sphere into the higher free loop space
$$B(S^n,k){\ra{1.5}} \map{k-p(n)}(S^n,S^n)$$ is a mod--$2$ homology
equivalence up to degree $k-1$.
The homology of $\Map(S^n,S^n)$ is worked out for all field coefficients in
\cite{paolo}.
\end{exam}
\begin{rem} The braid spaces fit into a filtered construction
$$B(M,n)=:B^1(M,n)\hookrightarrow B^2(M,n)
\hookrightarrow\cdots\hookrightarrow B^n(M,n):=\sp{n}(M)$$ where
$B^p(M,n)$ for $1\leq p\leq n$ is defined to be the subspace
\begin{equation}\label{spnd}
\{[x_1,\ldots, x_n]\in\sp{n}(M)\ |\ \hbox{no more than
$p$ of the $x_i$'s are equal}\}\ .
\end{equation}
Many of our results can be shown to extend with
straightforward changes to $B^p(M,n)$ for $p\geq 1$ when $M$ is a compact
Riemann surface. Some detailed statements and calculations can be found in
\cite{ks}.
\end{rem}
{\bf Acknowledgements}\qua We are grateful to the referee for his
careful reading of this paper. We would like to thank Toshitake Kohno,
Katsuhiko Kuribayashi and Dai Tamaki for organizing two most enjoyable
conferences first in Tokyo and then in Matsumoto. Fridolin Roth, Daniel
Tanr\'e and Stefan Papadima have motivated part of this work with relevant
questions. We finally thank Fridolin and Paolo Salvatore for
commenting through an early version of this paper.
\section{Basic examples and properties}\label{braids}
As before we write an element of $\sp{n}(X)$ as an unordered $n$--tuple
of points\break $[x_1,\ldots, x_n]$ or sometimes also as an abelian finite
sum $\sum x_i$ with $x_i\in X$. For a closed manifold $M$,
$\sp{n}(M)$ is again a closed manifold for $n>1$ if and only if $M$ is of
dimension two, Wagner \cite{wagner}. We define
$$B(M,n) = \{[x_1,\ldots, x_n]\in\sp{n}(M), x_i\neq x_j, i\neq j\}\ .$$
It is convenient as well to define the ``ordered'' $n$--fold configuration space
$F(M,n)= M^n - \Delta_{\rm fat}$ where
\begin{equation}\label{fat}
\Delta_{\rm fat}:= \{(x_1,\ldots, x_n)\in M^n\ |\ x_i=x_j\ \hbox{for some}\
i\neq j\}
\end{equation}
is the \textit{fat diagonal} in $M^n$.
The configuration space $B(M,n)$ is obtained as the quotient
$F(M,n)/\mathfrak{S}_n$ under the free permutation action of $\mathfrak{S}_n$
\footnote{In the early literature on embedding theory, Feder \cite{feder},
$B(M,2)$ was referred to
as the ``reduced symmetric square''.}.
Both $F(M,n)$ and $B(M,n)$ are (open) manifolds of dimension
$nd$, $d=\dim M$.
Next are some of the simplest non-trivial braid spaces one can describe.
\begin{lem}\label{c2sn} $B(S^n,2)$ is an open $n$--disc bundle over ${\mathbb R} P^n$.
When $n=1$, this is the open M\"{o}bius band (see \fullref{cns1}).
\end{lem}
\begin{proof} There is a surjection $\pi \co B(S^n,2){\ra{1.5}}{\mathbb R} P^n$ sending
$[x,y]$ to the unique line $L_{[x,y]}$ passing through the origin and
parallel to the non-zero vector $x-y$. The preimage $\pi^{-1}(L_{[x,y]})$
consists of all pairs $[a,b]$ such that $a-b$ is a multiple of $x-y$.
This can be identified with an ``open'' hemisphere determined
by the hyperplane orthogonal to $L_{[x,y]}$ (ie $B(S^n,2)$
can be identified with the dual tautological bundle over ${\mathbb R} P^n$).
\end{proof}
\begin{exam}\label{c2rn} Similarly we can see that
$B({\mathbb R}^{n+1},2)\simeq{\mathbb R} P^{n}$ and that
$B(S^n,2)\hookrightarrow B({\mathbb R}^{n+1},2)$ is a deformation retract.
Alternatively one can see directly that
$B(S^n,2)\simeq{\mathbb R} P^n$ for there is an inclusion $i$ and a retraction $r$:
\begin{eqnarray*}
i \co S^n\hookrightarrow F(S^n,2)\ \ &,&\ \ \ r \co F(S^n,2){\ra{1.5}} S^n\cr
x\longmapsto (x,-x)\ \ &&\ \ \ \ \ \ \ (x,y)\mapsto {x-y\over |x-y|}
\end{eqnarray*}
Identify $S^n$ with $i(S^n)$ as a subset of $F(S^n,2)$.
Then $F(S^n,2)$ deformation retracts onto this subset via
$$
f_t(x,y) = \left({x-ty\over |x-ty|} , {y-tx\over |y-tx|}\right)
$$
(well-defined, since $x=ty$ with $|x|=|y|=1$ and $0\le t\le 1$ would force
$t=1$ and $x=y$, which is excluded in $F(S^n,2)$; similarly for $y-tx$).
We have that $f_t$ is
${\mathbb Z}_2$--equivariant with respect to the involution $(x,y)\mapsto
(y,x)$, that $f_0=id$ and that $f_1 \co F(S^n,2){\ra{1.5}} S^n$ is
${\mathbb Z}_2$--equivariant with respect to the antipodal action on $S^n$. That
is, $S^n$ is a ${\mathbb Z}_2$--equivariant deformation retract of $F(S^n,2)$, which
yields the claim.
\end{exam}
\begin{exam} $B({\mathbb R}^2,3)$ is up to homotopy the
complement of the trefoil knot in $S^3$. \end{exam}
\begin{exam}\label{c2rp2}
There is a projection $B({\mathbb R} P^2,2){\ra{1.5}}{\mathbb R} P^2$
which, to any two distinct lines through the origin in ${\mathbb R}^3$, associates
the plane they generate and this is an element of the Grassmann manifold
$Gr_2({\mathbb R}^3)\cong Gr_1({\mathbb R}^3)
= {\mathbb R} P^2$. The fiber over a given plane
parameterizes various choices of two distinct lines in
that plane and that is $B({\mathbb R} P^1,2)=B(S^1,2)$.
As we just discussed, this is an open M\"{o}bius band $M$ and
$B({\mathbb R} P^2,2)$ fibers over ${\mathbb R} P^2$ with fiber $M$ (see Feder \cite{feder}).
Interestingly $\pi_1(B({\mathbb R} P^2,2))$ is a quaternion
group of order $16$ (Wang \cite{wang}).
\end{exam}
To describe the braid spaces of the circle we can consider the
multiplication map:
$$m \co \sp{n}(S^1){\ra{1.5}} S^1\ \ ,\ \ [x_1,\ldots, x_n]\mapsto
x_1x_2\cdots x_n$$ Morton \cite{morton} shows that $m$ is a locally
trivial bundle with fiber the closed $(n-1)$--dimensional disc and
this bundle is trivial if $n$ is odd and non-orientable if $n$ is
even. In particular $\sp{2}(S^1)$ is the closed M\"{o}bius band. In
fact one can identify $m^{-1}(1)$ with a closed simplex $\Delta^{n-1}$
so that the configuration space component $m^{-1}(1)\cap B(S^1,n)$
corresponds to the open part. This is a non-trivial construction that
can be found in Morton \cite{morton} and Morava \cite{jack}. Since
$B(S^1,n)$ fits in $\sp{n}(S^1)$ as the open disk bundle one gets that
\begin{prop}\label{cns1} $B(S^1,n)$ is a bundle over $S^1$ with fiber
the open unit disc $D^{n-1}$. This bundle is
trivial if and only if $n$ is odd.
\end{prop}
Examples \ref{c2rn} and \ref{c2rp2} show that when $\dim M$ is odd and $\neq 1$,
or when $M$ is not orientable, $B(M,k)$ fails to be orientable.
The following explains why this needs to be the case.
\begin{lem}[Folklore]\label{folklore}
Suppose $M$ is a manifold of dimension $d\geq 2$ and pick $n\geq 2$. Then
$B(M,n)$ is orientable if and only if $M$ is orientable of even dimension.
\end{lem}
\begin{proof}
We consider the $\mathfrak{S}_n$--covering $\pi \co F(M,n)\fract{\mathfrak{S}_n}{{\ra{1.5}}}
B(M,n)$. If $M$ is not orientable, then neither is $M^n$. Now $i \co
F(M,n)\hookrightarrow M^n$ is the inclusion of the complement of
codimension at least two strata
so that $\pi_1(F(M,n)){\ra{1.5}} \pi_1(M)^n$ is surjective and hence so is the
map on $H_1$. The dual map in cohomology is an injection mod $2$ and hence
$w_1(F(M,n))=i^*(w_1(M^n))\neq 0$ since $w_1(M^n)\neq 0$. This implies that
$F(M,n)$ is not orientable if $M$ isn't. It follows that the quotient
$B(M,n)$ is not orientable either.
Suppose then that $M$ is orientable.
If $d:= \dim M = 2$, then $M$ is a Riemann surface,
$B(M,n)$ is open in $\sp{n}(M)$ which is a complex manifold and hence is
orientable. Suppose now that $d:= \dim M > 2$ so that
$\pi_1F(M,n) = \pi_1(M^n)$ (since the fat diagonal has codimension $>2$).
Notice that we have an embedding
$\iota \co B({\mathbb R}^d,n)\hookrightarrow B(M,n)$ coming from the embedding
of an open disc ${\mathbb R}^d\hookrightarrow M$. Now $\pi_1(B({\mathbb R}^d,n))=\mathfrak{S}_n$
when $d>2$, and $\iota$ induces a section of the short exact sequence
of fundamental groups for the $\mathfrak{S}_n$--covering $\pi$ so we have
a semi-direct product decomposition
$$\pi_1(B(M,n)) = \pi_1(M^n) \ltimes\mathfrak{S}_n\ , \ \ \ d > 2\ .$$
Let's argue then that $B({\mathbb R}^d,n)$ is orientable if and only if $d$ is
even. Denote by $\tau_x$ the tangent space at $x\in {\mathbb R}^d$ and write $\pi \co
F({\mathbb R}^d,n){\ra{1.5}} B({\mathbb R}^d,n)$ the quotient map. A transposition
$\sigma\in\mathfrak{S}_n$ acts on the tangent space to $B({\mathbb R}^d,n)$ at some
chosen basepoint say $[x_1,\ldots, x_n]$ which is identified with the tangent
space $\tau_{x_1}\times\cdots\times\tau_{x_n}$ at say $(x_1,\ldots,
x_n)\in\pi^{-1}([x_1,\ldots, x_n])\subset F({\mathbb R}^d,n)\subset ({\mathbb R}^d)^n$. The
action of $\sigma = (ij)$ interchanges the two factors $\tau_{x_i}$ and
$\tau_{x_j}\cong{\mathbb R}^d$ and thus has determinant $(-1)^d$. Orientation is
preserved only when $d$ is even and the claim follows (for the relation
between orientation and fundamental group see Novikov \cite[Chapter 4]{novikov}).
\end{proof}
Note that the lemma above is no longer true in the one-dimensional
case according to \fullref{cns1}.
\section{Truncated symmetric products and duality}\label{tp}
The heroes here are the truncated symmetric product functors $TP^n$
which were first put to good use by B{\"o}digheimer, Cohen and Milgram
in \cite{bcm} and Milgram and L{\"o}ffler in \cite{lm}. For $n\geq 2$,
define the identification space
$$TP^n(X) := \sp{n}(X)/_{\hbox{\footnotesize$\sim$}}\ ,\ \
[x,x,y_1,\ldots, y_{n-2}]\sim [*,*,y_1,\ldots, y_{n-2}]$$
where as always $*\in X$ is the basepoint.
Clearly $TP^1X = X$ and
we set $TP^0(X) = *$. Note that by adjunction of basepoint
$[x_1,\ldots, x_n]\mapsto [*,x_1,\ldots, x_n]$, we obtain topological
embeddings $\sp{n}(X){\ra{1.5}}\sp{n+1}(X)$ and $TP^n(X){\ra{1.5}} TP^{n+1}(X)$,
whose limits are ${SP}^{\infty} (X)$ and
$TP^{\infty}(X)$ respectively.
We identify $\sp{n-1}(X)$ and $TP^{n-1}(X)$ with their images in $\sp{n}(X)$
and $TP^{n}(X)$ under these embeddings and we write
\begin{equation}\label{tpnbar}
\overline{TP}^n(X):= TP^n(X)/TP^{n-1}(X)
\end{equation}
for the \textit{reduced} truncated product.
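For instance, $\overline{TP}^1(X)=TP^1(X)/TP^0(X)=X/\{*\}\cong X$, while
$\overline{TP}^n(S^1)\cong{\mathbb R} P^n/{\mathbb R} P^{n-1}=S^n$ by Mostovoy's
homeomorphism $TP^n(S^1)\cong{\mathbb R} P^n$ (\fullref{circle}).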
These are based spaces by construction. We will set
$\overline{TP}^0(X):= S^0$. The following two properties are crucial.
\begin{thm}\label{property}\
\begin{enumerate}
\item {\rm(Dold and Thom \cite{dt})}\qua
$\pi_i(TP^{\infty}(X))\cong\tilde H_i(X;{\mathbb F}_2)$
\item {\rm(Milgram and L{\"o}ffler \cite{lm})}\qua There is a
splitting$$H_*(TP^n(X);{\mathbb F}_2)\cong H_*(TP^{n-1}(X);{\mathbb F}_2)\oplus
\tilde H_*(\overline{TP}^{n}X;{\mathbb F}_2 ).$$
\end{enumerate}
\end{thm}
The splitting in (2) is obtained from the long exact sequence for the
pair $(TP^n(X),$\break$TP^{n-1}(X))$ and the existence of a retract
$H_*(TP^n(X);{\mathbb F}_2){\ra{1.5}} H_*(TP^{n-1}(X);{\mathbb F}_2 )$ constructed using
a transfer argument. In fact this splitting can be viewed as a
consequence of the following homotopy equivalence discussed in
\cite{lm} and Zanos \cite{zanos}.
\begin{lem}
$TP^{\infty}(TP^n(X))\simeq TP^{\infty}(\overline{TP}^n(X))
\times TP^{\infty}(TP^{n-1}(X))$.
\end{lem}
Further interesting splittings of this sort for a variety of other
functors are investigated in \cite{zanos}. The prototypical and basic
example of course is Steenrod's original splitting of the homology of
symmetric products (which holds with integral coefficients).
\begin{thm}[Steenrod, Nakaoka]\label{steenrod}
The induced basepoint adjunction map on
homology $H_*(\sp{n-1}(X);{\mathbb Z} ){\ra{1.5}} H_*(\sp{n}(X);{\mathbb Z} )$
is a split monomorphism.
\end{thm}
\subsection{Duality and homological dimension}
The point of view we adopt here is that $B(M,n) = TP^n(M)-TP^{n-2}(M)$
as spaces. A version of Poincar\'e--Lefschetz duality
(\fullref{duality}) can then be used to relate the cohomology of
$B(M,k)$ to the homology of reduced truncated products. This idea is
of course not so new (see B{\"o}digheimer, Cohen and Taylor \cite{bct}
or M{\`u}i \cite{mui}).
If $U\subset X$ is a closed cofibrant subset of $X$, define in
$\sp{n}(X)$ the ``ideal'':
\begin{equation}\label{ideal}
\underline{U} := \{[x_1,\ldots, x_n]\in\sp{n}(X),
x_i\in U\ \hbox{for some $i$}\}
\end{equation}
For example, if $*\in X$ is the basepoint, then $\underline{*} =
\sp{n-1}(X)\subset\sp{n}(X)$. Let $S$ be the ``singular set'' in
$\sp{n}(X)$ consisting of unordered tuples with at least two repeated
entries. This is a closed subspace.
\begin{lem}\label{quotient} With $U\neq\emptyset$,
$\sp{n}(X)/({\underline{U}\cup S})= \overline{TP}^{n}(X/U)$.
\end{lem}
\begin{proof} Denote by $*$ the basepoint of $X/U$ which is the image of $U$
under the quotient $X{\ra{1.5}} X/U$. Then by inspection
$$\sp{n}(X)/(\underline{U}\cup S) = \sp{n}(X/U)/(\underline{*}\cup S)\ .$$
Modding out $\sp{n}(X/U)$ by $S$ we obtain
$TP^n(X/U)/TP^{n-2}(X/U)$. Modding out further
by $\underline{*}$ we obtain the desired quotient.
\end{proof}
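In particular, taking $U=\{*\}$ recovers $\sp{n}(X)/(\underline{*}\cup S)=
\overline{TP}^{n}(X)$, since $X/\{*\}\cong X$.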
The next lemma is the fundamental observation which states that for
$M$ a compact manifold with boundary and $U\hookrightarrow M$ a closed
cofibration, $B(M-U,n) \cong \sp{n}(M) - (\underline{U\cup\partial
M}\cup S)$ is Poincar\'e--Lefschetz dual to the quotient
$\sp{n}(M)/(\underline{U\cup\partial M}\cup S)$. More precisely, set
\begin{equation}\label{m+}
\overline{M} = M/(U\cup\partial M)
\end{equation}
with the understanding that $\bar M=M$ if $U\cup\partial M$ is empty or
a single point.
The following elaborates on \cite[Theorem 3.2]{bcm}.
\begin{lem}\label{duality}
If $M$ is a compact manifold of dimension $d\geq 1$, $U\subset M$ a closed
subset with $M-U$ connected, $U\cap\partial
M=\emptyset$ and $\overline{M}$ as in \eqref{m+}, then
$$ H^i(B(M-U,k);\pm{\mathbb Z} )\cong \begin{cases}
H_{kd-i}(TP^k(\overline{M}), TP^{k-1}(\overline{M});{\mathbb Z} ),&\
\hbox{if}\ U\cup\partial M\neq\emptyset,\\ H_{kd-i}(TP^k(M),
TP^{k-2}(M);{\mathbb Z} ),&\ \hbox{if}\ U\cup\partial M=\emptyset.
\end{cases}
$$
The isomorphism holds with coefficients ${\mathbb F}_2$. When $M$ is even
dimensional and orientable, we can replace $\pm{\mathbb Z}$ by the trivial
module ${\mathbb Z}$. \end{lem}
\begin{proof}
Suppose $X$ is a compact oriented $d$--manifold with boundary
$\partial X$. Then Poincar\'e--Lefschetz duality gives an isomorphism
$H^{d-q}(X;{\mathbb Z} )\cong H_q(X,\partial X;{\mathbb Z} )$. Apply this to the
following situation: $X$ is a finite $d$--dimensional CW--complex,
$V\subset X$ is a closed subset of $X$, and $N$ is a
tubular neighborhood of $V$ which deformation retracts onto it:
$$V\subset N\subset X\ .$$
Write $\bar N$ for its closure and set
$\partial\bar N = \partial (X-N)=\bar N-N$.
Assume that $X-N$ is an orientable $d$--dimensional manifold
with boundary $\partial\bar N$. Then
we have a series of isomorphisms:
\begin{equation}\label{iso}
H^{d-q}(X-V;{\mathbb Z} )\cong
H^{d-q}(X-N;{\mathbb Z} )\cong H_q(X-N,\partial\bar N;{\mathbb Z} )
\cong H_q(X,V;{\mathbb Z} )
\end{equation}
Let's now apply \eqref{iso} to the case when $X =
\sp{k}(\overline{M})$ with $M$ as in the lemma and with $V$ the
closed subspace consisting of configurations $[x_1,\ldots, x_k]$ such
that
(i)\qua $x_i=x_j$ for some $i\neq j$,\qua or
(ii)\qua $x_i=*$ for some $i$, where $*$ is
the point to which $U\cup\partial M$ is collapsed.
As discussed in \fullref{quotient},
$\sp{k}(\overline{M})/\underline{*} =
\sp{k}(M)/(\underline{U\cup\partial M})$ so that
$\sp{k}(\overline{M})/V$ $ = \sp{k}(M)/(\underline{U\cup\partial M}\cup
S)$ with $S$ again being the image of the fat diagonal in $\sp{k}(M)$.
Then, according to \fullref{quotient} and to its proof we see that
$$\sp{k}(\overline{M})/V =
\begin{cases}
TP^k(\overline{M})/TP^{k-1}(\overline{M}),& \hbox
{if $\partial M\neq\emptyset$ or $U\neq\emptyset$}, \\
TP^k(M)/TP^{k-2}(M),& \hbox{if $M$ closed and $U=\emptyset$}.
\end{cases}
$$
Now $B(M-U,k)\cong \sp{k}(M)- (\underline{U\cup\partial M}\cup S)
=\sp{k}(\overline{M})-V$ is connected (since $M-U$ is), it is
$kd$--dimensional, and it is orientable if $M$ is even dimensional and
orientable (\fullref{folklore}).
Applying \eqref{iso} yields the
result in the orientable case. When $B(M-U,k)$ is non-orientable,
Poincar\'e--Lefschetz duality holds with twisted coefficients.
\end{proof}
A version of this lemma has been greatly exploited in \cite{bcm,ks} to
determine the homology of braid spaces and analogs. The following is
immediate.
\begin{cor}\label{conR} With $M$, $U\subset M$ as in \fullref{duality},
let
$$R_k = \begin{cases}
\conn(TP^k(\overline{M})/TP^{k-1}(\overline{M})), &\hbox{if}\ U\cup\partial M\neq\emptyset,\\
\conn(TP^k(M)/TP^{k-2}(M)), &\hbox{if}\ U\cup\partial M=\emptyset.
\end{cases}
$$
Then $\hbox{cohdim}_{\pm{\mathbb Z}}(B(M-U,k)) = dk-R_k-1$.
\end{cor}
\fullref{main3} is now a direct consequence of the following result.
\begin{lem}\label{R}
Let $M, U$ and $\overline{M}$ as above,
$r=\conn(\overline{M})$ with $r\geq 1$. Then
$$R_k \geq \begin{cases}
k+r-1, &\hbox{if}\ U\cup\partial M\neq\emptyset,\\
k+r-2, &\hbox{if}\ U\cup\partial M=\emptyset .
\end{cases}$$
\end{lem}
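To illustrate these bounds, take $M=D^2$ the closed $2$--disc and $U=\emptyset$,
so that $\overline{M}=D^2/\partial D^2=S^2$ and $r=\conn(S^2)=1$. Then
$R_k\geq k$, and \fullref{conR} gives
$\hbox{cohdim}(B(D^2,k))\leq 2k-k-1=k-1$; since $B(D^2,k)\simeq B({\mathbb R}^2,k)$
is a classifying space for the braid group on $k$ strands, this is consistent
with the classical fact that this group has cohomological dimension $k-1$.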
The proof of this key lemma is based on a computation of Nakaoka
\cite[Proposition 4.3]{nakaoka}. We write $Y^{(k)}$ for the $k$--fold smash
product of a based space $Y$ and $X_{\mathfrak{S}_k}$ the orbit space of a
$\mathfrak{S}_k$--space $X$.
\begin{thm}[Nakaoka]\label{nakak}
If $Y$ is $r$--connected, then
$(Y^{(k)}/\Delta_{\rm fat})_{\mathfrak{S}_k}$
is $r+k-1$--connected.
\end{thm}
\begin{rem}
In fact Nakaoka only proves the homology version of this result and
also assumes $r\geq 1$. An inspection of his proof shows that $r\geq
0$ works as well. Also his homology statement can be upgraded to a
genuine connectivity statement. To see this, we can assume that
$k\geq 2$ (the case $k=1$ being trivial). One needs to show in that
case that $\pi_1((Y^{(k)}/\Delta_{\rm fat})_{\mathfrak{S}_k})=0$. This
follows by an immediate application of Van Kampen and the fact that
$\pi_1(Y^{(k)}/\mathfrak{S}_k)=\pi_1(\bsp{k}Y)=0$ for $k\geq 2$. To
see this last statement, recall that the natural map
$\pi_1(Y){\ra{1.5}}\pi_1(\sp{k}Y)$ factors through $H_1(Y;{\mathbb Z} )$ and
then induces an isomorphism $H_1(Y;{\mathbb Z})\cong\pi_1(\sp{k}Y)$ when
$k\geq 2$ (Smith \cite{smith}). But if $\sp{k-1}(Y)\hookrightarrow
\sp{k}(Y)$ induces a surjection on fundamental groups, then the
cofiber is simply connected (Van Kampen). \end{rem}
\begin{proof} (of \fullref{R} and \fullref{main3}) By construction we
have the equality $\overline{TP^k}(Y)=
(Y^{(k)}/\Delta_{\rm fat})_{\mathfrak{S}_k}$. The connectivity of
$TP^k(M)/TP^{k-1}(M)$ is (at least) $k+r-1$ according to \fullref{nakak}, while that of $TP^{k-1}(M)/TP^{k-2}(M)$ is at least $k+r-2$
which means that $\conn (TP^k(M)/TP^{k-2}(M)) \geq k+r-2$ (by the long exact
sequence of the triple $(TP^{k-2}(M),TP^{k-1}(M),TP^k(M))$). This produces
the lower bounds on $R_k$ in \fullref{R}. Since the cohomology of $B(M-U,k)$
starts to vanish at $dk-R_k$ (\fullref{conR}), \fullref{main3}
follows.
\end{proof}
\section{Braid spaces of punctured manifolds}\label{punctured}
We start with a simple proof of \fullref{main}, \eqref{main11};\
$\dim M=d\geq 2$ throughout.
\begin{proof}[Proof of \fullref{main}, \eqref{main11}]
This is a direct computation (with $M$ closed)
\begin{eqnarray*}
H^j(B(M,n);{\mathbb F}_2)
&\cong& H_{nd-j}(TP^nM, TP^{n-2}M;{\mathbb F}_2 )\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \
\hbox{(\fullref{duality})}\\
&\cong&
\tilde H_{nd-j}(\overline{TP}^nM;{\mathbb F}_2 )\oplus
\tilde H_{nd-j}(\overline{TP}^{n-1}M;{\mathbb F}_2 ) \ \ \ \ \ \ (\text{by}\ \ref{property}, (2))\\
&\cong&H^j(B(M-\{p\},n);{\mathbb F}_2)\oplus
H^{j-d}(B(M-\{p\},n-1);{\mathbb F}_2 )
\end{eqnarray*}
In this last step we have rewritten
$H_{nd-j}$ as
$H_{(n-1)d-(j-d)}$ and reapplied
\fullref{duality}.
\end{proof}
\textbf{Example}\qua When $M=S^d$ and $n=2$, then
$B(S^d,2)\simeq{\mathbb R} P^d$ and $B(S^d-p,2)=B({\mathbb R}^d,2) \simeq {\mathbb R} P^{d-1}$
in full agreement with the splitting. This shows more importantly that
the splitting is not valid for coefficients other than ${\mathbb F}_2$. The
general case is covered by the following observation of Segal and
McDuff.
\begin{lem}\label{longexact}{\rm (McDuff \cite{dusa})}\qua There
is a long exact sequence:
\begin{align*}
{\ra{1.5}} H_{*-d+1}(B(M-*,n-1))&{\ra{1.5}}
H_*(B(M-*,n))\\&{\ra{1.5}} H_*(B(M,n)){\ra{1.5}}
H_{*-d}(B(M-*,n-1))\cdots
\end{align*}
\end{lem}
\begin{proof}
Let $U$ be an open disc in $M$ of radius $<\epsilon$ and let $N =
M-U$. We have that $B(M-*,n)\simeq B(N,n)$. There is an obvious
inclusion $B(N,n){\ra{1.5}} B(M,n)$ and so we are done if we can show
that the cofiber of this map is $\Sigma^dB(N,n-1)_+$. To that end
using a trick as in \cite{dusa} (proof of Theorem 1.1) we replace
$B(M,n)$ by the homotopy equivalent model $B'(M,n)$ of
configurations $[x_1,\ldots, x_n]\in B(M,n)$ such that at most one
of the $x_i$'s is in $U$. The cofiber of $B(N,n)\hookrightarrow
B'(M,n)$ is a space based at $*$ and consists of pairs $(x,D)\in
\bar U\times B(N,n-1)$ such that if $x\in\partial\bar U$ then
everything is collapsed out to $*$. But $U\cong D^d$ and $\bar
U/\partial \bar U = S^d$ so that the cofiber is the half-smash
product $S^d\rtimes B(N,n-1) = \Sigma^dB(N,n-1)_+$ as asserted.
\end{proof}
In order to prove \fullref{main} we need the following result of Mostovoy.
\begin{lem}{\rm(Mostovoy \cite{mostovoy})}\label{circle}\qua
There is a homeomorphism $\tp{n}(S^1)\cong{\mathbb R} P^n$.\end{lem}
\begin{rem} We only need that the spaces be homotopy equivalent.
It is actually not hard to see that
$\tp{n}(S^1)$ has the same homology as ${\mathbb R} P^n$ since it can be
decomposed into cells, one in each dimension $\leq n$, and with
the right boundary maps. The $k^{\rm th}$ skeleton is $\tp{k}(S^1)$. Indeed
identify $S^1$ with $[0,1]/\sim$. A point in $\tp{k}(S^1)$ can be
written as a tuple $0\leq t_1\leq\cdots\leq t_k\leq 1$ with
identifications at $t_1=0, t_k=1$ and $t_i=t_{i+1}$. The set of all
such points is therefore the image $\sigma^k$ of a $k$--simplex
$\Delta^k{\ra{1.5}}\tp{k}(S^1)$ with identifications along the faces
$F_i\Delta^k$. Since all faces corresponding to $t_i=t_{i+1}$ map to
the lower skeleton ($\tp{k-2}(S^1)$) and since the last face
$F_k\Delta^k$ (when $t_k=1$) is identified with the zeroth face
($t_1=0$) in $\tp{k}(S^1)$, the corresponding \textit{chain} map sends
the boundary chain $\partial\sigma^k$ to the image of
$\partial\Delta^k = \sum_{i=0}^k (-1)^iF_i\Delta^k$; that is, to the
image of $F_0\Delta^k + (-1)^kF_k\Delta^k$, which is
$(1+(-1)^k)\sigma^{k-1}$. The boundary maps thus alternate between $0$
and multiplication by $2$, exactly as in the standard cell structure on
${\mathbb R} P^n$.
\end{rem}
We need one more lemma.
\begin{lem} Set $\overline{TP}^0(X) = S^0$. Then
$\overline{TP}^n(X\vee Y) = \bigvee_{r+s=n} \overline{TP}^r(X)
\wedge\overline{TP}^s(Y)$.
\end{lem}
\begin{proof}
Here the smash products are taken with respect to the canonical
basepoints of the various $\overline{TP}$'s. A configuration
$[z_1,\ldots, z_n]$ in $TP^n(X\vee Y)$ can be decomposed into a pair
of the form $[x_1,\ldots, x_r]\times [y_1,\ldots, y_s]$ in
$TP^r(X)\times TP^s(Y)$ for some $r+s = n$. This decomposition is
unique if we demand that the basepoint (chosen to be the wedgepoint
$*$) is not contained in the configuration. The ambiguity coming
from this basepoint is removed when we quotient out $TP^n(X\vee Y)$
by $\underline{*}=TP^{n-1}(X\vee Y)$, and when we quotient out
$\bigcup_{r+s = n}TP^r(X)\times TP^s(Y)$ by those pairs of
configurations with the basepoint in either one of them. The proof
follows.
\end{proof}
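For instance, with $X=Y=S^1$ and $n=2$, using that
$\overline{TP}^{l}(S^1)={\mathbb R} P^l/{\mathbb R} P^{l-1}=S^l$ (\fullref{circle}),
the lemma gives
$$\overline{TP}^2(S^1\vee S^1) = (S^0\wedge S^2)\vee (S^1\wedge S^1)\vee
(S^2\wedge S^0)\simeq S^2\vee S^2\vee S^2\ .$$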
We are now in a position to prove the second splitting \eqref{main22}.
\begin{proof}[Proof of \fullref{main}, \eqref{main22}] Let $Q_k = \{p_1,\ldots,
p_k\}$ be a finite subset of $M$ of cardinality $k$. We note that
the quotient $M/Q_k$ is of the homotopy type of the bouquet
$M\vee\underbrace{S^1\vee\cdots\vee S^1}_{k-1}$, and that
$\overline{TP}^l(S^1) = {\mathbb R} P^l/{\mathbb R} P^{l-1} = S^l$. Using field
coefficients we then have the following, where, whenever we quote \fullref{duality},
we assume that either $M$ is even dimensional orientable or
that ${\mathbb F}={\mathbb F}_2$:
\begin{eqnarray*}
H^j(B&(M&-Q_k,n);{\mathbb F} )\\
&\cong& \tilde H_{nd-j}(\overline{TP}^n (M/Q_k))\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\hbox{(\fullref{duality} with $U\cup\partial M=Q_k$)}\\
&\cong&
\tilde H_{nd-j}(\overline{TP}^n(M\vee\bigvee_{k-1}S^1))\\
&\cong& \tilde H_{nd-j}\left(\bigvee_{r+s_1+\cdots + s_{k-1}=n}
\overline{TP}^r(M)\wedge \overline{TP}^{s_1}(S^1)\wedge\cdots\wedge
\overline{TP}^{s_{k-1}}(S^1)\right)\\
&\cong&
\tilde H_{nd-j}\left(\bigvee_{r+s_1+\cdots + s_{k-1}=n}
S^{n-r}\wedge \overline{TP}^rM \right)\\
&\cong&
\bigoplus_{r+s_1+\cdots + s_{k-1}=n}
\tilde H_{nd-j-n+r}(\overline{TP}^rM )\\
&\cong&\bigoplus_r
\tilde H_{nd-j-n+r} (\overline{TP}^rM )^{\oplus p(k-1,n-r)}\\
&\cong&\bigoplus_{r=0}^n
H^{j - (n-r)(d-1)}
(B(M-\{p\},r);{\mathbb F} )^{\oplus p(k-1,n-r)}\ \ \ \ \ \ \ \ \ \hbox{(\fullref{duality})}
\end{eqnarray*}
This is what we wanted to prove.
\end{proof}
\section{Connectivity of symmetric products}\label{bounds}
In this section we prove \fullref{connectivity} and \fullref{conntwo} of the introduction.
\begin{thm}\label{connectivity2} Suppose $X$ is a based $r$--connected
simplicial complex with $r\geq 1$ and let $n\geq 1$. Then
$\bsp{n}(X)$ is $2n+r-2$--connected.
\end{thm}
\begin{proof} The claim is tautological for $n=1$ and so we assume throughout
that $n>1$. We use some key ideas from Arone and Dwyer \cite{dwyer} and
Kallel and Karoui \cite{kk}. Start with $X$ simply connected and
choose a CW complex $Y$ such that $H_*(\Sigma Y)=H_*(X)$. If $X$ is
based and $r$--connected, then $Y$ is based and $(r-1)$--connected. A
crucial theorem of Dold \cite{dold} now asserts that $H_*(\sp{n}X)$,
and hence $H_*(\bsp{n}X)$, only depends on $H_*(X)$ so that in our
case $H_*(\bsp{n}X)=H_*(\bsp{n}\Sigma Y)$. As before we write
$X^{(n)}$ for the $n$--fold smash product of $X$, so that we can identify
$\bsp{n}X$ with the quotient $X^{(n)}/\mathfrak{S}_n$ by the action
of $\mathfrak{S}_n$. It will also be convenient to write
$X^{(n)}_{\mathfrak{S}_n}:=X^{(n)}/\mathfrak{S}_n$. Note that
$X^{(n)}$ has a preferred basepoint which is fixed by the action of
$\mathfrak{S}_n$ (ie the action is \textit{based}). By
construction we have equivalences
\begin{equation}\label{bspn}
\bsp{n}(\Sigma Y) = (\Sigma Y)^{(n)}_{\mathfrak{S}_n}
= (S^1\wedge Y)^{(n)}_{\mathfrak{S}_n} = (S^1)^{(n)}\wedge_{\mathfrak{S}_n}Y^{(n)}
\end{equation}
where here $A\wedge_{\mathfrak{S}_n}B$ is the notation for the quotient by
the diagonal action of $\mathfrak{S}_n$ on $A\wedge B$ where $A$ admits
a based right action of $\mathfrak{S}_n$ and $B$ a based left action.
We next observe that the quotient $(S^1)^{(n)}/K$ is contractible for
any non-trivial Young subgroup $K=\mathfrak{S}_{k_1}\times
\mathfrak{S}_{k_2}\times\cdots\times\mathfrak{S}_{k_r}\subset
\mathfrak{S}_n$, $\sum k_i=n$.
This follows from the fact that $(S^1)^{(n)}/K =
S^n/K=S^{k_1}/{\mathfrak S_{k_1}}\wedge \cdots\wedge
S^{k_r}/{\mathfrak S_{k_r}}$, and that for some $k_i\geq 2$,
$S^{k_i}/{\mathfrak S_{k_i}}=\bsp{k_i}(S^1)$ is contractible since
the basepoint inclusion $\sp{k_i-1}(S^1){\ra{1.5}}\sp{k_i}(S^1)$ is a
homotopy equivalence between two copies of the circle (see section
\ref{braids}).
We can then use \cite[Proposition 7.11]{dwyer} to
conclude that $(S^1)^{(n)}\wedge_{\mathfrak{S}_n} \Delta_{\rm fat}$ is
contractible with $\Delta_{\rm fat}$ as in \eqref{fat}. This subspace can
then be collapsed out in the expression of $\bsp{n}(\Sigma Y)$ of
\eqref{bspn} without changing the homotopy type and one obtains
\begin{equation}\label{finalform}
\bsp{n}(\Sigma Y)\simeq (S^1)^{(n)}\wedge_{\mathfrak{S}_n}
\left(Y^{(n)}/\Delta_{\rm fat}\right)\ .
\end{equation}
The point of expressing $\bsp{n}(X)$ in this form is to take advantage
of the fact that the action of $\mathfrak{S}_n$ on $Y^{(n)}/\Delta_{\rm fat}$ is
based free (ie, free everywhere but at a single fixed point say $x_0$
to which the entire $\Delta_{\rm fat}$ is collapsed out).
Consider the projection $W_n :=
S^{n}\times_{\mathfrak{S}_n}(Y^{(n)}/\Delta_{\rm fat}) \rightarrow
(Y^{(n)}/\Delta_{\rm fat})_{\mathfrak{S}_n}$. This map is a fibration on the
complement of the point $x_0$ with fiber $S^n$ there, and over $x_0$ the fiber
is $F_0 = S^{n}/\mathfrak{S}_n$ (which is contractible). The space
$\bsp{n}(\Sigma Y)$ in \eqref{finalform} is obtained from $W_n$ by collapsing
out $F_0$ (being contractible this won't matter) and
$X_n:=*\times_{\mathfrak{S}_n}(Y^{(n)}/\Delta_{\rm fat}) =
(Y^{(n)}/\Delta_{\rm fat})_{\mathfrak{S}_n}$. Consider the sequence of maps
$(S^n,*){\ra{1.5}} (W_n,X_n)\fract{}{{\ra{1.5}}} (X_n,X_n)$. This is a fibration away from the
point $x_0\in X_n$ as we pointed out. One can then construct a relative Serre
spectral sequence (as in \cite[Section 6]{kk}) with $E^2$--term:
$$E^2 = \tilde H_*(X_n; \tilde H_*(S^n))\ \Longrightarrow\
H_*(W_n,X_n)\cong H_*(\bsp{n}(\Sigma Y))$$
But $X_n$ is $r+n-2$--connected (\fullref{nakak}), $r+n-2\geq 1$, so that
the $E^2$--term is made out of terms of homological dimension $r+n-1+n =2n+r-1$
or higher which implies that $\bsp{n}(\Sigma Y)=\bsp{n}(X)$
has trivial homology up to $2n+r-2$.
But $\bsp{n}(X)$ is simply connected if $n\geq 2$ (see remark after
\fullref{nakak}) and the proof follows by the Hurewicz Theorem.
\end{proof}
\begin{exam} There is a homotopy equivalence
\ $\bsp{2}(S^k)\simeq \Sigma^{k+1}{\mathbb R} P^{k-1}$\
(see Hatcher \cite[Chapter 4, Example 4K.5]{hatcher}). This
space is $k+1 = 4+ (k-1)-2$--connected as predicted and this is sharp.
\end{exam}
\subsection{Two dimensional complexes}\label{twodim}
To prove \fullref{conntwo} we use a minimal and explicit complex
constructed in \cite{ks2}. The existence of this complex is due to the simple
but exceptional property in dimension two that $\sp{n}(D)$, where
$D\subset{\mathbb R}^2$ is a disc, is again a disc of dimension $2n$. Write $X =
\bigvee^wS^1\cup (D^2_1\cup\cdots\cup D^2_r)$ and denote by $\star $ the
symmetric product at the chain level. In \cite{ks2} we constructed a space
$\tsp{n}X$ homotopy equivalent to $\sp{n}(X)$ and such that $\tsp{}X\simeq
\coprod_{n\geq 0}\tsp{n}X$ has a multiplicative cellular chain complex
generated under $\star $ by a zero dimensional class $v_0$, degree one classes
$e_1,\ldots, e_w$ and degree $2s$ classes $\sp{s}D_i$, $1\leq i\leq r$, $1\leq
s$, under the relations
\begin{eqnarray*}
e_i\star e_j = -e_j\star e_i \,\ (i \neq j)\ ,\ \ e_i\star e_i = 0 \ ,\ \\
\sp{s}D_i\star \sp{t}D_i = {s+t\choose t}\sp{s+t}D_i \ .
\end{eqnarray*}
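For instance, $\sp{1}D_i\star \sp{1}D_i = 2\,\sp{2}D_i$: the classes
$\sp{s}D_i$ behave like divided powers of the two-dimensional classes
$D_i=\sp{1}D_i$.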
The cellular boundaries on these cells were also explicitly computed
(but we don't need them here). The point however is that
a cellular chain complex for $\tsp{n}(X)$ consists of the subcomplex
generated by cells
$$v_0^r\star e_{i_1}\star \cdots \star e_{i_t}\star
\sp{s_1}(D_{j_1})\star \cdots \star \sp{s_l}(D_{j_l})$$
with $r+t+s_1+\cdots +s_l= n$ and $t\leq w$ where $w$ again is the number
of leaves in the bouquet of circles. The dimension of such a cell is
$t+2(s_1+\cdots + s_l)$ for pairwise distinct indices among
the $e_i$'s.
A reduced cellular complex for $\bsp{n}X$ can then be taken to be
the quotient of $C_*(\tsp{n}X)$ by the summand
$v_0C_*(\tsp{n-1}X)$ and this has cells of the form
$$e_{i_1}\star \cdots \star e_{i_t}\star \sp{s_1}(D_{j_1})\star \cdots \star \sp{s_l}(D_{j_l})$$
with $t+s_1+\cdots +s_l= n$. The dimension of such a cell is $t+2(s_1+\cdots +
s_l)=2n-t$. The smallest such dimension is $2n-\min(w,n)$. This means that
$\conn(\tsp{n}X/\tsp{n-1}X) = \conn(\bsp{n}X) \geq 2n-\min(w,n)-1$ and
\fullref{conntwo} follows.
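For instance, if $w=2$, $r\geq 1$ and $n=3$, the cells of least dimension in
this reduced complex are of the form $e_1\star e_2\star \sp{1}(D_j)$, of
dimension $2n-2=4$, so that $\bsp{3}X$ is $3$--connected, in agreement with the
bound $2n-\min(w,n)-1=3$.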
\begin{exam}
A good example to illustrate \fullref{conntwo} is when $S$ is a
closed Riemann surface of genus $g$. It is well-known that for $n\geq 2g-1$,
$\sp{n}(S)$ is an analytic fiber bundle over the Jacobian (by a result of
Mattuck)
$${\mathbb P}^{n-g}{\ra{1.5}}\sp{n}(S)\fract{\mu}{{\ra{1.5}}} J(S)$$
where $\mu$ is the Abel--Jacobi map. In fact this is the projectivisation of an
$(n-g+1)$--dimensional complex vector bundle over $J(S)$. Collapsing out fiberwise the hyperplanes
${\mathbb P}^{n-g-1}\subset{\mathbb P}^{n-g}$ we get a fibration $\zeta_n \co S^{2n-2g}{\ra{1.5}}
E_n{\ra{1.5}} J(S)$ with a preferred section, so that for $n\geq 2g$, $\bsp{n}(S)$
is the cofiber of this section. This is $2n-2g-1$--connected as predicted,
and in fact $\tilde H_*(\bsp{n}(S))=
\sigma^{2n-2g}H_*(J(S))$ where $\sigma$ is a formal
suspension operator which raises degree by one.
\end{exam}
\subsection{Connectivity and truncated products}\label{spec}
The homology of truncated products, and hence of braid spaces,
is related to the homology of symmetric products via a very useful
spectral sequence introduced in \cite{bcm}.
This spectral sequence has been used and adapted with relative
success to other situations; eg \cite{ks}. The starting point is the
duality in \fullref{duality}. The problem of computing
$H^*(B(M,n);{\mathbb F} )$ becomes then one of computing the homology of the
relative groups $H_{*}(TP^n\overline{M}, TP^{n-2}\overline{M};{\mathbb F}
)$. The key tool is the following \textit{Eilenberg--Moore type}
spectral sequence with field coefficients ${\mathbb F}$.
\begin{thm}\label{specseq}{\rm\cite{bcm}}\qua
Let $X$ be a connected space with a non-degenerate basepoint. Then there is
a spectral sequence converging to $H_{*}(TP^{n}(X), TP^{n-1}(X);{\mathbb F} )$,
with $E^1$--term
\begin{equation}\label{unpunctured}
\bigoplus_{i+ 2j= n}
H_*(\sp{i}X,\sp{i-1}X)\otimes H_*(\sp{j}(\Sigma X),\sp{j-1}(\Sigma X))
\end{equation}
and explicit $d^1$ differentials.
\end{thm}
Field coefficients are used here because this spectral sequence uses
the Kunneth formula to express $E^1$ as in \eqref{unpunctured}. Here
$\sp{-1}(X)=\emptyset$ and $\sp{0}(X)$ is the basepoint.
\begin{exam} When $X=S^1$, then $H_*(TP^{n}(S^1),TP^{n-1}(S^1))=\tilde
H_*(S^n)$. Since $\sp{i}S^1\simeq S^1$ for all $i\geq 1$, the spectral
sequence in this case has $E^1$--term of the form
$$H_*(S^1,*)\otimes H_*({\mathbb P}^{{n-1\over 2}},{\mathbb P}^{{n-1\over 2}-1})
= \sigma\tilde H_*(S^{n-1}) = \tilde H_*(S^n)$$
if $n$ is odd (where $\sigma$ is the suspension operator), or
$E^1_{*,*}= H_*({\mathbb P}^{(n/2)},{\mathbb P}^{(n/2)-1})=
\tilde H_*(S^n)$ if $n$ is even. In all cases the spectral sequence
collapses at $E^1$.
\end{exam}
Now \fullref{duality} combined with \fullref{specseq} gives an easy
method to produce upper bounds for the non-vanishing degrees of $H^*(B(M,n))$.
The least connectivity of the terms $\bsp{i}X\times\bsp{j}(\Sigma X)$ for
$i+2j=n$ translates by duality to such an upper bound. This was in fact
originally our approach to the cohomological dimension of braid spaces. We
illustrate how we can apply this spectral sequence by deriving \fullref{twocomplexes} from \fullref{conntwo}.
\begin{proof}[Proof of \fullref{twocomplexes}]
Suppose $Q\cup\partial S\neq\emptyset$.
The spectral sequence of \fullref{specseq} converging to
the homology of $(TP^k(\overline{S}),TP^{k-1}(\overline{S}))$
takes the form
\begin{equation}\label{e1}
E^1 = \tilde H_*(\bsp{k}\overline{S})
\oplus\bigoplus_{\substack{i+2j=k\\ i,j\geq 1}}\bigl(H_*(\bsp{i}\overline{S})\otimes
H_*(\bsp{j}(\Sigma \overline{S}))\bigr)
\oplus \tilde H_*(\bsp{k/2}(\Sigma \overline{S}))
\end{equation}
(if $k$ is odd, the far right term is not there).
We have that $R_k$ (as in \fullref{conR})
is at least the connectivity of this $E^1$--term.
Since $\overline{S}$ is a two dimensional complex,
the connectivity of $\bsp{i}(\overline{S})$
is at least $2i-\min(w,i)-1$ (for some $w\geq 0$).
The connectivity of $\bsp{j}(\Sigma \overline{S})$ is at least
$2j + r-2\geq 2j-1$
since $\Sigma \overline{S}$ is now simply connected
(\fullref{connectivity2}).
The connectivity of
$\bsp{i}(\overline{S})\wedge \bsp{j}(\Sigma \overline{S})$
for non-zero $i$ and $j$ is then at least
$$(2i-\min(w,i)-1)+(2j-1)+1 = i+k-\min(w,i)-1$$
When $i=0$, then $j={k\over 2}$ ($k$ even) and
$\conn(\bsp{k/2}(\Sigma \overline{S}))\geq k-1$.
The connectivity of the $E^1$--term \eqref{e1} is at least the minimum of
$$
\begin{cases}
i+k-\min(w,i)-1, & 1\leq i\leq k-1,\\
2k-\min(w,k)-1,& i=k,\\
k-1,& i=0.
\end{cases}
$$
which is $k-1$. By duality $H^*(B(S-Q,k)) = 0$ for $* \geq
2k-k+1 = k+1$. If $S$ is closed, then the same argument shows that this bound
needs to be raised by one.
\end{proof}
\section{Stability and section spaces}\label{stability}
In this final section,
we extrapolate on standard material and make slightly more precise a
well-known relationship between configuration spaces and section spaces
\cite{dusa,bct,segal1,quarterly}.
When manifolds have a boundary or an end (eg a puncture),
one can construct embeddings
\begin{equation}\label{marching}
+ \co B(M,k){\ra{1.5}} B(M,k+1)
\end{equation}
by ``addition of points'' near the boundary, near ``infinity'' or near the
puncture. In the case when $\partial M\neq\emptyset$ for example,
one can pick a component $A$
of the boundary and construct a nested sequence of collared
neighborhoods $V_1\supset V_2\supset\cdots \supset A$ together with
sequences of points $x_k\in V_{k}-V_{k+1}$. There are then embeddings
$B(M-V_k,k){\ra{1.5}} B(M-V_{k+1},k+1)$ sending $\sum z_i$ to $\sum
z_i+x_k$. Now we can replace $B(M-V_k,k)$ by $B(M-A,k)$ and then by
$B(M,k)$ up to homotopy. In the direct limit of these
embeddings we obtain a space denoted by $B(M,\infty )$. Note that an
easy analog of Steenrod's splitting \cite{bcm} gives the splitting
\begin{equation}\label{split2}
H_*(B(M,\infty ))\cong\bigoplus_{k\geq 0}
H_*(B(M,k+1), B(M,k))
\end{equation}
(here $B(M,0)=\emptyset$). In fact \eqref{split2} is a special case
of a trademark \textit{stable splitting} result for configuration
spaces of open manifolds or manifolds with boundary. Denote by
$D_k(M)$ the cofiber of \eqref{marching}. For example
$D_1(M)=B(M,1)=M$.
\begin{thm}\label{split3}{\rm (B{\"o}digheimer \cite{bodig},
Cohen \cite{cohen})}\qua For $M$ a manifold with non-empty boundary,
there is a stable splitting (ie, after sufficiently many suspensions):
$$B(M,k)\simeq_s\bigvee_{i=0}^kD_i(M)$$
\end{thm}
The classical case of $M=D^n$ (closed $n$--ball) is due to Victor
Snaith. A short and clever proof for this sort of
splitting is due to Fred Cohen \cite{cohen}. The next stability bound is
due to Arnold and a detailed proof is in an appendix of \cite{segal1}.
\begin{thm}{\rm (Arnold)}\label{arnold}\qua The embedding $B(M,k)\hookrightarrow
B(M,k+1)$ induces a homology monomorphism and a homology equivalence up to
degree $[k/2]$.
\end{thm}
The monomorphism statement is in fact a consequence of \eqref{split2}.
Arnold's range is not optimal. For instance
\begin{thm}{\rm \cite{ks}}\qua If $S$ is a compact Riemann surface and $S^*=S-\{p\}$,
then $B(S^*,k)\hookrightarrow B(S^*,k+1)$ is a homology equivalence up
to degree $k-1$.
\end{thm}
We define $s(k)$ to be the homological connectivity
of $+ \co B(M,k){\ra{1.5}} B(M,k+1)$ (see \fullref{homstab}). By Arnold, $s(k)\geq
[k/2]$.
\subsection{Section spaces}\label{sectionspace}
If $\zeta \co E{\ra{1.5}} B$ is a fiber bundle over a base space $B$, we
write $\Gamma (\zeta )$ for its space of sections. If $\zeta$ is
trivial then evidently $\Gamma(\zeta )$ is the same as maps into the
fiber. Let $M$ be a closed smooth manifold of dimension $d$,
$U\subset M$ a closed subspace and $\tau^+M$ the fiberwise one-point
compactification of the tangent bundle over $M$ with fiber $S^d = {\mathbb R}^d
\cup\{\infty\}$. Then $\tau^+M{\ra{1.5}}
M$ has a preferred section $s_{\infty}$ which is the section at
$\infty$ and we let $\Gamma(\tau^+M;U )$ be those sections which
coincide with $s_{\infty}$ on $U$. Note that $\Gamma (\tau^+M )$
splits into components indexed by the integers as in
$$\Gamma (\tau^+M) := \coprod_{k\in{\mathbb Z}} \Gamma_k(\tau^+M)\ .$$
This degree arises as follows. Let $s \co M{\ra{1.5}}\tau^+M$ be a section.
By a general position argument it intersects $s_{\infty}$ at a finite
number of points and there is a sign associated to each point. This
sign is defined whether the manifold is oriented or not (as in the
definition of the Euler number). The degree is then the signed sum.
Similarly we can define a (relative) degree of sections in $\Gamma
(\tau^+M;U)$.
Observe that if $\tau^+M$ is trivial, then
$\Phi \co \Gamma (\tau^+M)\fract{\simeq}{{\ra{1.5}}}\Map(M,S^d)$, where
$d=\dim M$.
The components of $\Map(M,S^d)$ are indexed by the degree of maps
(Hopf), but at the level of components we have the equivalence
$$\Gamma_{k} (\tau^+M)\simeq\map{k+\ell}(M,S^d)$$ where $\ell$ is such
that $\Phi (s_{\infty})\in\map{\ell}$. In the case when $M=S^{even}$,
then $\Phi (s_{\infty})$ is the antipodal map which has degree $\ell =
-1$ \cite{paolo}. When $M=S$ is a compact Riemann surface, $\ell =-1$
when the genus is even and $\ell = 0$ when the genus is odd \cite{ks}.
Further relevant homotopy theoretic properties of section spaces are
summarized in the appendix.
\subsection{Scanning and stability}\label{scan}
A beautiful and important connection between braid spaces and section
spaces can be found for example in \cite{segal2,dusa,quarterly} (see
Crabb and James \cite{james} for the fiberwise version). This
connection is embodied in the ``scanning'' map
\begin{equation}\label{scanning}
S_k \co B(M-U,k){\ra{1.5}} \Gamma_k(\tau^+M; U\cup\partial M )
\end{equation}
where $U$ is a closed subspace of $M$. Here and throughout we assume
that removing a subspace as in $M-U$ doesn't disconnect the space. The
scanning map has very useful homological properties. A sketch of the
construction of $S_k$ for closed Riemannian $M$ goes as follows (for a
construction that works for topological manifolds see for example
Dwyer, Weiss and Williams \cite{dww}). First construct $S_1\co
M-U{\ra{1.5}}\Gamma_1$. We can suppose that $M$ has a Riemannian metric and
use the existence of an exponential map for $\tau M$ which is a
continuous family of embeddings $\exp_x\co \tau_xM{\ra{1.5}} M$ for $x\in M$
such that $x\in \im(\exp_x)$ and $\im(\exp_x)^+\cong\tau_x^+M$ (the fiber
at $x$ of $\tau^+M$). By collapsing out for each $x$ the complement
of $\im(\exp_x)$ we get a map $c_x \co M{\ra{1.5}} \im(\exp_x)^+\cong\tau_x^+M$.
Let $V$ be an open neighborhood of $U$ such that $M-V\hookrightarrow M-U$ is a
deformation retract. Then we have the map
$$S_1 \co M-V{\ra{1.5}}\Gamma (\tau^+M)\ ,\ y\mapsto (x\mapsto c_x(y))\in
\tau_x^+M\ .$$ Observe that for $x$ near $U$, the section $S_1(y)$ agrees
with the section at infinity (ie, we say it is \textit{null}). In fact and more
precisely, $S_1$ maps into $\Gamma^c(\tau^+M,U)$ the space of sections
which are null outside a compact subspace of $M-U$. A deformation
argument shows that $\Gamma^c\simeq\Gamma$. It will be convenient to
say that a section $s\in\Gamma$ is \textit{supported} in a subset
$N\subset M$ if $s=s_{\infty}$ outside of $N$. A useful observation is
that if $s_1,s_2$ are two sections supported in closed $A$ and $B$ and
$A\cap B=\emptyset$, then we can define a new section which is
supported in $A\cup B$, restricting to $s_1$ on $A$ and to $s_2$ on
$B$.
Extending $S_1$ to $S_k$ is now easy. We first choose
$\epsilon > 0$ so that $B^{\epsilon}(M,k)$, the closed subset of
$B(M,k)$ where particles have pairwise separation $\geq 2\epsilon$, is
homotopy equivalent to $B(M,k)$ (this is verified in \cite[Lemma 2.3]{dusa}).
We next choose the exponential maps to be supported in neighborhoods
of radius $\epsilon$. Given a finite subset $Q:=\{y_1,\ldots, y_k\}\in
B^{\epsilon}(M-U,k)$, each point $y_i$
determines a section supported in $V_i := \im(\exp_{y_i} )$. Since the
$V_i$'s are pairwise disjoint, these sections fit together to give a
section $s_Q$ supported in $\bigcup V_i$ so that $S_k(Q):= s_Q$.
When $M$ is compact with boundary, then we get the map in \eqref{scanning} by
replacing $B(M-U,k)$ by $B(M-U\cup\partial M,k)$ and
$\Gamma^c(\tau^+M,U)$ by $\Gamma (\tau^+M, U\cup\partial M)$ the space
of sections that are null outside a compact subspace of
$M-(U\cup\partial M)$. We let $s(k)$ be the stability range of the
map $B(M-U,k){\ra{1.5}} B(M-U,k+1)$ (as in \S6.1).
The next proposition is a follow up on a main result of \cite{dusa} (see
also \cite{quarterly}).
\begin{prop}\label{dusa1} Suppose $M$ is a closed manifold and $U\subset M$
a non-empty closed subset, $M-U$ connected.
Then the map $S_{k*}\co H_*(B(M-U,k)){\ra{1.5}}
H_*(\Gamma_k(\tau^+M,U))$
is a monomorphism in all dimensions and an isomorphism up to dimension $s(k)$.
\end{prop}
\begin{proof} It is easy to see that the maps $S_k$ for various $k$ are
compatible up to homotopy with stabilization so we obtain a map $S \co
B(M,\infty ){\ra{1.5}} \Gamma_{\infty}(\tau^+M,U):=\lim_k\Gamma_k(\tau^+M,U)$
which according to the main
theorem of McDuff is a homology equivalence (in fact all components of
$\Gamma (\tau^+M,U)$ are equivalent and $\Gamma_{\infty}$ can be chosen to
be the component containing $s_{\infty}$). But according to \eqref{split2}
$H_*(B(M-U,k))\rightarrow H_*(B(M-U,\infty ))$ is a monomorphism, and then an
isomorphism up to dimension $s(k)$. The claim follows.
\end{proof}
This now also implies our last main result from the introduction.
\begin{proof}[Proof of \fullref{main4}] Suppose that $M$ is a closed
manifold of dimension $d$, $U$ a small open neighborhood of the basepoint $*$
and consider the fibration (see \hyperlink{App}{the appendix})
$$\Gamma_k (\tau^+M;\bar U){\ra{1.5}}\Gamma_k(\tau^+M){\ra{1.5}} S^d$$ The main
point is to use the fact as in \cite[proof of Theorem 1.1]{dusa} that
scanning sends the exact sequence in \fullref{longexact} to the Wang
sequence of this fibration. Let $N=M-U$ so that we can identify
$\Gamma_k (\tau^+M;\bar U)$ with $\Gamma_k(\tau^+N;\partial N)$ which
we write for simplicity $\Gamma^c_k(\tau^+N)$ as before. Under these
identifications and by a routine check we see that scanning induces
commutative diagrams:
$$\small
\begin{matrix}
\!\!\rightarrow\!\!&H_{q-d+1}(B(N,k-1))&\!\!\rightarrow\!\!&
H_q(B(N,k))&\!\!\rightarrow\!\!&H_q(B(M,k))&\!\!\rightarrow\!\!&H_{q-d}(B(N,k-1))
&\!\!\rightarrow\!\!\\
&\decdnar{S}&&\decdnar{S}&&\decdnar{S}&&\decdnar{S}&\\
\!\!\rightarrow\!\!&H_{q-d+1}(\Gamma^c_{k}(\tau^+N))&\!\!\rightarrow\!\!&
H_q(\Gamma^c_{k}(\tau^+N))&\!\!\rightarrow\!\!&H_q(\Gamma_{k}(\tau^+M))
&\!\!\rightarrow\!\!&H_{q-d}(\Gamma^c_{k}(\tau^+N))
&\!\!\rightarrow\!\!
\end{matrix}
$$
where the top sequence is the homology exact sequence for the pair
$(B(M,k),B(N,k))$ as discussed in \fullref{longexact}
and the lower exact sequence is the Wang sequence of
the fibration $\Gamma_k(\tau^+M){\ra{1.5}} S^d$.
According to \fullref{dusa1}, the map
$S_{k*}\co H_q(B(N,k))$ ${\ra{1.5}} H_q(\Gamma^c_k(\tau^+N))$ is an isomorphism
up to degree $q=s(k)$. It follows that all vertical maps
in the diagram above involving the subspace $N$ together with the
next map on the right (which doesn't appear in the diagram)
are isomorphisms whenever $q\leq s(k-1)\leq s(k)$. By the $5$--lemma the
middle map is then an isomorphism within that range as well. This proves
the proposition.
\end{proof}
We can say a little more when $k=1$ ($M$ still closed).
\begin{lem}\label{s1}
The map $S_1 \co M{\ra{1.5}} \Gamma_{1} (\tau^+M)$
induces a monomorphism in homology in degrees $r+1,r+2$, where
$r=\conn(M)$, $r\geq 1$.
\end{lem}
\begin{proof}
Consider $\Gamma(s\tau^+M)$ the space of sections
of the fibration
$s\tau^+M{\ra{1.5}} M$ obtained from $\tau^+M$ by applying fiberwise
the functor ${SP}^{\infty}$. It is easy to see that scanning has a stable analog
$st\co {SP}^{\infty} (M_+){\ra{1.5}} \Gamma (s\tau^+M)$ but harder to verify that
$st$ is a (weak) homotopy
equivalence \cite{dww,quarterly}. Note that ${SP}^{\infty} (M_+)\simeq{SP}^{\infty}
M\times{\mathbb Z}$
and ${SP}^{\infty} (M)$ is equivalent to a connected component (any of them)
say $\Gamma_0 (s\tau^+M )$.
By construction the following diagram homotopy commutes
$$
\begin{matrix}\label{fromstosp}
M&\fract{S_1}{{\ra{1.5}}}&\Gamma_{1} (\tau^+M)\\
\decdnar{}&&\decdnar{\alpha}\\
{SP}^{\infty} (M)&\fract{st}{{\ra{1.5}}}&\Gamma_0 (s\tau^+M )
\end{matrix}
$$
where the right vertical map $\alpha$ is induced from the natural fiber
inclusion $\alpha \co S^d\hookrightarrow{SP}^{\infty} (S^d)$. When $M$ is
$r$--connected, the map $M{\ra{1.5}} {SP}^{\infty} (M)$ induces an isomorphism in homology in
dimensions $r+1$ and $r+2$ \cite[Corollary 4.7]{nakaoka}. This means that
the composite $M\rightarrow \Gamma_{1} (\tau^+M)\rightarrow \Gamma_1 (s\tau^+M
)$ is a homology isomorphism in those dimensions and the claim follows.
\end{proof}
\begin{rem} If $M$ has boundary, then by scanning $M_0:=M-\partial M$ we
obtain a map into the compactly supported sections $\Gamma
(\tau^+M)$. This map extends to a map $S \co M/\partial M{\ra{1.5}} \Gamma
(\tau^+M)$ which, according to Aouina and Klein \cite{aouina}, is
$(d-r+1)$--connected if $M$ is $r$--connected of dimension $d\geq 2$.
\end{rem}
\hypertarget{App}{\smash{$\phantom{9}$}}
\section{Appendix: Some homotopy properties of section spaces}
All spaces below are assumed connected.
We discuss some pertinent statements from Switzer \cite{switzer}.
Let $p\co E{\ra{1.5}} B$ be a Serre fibration, $i\co A\hookrightarrow X$
a cofibration ($A$ can be empty) and $u\co X{\ra{1.5}} E$ a given map.
Slightly changing the notation in that paper, we define
$$\Gamma_u (X,A; E,B) = \{f\co X{\ra{1.5}} E\ |\ f\circ i = u\circ i,
p\circ f = p\circ u\}$$
This is a closed subspace of the space of all maps $\Map(X,E)$ and is in
other words the solution space for the extension problem
$$\disablesubscriptcorrection\xysavmatrix{
A\ar[r]^{ui}\ar[d]^i&E\ar[d]^p\\
X\ar[r]_{pu}\ar[ru]^u&B
}
$$
with data $u_{|A}\co A{\ra{1.5}} E$ and $pu \co X{\ra{1.5}} B$. When $A=\{x_0\}$ and $B =
\{y_0\}$ then $\Gamma (X,x_0;E,y_0) = \bmap{}(X,E)$ is the space of based maps
from $X$ to $E$ sending $x_0$ to $y_0$. On the other hand, when $X=B$ and
$A=\emptyset$, then $\Gamma_u(B,\emptyset;E,B) = \Gamma (E)$ is the section
space of the fibration $\zeta = (E\fract{p}{{\ra{1.5}}} B)$.
\begin{prop}{\rm\cite{switzer}}\label{switzer}\qua
\begin{itemize}
\item
If $A\subset X'\subset X$ is a nested sequence of NDR pairs, and
$j\co X'\hookrightarrow X$ the inclusion, then the induced map
$\Gamma_u(X,A; E,B){\ra{1.5}} \Gamma_{uj}(X',A;E,B)$\
yields a fibration with $\Gamma_u (X,X';E,B)$ as fibre.
\item
If $E{\ra{1.5}} E'{\ra{1.5}} B$ are two fibrations and $q\co E{\ra{1.5}} E'$
the projection, then\break the induced map
$\Gamma_u(X,A; E,B){\ra{1.5}} \Gamma_{qu}(X,A;E',B)$\
is a fibration with\break $\Gamma_u (X,A;E,E')$ as fibre.
\end{itemize}
\end{prop}
The first part of Switzer's result implies that restriction of sections of the
bundle $\zeta \co E{\ra{1.5}} B$ to $X\subset B$ gives a fibration $\Gamma
(\zeta ){\ra{1.5}} \Gamma (\zeta_{|X})$ with fiber the section space
$\Gamma (\zeta, X)$, ie those sections of $\zeta$ which are
``stationary'' over $X$ (compare \cite[Chapter 1, Section 8]{james}).
An example of relevance is when $\zeta = \tau^+M$ is the fiberwise
one-point compactification and $s_{\infty}$ is the section at
infinity. Denote by $S^d$ the fiber over $x_0\in M$. If $U$ is a
small open neighborhood of $x_0$, then $\Gamma (\zeta_{|\bar U})\simeq
S^d$ and we have a fibration
\begin{equation}\label{fibration}
\Gamma (\tau^+M,\bar U){\ra{1.5}}\Gamma (\tau^+M)\fract{res}{{\ra{1.5}}} S^d
\end{equation}
where the fiber consists of those sections which coincide with $s_{\infty}$ on
$U$. So for instance if $M=S^d$, $\Gamma (\tau^+M,\bar U)\simeq\Omega^dS^d$
and the fibration reduces to the evaluation fibration
$\Omega^dS^d\rightarrow \Map(S^d,S^d)\rightarrow S^d$.
Finally,
according to \cite[page 29]{james}, if $E{\ra{1.5}} B$ is
a Hurewicz fibration and $s,t$ are two sections, then $s$ and $t$ are
homotopic if and only if they are section homotopic. We use this to deduce the
following lemma.
\begin{lem}
\label{mono} Let $\pi \co E{\ra{1.5}} B$ be a fibration with a preferred section
$s_{\infty}$ (which we choose as basepoint). Then the inclusion $\Gamma
(E){\ra{1.5}}\Map(B,E)$ induces a monomorphism on homotopy groups.
\end{lem}
\begin{proof} We give $\Gamma (E)\subset\Map(B,E)$ the common basepoint
$s_{\infty}$. An element of $\pi_i\Gamma (E)$ is the homotopy class of a
(based) map $\phi \co S^i{\ra{1.5}}\Gamma (E)$ or equivalently a map $\phi \co
S^i\times B{\ra{1.5}} E$ (where $\phi (-,b)\in\pi^{-1}(b)$ and $\phi
(N,-)=s_\infty(-)$, $N$ the north pole of $S^i$) and the homotopy is through
similar maps. Write $\Phi$ the image of $\phi$ via the composite
$S^i{\ra{1.5}}\Gamma (E){\ra{1.5}}\Map(B,E)$. Now $\Phi$ can be viewed as a section
of $S^i\times E{\ra{1.5}} S^i\times B$ and a null-homotopy of
$\Phi$ is a homotopy to $id\times s_{\infty}$. Since this null-homotopy can
be done fiberwise it is a null-homotopy in $\Gamma (E)$ from $\phi$ to
$s_{\infty}$.
\end{proof}
\bibliographystyle{gtart}
| {
"timestamp": "2009-04-06T23:43:03",
"yymm": "0904",
"arxiv_id": "0904.1024",
"language": "en",
"url": "https://arxiv.org/abs/0904.1024",
"abstract": "We discuss various aspects of `braid spaces' or configuration spaces of unordered points on manifolds. First we describe how the homology of these spaces is affected by puncturing the underlying manifold, hence extending some results of Fred Cohen, Goryunov and Napolitano. Next we obtain a precise bound for the cohomological dimension of braid spaces. This is related to some sharp and useful connectivity bounds that we establish for the reduced symmetric products of any simplicial complex. Our methods are geometric and exploit a dual version of configuration spaces given in terms of truncated symmetric products. We finally refine and then apply a theorem of McDuff on the homological connectivity of a map from braid spaces to some spaces of `vector fields'.",
"subjects": "Algebraic Topology (math.AT)",
"title": "Symmetric products, duality and homological dimension of configuration spaces",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771770811146,
"lm_q2_score": 0.8006920092299293,
"lm_q1q2_score": 0.7901046005793154
} |
https://arxiv.org/abs/1412.3491 | Non-separability of the Lipschitz distance | Let $X$ be a compact metric space and $\mathcal M_X$ be the set of isometry classes of compact metric spaces $Y$ such that the Lipschitz distance $d_L(X,Y)$ is finite. We show that $(\mathcal M_X, d_L)$ is not separable when $X$ is a closed interval, or an infinite union of shrinking closed intervals. | \section{Introduction}
For compact metric spaces $(X,d_X)$ and $(Y,d_Y)$, {\it the Lipschitz distance} $d_{L}(X,Y)$ is defined to be the infimum of $\epsilon\ge 0$ such that an $\epsilon$-isometry $f: X \to Y$ exists.
Here a bi-Lipschitz homeomorphism $f:X\to Y$ is called {\it an $\epsilon$-isometry} if
\begin{align*}
|\log {\rm dil}(f)| + |\log {\rm dil}(f^{-1})| \le \epsilon,
\end{align*}
where ${\rm dil}(f)$ denotes the smallest Lipschitz constant of $f$, called {\it the dilation of $f$}:
$${\rm dil}(f)=\sup_{\substack{x,y\in X \\ x\neq y}}\frac{d_Y(f(x), f(y))}{d_X(x,y)}.$$
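For example, the scaling map $f(x)=2x$ from $[0,1]$ to $[0,2]$ has ${\rm dil}(f)=2$ and ${\rm dil}(f^{-1})=1/2$, so $f$ is an $\epsilon$-isometry for every $\epsilon \ge |\log 2|+|\log (1/2)|=2\log 2$; in particular $d_L([0,1],[0,2])\le 2\log 2$.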
Let $\mathcal M$ be the set of isometry classes of compact metric spaces. It is well-known that $(\mathcal M, d_L)$ is a complete metric space. See, e.g., \cite[Appendix A]{S} for the proof of the completeness and see, e.g., \cite{BBI01, Gro99} for details of the Lipschitz distance.
Then the following question arises:
\begin{description}
\item[(Q)]: Is the metric space $(\mathcal M, d_L)$ separable?
\end{description}
The answer is {\it no}, which can be seen easily from the following facts:
\begin{description}
\item[(a)] if $d_L(X,Y)<\infty$, the Hausdorff dimensions of $X$ and $Y$ must coincide;
\item[(b)] for any non-negative real number $d$, there is a compact metric space $X$
whose Hausdorff dimension is equal to $d$.
\end{description}
See, e.g., \cite[Proposition 1.7.19]{BBI01} for (a) and \cite{SS} for (b).
The fact (b) indicates that $(\mathcal M, d_L)$ is too big to be separable. Then we change the question (Q) to the following more reasonable one (Q'):
For a compact metric space $X$, let $\mathcal M_X$ be the set of isometry classes of compact metric spaces $Y$ such that $d_L(X,Y)<\infty$. Any elements of $\mathcal M_X$ have a common Hausdorff dimension by (a). Then the following question arises:
\begin{description}
\item[(Q')]: Is the metric space $(\mathcal M_X, d_L)$ separable?
\end{description}
The main results of this paper give a negative answer to this question for several $X$. To be more precise, we give two examples of $X$ such that $(\mathcal M_X, d_L)$ is not separable:
\begin{description}
\item[(i)] The union of infinitely many shrinking closed intervals together with zero:
\[ \{ 0 \} \cup \bigcup_{n=1}^{\infty}\l[\frac{1}{2^n}, \frac{1}{2^{n}}+\frac{1}{2^{n+1}}\r]; \]
\item[(ii)] The closed interval $[0,1]$.
\end{description}
We would like to stress that $(\mathcal M_X, d_L)$ becomes non-separable even for such elementary spaces $X$ as the above.
We note that the non-separability for the first example follows from the non-separability for the second example. For the first example, however, the non-separability is easier to show than for the second.
The present paper is organized as follows: In the first section, we show that the set of isometry classes of the infinite unions of shrinking closed intervals with zero is not separable. In the second section, we show that the set of isometry classes of
the closed interval is not separable.
\section{The first example}
Let ${\mathbb Z}_{>0} = \{ n \in {\mathbb Z}: n>0\}$ denote the set of positive integers. For $n,m \in \Z_{>0}$, let $I(n,m)$ be an interval in ${\mathbb R}$ defined as follows:
$$I(n,m)=\l[\frac{1}{2^n}, \frac{1}{2^n}+\frac{1}{2^{n+m}}\r].$$
For each $u=(u_n)_{n \in \Z_{>0}} \in \{1,2\}^{\Z_{>0}}$, we define the following subset of ${\mathbb R}$:
\begin{align} \label{eq: Xu}
X_{u} = \{0\} \cup \bigcup_{n=1}^{\infty} I(n,{u_n}).
\end{align}
We equip $X_{u}$ with the usual Euclidean metric in ${\mathbb R}$:
\[d(x,y)=|x-y|, \quad x,y \in X_{u}.\]
Then it is easy to check that $(X_{u},d)$ is a compact metric space.
Let $\mathbf 1=(1,1,1,\ldots) \in \{1,2\}^{\Z_{>0}}$ denote the element all of whose components are equal to one. Let $X_{\mathbf 1}$ be the set defined in \eqref{eq: Xu} for the element $\mathbf 1$.
Let $\mathcal M_{X_{\mathbf 1}}$ denote the set of isometry classes of compact metric spaces $X$ whose Lipschitz distances from $X_{\mathbf 1}$ are finite, that is, $d_L(X, X_\mathbf 1)<\infty$. Then we have the following result:
\begin{theorem} \label{thm: Xu}
$(\mathcal M_{X_{\mathbf 1}}, d_L)$ is not separable.
\end{theorem}
\proof
It is enough to find a discrete subset $\mathbb X \subset \mathcal M_{X_\mathbf 1}$ whose cardinality is that of the continuum.
We introduce a subset $\mathbb X \subset \mathcal M$, which is the set of isometry classes of all $X_u$ for $u \in \{1,2\}^{{\mathbb Z}_{>0}}$:
\[ \mathbb{X}=\{ (X_{u},d) : u \in \{1,2\}^{{\mathbb Z}_{>0}}\}/\text{isometry}.\]
The cardinality of $\mathbb X$ is that of the continuum: by Lemma \ref{CE} below, $X_u$ and $X_v$ are not isometric whenever $u \neq v$.
We show that $\mathbb{X} \subset \mathcal{M}_{X_{\mathbf 1}}$ and $\mathbb{X}$ is discrete (i.e., every point in $\mathbb X$ is isolated).
We first show that $\mathbb{X} \subset \mathcal{M}_{X_{\mathbf 1}}$.
For $u=(u_n)_{n \in \Z_{>0}} \in \{1,2\}^{{\mathbb Z}_{>0}}$ and $v=(v_n)_{n \in \Z_{>0}} \in \{1,2\}^{{\mathbb Z}_{>0}}$, let $f_{u,v}$ be a function from $X_{u}$ to $X_{v}$ defined by
\begin{align*}
f_{u,v}(x)=
\begin{cases} \displaystyle
0 \quad (x=0),\\
\displaystyle \frac{2^{u_n}}{2^{v_n}}\l(x-\frac{1}{2^n}\r)+ \frac{1}{2^n} \quad \bigl(x \in I({n},{u_n}) \bigr).
\end{cases}
\end{align*}
Then $f_{u,v}$ is a bi-Lipschitz homeomorphism from $X_{u}$ to $X_{v}$: on each interval $I(n,u_n)$ it is affine with slope $2^{u_n-v_n}\in\{1/2,1,2\}$, and a direct check covers pairs of points in different components, so that for $x,y \in X_{u}$,
\[\frac{1}{2}|x-y| \leq |f_{u,v}(x)-f_{u,v}(y)|\leq 2|x-y|.\]
Therefore the Lipschitz distance between $X_u$ and $X_v$ is bounded by
\begin{align*}
d_L(X_{u},X_{v}) \leq 2\log 2 \quad \text{for any $u,v \in \{1,2\}^{\Z_{>0}}$}.
\end{align*}
Thus we have that $\mathbb{X} \subset \mathcal{M}_{X_{\mathbf 1}}$.
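As an aside, these bi-Lipschitz bounds can be sanity-checked numerically on finite truncations of the spaces $X_u$. The following sketch is illustrative only and is not part of the proof; the truncation depth $N$, the number of sample points per interval, and the particular alternating sequences $u,v$ are arbitrary choices.
\begin{verbatim}
import itertools

def sample_X(u, N=12, k=5):
    # Finite sample of X_u: the point 0 plus k points in each I(n, u_n), n <= N.
    pts = [0.0]
    for n in range(1, N + 1):
        a, length = 2.0**-n, 2.0**-(n + u[n - 1])
        pts += [a + i * length / (k - 1) for i in range(k)]
    return pts

def f(x, u, v):
    # The map f_{u,v}: affine with slope 2^(u_n - v_n) on I(n, u_n), and f(0) = 0.
    if x == 0.0:
        return 0.0
    n = 1
    while not (2.0**-n <= x <= 2.0**-n + 2.0**-(n + u[n - 1])):
        n += 1
    return 2.0**(u[n - 1] - v[n - 1]) * (x - 2.0**-n) + 2.0**-n

u, v = [1, 2] * 6, [2, 1] * 6   # u = (1,2,1,2,...), v = (2,1,2,1,...)
pts = sample_X(u)
ratios = [abs(f(x, u, v) - f(y, u, v)) / abs(x - y)
          for x, y in itertools.combinations(pts, 2)]
print(max(ratios), min(ratios))  # should lie within [1/2, 2]
\end{verbatim}
By the displayed inequalities, the printed maximum and minimum must lie in $[1/2,2]$, whatever truncation is used.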
Second we show that $\mathbb{X}$ is discrete:
\begin{lemma}\label{CE}
Let $X_{u}, X_{v} \in \mathbb{X}$.
If $d_L(X_{u},X_{v})<\log 2$, then $u=v$.
\end{lemma}
\begin{proof}
Let $u=(u_n)_{n \in \Z_{>0}} \in \{1,2\}^{{\mathbb Z}_{>0}}$ and $v=(v_n)_{n \in \Z_{>0}} \in \{1,2\}^{{\mathbb Z}_{>0}}$.
We show that $u_n=v_n$ for all $n \in \Z_{>0}$.
By the assumption $d_L(X_{u},X_{v})<\log 2$, there exists a bi-Lipschitz function $f:X_{u} \to X_{v}$ such that
\begin{equation}\label{ex1-ass}
|\log \mbox{dil}(f)|+|\log \mbox{dil}(f^{-1})|<\log 2.
\end{equation}
Since $f$ is a homeomorphism, each interval must be mapped onto an interval by $f$.
That is, there exists a bijection $P : {\mathbb Z}_{>0} \to {\mathbb Z}_{>0}$ such that
\[f(I(n,{u_n}))=I({P(n)},{v_{P(n)}}).\]
To show $u_n=v_n$ for all $n \in \Z_{>0}$, we proceed in two steps:
\begin{description}
\item[(i)] $n+u_n = P(n) + v_{P(n)}$;
\item[(ii)] $P(n)=n$.
\end{description}
We start to show (i) by contradiction.
Assume there exists $n_0 \in {\mathbb Z}_{>0}$ such that $n_0+u_{n_0} \neq P(n_0)+v_{P(n_0)}$.
Since $f|_{I({n_0},{u_{n_0}})}$ is a homeomorphism, the endpoints of $I({n_0},{u_{n_0}})$ must be mapped to the endpoints of $I({P(n_0)},{v_{P(n_0)}})$ by $f|_{I({n_0},{u_{n_0}})}$.
Therefore
\[
\l| f\l(\frac{1}{2^{n_0}}\r) - f\l(\frac{1}{2^{n_0}}+ \frac{1}{2^{n_0+u_{n_0}}} \r) \r| = \frac{1}{2^{P(n_0)+v_{P(n_0)}}}.
\]
Thus the dilation of $f$ is bounded from below:
\begin{align*}
{\rm dil}(f)
&\ge \frac{| f(1/2^{n_0}) - f(1/2^{n_0}+1/2^{n_0+u_{n_0}})|}{|1/2^{n_0}-(1/2^{n_0}+1/2^{n_0+u_{n_0}})|}
\\
& = \frac{1}{2^{P(n_0)+v_{P(n_0)}-(n_0+u_{n_0})}}.
\end{align*}
By the assumption that $n_0+u_{n_0} \neq P(n_0)+v_{P(n_0)}$, the exponent above is nonzero, so applying this estimate to $f$ or to $f^{-1}$ we have that ${\rm dil}(f) \ge 2$ or ${\rm dil}(f^{-1}) \ge 2$.
This implies
$$|\log\mbox{dil}(f)|\geq \log 2 \quad \text{or} \quad |\log\mbox{dil}(f^{-1})|\geq \log 2. $$
This contradicts the inequality \eqref{ex1-ass}.
Hence we have $n+u_n = P(n) + v_{P(n)}$ for all $ n \in {\mathbb Z}_{>0}$.
We start to show (ii) by contradiction.
Assume there exists $n_0 \in {\mathbb Z}_{>0}$ such that $P(n_0)\neq n_0$.
Let us define
\[ n_*= \min \{ n \in {\mathbb Z}_{>0}| P(n) \neq n\}.\]
Then $P(n_*)>n_*$, since $P$ is a bijection and $P(n)=n$ for every $n<n_*$.
Since $n+u_n=P(n)+v_{P(n)}$ by the first step (i), and $u_n$ and $v_{P(n)}$ are in $\{1,2\}$, the only possible values of $P(n)$ are $n-1$, $n$ and $n+1$.
In particular $P(n_*)=n_*+1$. Moreover, since $P$ is a bijection with $P(n)\geq n-1$ for every $n$, the only possible preimage of $n_*$ is $n_*+1$; hence $P(n_*+1)=n_*$ and $P(n_*+2)=n_*+2$ or $n_*+3$.
Since the endpoints of intervals must be mapped to the endpoints of intervals by $f$, the possible values of $f( 1/2^{n_*+1})$ and $ f( 1/2^{n_*+2})$ are
\[ f\l( \frac{1}{2^{n_*+1}}\r)= \frac{1}{2^{n_*}}, \quad \text{or} \quad \frac{1}{2^{n_*}}+\frac{1}{2^{n_*+v_{n_*}}}, \]
and
\begin{align*}
f\l( \frac{1}{2^{n_*+2}}\r) &= \frac{1}{2^{n_*+2}}, \ \frac{1}{2^{n_*+2}} + \frac{1}{2^{n_*+2+v_{n_*+2}}}, \ \frac{1}{2^{n_*+3}},
\\
&\text{or} \ \frac{1}{2^{n_*+3}} + \frac{1}{2^{n_*+3+v_{n_*+3}}}.
\end{align*}
Thus, by noting $v_{P(n_*+2)} \in \{1,2\}$, we have the following estimate:
\begin{align*}
& \l|f\l( \frac{1}{2^{n_*+1}}\r)-f\l( \frac{1}{2^{n_*+2}}\r)\r|
\\
&\geq \l|\frac{1}{2^{n_*}}- \l(\frac{1}{2^{n_*+2}} + \frac{1}{2^{n_*+2+v_{P(n_*+2)}}}\r)\r|
\\
&\ge \frac{5}{2} \frac{1}{2^{n_*+2}}.
\end{align*}
This shows $\mbox{dil}(f)\geq 5/2$, hence $|\log\mbox{dil}(f)|\geq \log (5/2) > \log 2$, which contradicts the inequality \eqref{ex1-ass}.
Hence we have $P(n)=n$ for all $n \in {\mathbb Z}_{>0}$.
By the above two steps, we have that $u_n=v_n$ for all $n \in \Z_{>0}$, and we have completed the proof of Lemma \ref{CE}.
\end{proof}
We resume the proof of Theorem \ref{thm: Xu}.
{\it Proof of Theorem \ref{thm: Xu}.} By using Lemma \ref{CE}, we know that $(\mathbb X, d_L)$ is discrete.
Since the cardinality of $\mathbb X$ is that of the continuum and $\mathbb X \subset \mathcal M_{X_\mathbf 1}$, we have that $(\mathcal M_{X_\mathbf 1}, d_L)$ is not separable. We have completed the proof.
\qed
\section{The second example}
In this section, we show the non-separability of $\mathcal M_{[0,1]}$:
\begin{theorem} \label{thm: Int}
The metric space $(\mathcal M_{[0,1]},d_L)$ is not separable.
\end{theorem}
{\it Proof.}
It is enough to find a certain discrete subset $\mathbb Y \subset \mathcal M_{[0,1]}$ with the cardinality of the continuum.
Define two kinds of subsets of ${\mathbb R}^2$, the {\it flat parts} $J(n,0)$ and the {\it pulse parts} $J(n,1)$:
\begin{itemize}
\item Flat part: for $n \in \Z_{>0}$,
\begin{align*}
J(n,0)=&\l[ \frac{1}{2^n},\frac{1}{2^{n-1}} \r]\times \{0\},
\end{align*}
\item Pulse part: for $n \in \Z_{>0}$,
\begin{align*}
J(n,1)=&\l[ \frac{3}{2^{n+1}},\frac{1}{2^{n-1}} \r]\times \{0\}\\
&\cup \l\{ \l(x,\frac{3}{2^{n+1}}-x \r) : \frac{5}{2^{n+2}} \leq x \leq \frac{3}{2^{n+1}} \r\}\\
&\cup \l\{\l(x,x-\frac{1}{2^n}\r): \frac{1}{2^n} \leq x \leq \frac{5}{2^{n+2}}\r\}.
\end{align*}
\end{itemize}
See the figures below:
\begin{figure}[htbp]
\begin{center}
\includegraphics{NonSep1.eps}
\caption{The left is $J(n,0)$ and the right is $J(n,1)$.}
\label{picture.1}
\end{center}
\end{figure}
\newpage
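The corner data of the pulse are immediate to confirm numerically; a tiny Python check (illustrative) that the two slanted edges of $J(n,1)$ meet at the peak $(5/2^{n+2},\, 1/2^{n+2})$:
\begin{verbatim}
n = 3
peak_x = 5.0 / 2 ** (n + 2)
rise = peak_x - 1.0 / 2 ** n            # height gained along the rising edge
fall = 3.0 / 2 ** (n + 1) - peak_x      # height lost along the falling edge
assert rise == fall == 1.0 / 2 ** (n + 2)
\end{verbatim}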
\noindent For each $u=(u_n)_{n \in \Z_{>0}} \in \{0,1\}^{\Z_{>0}}$, let $Y_u$ be the subset of ${\mathbb R}^2$ given as the union of the origin with the corresponding flat and pulse parts:
\[Y_{u} = \{(0,0)\} \cup \bigcup_{n=1}^{\infty} J(n,u_n) \subset {\mathbb R}^2.\]
See the figure below:
\begin{figure}[h]
\begin{center}
\includegraphics{NonLip2.eps}
\caption{A picture of $Y_u$.}
\label{picture.2}
\end{center}
\end{figure}
\noindent We equip $Y_u$ with the usual Euclidean distance in ${\mathbb R}^2$:
\begin{align} \label{equal: Euc}
d((x_1,x_2),(y_1,y_2))=((x_1-y_1)^2+(x_2-y_2)^2)^{1/2}.
\end{align}
It is easy to check that $(Y_{u},d)$ is a compact metric space.
Let $\mathbb Y$ be the set of isometry classes of $Y_u$ for all $u \in \{0,1\}^{\Z_{>0}}$:
\[\mathbb{Y}=\{Y_{u}:u \in \{0,1\}^{{\mathbb Z}_{>0}}\}/\text{isometry}.\]
Now we show that $\mathbb Y \subset \mathcal M_{[0,1]}$.
For $u \in \{0,1\}^{{\mathbb Z}_{>0}}$, let $f_{u}$ be the projection from $Y_u$ to $[0,1]$ such that $x=(x_1,x_2) \mapsto x_1$.
Then it is easy to see that $f_{u}$ is bi-Lipschitz continuous and, for $x,y\in Y_u$,
\begin{align} \label{ineq: Lip2}
\frac{1}{\sqrt{2}}d(x,y) \leq |f_{u}(x)-f_{u}(y)| \leq d(x,y).
\end{align}
Therefore the Lipschitz distance between $[0,1]$ and $Y_u$ is bounded by
\[d_L([0,1],Y_{u})\leq \frac{1}{2}\log 2 \quad \text{$\forall u \in \{0,1\}^{\Z_{>0}}$}.\]
Thus we have $\mathbb Y \subset \mathcal M_{[0,1]}$.
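The estimate \eqref{ineq: Lip2} amounts to saying that the height function of $Y_u$ is $1$-Lipschitz in the first coordinate; this, too, can be sampled numerically. A short Python sketch (illustrative, truncated to $N$ blocks):
\begin{verbatim}
import math, random

random.seed(1)
N = 10
u = [random.choice((0, 1)) for _ in range(N)]

def block(n, un):
    # a few points of J(n, u_n); on a pulse the height equals the
    # distance to the nearer endpoint of [1/2^n, 3/2^(n+1)]
    xs = [1.0 / 2 ** n * (1.0 + t) for t in (0.0, 0.2, 0.25, 0.6, 1.0)]
    def h(x):
        if un == 0 or x >= 3.0 / 2 ** (n + 1):
            return 0.0
        return min(x - 1.0 / 2 ** n, 3.0 / 2 ** (n + 1) - x)
    return [(x, h(x)) for x in xs]

pts = [(0.0, 0.0)] + [q for n in range(1, N + 1) for q in block(n, u[n - 1])]
for (x1, x2) in pts:
    for (y1, y2) in pts:
        d = math.hypot(x1 - y1, x2 - y2)
        if d > 0.0:
            assert d / math.sqrt(2) - 1e-12 <= abs(x1 - y1) <= d + 1e-12
\end{verbatim}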
Now we show that $\mathbb Y$ is discrete:
\begin{lemma}\label{CE2}
Let $Y_{u},Y_{v} \in \mathbb{Y}.$
If $$d_L(Y_{u},Y_{v} )<\frac{\log (\sqrt{2}+1)-\log \sqrt{5}}{2},$$ then $u=v$.
\end{lemma}
\begin{proof}
Let $u=(u_n)_{n \in \Z_{>0}} \in \{0,1\}^{{\mathbb Z}_{>0}}$ and $v=(v_n)_{n \in \Z_{>0}} \in \{0,1\}^{{\mathbb Z}_{>0}}$.
We show that $u_n=v_n$ for all $n \in \Z_{>0}$.
By the assumption, there exists a bi-Lipschitz function $f$ from $Y_{u}$ to $Y_{v}$ such that
\begin{equation}\label{dil-2}
|\log \mbox{\rm dil}(f)|+|\log \mbox{\rm dil}(f^{-1})| <\frac{\log (\sqrt{2}+1)-\log \sqrt{5}}{2}.
\end{equation}
Let us define a subset in $\Z_{>0}$ as follows: $$ P_u=\{n \in \Z_{>0}: u_n=1\}. $$
Interchanging the roles of $u$ and $v$ if necessary, we may assume that $P_u$ is not empty, that is, $Y_u$ has at least one pulse; if both $P_u$ and $P_v$ are empty, then $u=v$ trivially.
The pulse part $J(n, u_n)$ of $Y_u$ for $n \in P_u$ is called the {\it $n$-pulse of $Y_u$}. We note that, by the definition of the pulse parts, the peak of the $n$-pulse is attained at $x=5/2^{n+2}$.
It is enough for the desired result to show that $P_u=P_v$. We show that there is a bijection $F: P_u \to P_v$ such that $F(n)=n$.
To show this, we have the following three steps:
\begin{description}
\item[(i)] The first step: for $n \in P_u$,
\begin{align*}
f_{v}\circ f \circ f^{-1}_{u}\l ( \frac{5}{2^{n+2}}\r) \in & \l\{ \frac{5}{2^{m+2}}: m \in {\mathbb Z}_{>0} \r\}
\\
& \cup \l\{ \frac{3}{2^{m+1}}: m \in {\mathbb Z}_{>0}\r\}
\\
& \cup \l\{ \frac{1}{2^m}:m \in {\mathbb Z}_{>0} \r\} .
\end{align*}
\item[(ii)] The second step: for $n \in P_u$,
\begin{align*}
f_{v}\circ f \circ f^{-1}_{u}\l ( \frac{5}{2^{n+2}}\r) \notin & \l\{ \frac{3}{2^{m+1}}: m \in {\mathbb Z}_{>0}\r\}
\\
& \cup \l\{ \frac{1}{2^m} : m\in {\mathbb Z}_{>0} \r\} .
\end{align*}
\item[(iii)] The third step: for $n \in P_u$,
\[ f_{v}\circ f \circ f^{-1}_{u}\l ( \frac{5}{2^{n+2}}\r) =\frac{5}{2^{n+2}}\quad \text{and} \quad v_n=1 .\]
\end{description}
In fact, once we show the above three statements, each maximizer $5/2^{n+2}$ of an $n$-pulse of $Y_u$ is mapped to the maximizer $5/2^{n+2}$ of the $n$-pulse of $Y_v$ by $f_{v}\circ f \circ f^{-1}_{u}$.
This correspondence of $n$-pulses defines the map $F: P_u \to P_v$ such that $F(n)=n$.
The proofs of all three steps (i)-(iii) follow the same scheme:
\begin{description}
\item[(A)] Assume that the statements do not hold (proof by contradiction);
\item[(B)] Estimate lower bounds of the dilations of $f$ and $f^{-1}$;
\item[(C)] The lower bounds obtained in (B) contradict the inequality \eqref{dil-2}.
\end{description}
We start to show the first step (i). Since $f$ is a homeomorphism, the maximizer $5/2^{n+2}$ of the pulse cannot be mapped to an endpoint of $[0,1]$ by $f_{v} \circ f \circ f^{-1}_{u}$.
Assume that, for some $n \in P_u$,
\begin{align*}
f_{v}\circ f \circ f^{-1}_{u}\l ( \frac{5}{2^{n+2}}\r) \notin & \l\{ \frac{5}{2^{m+2}} :m\in {\mathbb Z}_{>0} \r\}
\\
& \cup \l\{ \frac{3}{2^{m+1}} :m \in {\mathbb Z}_{>0}\r\}
\\
& \cup \l\{ \frac{1}{2^m} :m \in {\mathbb Z}_{>0} \r\},
\end{align*}
and derive a contradiction. By the continuity of $f$, there exists $0<\delta< 1/2^{n+3}$ such that, for any $x \in [5/2^{n+2}-\delta, 5/2^{n+2}+\delta]$, we have
\begin{align*}
f_{v}\circ f \circ f^{-1}_{u}\l ( x\r) \notin & \l\{ \frac{5}{2^{m+2}} :m \in {\mathbb Z}_{>0} \r\}
\\
& \cup \l\{ \frac{3}{2^{m+1}}:m\in {\mathbb Z}_{>0}\r\}
\\
& \cup \l\{ \frac{1}{2^m} :m \in {\mathbb Z}_{>0} \r\} .
\end{align*}
Therefore we have
\[\begin{split}
&d \l(f\circ f^{-1}_{u}\l(\frac{5}{2^{n+2}}-\delta\r),f\circ f^{-1}_{u}\l(\frac{5}{2^{n+2}}+\delta\r) \r)\\
=&d \l(f\circ f^{-1}_{u}\l(\frac{5}{2^{n+2}}-\delta\r),f\circ f^{-1}_{u}\l(\frac{5}{2^{n+2}}\r) \r)\\
& +d \l(f\circ f^{-1}_{u}\l(\frac{5}{2^{n+2}}\r),f\circ f^{-1}_{u}\l(\frac{5}{2^{n+2}}+\delta\r) \r).
\end{split}\]
Here we use the fact that the three points $f\circ f^{-1}_{u}(5/2^{n+2}-\delta)$, $f\circ f^{-1}_{u}(5/2^{n+2})$ and $f\circ f^{-1}_{u}(5/2^{n+2}+\delta)$ are on the same line: by the choice of $\delta$, their projections avoid all corner points of $Y_v$, so the three points lie on a single segment of $Y_v$.
By using the inequality \eqref{ineq: Lip2}, the dilation of $f$ is estimated as follows:
\[\begin{split}
\mbox{dil}(f)
&\geq \frac{d \Bigl(f\circ f^{-1}_{u}\l(\frac{5}{2^{n+2}}-\delta\r),f\circ f^{-1}_{u}\l(\frac{5}{2^{n+2}}+\delta\r) \Bigr)}{d\Bigl( f^{-1}_{u}\l(\frac{5}{2^{n+2}}-\delta\r), f^{-1}_{u}\l(\frac{5}{2^{n+2}}+\delta\r) \Bigr)}\\
& = \frac{d \Bigl( f\circ f^{-1}_{u}\l(\frac{5}{2^{n+2}}-\delta\r),f\circ f^{-1}_{u}\l(\frac{5}{2^{n+2}}\r) \Bigr)}{2\delta}
\\
&\quad +\frac{d \Bigl(f\circ f^{-1}_{u}\l(\frac{5}{2^{n+2}}\r),f\circ f^{-1}_{u}\l(\frac{5}{2^{n+2}}+\delta\r) \Bigr)}{2\delta}
\\
&\geq \frac{d \Bigl( f^{-1}_{u}\l(\frac{5}{2^{n+2}}-\delta\r), f^{-1}_{u}\l(\frac{5}{2^{n+2}}\r) \Bigr) }{2\delta \mbox{ dil}(f^{-1})}
\\
& \quad + \frac{d \Bigl( f^{-1}_{u}\l(\frac{5}{2^{n+2}}\r), f^{-1}_{u}\l(\frac{5}{2^{n+2}}+\delta\r) \Bigr) }{2\delta \mbox{ dil}(f^{-1})}\\
&= \frac{\sqrt{2}}{\mbox{dil}(f^{-1})}.
\end{split}\]
In the last line we computed the distances directly from the Euclidean metric \eqref{equal: Euc} on the $n$-pulse $J(n,1)$.
This implies that $\mbox{dil}(f)\geq 2^{\frac{1}{4}}$, or $\mbox{dil}(f^{-1}) \geq 2^{\frac{1}{4}}$. Thus we have
\[d_L(Y_{u},Y_{v}) \geq \frac{\log 2}{4}.\]
This contradicts the inequality \eqref{dil-2}.
Therefore we have, for any $n \in P_u$,
\begin{align*}
f_{v}\circ f \circ f^{-1}_{u}\l ( \frac{5}{2^{n+2}}\r) \in & \l\{ \frac{5}{2^{m+2}}: m \in {\mathbb Z}_{>0} \r\}
\\
&\cup \l\{ \frac{3}{2^{m+1}} :m \in {\mathbb Z}_{>0}\r\}
\\
&\cup \l\{ \frac{1}{2^m} :m \in {\mathbb Z}_{>0} \r\}.
\end{align*}
We start to show the second step (ii) by contradiction. Assume that, for some $n \in P_u$,
$$f_{v}\circ f \circ f^{-1}_{u}\l ( \frac{5}{2^{n+2}}\r) \in \l\{ \frac{1}{2^m} :m \in {\mathbb Z}_{>0} \r\}.$$
Then there exists $n_1\in {\mathbb Z}_{>0}$ such that
\begin{align} \label{eq: step2}
f_{v}\circ f \circ f^{-1}_{u}\l ( \frac{5}{2^{n+2}}\r) = \frac{1}{2^{n_1}}.
\end{align}
By the same argument as the first step (i), we can obtain $v_{n_1}=1$, that is, the $n_1$-pulse exists in $Y_v$.
By the continuity of $f$, there exists $0< \delta < 1/2^{n_1+3}$ such that
\begin{align*}
f_{u} \circ f^{-1} \circ f^{-1}_{v} \l( \l[\frac{1}{2^{n_1}}-\delta,\frac{1}{2^{n_1}} \r) \r) \subset & \l(\frac{5}{2^{n+2}}-\frac{1}{2^{n+3}},\frac{5}{2^{n+2}} \r)
\\
& \cup \l(\frac{5}{2^{n+2}},\frac{5}{2^{n+2}}+\frac{1}{2^{n+3}} \r),
\end{align*}
and
\begin{align*} f_{u} \circ f^{-1} \circ f^{-1}_{v} \l( \l(\frac{1}{2^{n_1}},\frac{1}{2^{n_1}} +\delta \r] \r) \subset & \l(\frac{5}{2^{n+2}}-\frac{1}{2^{n+3}},\frac{5}{2^{n+2}} \r)
\\
& \cup \l(\frac{5}{2^{n+2}},\frac{5}{2^{n+2}}+\frac{1}{2^{n+3}} \r).
\end{align*}
Noting the definition of $(Y_v,d)$, for $ x \in [\frac{1}{2^{n_1}}-\delta,\frac{1}{2^{n_1}} )$ and $y \in (\frac{1}{2^{n_1}},\frac{1}{2^{n_1}} +\delta ]$, we have
\[\begin{split}
d\bigl(f^{-1}_{v}(x),f^{-1}_{v}(y)\bigr)&=\l(|x-y|^2+\l|y-\frac{1}{2^{n_1}}\r|^2\r)^{1/2},\\
d\l(f^{-1}_{v}(x),f^{-1}_{v}\l(\frac{1}{2^{n_1}}\r)\r)&=\l|x-\frac{1}{2^{n_1}}\r|,\\
d\l(f^{-1}_{v}(y),f^{-1}_{v}\l(\frac{1}{2^{n_1}}\r)\r)&=\sqrt{2}\l|y-\frac{1}{2^{n_1}}\r|.
\end{split}
\]
Since $|x-y|=|x-2^{-n_1}|+ |y-2^{-n_1}|$, we have the following inequality:
\begin{align} \label{ineq: triangle1}
&d\l(f^{-1}_{v}(x),f^{-1}_v\l(\frac{1}{2^{n_1}}\r)\r)+d\l(f^{-1}_{v}(y),f^{-1}_v\l(\frac{1}{2^{n_1}}\r)\r)
\\
& = ( |x-2^{-n_1}|^2 + 2\sqrt{2} | x-2^{-n_1} || y-2^{-n_1} | \notag \\
& \hspace{5cm} + 2| y-2^{-n_1} |^2 )^{1/2}\notag \\
& \leq ( \sqrt{2}|x-2^{-n_1}|^2 + 2\sqrt{2} | x-2^{-n_1} || y-2^{-n_1} | \notag \\
& \hspace{5cm} + 2\sqrt{2} | y-2^{-n_1} |^2 )^{1/2}\notag \\
& = 2^{\frac{1}{4}} d(f^{-1}_{v}(x),f^{-1}_{v}(y)). \notag
\end{align}
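With $s=|x-2^{-n_1}|$ and $t=|y-2^{-n_1}|$, the inequality \eqref{ineq: triangle1} reduces to $s+\sqrt{2}\,t \le 2^{1/4}\sqrt{(s+t)^2+t^2}$, which can be spot-checked numerically (an illustrative sanity check only):
\begin{verbatim}
import math, random

random.seed(2)
for _ in range(10000):
    s, t = random.random(), random.random()
    assert s + math.sqrt(2) * t <= 2 ** 0.25 * math.hypot(s + t, t) + 1e-12
\end{verbatim}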
On the other hand, there exist $ x_0 \in [\frac{1}{2^{n_1}}-\delta,\frac{1}{2^{n_1}} )$ and $y_0 \in (\frac{1}{2^{n_1}},\frac{1}{2^{n_1}} +\delta ]$ such that
\begin{align*}
&d\l(f^{-1} \circ f^{-1}_{v}(x_0),f^{-1}_{u}\l(\frac{5}{2^{n+2}}\r) \r)
\\
&= d\l(f^{-1} \circ f^{-1}_{v}(y_0),f^{-1}_{u}\l(\frac{5}{2^{n+2}}\r) \r).
\end{align*}
Thus the triangle determined by the three vertices $f^{-1}\circ f^{-1}_{v}(x_0)$, $f^{-1}\circ f^{-1}_{v}(y_0)$ and $f^{-1}_{u}(5/2^{n+2})$ is an isosceles right triangle, and we can calculate
\begin{align} \label{eq: triangle}
&d\l(f^{-1} \circ f^{-1}_{v}(x_0),f^{-1} \circ f^{-1}_{v}(y_0) \r)\\ \notag
&=\frac{1}{\sqrt{2}}d\l(f^{-1} \circ f^{-1}_{v}(x_0),f^{-1}_{u}\l(\frac{5}{2^{n+2}}\r) \r) \\ \notag
& \quad + \frac{1}{\sqrt{2}}d\l(f^{-1} \circ f^{-1}_{v}(y_0),f^{-1}_{u}\l(\frac{5}{2^{n+2}} \r)\r).
\end{align}
By \eqref{eq: step2}, \eqref{ineq: triangle1} and \eqref{eq: triangle}, we have a bound for the dilation of $f$:
\begin{align} \label{ineq: dilation}
\begin{split}
\frac{1}{\mbox{dil}(f)} & \leq \frac{ d\Bigl(f^{-1} \circ f^{-1}_{v}(x_0),f^{-1} \circ f^{-1}_{v}(y_0)\Bigr)}{d\Bigl(f^{-1}_{v}(x_0),f^{-1}_{v}(y_0)\Bigr)} \\
& = \frac{d\Bigl(f^{-1} \circ f^{-1}_{v}(x_0),f^{-1}_{u}(\frac{5}{2^{n+2}}) \Bigr)}{\sqrt{2}d(f^{-1}_{v}(x_0),f^{-1}_{v}(y_0))}
\\
& \quad +\frac{d\Bigl(f^{-1} \circ f^{-1}_{v}(y_0),f^{-1}_{u}(\frac{5}{2^{n+2}}) \Bigr)}{\sqrt{2}d(f^{-1}_{v}(x_0),f^{-1}_{v}(y_0))}
\\
\leq &\frac{\mbox{dil}(f^{-1})d\bigl( f^{-1}_{v}(x_0),f^{-1}_{v}(\frac{1}{2^{n_1}}) \bigr)}{\sqrt{2}d(f^{-1}_{v}(x_0),f^{-1}_{v}(y_0))}
\\
& \quad + \frac{\mbox{dil}(f^{-1}) d\bigl(f^{-1}_{v}(y_0),f^{-1}_{v}(\frac{1}{2^{n_1}}) \bigr)}{\sqrt{2}d(f^{-1}_{v}(x_0),f^{-1}_{v}(y_0))}
\\
\le & \frac{1}{2^{\frac{1}{4}}}\mbox{dil}(f^{-1}).
\end{split}
\end{align}
Here we used the equality \eqref{eq: triangle} in the second line, the equality \eqref{eq: step2} and the definition of the dilation in the third line, and the inequality \eqref{ineq: triangle1} in the last line.
The inequality \eqref{ineq: dilation} implies that $\mbox{dil}(f)\geq 2^{\frac{1}{8}}$ or $\mbox{dil}(f^{-1}) \geq 2^{\frac{1}{8}}$. Thus we have
\[d_L(Y_{u},Y_{v}) \geq \frac{\log 2}{8}.\]
This contradicts the inequality \eqref{dil-2}.
Therefore we have, for any $n \in P_u$,
\[f_{v}\circ f \circ f^{-1}_{u}\l ( \frac{5}{2^{n+2}}\r) \notin \l\{ \frac{1}{2^{m}} :m \in {\mathbb Z}_{>0} \r\}.\]
By the same argument as above, we have, for any $n \in P_u$,
\[f_{v}\circ f \circ f^{-1}_{u}\l ( \frac{5}{2^{n+2}}\r) \notin \l\{ \frac{3}{2^{m+1}} :m \in {\mathbb Z}_{>0}\r\}.\]
Now we start to show the third step (iii).
By the above two steps (i) and (ii), we have that, for any $n \in P_u$, there exists $p_f(n) \in {\mathbb Z}_{>0}$ such that
\[f_{v}\circ f \circ f^{-1}_{u}\l ( \frac{5}{2^{n+2}}\r) =\frac{5}{2^{p_f(n)+2}} .\]
By the same argument as the first step (i), we can check that $p_f(n) \in P_v$, that is, $v_{p_f(n)}=1$.
Also for the inverse function $f^{-1}$, we have that,
for any $n \in P_v$, there exists $p_{f^{-1}}(n) \in P_u$ such that
\[f_{u} \circ f^{-1} \circ f^{-1}_{v}\l ( \frac{5}{2^{n+2}}\r) =\frac{5}{2^{p_{f^{-1}}(n)+2}}.\]
Since $f$ is a bijection,
the map $p_f$ is a bijection from $P_u$ to $P_v$ and $p^{-1}_f=p_{f^{-1}}$.
Now it suffices to show that $p_f(n)=n$ for all $n \in P_u$. Assume, to the contrary, that there exists $l\in P_u$ such that $p_f(l) \neq l$.
Without loss of generality, we may assume $p_f(l)>l$.
We first show that
\begin{align} \label{incl2-0}
f_{u} \circ f^{-1} \circ f^{-1}_{v}\l(\frac{1}{2^{p_f(l)}}\r) \in \l(\frac{1}{2^{l}},\frac{5}{2^{l+2}}\r) \cup \l(\frac{5}{2^{l+2}},\frac{3}{2^{l+1}}\r).
\end{align}
To show this, it suffices to show that
$$\frac{\sqrt{2}}{2^{l+2}} > d\l(f^{-1} \circ f^{-1}_{v}\l(\frac{1}{2^{p_f(l)}}\r),f^{-1}_{u}\l(\frac{5}{2^{l+2}}\r)\r),$$
where this inequality means that the point $f^{-1} \circ f^{-1}_{v}(\frac{1}{2^{p_f(l)}})$ belongs to one of the two slanted edges of the $l$-pulse, which cross at a right angle at the peak.
By $p_{f^{-1}}\circ p_f(l)=p^{-1}_{f}\circ p_f(l)=l$, we have
\[\begin{split}
\frac{\sqrt{2}\mbox{dil}(f^{-1})}{2^{p_f(l)+2}} =& \mbox{dil}(f^{-1})d\l(f^{-1}_{v}\l(\frac{1}{2^{p_f(l)}}\r),f^{-1}_{v}\l(\frac{5}{2^{p_f(l)+2}}\r)\r)\\
\ge & d\l( f^{-1} \circ f^{-1}_{v}\l(\frac{1}{2^{p_f(l)}}\r),f^{-1}\circ f^{-1}_{v}\l(\frac{5}{2^{p_f(l)+2}}\r)\r)
\\
= & d\l( f^{-1} \circ f^{-1}_{v}\l(\frac{1}{2^{p_f(l)}}\r),f^{-1}_{u}\l(\frac{5}{2^{p_{f^{-1}}\circ p_f(l)+2}}\r)\r)
\\
= & d\l( f^{-1} \circ f^{-1}_{v}\l(\frac{1}{2^{p_f(l)}}\r),f^{-1}_{u}\l(\frac{5}{2^{l+2}}\r)\r).
\end{split}\]
Since we have $\mbox{dil}(f^{-1})\leq 2^{\frac{1}{4}}$ (by the inequality \eqref{dil-2}) and $p_f(l) \ge l+1$, it follows that
$$\frac{\sqrt{2}}{2^{l+2}} > \frac{\sqrt{2}\mbox{dil}(f^{-1})}{2^{p_f(l)+2}} \ge d\l(f^{-1} \circ f^{-1}_{v}\l(\frac{1}{2^{p_f(l)}}\r),f^{-1}_{u}\l(\frac{5}{2^{l+2}}\r)\r).$$
Thus we have shown \eqref{incl2-0}.
By the continuity of $f$ and \eqref{incl2-0}, there exists $\delta>0$ such that $\delta< \frac{1}{2^{p_f(l)+3}}$ and
\begin{align*}
& f_{u} \circ f^{-1} \circ f^{-1}_{v}\l( \l[\frac{1}{2^{p_f(l)}}-\delta,\frac{1}{2^{p_f(l)}}+\delta\r]\r)
\\
& \subset \l(\frac{1}{2^{l}},\frac{5}{2^{l+2}}\r) \cup \l(\frac{5}{2^{l+2}},\frac{3}{2^{l+1}}\r).
\end{align*}
Since the three points $f^{-1} \circ f^{-1}_{v}\l( \frac{1}{2^{p_f(l)}}-\delta \r), f^{-1} \circ f^{-1}_{v}\l( \frac{1}{2^{p_f(l)}} \r)$ and $ f^{-1} \circ f^{-1}_{v}\l( \frac{1}{2^{p_f(l)}}+\delta \r)$ are on the same line, we have
\begin{align} \label{eq-2-1}
\begin{split}
& d\l(f^{-1} \circ f^{-1}_{v}\l( \frac{1}{2^{p_f(l)}}-\delta \r),f^{-1} \circ f^{-1}_{v}\l( \frac{1}{2^{p_f(l)}}+\delta \r)\r)\\
=& d\l(f^{-1} \circ f^{-1}_{v}\l( \frac{1}{2^{p_f(l)}}-\delta \r), f^{-1} \circ f^{-1}_{v}\l( \frac{1}{2^{p_f(l)}} \r)\r)\\
&+d\l(f^{-1} \circ f^{-1}_{v}\l( \frac{1}{2^{p_f(l)}}\r),f^{-1} \circ f^{-1}_{v}\l( \frac{1}{2^{p_f(l)}}+\delta \r)\r).
\end{split}
\end{align}
Thus the inclusion \eqref{incl2-0} and the equality \eqref{eq-2-1} imply the following bound on the dilation of $f^{-1}$:
\begin{align*}
\begin{split}
& \mbox{dil}(f^{-1})
\\ \geq & \frac{d \l(f^{-1} \circ f^{-1}_{v}\l(\frac{1}{2^{p_f(l)}}-\delta\r),f^{-1} \circ f^{-1}_{v}\l(\frac{1}{2^{p_f(l)}}+\delta\r) \r)}{d\l( f^{-1}_{v}\l(\frac{1}{2^{p_f(l)}}-\delta\r), f^{-1}_{v}\l(\frac{1}{2^{p_f(l)}}+\delta\r) \r)}\\
=& \frac{d \l(f^{-1} \circ f^{-1}_{v}\l(\frac{1}{2^{p_f(l)}}-\delta\r),f^{-1} \circ f^{-1}_{v}\l(\frac{1}{2^{p_f(l)}}\r) \r) }{\sqrt{5}\delta}\\
&+\frac{d \l(f^{-1} \circ f^{-1}_{v}\l(\frac{1}{2^{p_f(l)}}\r),f^{-1} \circ f^{-1}_{v}\l(\frac{1}{2^{p_f(l)}}+\delta\r) \r)}{\sqrt{5}\delta}\\
\geq & \frac{d \l( f^{-1}_{v}\l(\frac{1}{2^{p_f(l)}}-\delta\r), f^{-1}_{v}\l(\frac{1}{2^{p_f(l)}}\r) \r) }{\sqrt{5}\delta \mbox{ dil}(f)}
\\
& \quad + \frac{d \l( f^{-1}_{v}\l(\frac{1}{2^{p_f(l)}}\r), f^{-1}_{v}\l(\frac{1}{2^{p_f(l)}}+\delta\r) \r) }{\sqrt{5}\delta \mbox{ dil}(f)}\\
=& \frac{\sqrt{2}+1}{\sqrt{5}\mbox{dil}(f)}.
\end{split}
\end{align*}
Here we used the following equality in the second line:
\begin{align*}
\begin{split}
& d\l( f^{-1}_{v}\l(\frac{1}{2^{p_f(l)}}-\delta\r), f^{-1}_{v}\l(\frac{1}{2^{p_f(l)}}+\delta\r)\r)
\\
&=\l(\l|\frac{1}{2^{p_f(l)}}-\delta-\l(\frac{1}{2^{p_f(l)}}+\delta\r)\r|^2+|\delta|^2\r)^{1/2}\\
&=\sqrt{5}\delta.
\end{split}
\end{align*}
Thus $\mbox{dil}(f) \geq \l(\frac{\sqrt{2}+1}{\sqrt{5}}\r)^{1/2}$ or $\mbox{dil}(f^{-1}) \geq \l(\frac{\sqrt{2}+1}{\sqrt{5}}\r)^{1/2}$.
This contradicts the inequality \eqref{dil-2}.
Therefore we have $p_f(n)=n$ for any $n \in P_u$.
This completes all three steps. Setting $F(n)=p_f(n)$, the map $F: P_u \to P_v$ is a bijection such that $F(n)=n$, and this implies $P_u=P_v$. We have completed the proof.
\end{proof}
We resume the proof of Theorem \ref{thm: Int}.
{\it Proof of Theorem \ref{thm: Int}.}
By using Lemma \ref{CE2}, we know that $(\mathbb Y, d_L)$ is discrete.
Since the cardinality of $\mathbb Y$ is that of the continuum and $\mathbb Y \subset \mathcal M_{[0,1]}$, we have that $(\mathcal M_{[0,1]}, d_L)$ is not separable. We have completed the proof.
\qed
\begin{remark} \label{rem: Ref}
Theorem \ref{thm: Int} states that $\mathcal M_{[0,1]}=\{X \in \mathcal M: d_L([0,1],X)<\infty\}$ is not separable. The proof of Theorem \ref{thm: Int} moreover yields the following stronger result: the non-separability holds locally.
{\it Let $B_{d_L}([0,1], \delta)$ denote the ball in $\mathcal M_{[0,1]}$ centered at $[0,1]$ with radius $\delta>0$ with respect to the Lipschitz distance $d_L$, that is,
$$B_{d_L}([0,1], \delta)=\{X \in \mathcal M_{[0,1]}: d_L([0,1], X)<\delta\}.$$
Then, for any $\delta>0$, $B_{d_L}([0,1], \delta)$ is not separable.
}
In fact, let \begin{align*}
J^{\epsilon}(n, 1)=&[3/2^{n+1}, 1/2^{n-1}] \times \{ 0 \}
\\
&\cup \{ (x, \epsilon (3/2^{n+1}-x)): 5/2^{n+2}\le x \le 3/2^{n+1} \} \\
& \cup \{ (x, \epsilon (x-1/2^n)): 1/2^n \le x \le 5/2^{n+2} \},
\\
J^\epsilon(n,0)=&J(n,0),
\\
Y^{\epsilon}_{u}=&\{(0, 0)\} \cup \bigcup_{n=1}^{\infty}J^{\epsilon}(n, u_n), \quad u=(u_n)_{n \in {\mathbb Z}_{>0}} \in \{0,1\}^{{\mathbb Z}_{>0}}.
\end{align*}
Then, by a proof similar to that of Theorem \ref{thm: Int}, we obtain
\begin{description}
\item[(i)] For every $\epsilon>0$, the set
\[\mathbb Y^{\epsilon}=\{Y^{\epsilon}_u: u \in \{0, 1\}^{{\mathbb Z}_{>0}}\} /\mbox{isometry}\]
is discrete and has the cardinality of the continuum.
\item[(ii)] For every $\delta >0$, there exists $\epsilon>0$ such that $\mathbb Y^\epsilon \subset B_{d_L}([0,1], \delta)$.
\end{description}
The statement (ii) implies that $B_{d_L}([0,1], \delta)$ is not separable for any $\delta>0$.
\end{remark}
\section*{Acknowledgment}
We would like to thank an anonymous referee for carefully reading our manuscript and for pointing out Remark \ref{rem: Ref}.
The first author was supported by Grant-in-Aid for JSPS Fellows Number 261798 and DAAD PAJAKO Number 57059240.
| {
"timestamp": "2015-03-09T01:12:24",
"yymm": "1412",
"arxiv_id": "1412.3491",
"language": "en",
"url": "https://arxiv.org/abs/1412.3491",
"abstract": "Let $X$ be a compact metric space and $\\mathcal M_X$ be the set of isometry classes of compact metric spaces $Y$ such that the Lipschitz distance $d_L(X,Y)$ is finite. We show that $(\\mathcal M_X, d_L)$ is not separable when $X$ is a closed interval, or an infinite union of shrinking closed intervals.",
"subjects": "Metric Geometry (math.MG)",
"title": "Non-separability of the Lipschitz distance",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.986777175136814,
"lm_q2_score": 0.8006920092299293,
"lm_q1q2_score": 0.7901045990225294
} |
https://arxiv.org/abs/1211.6875 | Permutations over cyclic groups | Generalizing a result in the theory of finite fields we prove that, apart from a couple of exceptions that can be classified, for any elements $a_1,...,a_m$ of the cyclic group of order $m$, there is a permutation $\pi$ such that $1a_{\pi(1)}+...+ma_{\pi(m)}=0$. |
\begin{document}
\maketitle
\begin{abstract}
Generalizing a result in the theory of finite fields we prove that, apart from a couple of exceptions that can be classified, for any elements $a_1,\dots ,a_m$ of the cyclic group of order $m$, there is a permutation $\pi$ such that $1a_{\pi(1)}+ \cdots +ma_{\pi(m)}=0$.
\end{abstract}
\section{Introduction}
\noindent The starting point of the present paper is the following result of G\'acs, H\'eger, Nagy and P\'alv\"olgyi.
\begin{theorem}\cite{Gacs}\label{prim} Let $\{ a_1,a_2,\dots ,a_p\}$ be a multiset in the finite field $GF(p)$, $p$ a prime. Then after a suitable permutation of the indices, either $\sum _iia_i=0$, or
$a_1=a_2=\cdots =a_{p-2}=a$, $a_{p-1}=a+b$, $a_p=a-b$ for field elements $a$ and $b$, $b\ne 0$.
\end{theorem}
A similar result using a slightly different terminology was obtained by Vinatier \cite{Vin} under the extra assumption that $a_1,\dots ,a_p$, when considered as nonnegative integers, satisfy $a_1+\cdots +a_p=p$.
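Theorem \ref{prim} is a finite statement for each fixed $p$ and can be confirmed by exhaustive search for small primes. A minimal Python sketch (illustrative only; the prime $p=5$ is an arbitrary choice):
\begin{verbatim}
from itertools import combinations_with_replacement, permutations

p = 5  # a small prime; the search below is exhaustive

def has_zero_sum(ms):
    # does some permutation give 1*a_1 + 2*a_2 + ... + p*a_p = 0 in GF(p)?
    return any(sum((i + 1) * a for i, a in enumerate(q)) % p == 0
               for q in set(permutations(ms)))

def exceptional(ms):
    # the multiset {a, ..., a, a+b, a-b} with b != 0
    return any(sorted(ms) == sorted([a] * (p - 2) + [(a + b) % p, (a - b) % p])
               for a in range(p) for b in range(1, p))

for ms in combinations_with_replacement(range(p), p):
    assert has_zero_sum(ms) != exceptional(ms)
\end{verbatim}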
The former result can be extended to arbitrary finite fields in the following sense.
\begin{theorem}\cite{Gacs} \label{primhatvany} Let $\{ a_1,a_2,\dots ,a_q\}$ be a multiset in the finite field $GF(q)$. There are no distinct field elements $b_1,b_2,\dots ,b_q$ such that $\sum _ia_ib_i=0$ if and only if after a suitable permutation of the indices, $a_1=a_2=\cdots =a_{q-2}=a$, $a_{q-1}=a+b$, $a_q=a-b$ for some field elements $a$ and $b$, $b\ne 0$.
\end{theorem}
This theorem can be reformulated in the language of finite geometry and also has an application concerning the range of polynomials over finite fields. For more details, see \cite{Gacs}.
Our aim is to find a different kind of generalization, more combinatorial in nature, which refers only to the group structure. First we extend the result to cyclic groups of odd order.
\begin{theorem}\label{odd} Let $\{ a_1,a_2,\dots ,a_m\}$ be a multiset in the Abelian group $\mathbb{Z}_m=\mathbb{Z}/m\mathbb{Z}$, where $m$ is odd. Then after a suitable permutation of the indices, either $\sum _iia_i=0$, or
$a_1=a_2=\cdots =a_{m-2}=a$, $a_{m-1}=a+b$, $a_m=a-b$ for elements $a$ and $b$, $(b,m)=1$.
\end{theorem}
\medskip
The situation is somewhat different if the order of the group is even. In this case we have to deal with two types of exceptional structures.
The following statements are easy to check.
\begin{prop}\label{except}
Let $m$ be an even number represented as $m=2^kn$, where $n$ is odd.\begin{itemize} \item[(i)] If a multiset $M=\{ a_1,a_2,\dots ,a_m\}$ of $\mathbb{Z}_m$ consists of elements having the same odd residue $c$ mod ${2^k}$, then $M$ has no permutation for which $\sum _iia_i=0$ holds.
\item[(ii)] If $M=\{a,a, \ldots, a+b, a-b\}$ mod ${m}$, where $a$ is even and $(b,m)=1$ holds, then $M$ has no permutation for which $\sum _iia_i=0$ holds. \end{itemize} We call these two kinds of structures \textsl{homogeneous and inhomogeneous exceptional multisets}, respectively.
\end{prop}
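Both exceptional families are easy to confirm by brute force for small even $m$; a minimal Python check (illustrative, with $m=6$, so $k=1$ and $n=3$):
\begin{verbatim}
from itertools import permutations

m = 6  # m = 2^k * n with k = 1, n = 3

def zero_perm_sum(ms):
    return any(sum((i + 1) * a for i, a in enumerate(q)) % m == 0
               for q in set(permutations(ms)))

# (i) homogeneous: every element has the same odd residue mod 2^k = 2
assert not zero_perm_sum([1, 3, 3, 5, 1, 5])
# (ii) inhomogeneous: {a, ..., a, a+b, a-b} with a even and (b, m) = 1
a, b = 2, 5
assert not zero_perm_sum([a] * (m - 2) + [(a + b) % m, (a - b) % m])
\end{verbatim}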
\begin{theorem}\label{even} Let $M=\{ a_1,a_2,\dots ,a_m\}$ be a multiset in the Abelian group $\mathbb{Z}_m$, $m$ even. If $M$ is not an exceptional multiset as defined in Proposition \ref{except}, then after a suitable permutation of the indices $\sum _iia_i=0$ holds.
\end{theorem}
The presented results might be extended in different directions. One may ask whether there exists a permutation of the elements of a given multiset $M$ of $\mathbb{Z}_m$ (consisting of $m$ elements), for which the sum $\sum _iia_i$ is equal to a prescribed element of $ \mathbb{Z}_m$.
This question is related to a conjecture of Britnell and Wildon, see \cite[p.~20]{britnell}, which can be reformulated as follows. Given a multiset $M=\{ a_1,a_2,\dots ,a_m\}$ of $\mathbb{Z}_m$, all elements of $\mathbb{Z}_m$ are admitted as the value of the sum $\sum_{i=1}^m ia_{\pi(i)}$ for an appropriate permutation $\pi \in S_m$, unless one of the following holds: \begin{itemize} \item $M=\{a,\ldots,a, a+b, a-b\}$, \item there exists a prime divisor $p$ of $m$ such that all elements of $M$ are the same mod $p$.
\end{itemize}
Our result may in fact be considered as a major step towards the proof of their conjecture, which would provide a classification of the values of determinants associated with special types of matrices. When $m$ is a prime, the conjecture is an immediate consequence of Theorem \ref{prim} and Lemma \ref{trafo1} (ii).
As for another direction, these questions are also meaningful for arbitrary finite Abelian groups, but to find the exact characterization appears to be a difficult task in general. For example, in the Klein group $\mathbb{Z}_2^2$, the multiset consisting of all four distinct group elements has no zero `permutational sum', whereas all other multisets do. Meanwhile in the group $\mathbb{Z}_2^3$, every multiset has a permutational sum equal to zero.
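The two claims about $\mathbb{Z}_2^2$ and $\mathbb{Z}_2^3$ are finite checks. In $\mathbb{Z}_2^r$ every element has order at most $2$, so a permutational sum reduces to the sum of the elements placed at the odd positions; a zero permutational sum therefore exists if and only if some sub-multiset of size $2^{r-1}$ sums to zero. A short Python sketch (illustrative, with elements encoded as integers under bitwise XOR):
\begin{verbatim}
from itertools import combinations, combinations_with_replacement
from functools import reduce

def zero_perm_sum(ms, half):
    # does the XOR of some sub-multiset of the given size vanish?
    return any(reduce(lambda x, y: x ^ y, sub) == 0
               for sub in combinations(ms, half))

# Z_2^2: only the multiset of all four distinct elements fails
for ms in combinations_with_replacement(range(4), 4):
    assert zero_perm_sum(ms, 2) == (tuple(ms) != (0, 1, 2, 3))

# Z_2^3: every multiset of eight elements has a zero permutational sum
for ms in combinations_with_replacement(range(8), 8):
    assert zero_perm_sum(ms, 4)
\end{verbatim}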
As it was briefly explained in \cite{Gacs}, the problem has a connection to Snevily's conjecture \cite{Sn}, solved recently by Arsovski \cite{Snevily}. It would be natural to try to adapt the techniques which were successful for Snevily's problem, but our problems are apparently more difficult. In order to prove Theorems \ref{prim} and \ref {primhatvany}, we had to replace the relatively simple approach of Alon \cite{Alon1} by a more delicate application of the Combinatorial Nullstellensatz \cite{Alon}, \cite{Karolyi} and we do not see how Theorem \ref{odd}, for example, could be obtained by the method of \cite{Snev}.
The paper is organized as follows. In Section 2, we collect several simple observations that are used frequently throughout the paper and sketch our proof strategy. Section~3 is devoted to the proof of Theorem \ref{odd}. In Section 4 we verify Theorem \ref{even} in some particular cases whose proofs do not exactly fit in the general framework (and may be skipped at a first reading). The complete proof, which is more or less parallel to that of Theorem \ref{odd}, is carried out in Section 5.
\bigskip
\section{Preliminaries}
\bigskip
\begin{defn} Let $M = \{ a_1,\dots ,a_m\}$ be a multiset in $\mathbb{Z}_m$. A permutational sum of the elements of $M$ is any sum of the form $\sum_{i=1}^m ia_{\pi(i)}$, $\pi \in S_m$. If, after some rearrangement, we fix the order of the elements of $M$, then the permutational sum of $M$ considered as a sequence $(a_1, \ldots, a_m)$ is simply $\sum _{i=1}^m ia_i$.
\end{defn}
Accordingly, the aim is to determine which multisets admit a zero permutational sum. This property is invariant under certain transformations.
\begin{lemma} \label{trafo1}
Let $m$ be odd, and $M$ be a multiset in $\mathbb{Z}_m$ of cardinality $m$.
\begin{itemize}
\item[(i)] If no permutational sum of $M$ admits the value $0$, then the same holds for any translate $M+c$ of $M$, and also for any dilate $cM$ in case $(c,m)=1 $.
\item[(ii)] If the permutational sums of $M$ admit a value $w$, then they also admit the value $kw$ for every integer $k$ with $(m,k)=1$. As a consequence, if $(m,w)=1$, then the permutational sums take at least $\varphi(m)$ different values.
\item[(iii)] Assume that $M$ has the exceptional structure, i.e. $M= \{ a,\dots, a, a+b, a-b \}$ where $(b,m)=1$. Then the permutational sums of $M$ admit each element of $\mathbb{Z}_m$ except zero.
\end{itemize}
\end{lemma}
\begin{proof}
Parts (i) and (iii) are straightforward, for $1+2+\ldots +m \equiv 0 \pmod m$ when $m$ is odd. Part (ii) follows from the fact that the function $\pi$ defined by $\pi (i)=ki \pmod m$ is a permutation in $S_m$ whenever $(k,m)=1$.
\end{proof}
The sumset or Minkowski sum $C+D$ of two subsets $C$ and $D$ of an Abelian group $G$ written additively is $C+D=\{c+d ~|~ c\in C, d\in D\}$.
The following statement is folklore.
\begin{lemma}\label{sumset} For $C, D \subseteq\mathbb{Z}_m$, $|C|+|D|>m$ implies $C+D=\mathbb{Z}_m$.
\end{lemma}
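Indeed, for any $x \in \mathbb{Z}_m$ the sets $C$ and $x-D$ have more than $m$ elements in total, hence they intersect, which yields $x \in C+D$. For small $m$ this can also be confirmed by brute force; an illustrative Python check of the extremal case $|C|+|D|=m+1$:
\begin{verbatim}
from itertools import combinations

m = 7
for c_size in range(1, m + 1):
    for C in combinations(range(m), c_size):
        for D in combinations(range(m), m + 1 - c_size):
            assert {(c + d) % m for c in C for d in D} == set(range(m))
\end{verbatim}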
In the remaining part of this section, we sketch the proof of Theorem \ref{odd}. Recall that the arithmetic function $\Omega(n)$ denotes the total number of prime factors of $n$, counted with multiplicity. Similarly to the classical result in zero-sum combinatorics due to Erd\H os, Ginzburg and Ziv \cite{erdos}, we proceed by induction on $\Omega(m)$.
The initial case is covered by Theorem \ref{prim}, so in the sequel we assume that $m$ is a composite number and fix a prime divisor $p$ of $m$ and write $m=p^kn$, where $(p,n)=1$.
The proof is carried out in several steps (of which the first two will be quite similar to the beginning of the proof of Theorem \ref{even}).
\bigskip
\noindent {\bf2.1. First step}
\bigskip
\noindent We introduce the notion of \textit{initial order} as follows.
\begin{defn}
Let $s=(b_1, b_2, \ldots, b_m) $ be any sequence in $\mathbb{Z}_{m}$.
\begin{itemize}
\item[(i)] A cyclic translate of $s$ is any sequence of the form $(b_i, b_{i+1}, \ldots, b_m, b_1, \ldots, b_{i-1}).$
\item[(ii)] The sequence $s$ is separable (relative to the prime divisor $p$ of $m$) if equivalent elements mod ${p^l}$ are consecutive for every $1\leq l\leq k$.
\end{itemize}
\end{defn}
\noindent Thus separability means that for $1\leq i<j\leq m$ and every $l\leq k$, $a_i \equiv a_j \pmod {p^l}$ implies $a_i \equiv a_h \pmod {p^l}$ for every $i<h<j$.
Note that one can always order the elements of $M$ into a separable sequence. Choose and fix an initial order of the elements of $M$ such that some cyclic translate of the sequence $(a_1, a_2, \ldots, a_m)$ is separable.
A useful property of such an ordering is summarized in the following lemma whose proof is straightforward.
\begin{lemma}\label{rem} Consider a sequence of $m$ elements in $\mathbb{Z}_{m}$, which admits a separable cyclic translate. Partition the elements into $t\geq 3$ consecutive blocks $T_1, \ldots, T_t$. If for an integer $l$, a certain residue $r$ mod ${p^l}$ occurs in every block, then at most two of the blocks may contain an element having a residue different from $r$.
The same conclusion holds if the elements are rearranged inside the individual blocks.
\end{lemma}
Let $(a_1, \ldots, a_m)$ be an initial order. Form $p$ consecutive blocks of equal size, denoted by $T_1, T_2, \ldots, T_p$, each containing $m^*:={m/p}$ consecutive elements. More precisely, $$ T_i= \{a_{(i-1)m^*+1}, a_{(i-1)m^*+2}, \ldots, a_{im^*}\}.$$
$S_i$ denotes the sum of the elements in $T_i$, while $R_i$ denotes the permutational sum of the block $T_i$ (as a multiset), that is, $R_i= \sum_{j=1}^{m^*}ja_{j+(i-1)m^*}$.\\
Writing $R=\sum_{i=1}^p R_i$, the permutational sum of $M$ takes the form
$$\Phi= \sum_{j=1}^m ja_j = \sum_{i=1}^p \left(R_i + m^*(i-1)S_{i}\right) = R + m^*\sum_{i=0}^{p-1}iS_{i+1}.$$
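This rearrangement of $\Phi$ is elementary; as a sanity check, it can be verified numerically. A minimal Python sketch (illustrative, with arbitrary data and $m=45$, $p=3$):
\begin{verbatim}
m, p = 45, 3
ms = m // p  # the block length m^*
a = [(7 * j * j + 3) % m for j in range(1, m + 1)]  # arbitrary sequence

phi = sum(j * a[j - 1] for j in range(1, m + 1)) % m
S = [sum(a[i * ms:(i + 1) * ms]) for i in range(p)]        # block sums S_i
R = sum(sum((j + 1) * a[i * ms + j] for j in range(ms))    # sum of the R_i
        for i in range(p))
assert phi == (R + ms * sum(i * S[i] for i in range(p))) % m
\end{verbatim}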
\bigskip
\noindent {\bf2.2. Second step}
\bigskip
\noindent Our aim here is to ensure that $m^*\mid \Phi$ holds after a well structured rearrangement of the elements. That is, we want to achieve that $m^*\mid R$ holds.
To this end we allow reordering the elements inside the individual blocks. Such a permutation will be referred to as a \textit{block preserving permutation}.
We distinguish three different cases.
First, if there is no exceptionally structured block mod ${m^*}$, then by the induction hypothesis the elements in each block $T_i$ can be rearranged so that $m^*$ divides $R_i$. Thus, after a block preserving permutation, $m^*\mid R$.
Next, if there is an exceptionally structured block $T_i$, then the permutational sums over $T_i$ take $m^*-1$ different values mod ${m^*}$, see Lemma \ref{trafo1} (iii). If there are at least two exceptionally structured blocks, then it follows from Lemma \ref{sumset} that there is a block preserving permutation that ensures $m^*\mid R$.
Finally, if there is exactly one exceptionally structured block $\{a,\ldots, a, a+b, a-b\}$ $\pmod {m^*}$, then a permutational sum of this block can take any value except $0$ mod ${m^*}$.
So after a block preserving permutation we are done, unless zero is the only value that the other blocks admit, that is, unless all elements within each of the other blocks are the same mod ${m^*}$.\\ This latter case can be avoided by a suitable choice of the initial order in the first step.
Indeed, translating the initial order cyclically so that it starts with an appropriate element from the exceptional block will break down this structure.
\bigskip
\noindent {\bf2.3. Third step}
\bigskip
\noindent To complete the proof, based on the relation $m^*\mid \Phi$ we further reorganize the elements to achieve a zero permutational sum, or else to arrive at (one of) the exceptional case(s). We only outline the strategy here, as the following section is devoted to the detailed discussion.
As a first approximation, we try to change the order of the blocks to obtain
$$ \sum_{i=0}^{p-1}iS_{i+1}\equiv -R':= - \frac{R}{m^*} \pmod p,$$
which would imply $m \mid \Phi.$ One is tempted to argue that the case $R'\equiv 0 \pmod p$ would be easy to resolve applying Theorem \ref{prim} for the multiset $\{S_1, \ldots, S_p\}$. As it turns out, the main difficulty is to handle exactly this case, since the multiset $\{S_1, \ldots, S_p\}$ may have the exceptional structure. A remedy for this is what we call the `braid trick'. The main idea of this tool will be to consider the transposition of a pair of elements whose indices differ by a fixed number $x$ (typically a multiple of $m^*$). By this kind of transposition of a pair ($a_i, a_{i+x}$), the permutational sum increases by $x(a_i- a_{i+x})$, providing a handy modification.
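The effect of one such transposition is immediate to verify; a minimal Python illustration (arbitrary data):
\begin{verbatim}
m, x, i = 21, 7, 4  # arbitrary modulus, index gap and position

a = [(5 * j + 2) % m for j in range(1, m + 1)]

def phi(seq):
    return sum(j * v for j, v in enumerate(seq, start=1)) % m

b = a[:]
b[i - 1], b[i + x - 1] = b[i + x - 1], b[i - 1]  # transpose a_i and a_{i+x}
assert (phi(b) - phi(a)) % m == x * (a[i - 1] - a[i + x - 1]) % m
\end{verbatim}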
\bigskip
\section{The case of odd order}
\bigskip
\noindent In this section we complete the proof of Theorem \ref{odd}. We continue with the details of the third step outlined in the previous section.
We distinguish two cases according to whether $R'$ is divisible by $p$ or not.\\
\noindent {\bf3.1. $ R'$ is not divisible by $p$.}
\bigskip
\noindent Note that $ \sum_{i=0}^{p-1}iS_{i+1}$ can be viewed as a permutational sum of the multiset \ $\mathcal{S}=\{S_1, S_2, \ldots, S_p\}$. If there are two elements $S_i \not \equiv S_j \pmod p$, then their transposition changes the value of the permutational sum of $\mathcal{S}$ mod $p$. In particular, the permutational sums of $\mathcal{S}$ admit a nonzero value mod $p$. From Lemma \ref{trafo1} (ii) it follows that they admit each nonzero element of $\mathbb{Z}_{p}$ and in particular $-R'$ too.
Otherwise, we have $S_1\equiv S_2\equiv \ldots \equiv S_p \pmod p$.
We use the `braid trick': we look at the pairs $(a_i, a_{i+m^*})$ for every $i$.
The elements $a_i$ and $a_{i+m^*}$ occupy the same position in two consecutive blocks $T_j, T_{j+1}$. If they have different residues mod ${p}$, then their transposition leaves $R$ intact, hence $R'$ does not change either. On the other hand, $S_j$ and $S_{j+1}$ change, whereas every other $S_i$ remains the same, therefore the previous argument can be applied.
Finally, we have to deal with the case when $a_i \equiv a_{i+lm^*} \pmod {p}$ holds for every possible $i$ and $l$. This is the point where we exploit the separability property. The initial order has changed only inside the blocks during the second step. Since the number of blocks is at least three, it follows from Lemma \ref{rem} that $a_i\equiv a_j \pmod p$ for all $1\leq i<j\leq m$ in $M$. In this case we prove directly that $M$ has a zero permutational sum.
In view of Lemma \ref{trafo1} (i), we may suppose that every $a_i$ is divisible by $p$.
Consider $M^*:= \{\frac{a_1}{p}, \frac{a_2}{p}, \ldots, \frac{a_m}{p}\}$. Apply the first two steps for this multiset $M^*$. It follows that $M^*$ has a zero permutational sum mod ${m^*}$, which implies that $M$ has a zero permutational sum mod $m$.\\
\noindent{\bf3.2. $ R'$ is divisible by $p$}
\bigskip
\noindent Here our aim is to prove that $p\mid \sum_{i=0}^{p-1}iS_{i+1}$ holds for a well chosen permutation of the multiset $\mathcal{S}:=\{S_1,\ldots, S_p\}$.
This is exactly the problem that we solved in Theorem \ref{prim}, which implies that we can reorder the blocks (and hence the multiset $M$ itself) as required, except when the multiset $\mathcal{S}$ has the form $\{A, A, \ldots, A, A+B, A-B\}$, with the condition $(B,p)=1$.
Once again, we apply the braid trick.
If $a_i$ and $a_{i+lm^*}$ have different residues mod $p$, then we try to transpose them in order to destroy this exceptional structure. As in Subsection 3.1, $R$ does not change. We call a pair of elements exchangeable if their indices differ by a multiple of $m^*$.
Thus, a zero permutational sum of $M$ is obtained unless no transposition of two exchangeable elements destroys the exceptional structure of $\mathcal{S}$. The following lemma gives a more detailed description of this situation.
\begin{lemma} \label{speceset1}
Suppose that no transposition of two exchangeable elements destroys the exceptional structure of $\mathcal{S}$. Then either this exceptional structure can be destroyed by two suitable transpositions, or $M$ contains only three distinct elements mod $p$, namely $t$, $t+B$ and $t-B$ for some $t$, with the following properties:
\begin{itemize}
\item $t+B$ occurs only in one block, and only once;
\item $t-B$ occurs only in one block, and only once;
\item $t+B$ and $t-B$ occupy the same position in their respective blocks.
\end{itemize}
\end{lemma}
\begin{proof}
Denote by $T^+$ and $T^-$ the blocks for which the sum of the elements is $A+B$ and $A-B$, respectively.
Apart from elements of $T^+$ and $T^-$, two exchangeable elements must have the same residue mod $p$. Furthermore, if a transposition between $a_j\in T^-$ and $a_{j+lm^*}\notin T^+$ does not change the structure of $\mathcal{S}$, this means that $a_j\equiv a_{j+lm^*} \pmod p$ or $a_j\equiv a_{j+lm^*}-B \pmod p$. A similar statement holds for $T^+$.
Consider now a set of pairwise exchangeable elements. One of the following describes their structure: either they all have the same residue mod $p$, or they have the same residue $t$ mod $p$ except for the elements from $T^+$ and $T^-$, for which the residues are $t+B$ and $t-B$, respectively.
Observe that both cases must indeed occur, since the block sums $S_i$ are not all congruent mod $p$. In particular, there is a full set of $p$ pairwise exchangeable elements having the same residue mod $p$.
Since the number of blocks is at least $3$, we can apply Lemma \ref{rem}. We only used block-preserving permutations so far, hence it follows that all elements have the same residue mod $p$ --- let us denote it by $t$ --- except some $(t+B)$'s in $T^+$, and the same number of $(t-B)$'s in $T^-$, in the very same position relative to their blocks.
We claim that this number of different elements in $T^+$ and $T^-$ must be one, otherwise we can destroy the exceptional structure with two transpositions. Indeed, by contradiction, suppose that there exist two distinct sets of exchangeable elements where the term corresponding to $T^+$ and $T^-$ is $t+B$ and $t-B$, respectively. Pick a block different from $T^+$ and $T^-$ and denote it by $T$. Then transpose $t+B\in T^+$ and $t \in T$ in the first set, and $t-B\in T^-$ and $t \in T$ in the second set. The new structure $\mathcal{S}'$ obtained this way is not exceptional any more.
\end{proof}
\begin{lemma}\label{baratunk}
Suppose that $M$ contains only three distinct elements mod $p$, namely $t$, $t+B$ and $t-B$ for some $t$, with the following properties:
\begin{itemize}
\item $t+B$ occurs only in one block, and only once;
\item $t-B$ occurs only in one block, and only once;
\item $t+B$ and $t-B$ occupy the same position in their respective blocks.
\end{itemize}
Then either a suitable zero permutational sum exists or the conditions on $M$ hold mod ${p^l}$ for every $l\leq k$, with a suitable $B=B_l$ not divisible by $p$.
\end{lemma}
\begin{proof}
We proceed by induction on $l$. Evidently, it holds for $l=1$.\\
According to Lemma \ref{trafo1} (i) we may assume that $t \equiv 0 \pmod p$. Let $a^+$ and $a^-$ denote the elements of $T^+$ and $T^-$ for which $a^+ \equiv B \pmod p$ and $a^- \equiv -B \pmod p$. Note that their position is the same in their blocks.
Suppose that $l\geq 2$ and the conditions hold mod ${p^{l-1}}$.
Consider the residues of the elements mod ${p^l}$ now. We use again the braid trick.
Suppose that there exist $a_i, a_j \not\in \{a^-, a^+\}$ such that $i-j$ is divisible by $p^{k-l}$ but not by $p^{k-l+1}$, and $a_i \not\equiv a_j \pmod {p^l}$. After we transpose them, (the residue of) $R$ does not change mod ${p^{k-1}}$, but it changes by $(i-j)(a_j-a_i)\not\equiv 0 \pmod {p^{k}}$. For the new permutational sum thus obtained, $R' \not\equiv 0 \pmod p$ holds, while the multiset $\mathcal{S}$ may change, but certainly it does not become homogeneous mod $p$. Thus $M$ has a zero permutational sum, as in Subsection 3.1.
Otherwise, in view of Lemma \ref{rem} it is clear that all the residues must be the same mod ${p^l}$, and we may suppose they are zero, except the residues of $a^+$ and $a^-$. In addition, $a^+ + a^- \equiv 0 \pmod {p^l}$ must hold too, since $R' \equiv 0 \pmod p $. This completes the induction step.
\end{proof}
Lemma \ref{baratunk} applied for $l=k$ completes the proof of Theorem \ref{odd} when $m=p^k$ is a prime power. In the sequel we assume that $n\neq 1$. Let $m=p_1^{k_1}p_2^{k_2}\cdots p_r^{k_r}$ be the canonical form of $m$. Note that the whole argument we had so far is valid for any prime divisor $p$ of $m$. Therefore, to complete the proof, we may assume that $M$ has the exceptional structure mod ${p_i^{k_i}}$ as described in Lemma \ref{baratunk} for every $p=p_i$.
\begin{lemma}\label{kivetel} The conclusion of Theorem \ref{odd} holds if $M$ has exceptional structure modulo each $p_i^{k_i}$.
\end{lemma}
\begin{proof}
We look at the permutational sums of $M$ leaving the elements of $M$ in a fixed order $a_1, a_2, \ldots, a_m$ while permuting the coefficients $1, 2, \ldots, m$.
According to Lemma~\ref{trafo1}~(i) we may assume that all elements, except two, are divisible by $p_1^{k_1};$ all elements, except two, are divisible by $p_2^{k_2}$, and so on. It follows that at least $m-2r$ elements are zero mod $m$, so their coefficients are irrelevant.
So we only have to assign different coefficients to the nonzero elements $x_i$ of $M$. For any $0\neq x\in M$, we choose its coefficient $c_x$ to be either $\frac{m}{(m,x)}$ or $-\frac{m}{(m,x)}$, ensuring that $c_xx=0$ in $\mathbb{Z}_m$. If such an assignment is possible, the permutational sum will be zero.
First, observe that $\frac{m}{(m,x)}$ and $-\frac{m}{(m,x)}$ are the same if and only if $(m,x)=1$. Note that for each $i$, at most two of the elements fail to be divisible by $p_i^{k_i}$, so $p_i$ divides $(m,x_j)$ for all $x_j$ with at most two exceptions. Hence there is no triple $x_1, x_2, x_3$ of the elements for which $(m,x_i)$ would be the same. Thus we can assign a different coefficient to each $x_i\neq 0$, except when there exist two of them for which $(m,x_i)=1$. But this is exactly the exceptional case $M=\{0, 0, \ldots, 0, c, -c \}$, where $(c,m)=1$.
\end{proof}
\bigskip
\section{Special cases of Theorem \ref{even}}
\medskip
\noindent In this section we prove that Theorem \ref{even} holds for some specially structured multisets.
\begin{lemma} \label{hulye}
Let $m=2n$, $n>1$ odd, and let $M$ be a multiset in $\mathbb{Z}_m$ consisting of two blocks of size $n$ in the form $T_1= \{a, \ldots, a, a+b, a-b\}$ and $T_2= \{c, \ldots, c\} \pmod n$, where $(b,n)=1$. If one of the blocks contains elements from only one parity class then Theorem \ref{even} holds.
\end{lemma}
\begin{proof}
First we obtain a permutation for which $n$ divides the permutational sum of $M$. We choose an element $c^*$ from $T_2$. Assume that $c^*\not\equiv a-b \pmod n$ and exchange $c^*$ with $a-b \in T_1$. (If the assumption does not hold then we pick $a+b$ instead of $a-b$ and continue the proof similarly.)
This way we get two blocks $T_1'$ and $T_2'$, which do not have the exceptional structure mod $n$. Thus there exists a block preserving permutation ensuring that $n$ divides the obtained permutational sums of $T_1'$ and $T_2'$, thus $n$ also divides the permutational sum of $M$.
We assume that both odd and even elements occur in $M$, otherwise either $2$ is trivially a divisor of $\Phi$ or $M$ has exceptional homogeneous structure.
If the relation $m\mid \Phi$ does not hold, then we apply the braid trick by looking at the pairs $(x_i, x_{i+n})$. If a pair consists of an odd and an even element, then we may transpose them and the proof is done.
Otherwise the exchange of $c^*$ and $a-b$ must have destroyed the property of having a uniform block mod $2$ among $T_1$ and $T_2$, that is, $c^*$ and $a-b$ have different parity. Since the choice of $c^* \in T_2$ was arbitrary, we may assume that $T_2= \{c, \ldots, c\} \pmod {2n}$. Moreover, since the braid trick did not help us, every element in $T_1$ congruent to $a$ mod $n$ must have the same parity as the elements $c$, and the parity of the element $a+b$ must coincide with that of $a-b$.
In this remaining case consider the blocks $\{a, \ldots, a, c, c\}$ and \mbox{$\{c,\ldots,c,a+b,a-b\}$}. First, if $c\not \equiv a \pmod n$, then neither block is exceptional as a multiset in $\mathbb{Z}_n$, hence an appropriate block preserving permutation ensures that $n$ divides the permutational sum. If the permutational sum happens to be odd, then a suitable transposition via the braid trick will increase its value by $n$, for the first block contains elements from the same parity class in contrast to the second.
Finally, if $c\equiv a \pmod n$, then either $c=a$ is even, in which case $M$ has the inhomogeneous exceptional structure, or $c=a$ is odd, in which case the permutational sum will be zero if we set $a_{n}=a+b$ and $a_{2n}=a-b \pmod n$.
\end{proof}
\begin{lemma}\label{atlast} Let $m=2^kn > 4$, $n$ odd and $k>1$. Let $M$ be a multiset of $\mathbb{Z}_m$, consisting of two even elements and $m-2$ odd elements having residue $c$ mod ${2^{k-1}}$. Then the permutational sum of $M$ admits the value zero.
\end{lemma}
\begin{proof}
Denote the even elements by $q_1$ and $q_2$. We distinguish the elements having residue $c$ mod ${2^{k-1}}$ according to their residues mod ${2^{k}}$, which are $c$ and $c^* \equiv c+2^{k-1} \pmod {2^k}$. We may suppose that the number of elements $c$ is greater than or equal to the number of elements $c^*$.
First we solve the case $n=1$ meaning $m=2^k$, $k>2$.
Taking $a_{m/2}=q_1$, $a_{m}=q_2$, the permutational sum will be divisible by $m/2$. If there is no element $c^*$, then the permutational sum is in fact divisible by $m$.
If there exist some elements $c^*$ among the odd elements and the permutational sum is not yet divisible by $m$, then a transposition between two elements $c$ and $c^*$ whose indices differ by an odd number will result in a zero permutational sum mod $m$.\\
Turning to the general case $n>1$, we initially order the elements as follows. Even elements precede the others, elements $c$ mod ${2^k}$ precede the elements $c^*$ mod ${2^k}$, and equivalent elements mod $m$ are consecutive. Form $2^k$ blocks of equal size $n$.
With an argument similar to the one in Section 2.2 we arrive at two cases. Either we obtain a permutational sum congruent to zero mod $n$ after a block preserving permutation, or the structure of the blocks is as follows: there is exactly one exceptional block (as a multiset in $\mathbb{Z}_n$) and the other blocks only admit a zero permutational sum mod $n$, meaning that each of them consists of equivalent elements mod $n$.\\
\noindent \textsl{Case 1)} Consider the block preserving permutation, which results in a permutational sum $\Phi_0$ divisible by $n$.
We modify this permutation, if necessary, to get one corresponding to a zero permutational sum mod ${2^k}$, while the divisibility by $n$ is preserved.
We denote by $f$ and $g$ the indices of $q_1$ and $q_2$ in the considered permutation.
Thus $$\Phi_0 \equiv c\frac {2^k(2^k-1)}{2}+ f(q_1-c) + g (q_2-c)\pmod {2^{k-1}}. \ \ \ \ \ \ \ \ \ \ (*)$$
\noindent Note that $\{ln: l=0,1, \ldots, 2^k-1\}$ is a complete system of residues mod ${2^k}$. Let $l$ be the solution of the congruence $$(q_1-c)ln\equiv -\Phi_0 \pmod {2^k}. $$
Thus transposing $q_1=a_f$ with $a_{f+ln} $ implies that
\[\Phi_1\equiv \left\{ \begin{array}{ll} 0 \pmod {2^k} & \textrm{if $a_{f+ln}\equiv c \pmod {{2^k}}$} \\ 2^{k-1} \pmod {2^k} & \textrm{if $a_{f+ln}\equiv c^* \pmod {{2^k}}$.} \end{array} \right. \]
\noindent The relation $n\mid \Phi_1$ still holds. So in the case when $a_{f+ln}\equiv c \pmod {{2^k}}$ we are done, and if $a_{f+ln}\equiv c^* \pmod {{2^k}}$ we have to increase the value of the permutational sum by $2^{k-1}n$ mod $m$.
Recall that each element in the second block is $c$ mod ${2^k}$.
Therefore transposing $a_f\equiv c^*\pmod {{2^k}}$ with $a_{f+n}\equiv c \pmod {{2^k}}$ in this latter case does the job. \\
\noindent \textsl{Case 2) } One of the blocks (not necessarily the first one) has the exceptional structure, while every other is homogeneous mod ${n}$.
We can still argue as in the previous case if, performing the following operation, we can destroy the exceptional structure without changing the position of the even elements $q_1, q_2$ and the entire second block.
Namely, we try to transpose two nonequivalent elements mod $n$, one from the exceptional block and one from another block.
If this is not possible with the above mentioned constraints, then the exceptional block must be among the first two.
Furthermore, every element congruent to $c$ mod ${2^k}$ in the first two blocks must be equivalent mod $n$.
Thus we only have to deal with the following structure: the first block is the exceptional one, $q_1$ and $q_2$ correspond to $a+b$ and $a-b$ in the exceptional structure, all the other elements contained in the first two blocks are equivalent mod $m$ (and congruent to $c $ mod ${2^k}$), and the remaining blocks are all homogeneous mod $n$.
Exchanging $q_2$ with any element from the second block destroys the exceptional structure of the first block, which means that after a suitable block preserving permutation the permutational sum of each block becomes $0$ mod $n$, ensuring $n\mid \Phi$ for the multiset. At this point the indices of the even elements are $n$ and $2n$.
Next, keeping the order inside the blocks we rearrange them so that the first and second blocks become the $2^{k-1}$th and $2^{k}$th, that is, $a_{{m}/{2}}=q_1$ and $a_m = q_2$. Hence, maintaining $n\mid \Phi$ we also achieve $2^{k-1}\mid \Phi$ via equality $(*)$.
Either we are done or $\Phi \equiv 2^{k-1}n \pmod m$. The latter can only happen if there exists an element of type $c^*$. If a block contains both elements of type $c$ and $c^*$, then a transposition of a consecutive pair of them within that block increases $\Phi$ by $2^{k-1}n$. Otherwise there must exist a block containing only elements of type $c^*$. This implies the existence of a pair of $c$ and $c^*$ whose position differs by $n$. Their transposition increases $\Phi$ by $2^{k-1}n^2\equiv 2^{k-1}n \pmod m$, solving the case.
\end{proof}
\bigskip
\section{ The case of even order}
\bigskip
\noindent One main difference between the odd and the even order case is due to the fact that Lemma \ref{trafo1} (i) does not hold if $m$ is even, for $1+2+ \ldots + m$ is not divisible by $m$. That explains the emergence of the exceptional structure, see Proposition \ref{except}.
\begin{remark} \label{note}
It is easy to check that after a suitable permutation of the indices, $\sum _iia_i\equiv m/2 \pmod m$ holds for the exceptionally structured multisets.
\end{remark}
In order to prove Theorem \ref{even}, we fix the notation $m=2^kn$, where $n$ is odd and $k>0$. Since the cases $m=2$ and $m=4$ can be checked easily, we assume that $m>4$ and prove the theorem by induction on $k$. \\
\noindent \textbf{Initial step}\\
\noindent We have $m=2n$, where $n>1$ according to our assumption.
Take the multiset $M = \{ a_1,\dots ,a_m\}$ of $\mathbb{Z}_{m}$. Arrange the elements in such a way that both the odd and the even elements are consecutive.
Form two consecutive blocks of equal size, denoted by $T_1$ and $T_2$, each containing $n$ elements. Using the notation of Section 2, the permutational sum of $M$ is
$$\Phi= \sum_{j=1}^m ja_j = R_1+R_2 + \frac{m}{2}S_{2} = R + nS_{2}.$$
Our first aim is to ensure that $n\mid \Phi$ holds after a well structured rearrangement of the elements.
To this end, we may take an appropriate block preserving permutation provided that $n\mid R_i$ holds for $i=1,2$.
Such a permutation exists, except when at least one of the blocks is exceptional mod ${n}$. However, it is enough to obtain a block preserving permutation for which $n\mid R$, and such a permutation exists via Lemma \ref{trafo1} (iii), unless one of the blocks has exceptional structure (mod $n$) and the other consists of equivalent elements (mod $n$). This latter case was fully treated in Lemma \ref{hulye}.
The next step is to modify the block preserving permutation such that $2 \mid \Phi$ also holds.\\
If it does not hold, then we try to transpose a pair $(a_i, a_{i+n})$ for which $a_i$ and $a_{i+n}$ have different parity, according to the braid trick. The permutational sum would change by $n$ (mod $m$) and we are done. If all pairs have the same parity, then all elements have the same parity. Therefore either $\Phi$ is automatically even or $M$ has
homogeneous exceptional structure. This completes the initial step. \\
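\noindent The braid trick invoked above is elementary arithmetic: swapping the entries in positions $i$ and $i+n$ changes $\Phi$ by $n(a_i-a_{i+n})$ modulo $m$, so for $m=2n$ a swap of two entries of different parity shifts $\Phi$ by exactly $n$. A minimal Python sketch (ours; the arrangement below is arbitrary):
\begin{verbatim}
def phi(seq, m):
    return sum(j * x for j, x in enumerate(seq, start=1)) % m

n = 5; m = 2 * n
seq = [3, 8, 1, 4, 7, 2, 9, 6, 5, 0]      # an arrangement over Z_10
i = 2                                      # swap positions i and i + n
new = seq[:]
new[i-1], new[i+n-1] = new[i+n-1], new[i-1]
assert (phi(new, m) - phi(seq, m)) % m == n * (seq[i-1] - seq[i+n-1]) % m
print(phi(seq, m), phi(new, m))            # 1 and 6: a shift by n = 5
\end{verbatim}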
\noindent\textbf{Inductional step}\\
\noindent Assume that $k>1$ and Theorem \ref{even} holds for every even proper divisor of $m$.
Recalling Definition 2.4, we choose a separable sequence relative to the prime divisor $2$ of $m$ as an initial order. Partition the multiset into two blocks of equal size, $T_1$ and $T_2$.
Introduce $m^*:={m/2}=2^{k-1}n$, and assume first that $m^*\mid R_1+R_2$ can be achieved by a suitable block
preserving permutation.
By induction, we can do it if both blocks as multisets have a structure different from the ones mentioned in Proposition \ref{except}.
If both blocks as multisets have exceptional structure mod ${m^*}$, then in view of Remark \ref{note} there exists a block preserving permutation for each block such that $\sum _iia_i \equiv m/4 \pmod {m^*}$, thus $m^*\mid R_1+R_2$ holds.
Finally, we can also achieve this relation if exactly one of the blocks has exceptional structure, and the permutational sum of the other block admits the value $m/4$ mod ${m^*}$.
Suppose that $m \mid R_1+R_2$ does not hold, otherwise we are done.
Apply the braid trick and consider the pairs $(a_i, a_{i+2^{k-1}n})$. They must have the same parity, otherwise transposing them would make $\Phi$ divisible by $m$, which would complete the proof. Due to the separability of the initial order, all elements must have the same parity.
Consider now the pairs $(a_i, a_{i+2^{k-2}n})$. Either we can transpose the elements of such a pair to achieve a zero permutational sum, or the elements must have the same residue mod ${2^2}$.
Apply this argument consecutively with exponent $s=1, 2, \ldots, k$, for pairs $(a_i, a_{i+2^{k-s}n})$ and modulo $2^{s}$, respectively. Either $m \mid \Phi$ is obtained during this process by a suitable transposition of a pair $(a_i, a_{i+2^{k-s}n})$, or all elements must have the same residue $r$ mod ${2^k}$.
If $r$ is odd, then $M$ has the homogeneous exceptional structure described in Proposition \ref{except}. If $r$ is even, then $2^k$ divides $\Phi$, for $\Phi \equiv r\,\frac {m(m+1)}{2}\equiv r\, 2^{k-1} \pmod {2^k}$. Thus the conclusion of the theorem holds in this case.\\
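\noindent The last congruence can be confirmed numerically as well. A small Python sketch (ours, with arbitrarily chosen parameters) checks that a common residue $r$ mod $2^k$ forces $\Phi\equiv r\,2^{k-1}\pmod{2^k}$:
\begin{verbatim}
k, n, r = 3, 5, 4                # m = 2^k * n = 40, even residue r
m = 2**k * n
a = [r + 2**k * ((7 * j) % n) for j in range(m)]  # all a_j = r mod 2^k
phi = sum(j * x for j, x in enumerate(a, start=1))
assert phi % 2**k == (r * 2**(k - 1)) % 2**k      # here 0, as r is even
\end{verbatim}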
The remaining part of the proof is the case when only one of the blocks is exceptional mod $m^*$, and the permutational sum of the other block does not admit the value $m/4 \pmod {m^*}$. We refer to this latter condition by (**), and we may suppose that the second block is the exceptional one (otherwise we reverse the sequence).
According to Proposition \ref{except},
there are two cases to consider.\\
\noindent{\bf 5.1. The inhomogeneous case}\\
\noindent $T_2 = \{a,a, \ldots,a, q_1=a+b, q_2=a-b\}$ mod ${m^*}$, where $a$ is even and $(b,m)=1$. Note that $T_2$ contains both even and odd elements. Due to the separability of the initial order, all elements in $T_1$ have the same parity.
If $T_1$ consists of odd elements, then we exchange a pair of different odd elements mod ${m^*}$, one from each block. This way $T_2$ becomes non-exceptional. Moreover, an appropriate choice from $\{q_1, q_2\}$ ensures that $T_1$ does not become exceptional either. Thus $m^*$ will be a divisor of the permutational sum after a suitable block preserving permutation. If $m\mid \Phi$ does not hold, we apply the braid trick to a pair $(a_i, a_{i+m^*})$ whose parities differ and we are done.
If all elements of $T_1$ are even, then we try to transpose a pair of different even elements mod ${m^*}$, one from each block. Note that if it is possible, $T_1$ will not become exceptional.
Hence after a block preserving permutation $m^*$ will be a divisor of the permutational sum. If $m\mid \Phi$ does not hold, we apply the braid trick to a pair $(a_i, a_{i+m^*})$ whose parities differ and we are done.
Assume that no appropriate transposition exists, that is, $T_1$ must consist of even elements having the same residue $a $ mod ${m^*}$. It may occur that $M$ has the inhomogeneous exceptional structure.
Otherwise either $q_1+q_2=2a+m^*$, or there exists a pair $ a^{(1)} \not \equiv a^{(2)}\pmod{m}$ in $M$ such that $a^{(1)} \equiv a^{(2)}\equiv a \pmod {m^*}$.
We now fix the permutation in these cases. Let $q_1$ and $q_2$ be in the positions $1$ and $1+m^*$. Fix arbitrary positions for the rest of the elements, supposing that if a pair of type $\{a^{(1)}, a^{(2)}\}$ exists, then the elements of such a pair are consecutive. Hence either we are done, or $\Phi \equiv m^* \pmod m$. In the latter case, note that there must exist a pair of type $\{a^{(1)}, a^{(2)}\}$ that is arranged consecutively. Their transposition provides a zero permutational sum, which completes the proof.\\
\noindent{\bf 5.2. The homogeneous case}\\
\noindent $T_2 = \{c,c,\ldots,c\}$ mod ${2^{k-1}}$ where $c$ is odd and {(**)} holds for $T_1$.\\
\noindent \textit{Subcase 1)}
Every odd element $c'\in T_1$ is congruent to $c$ mod $2^{k-1}$.
\noindent Since $T_1$ is not exceptional mod $m^*$, it must contain some even elements. Thus $T_1$ consists of even elements and possibly also some odd elements having residue $c$ mod ${2^{k-1}}$.
Choose an even element $q_1$ from $T_1$ and transpose it with $c$ in $T_2$. Since (**) holds for $T_1$, neither $T_1$ nor $T_2$ becomes exceptional by this transposition.
Take a permutation of each block for which the permutational sum is zero mod ${m^*}$. Either we are done or $\Phi \equiv m^* \pmod m$ holds. Look at the pairs $(a_i, a_{i+m^*})$ according to the braid trick. If a pair takes different residues mod $2$, then their transposition makes the permutational sum divisible by $m$ and we are done.
Otherwise we must have two even elements, and the others have residue $c$ mod ${2^{k-1}}$. Hence Lemma \ref{atlast} completes the proof.\\
\noindent \textit{Subcase 2)} There exists an odd $c' \in T_1$ for which $c'\not\equiv c \pmod {2^{k-1}}$.
We transpose $c$ and $c'$ to obtain $T_2'= \{c',c,\ldots,c\}$ (mod $2^{k-1}$).
We claim that $m^* \mid \Phi$ holds for the new blocks $T_1'$ and $T_2'$ after a suitable block preserving permutation.
The permutational sum of $T_2'$ admits the value $m/4$ mod ${m^*}$. Indeed, it has a non-exceptional structure, hence it admits the value zero mod ${m^*}$, and then one transposition between $c'$ and another element is sufficient.
Thus, (**) does not hold for $T_2'$, nor does it have exceptional structure.
Hence we may suppose that $m^* \mid \Phi$ holds for the new blocks $T_1'$ and $T_2'$. Either we are done or $\Phi \equiv m^*$ (mod $m$). In the latter case we need a transposition in $T_2'$ between $c'$ and another element congruent to $c$ mod ${2^{k-1}}$, for which the permutational sum changes by $m^*$ mod ${m}$. Such a transposition clearly exists.
\bigskip
\bigskip
\textbf{Acknowledgment.~} I am grateful to the anonymous referees for their valuable help in improving the presentation of the paper.
\newpage
\bibliographystyle{plain}
| {
"timestamp": "2012-11-30T02:02:19",
"yymm": "1211",
"arxiv_id": "1211.6875",
"language": "en",
"url": "https://arxiv.org/abs/1211.6875",
"abstract": "Generalizing a result in the theory of finite fields we prove that, apart from a couple of exceptions that can be classified, for any elements $a_1,...,a_m$ of the cyclic group of order $m$, there is a permutation $\\pi$ such that $1a_{\\pi(1)}+...+ma_{\\pi(m)}=0$.",
"subjects": "Combinatorics (math.CO)",
"title": "Permutations over cyclic groups",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771778588348,
"lm_q2_score": 0.800692004473946,
"lm_q1q2_score": 0.7901045965089339
} |
https://arxiv.org/abs/1210.4639 | Homological techniques for the analysis of the dimension of triangular spline spaces | The spline space $C_k^r(\Delta)$ attached to a subdivided domain $\Delta$ of $\R^{d}$ is the vector space of functions of class $C^{r}$ which are polynomials of degree $\le k$ on each piece of this subdivision. Classical splines on planar rectangular grids play an important role in Computer Aided Geometric Design, and spline spaces over arbitrary subdivisions of planar domains are now considered for isogeometric analysis applications. We address the problem of determining the dimension of the space of bivariate splines $C_k^r(\Delta)$ for a triangulated region $\Delta$ in the plane. Using the homological techniques introduced by Billera (1988), we number the vertices and establish a formula for an upper bound on the dimension. There is no restriction on the ordering, and we obtain more accurate approximations to the dimension than previous methods; furthermore, in certain cases an exact value can even be found. The construction also makes it possible to give a short proof of the dimension formula when $k\ge 4r+1$, and the same method we use in this proof yields the dimension straightaway in many other cases. | \section{Introduction}
Let $\Delta$ be a connected, finite two dimensional simplicial
complex, supported on $|\Delta|\subset\R^2$, with $|\Delta|$ homotopy
equivalent to a disk. We denote by $C_k^r(\Delta)$ the vector space
of all $C^r$ functions on $\Delta$ that, restricted to any simplex in
$\Delta$, are given by polynomials of degree less than or equal to
$k$. These functions are called splines, and they have many practical
applications, including the finite element method for solving
differential equations (\cite{strangfix}). Recently they have also been considered for
isogeometric analysis applications (\cite{iga-chb-09}) and in
computer-aided design for modeling surfaces of arbitrary topology (\cite{f-cscag-93}). In
these application areas, spline functions of low degree are of
particular interest, and the degree of smoothness attainable is an
important design consideration.
A fundamental problem is to determine the dimension of this vector
space as a function of known information about the subdivision
$\Delta$. A serious difficulty for solving this problem is that the
dimension of the space $C_k^r(\Delta)$ can depend not only on the
combinatorics of the subdivision, but also on the geometry of
the triangulation, i.e., how $|\Delta|$ is embedded in $\R^{2}$. In
\cite{Sch}, the author presented a lower and an upper bound on the
dimension of spline spaces of arbitrary degree and smoothness for
general triangulations; using Bernstein-B\'ezier methods, a formula for the dimension for
$k\geq 4r+1$ was obtained by \cite{alfschu}. The result was extended to $k\geq 3r+2$
in \cite{hd}, and in \cite{alfschu3r+1} a dimension formula is proved for almost all triangulations for $k=3r+1$. However, there are no explicit formulas for the
dimension of the spline spaces $C_k^r(\Delta)$ for degree $k<3r+2$ for
general triangulations. The use of homological algebra in spline
theory was introduced by \cite{bil}. He obtained the dimension
of $C_k^1(\Delta)$ for all $k$ for generic triangulations.
\cite{Local} introduced a chain complex different from that used by Billera; this complex was studied by \cite{Local}, \cite{family}, \cite{Sck}, and \cite{gs}.
The lower homology modules of the chain complex in this construction
differ from the ones introduced by Billera, and they have nicer
properties. The connection between fat points and the spline space
defined on $\Delta$ allows one to give a complete characterization of the
free resolutions of the ideals generated by powers of linear forms
appearing in the chain complex, and so to prove the dimension formula
for $C_k^r(\Delta)$ in sufficiently high degree (\cite{family}).
The main contribution of the paper is a new formula for an upper bound on the
dimension of the spline space. The formula applies to any ordering
established on the interior vertices of the partition, in contrast to the upper
bound formulas in \cite{Sch}, \cite{LaiSchu}. Having no restriction on
the ordering makes it possible to obtain accurate approximations to
the dimension, and even the exact value in many cases. As a consequence,
we give a simple proof of the dimension formula
when $k\geq 4r+1$.
The paper is structured as follows.
In Section \ref{construction} we recall the construction of this chain complex
and some of the properties of the homology modules.
We describe the dimension as the sum of a combinatorial part and the
dimension of a homology module.
This latter term, which happens to be zero in high degree, is always non-negative, so the
formula gives a lower bound for the dimension of the spline space for
any degree $k$.
We present the main result in Section
\ref{sectionUpperBound}, which is an upper bound on the
dimension of the spline space.
In Section \ref{comp} we
compare the formulas of the lower and the upper bound with those
appearing in \cite{Sch}.
The result about the exact dimension
for $k\geq 4r+1$ is proved in Section
\ref{exactdimension}. This latter result and some other examples that we present in
Section \ref{examples}, illustrate the interest of the homology
construction for proving exact dimension formulas.
\section{Construction of the chain complex}\label{construction}
We reproduce some notations and definitions presented in \cite{Local}, restricting them to the case where $\Delta$ is a planar simplicial complex supported on a disk.
Denote by $\Delta^0$ the set of interior faces of $\Delta$, and by
$\Delta_{i}^0$ ($i=0,1,2$) the set of $i$-dimensional interior faces
whose support is not contained in the boundary of $|\Delta|$. We
denote by $f_i^0$ the cardinality of these sets, and by
$\partial\Delta$ the complex consisting of all $1$-faces lying on just
one $2$-face as well as all subsets of them. As it will be convenient
to study the dimension of the vector space $C_k^r(\Delta)$, we embed
$\Delta$ in the plane $\{z=1\}\subseteq\R^3$ and form the cone
$\hat{\Delta}$ over $\Delta$ with vertex at the origin. Denote by
$C_k^r(\hat{\Delta})$ the set of splines on $\hat{\Delta}$ of
$C^r$-smoothness and degree exactly $k$. Then
$C^r(\hat{\Delta}):=\oplus_{k\geq 0}C_k^r(\hat{\Delta})$ is a graded
$\R$-algebra and there is an isomorphism of $\R$-vector spaces between
$C_k^r(\Delta)$ and the elements in $C^{r}(\hat{\Delta})$ of degree
exactly $k$ (\cite{BR1}), in particular
\[\dim C_k^r(\Delta)=\dim C_k^r(\hat{\Delta}).\]
Define $R := \R[x, y, z]$. For an edge $\tau\in\Delta_1^0$, let
$\ell_\tau$ denote a non-zero homogeneous linear form vanishing on
$\hat\tau$, and define the ideal $\J(\beta)$ of $R$ for each simplex
$\beta\in\Delta^0$ as follows:
\begin{align*}
\J(\sigma)&=\langle 0\rangle&&\text{for each} \;\sigma\in\Delta_2^0\\
\J(\tau)&=\langle \ell_{\tau}^{r+1}\rangle &&\text{for each} \;\tau\in\Delta_1^0\\
\J(\gamma)&=\langle \ell_{\tau}^{r+1}\rangle_{\tau\ni\gamma}&&\text{for each} \;\gamma\in\Delta_0^0, \,\tau\in\Delta_{1}^0.
\end{align*}
Consider the chain complex $\mathcal{R}$ defined on $\Delta^0$ as $\mathcal{R}_i=R^{f_{i}^{0}}$, with the usual simplicial boundary maps $\overline{\partial_i}$ used to compute the relative (modulo $\partial\Delta)$ homology with coefficients in $R$ as in \cite{bil}.
Let $\r/\J$ be the chain complex obtained as the quotient of $\r$ by $\J$,
\vspace{0.1cm}
\[0\longrightarrow\bigoplus_{\sigma\in\Delta_2^{0}}\mathcal{R}\xrightarrow{\partial_2}\bigoplus_{{\tau\in\Delta}_1^{0}}\mathcal{R/J}(\tau)\xrightarrow{\partial_1}\bigoplus_{{\gamma\in\Delta}_0^{0}}\mathcal{R/J}(\gamma)\longrightarrow 0\]
\vspace{0.1cm}where the maps $\partial_i$ are induced by the simplicial boundary maps $\overline{\partial_i}$.
The complex $\r/\J$ was introduced by \cite{Local}, and agrees
with the complex studied by Billera except at the vertices.
It was shown in \cite{bil} that $C^r(\hat{\Delta})$ is isomorphic to the top homology module of $\r/\J$
\[H_{2}(\mathcal{R/J}):=\ker(\partial_2).\] Let us consider the short exact sequence of complexes
\[0\longrightarrow\mathcal{J}\longrightarrow\mathcal{R}\longrightarrow\mathcal{R/J}\longrightarrow 0\]
that gives rise to the long exact sequence of homology modules
\begin{align}\label{longsequence}
0\rightarrow H_2(\mathcal{R})\rightarrow &H_2(\mathcal{R}/\J)\rightarrow H_1 (\J)\rightarrow H_1(\mathcal{R})\nonumber\\
&\rightarrow H_1(\mathcal{R}/\J)\rightarrow H_0(\J)\rightarrow H_0(\mathcal{R})\rightarrow H_0 (\mathcal{R}/\J)\rightarrow 0
\end{align}
Since $\Delta$ is supported on a disk, both $H_0(\r)$ and $H_1(\r)$ are zero. Hence the long exact sequence implies that $H_{0}(\mathcal{R/J})$ is also zero and that $H_{1}(\mathcal{R/J})$ is isomorphic to $H_{0}(\mathcal{J})$.
Applying the Euler characteristic equation to the chain complex $\r/\J$
\[\chi(H(\mathcal{R/J}))=\chi(\mathcal{R/J}),\]
and considering the modules in degree exactly $k$, leads to the formula
\begin{equation}\label{eq}
\dim C^r_k(\Delta)=\sum_{i=0}^2 (-1)^i\hspace{-0.1cm}
\sum_{\beta\in\Delta_{2-i}^0}\dim \mathcal{R}/\mathcal{J}(\beta)_k + \dim H_{0}(\J)_k.
\end{equation}
The aim is to determine the modules in the previous formula as functions of known information about the subdivision $\Delta$.
We know that
\begin{align}
&\sum_{\sigma\in\Delta_{2}^{0}}\dim \mathcal{R}_k=f_2^0 \,\binom{k+2}{2}\label{triang}\\
&\sum_{\tau\in\Delta_1^{0}}\dim \mathcal{R}/ \mathcal{J}(\tau)_k=f_1^{0}\,\biggl[\binom{k+2}{2}-\binom{k+2-(r+1)}{2}\biggr].\label{edges}
\end{align}
For the computation of $\dim \mathcal{R}/\mathcal{J}(\gamma)_k$, \cite{family} proposed the following resolution for $\mathcal{R}/\mathcal{J}(\gamma)_k$. Without loss of generality we translate $\gamma$ to the origin and assume that the linear forms in $\mathcal{J}(\gamma)$ involve only variables $x,y$. Letting $\ell_1^{r+1},\dots ,\ell_{t}^{r+1}$ be a minimal
generating set for $\mathcal{J}(\gamma)$, then a free resolution for $\mathcal{R}/\mathcal{J}(\gamma)_k$ is given by:
{\small
\begin{equation}\label{resolution}
0\rightarrow \mathcal{R}(-\Omega-1)^{a}\oplus \mathcal{R}(-\Omega)^{b}\rightarrow\oplus_{j=1}^{t}\mathcal{R}(-r-1){\longrightarrow}
\mathcal{R}\rightarrow \mathcal{R}/\mathcal{J}(\gamma)\rightarrow 0
\end{equation}}where $\Omega-1$ is the socle degree of $\mathcal{R}/\mathcal{J}(\gamma)$; $\Omega$ and the multiplicities $a$ and $b$ are given by
\begin{equation}\label{formulas}
\Omega = \left\lfloor \frac{t\,r}{t-1}\right\rfloor + 1,\quad a=t\,(r+1)+(1-t)\,\Omega,\quad b=t-1-a.
\end{equation}
In the case $t=1$, we take $a=b=\Omega=0$ so that $\mathcal{R}(-\Omega-1)^{a}\oplus \mathcal{R}(-\Omega)^{b}=0$.
Applying this to each vertex $\gamma_i\in\Delta_0^0$, and letting $t_i$, $\Omega_i$, $a_i$ and $b_i$ denote the values for $t$, $\Omega$, $a$ and $b$ at $\gamma_i$ respectively, leads to
\begin{align}\label{vertices}
\dim\bigoplus_{\gamma_i\in\Delta_0^0}\mathcal{R/J}(\gamma_i)_k=\sum_{i=1}^{f_0^0}\biggl[&\binom{k+2}{2}-t_i\binom{k+2-(r+1)}{2}+\nonumber\\
&b_i\,\binom{k+2-\Omega_i}{2}+a_i\,\binom{k+2-(\Omega_i+1)}{2}\biggr].
\end{align}
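The quantities in (\ref{formulas}) and the vertex contributions in (\ref{vertices}) are straightforward to tabulate. The following Python sketch (ours; we adopt the convention that $\binom{s}{2}=0$ for $s<2$, which is implicit in the formulas above) computes $\Omega$, $a$, $b$ and $\dim \mathcal{R/J}(\gamma)_k$ for a vertex with $t$ distinct slopes:
\begin{verbatim}
from math import comb

def c2(s):
    return comb(s, 2) if s >= 2 else 0   # binom(s, 2), zero for s < 2

def omega_a_b(t, r):
    if t == 1:                           # degenerate case noted above
        return 0, 0, 0
    omega = (t * r) // (t - 1) + 1
    a = t * (r + 1) + (1 - t) * omega
    return omega, a, t - 1 - a

def vertex_dim(t, r, k):                 # dim (R / J(gamma))_k
    omega, a, b = omega_a_b(t, r)
    return (c2(k + 2) - t * c2(k + 1 - r)
            + b * c2(k + 2 - omega) + a * c2(k + 1 - omega))

print(omega_a_b(3, 1), vertex_dim(3, 1, 2))   # (2, 2, 0) and 3
\end{verbatim}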
Let us notice that the previous formula does not change when we take $t_i$ as the number of different slopes of the edges containing the vertex $\gamma_i$. Then
(\ref{triang}), (\ref{edges}), and (\ref{vertices}), together with the fact that $\dim H_1(\r/\J)_k\geq 0$, yield the following theorem:
\begin{thm} \label{lowbHOM}
The dimension of $C^{r}_{k}(\Delta)$ is bounded below by
{\small
\begin{align*}
&\dim C_k^r(\Delta)\geq
\binom{k+2}{2}+f_1^{0}\,\binom{k+2-(r+1)}{2}\\
&-\sum_{i=1}^{f_0^0}\biggl[t_i\binom{k+2-(r+1)}{2}-b_i\,\binom{k+2-\Omega_i}{2}-a_i\,\binom{k+2-(\Omega_i+1)}{2}\biggr],
\end{align*}}where $t_{i}$ is the number of different slopes of the edges containing the vertex $\gamma_{i}$, and
\[\Omega_i=\biggl\lfloor\frac{t_i\,
r}{t_i-1}\biggr\rfloor+1,\quad a_i=t_i\, (r+1) +(1-t_i)\,\Omega_i,\quad b_i=t_i-1-a_i.\]
\end{thm}
In \cite{Sck}, it was proved that the homology module $H_1(\mathcal{R/J})_k$ vanishes for sufficiently high degree, thus the lower bound in the latter theorem is actually the exact dimension formula for $C_k^r(\Delta)$ when $k\gg 0$ \cite[Theorem 4.2]{gs}.
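For experimentation, the bound of Theorem \ref{lowbHOM} can be evaluated directly; fed the numbers $\tilde t_i$ defined in the next section instead of the $t_i$, the same routine produces the upper bound of Theorem \ref{upbHOM}. A self-contained Python sketch (ours; the sample data, a square cut by its two diagonals, has one interior vertex with $t_1=2$ and $f_1^0=4$, and the printed value is the bound, not a claimed exact dimension):
\begin{verbatim}
from math import comb

def c2(s):
    return comb(s, 2) if s >= 2 else 0

def bound(k, r, f1_int, slopes):
    # slopes: the list of t_i (lower bound) or of t~_i (upper bound)
    total = c2(k + 2) + f1_int * c2(k + 1 - r)
    for t in slopes:
        if t == 1:                       # the t~_i = 1 case
            total -= c2(k + 1 - r)
            continue
        omega = (t * r) // (t - 1) + 1
        a = t * (r + 1) + (1 - t) * omega
        b = t - 1 - a
        total -= (t * c2(k + 1 - r)
                  - b * c2(k + 2 - omega) - a * c2(k + 1 - omega))
    return total

print(bound(3, 1, 4, [2]))               # 16 for r = 1, k = 3
\end{verbatim}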
\section{An upper bound on the dimension of $C_k^r(\Delta)$}\label{sectionUpperBound}
Let us fix an ordering
$\gamma_{1}, \ldots, \gamma_{f_{0}^{0}}$
for the vertices in $\Delta^0_0$. For each vertex $\gamma_i$, denote by ${N} (\gamma_i)$ the set of edges that contain this vertex, and define
$\tilde{t}_i$ as the number of different slopes of the edges connecting $\gamma_i$ to one of the first $i-1$ vertices in the list or to a vertex on the boundary.
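Computing the numbers $\tilde t_i$ from a concrete triangulation is mechanical. The following Python sketch (ours; the data layout, with integer vertex coordinates and an explicit edge list, is a hypothetical choice) counts, for each interior vertex in the chosen order, the distinct slopes of the edges joining it to boundary vertices or to earlier interior vertices:
\begin{verbatim}
from fractions import Fraction

def slope(p, q):
    # exact slope tag; all vertical edges share the tag 'inf'
    return 'inf' if p[0] == q[0] else Fraction(q[1] - p[1], q[0] - p[0])

def t_tilde(order, coords, edges, boundary):
    # order: interior vertex ids gamma_1, gamma_2, ... in the numbering
    rank = {v: i for i, v in enumerate(order)}
    result = []
    for v in order:
        slopes = set()
        for a, b in edges:
            if v not in (a, b):
                continue
            u = b if a == v else a
            if u in boundary or rank.get(u, len(order)) < rank[v]:
                slopes.add(slope(coords[v], coords[u]))
        result.append(len(slopes))
    return result

# square (ids 0..3 on the boundary) cut by its diagonals; center is 4
coords = {0: (0, 0), 1: (2, 0), 2: (2, 2), 3: (0, 2), 4: (1, 1)}
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 4), (1, 4), (2, 4), (3, 4)]
print(t_tilde([4], coords, edges, {0, 1, 2, 3}))   # [2]
\end{verbatim}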
\begin{thm}\label{upbHOM}
The dimension of $C^{r}_{k}(\Delta)$ is bounded above by
{\small
\begin{align*}
\dim \; &C_k^r(\Delta)\leq
\binom{k+2}{2}+f_1^{0}\,\binom{k+2-(r+1)}{2}\\
&\hspace{-0.7cm}-\sum_{i}\biggl[\tilde{t}_i\binom{k+2-(r+1)}{2}-\tilde{b}_i\,\binom{k+2-\tilde{\Omega}_i}{2}-\tilde{a}_i\,\binom{k+2-(\tilde{\Omega}_i+1)}{2}\biggr]
\end{align*}}with $\tilde{t}_i$ as we have defined above and
\[\tilde\Omega_i=\biggl\lfloor\frac{\tilde{t}_i\, r}{\tilde{t}_i-1}\biggr\rfloor+1,\quad
\tilde{a}_i=\tilde{t}_i\, (r+1)
+(1-\tilde{t}_i)\,\tilde\Omega_i,\quad
\tilde{b}_i=\tilde{t}_i-1-\tilde{a}_i\] if $\tilde{t}_{i}>1$, and
$\tilde{a}_{i}=\tilde{b}_{i}=\tilde{\Omega}_{i}=0$ if $\tilde{t}_{i}=1$.
\end{thm}
\begin{pf}
By the long exact sequence (\ref{longsequence}), and the fact that $H_1(\r)=0$, we have the short exact sequence
\[0\longrightarrow H_2(\r)\longrightarrow H_2(\r/\J)\longrightarrow H_1(\J)\longrightarrow 0\]
The Euler characteristic equation applied to the complex $\r$ implies that $H_2(\r)_k=R_k$. Hence the isomorphism between $C^r(\hat\Delta)$ and the homology module $H_2(\r/\J)$ implies that
\[\dim C_k^r(\Delta)=\dim R_k+\dim H_1(\J)_k\]
where the complex of ideals $\J$ (as defined above) is given by
\[0\longrightarrow\bigoplus_{\tau\in\Delta_1^0}\J(\tau)\xrightarrow{\partial_1}\bigoplus_{\gamma\in\Delta_0^0}\J(\gamma)\longrightarrow 0\]
and where $H_{1} (\J) = \ker \partial_{1} = K_{1}$.
Define $W_1:=\Im(\partial_1)$. By the exact sequence
\[0\longrightarrow K_1 \longrightarrow \bigoplus_{\tau\in\Delta_1^0}\J(\tau)\longrightarrow W_1\longrightarrow 0\]
we get \[\dim C_k^r(\Delta)= \dim R_{k} + \sum_{\tau\in\Delta_1^0}\dim \J(\tau)_k-\dim (W_{1})_k\]
Therefore, to find an upper bound on $\dim C_k^r(\Delta)$ it is enough
to find a lower bound on the dimension of $W_{1}= \Im \partial_{1}$ in
degree $k$.
Let us consider the map
\[
\delta_1:\bigoplus_{\tau= (\gamma,\gamma') \in\Delta_1^0} \J(\tau)\, [\tau]
\to
\bigoplus_{\gamma\in\Delta_0^0}
\bigoplus_{\tau \in N (\gamma)}
R\, [\tau|\gamma]
\]
such that $\delta_{1} ([\tau]) = [\tau|\gamma] -[\tau|\gamma']$ for
$\tau= (\gamma,\gamma') \in \Delta_{1}^{0}$,
and the map
\[\varphi_1:
\bigoplus_{\gamma\in\Delta_0^0}
\bigoplus_{\tau \in N (\gamma)}
R\, [\tau|\gamma]
\to
\bigoplus_{\gamma\in\Delta_0^0}
R\, [\gamma] \]
with
\begin{align*}
{\varphi}_{1} ([\tau| \gamma]) &= [\gamma] \ \mathrm{if}\ \gamma\in \Delta^{0}_{0},\\
&= 0 \ \;\; \mathrm{if}\ \gamma\not\in \Delta^{0}_{0}.
\end{align*}
Then, we have $\partial_{1} = \varphi_{1} \circ \delta_{1}$.
We consider now the map
\[\pi_1:
\bigoplus_{\gamma\in\Delta_0^0}
\bigoplus_{\tau \in N (\gamma)}
R\, [\tau|\gamma]
\to
\bigoplus_{\gamma\in\Delta_0^0}
\bigoplus_{\tau \in N (\gamma)}
R\, [\tau|\gamma]
\]
with
${\pi}_{1} ([\tau| \gamma]) = 0$ if $\gamma$ is the endpoint
of $\tau$ with the biggest index,
and ${\pi}_{1} ([\tau| \gamma]) = [\tau| \gamma]$ otherwise.
We set $\tilde{\partial}_{1} = \varphi_{1} \circ \pi_{1} \circ \delta_{1}$.
For $\gamma\in \Delta_{0}^{0}$, we denote by $\tilde{N} (\gamma)$ the set of interior edges $\tau$ connecting $\gamma$
to another vertex which is not of bigger index. Let $\tilde{\J} (\gamma) = \sum_{\tau\in \tilde{N} (\gamma)} R \, \ell_{\tau}^{r+1} \subset \J (\gamma)$.
By construction, we have
$$
\Im \tilde{\partial}_{1} = \bigoplus_{\gamma\in\Delta_0^0} \widetilde{\J}(\gamma)\, [\gamma]
$$
and $\dim (W_{1})_{k} = \dim (\Im \partial_{1})_{k} \ge \dim (\Im \tilde{\partial}_{1})_{k}.$
Using formulas \eqref{triang}, \eqref{edges} and the resolution
\eqref{resolution} applied to $\tilde{\J} (\gamma)$, we obtain the
upper bound
\begin{align*}
&\dim C_k^r(\Delta)\leq
\binom{k+2}{2}+f_1^{0}\,\binom{k+2-(r+1)}{2}\\
&-\sum_{i=1}^{f_0^0}\biggl[\tilde{t}_i\binom{k+2-(r+1)}{2}-\tilde{b}_i\,\binom{k+2-\tilde{\Omega}_i}{2}-\tilde{a}_i\,\binom{k+2-(\tilde{\Omega}_i+1)}{2}\biggr],
\end{align*}
with $\tilde{t}_{i} = |\tilde{N} (\gamma_{i})|$ and $\tilde{\Omega}_{i},
\tilde{a}_{i}, \tilde{b}_{i}$ defined as in \eqref{formulas}.
\end{pf}
As an immediate consequence of this theorem we mention the following result.
\begin{cor}\label{>r+1}
If for a numbering of the vertices in $\Delta_0^0$, either $t_i=\tilde t_i$ or $\tilde t_i\geq {r+1}$ for every vertex $\gamma_i$, then the upper bound we get equals the lower bound, and so we obtain the exact dimension formula for the spline space.
\end{cor}
\begin{pf}
We compare the terms corresponding to each interior vertex in both
formulas. If $t_i=\tilde t_i$ then they are trivially the same. By
definition (\ref{formulas}), for $t\geq r+1$, $\Omega$ and $a$ are
both constant and equal to $r+1$. Hence if $r+1\leq\tilde t_i$ $(\leq
t_i)$, the terms in the binomials are the same when we compare them
in the formulas of Theorems \ref{lowbHOM} and \ref{upbHOM}. Since the
respective terms in $t_i$ and $\tilde t_i$ cancel out, we get
the equality of the bounds.
\end{pf}
\section{The bounds on $\dim C_k^r(\Delta)$ given by \cite{Sch}}\label{comp}
In this section we compare the bounds on $\dim C_k^r(\Delta)$ found by \cite{Sch}, with the lower and upper bounds given in the previous two sections.
With the notation as before, the upper bound presented by Schumaker can be stated as follows.
\begin{thm}\cite[Theorem 2.1]{Sch}\label{SchTheorem}
Suppose that the vertices $\gamma_i\in\Delta^0_ 0$ of the partition are numbered in such a way that each pair of consecutive vertices in the list are corners of a common triangle in $\Delta$. For each $\gamma_i$ define $\tilde{t}_i$ as the number of edges with different slopes joining the vertex $\gamma_i$ to a vertex in the boundary of $\Delta$ or to one of the first $i-1$ vertices. Then
{\small
\begin{align}\label{upperboundSchumaker}
\dim C_k^r(\Delta)&\leq\binom{k+2}{2}+ f_1^0\binom{k-r+1}{2} \nonumber\\
&-f_0^0\biggl[\binom{k+2}{2}-\binom{r+2}{2}\biggr]+\sum_{i=1}^{f_0^0}\hspace{0.1cm}\sum_{j=1}^{k-r}(r+j+1-j\cdot\tilde t_i)_+.
\end{align}}
\end{thm}
\vspace{0.2cm}
In the same article, the author also presented a lower bound on $\dim C_k^r(\Delta)$. The formula can be obtained by replacing $\tilde t_i$ by $t_i$ in (\ref{upperboundSchumaker}). We shall prove that that formula for the lower bound is the same as the one we presented in Theorem \ref{lowbHOM}. Essentially the same proof shows that the two upper bound formulas, the one in the previous theorem and the one we presented in Theorem \ref{upbHOM}, coincide, with the exception that the upper bound presented by Schumaker can only be applied for certain numberings of the vertices. That restriction sometimes makes it impossible to find an upper bound. In Section \ref{examples} we will include some examples of such situations, and also some cases where not having that restriction leads to the exact dimension of the space.\\
\\
\textbf{Remark.\;}The above cited article defined $\tilde{t_i}$ as the
number of slopes of the edges containing $\gamma_i$ but not containing
any of the first $i-1$ vertices in the list. This coincides with the
definition of $\tilde t_i$ we consider here, except that the reverse
ordering of the vertices in $\Delta^0_0$ is used.
\begin{lem}\label{relation-ti}
Let \,$t\geq 2$ and \,$\Omega$ defined as in (\ref{formulas}) above. The notations in the upper bound formulas (and the lower bounds, respectively) are linked in the following way: if\quad$\Omega= r+\ell$\; then\quad$r+j+1-j\cdot t>0$ only when $j\leq\ell-1$.
\end{lem}
\begin{pf}
For any value of $t\geq 2$ we have \[1<\frac{t}{t-1}\leq 2\,.\] Thus, \;$r+1\leq\Omega\leq 2r+1$, and we can write \;$\Omega=r+\ell$\; for some integer $\ell$\; between 1 and $r+1$. The interval of $t$ where the value of $\Omega$ is $r+\ell$ is given as follows:
\begin{equation}\label{valueOmega}
\begin{cases}
\ST\frac{r+\ell}{\ell}<t\leq\frac{r+(\ell-1)}{\ell-1}\hspace{0.5cm}&\mbox{when\quad$\ell\geq 2$ }\\
\ST\; t>r+1\hspace{0.5cm}&\mbox{when\quad$\ell =1$.}
\end{cases}
\end{equation}
On the other hand, for fixed $r$ and $t$, the condition
\begin{equation}\label{ineq}
r+j+1-j\cdot t\geq 1
\end{equation}
is satisfied if and only if \;$t\leq (r+j)/j$.
Thus, the biggest number $j$ subject to condition (\ref{ineq}) must satisfy
\[\frac{r+(j+1)}{j+1}< t\leq\frac{r+j}{j}\,.\]
From (\ref{valueOmega}), the previous relation holds if and only if $\Omega=r+(j+1)$.
\end{pf}
Let \,$t\geq 2$ and \,$\Omega$, $a$ and $b$ be defined as in (\ref{formulas}). We consider the following two formulas:
{\small
\begin{equation}\label{homverticesformula}
t\binom{k+2-(r+1)}{2}-b\,\binom{k+2-\Omega}{2}-a\,\binom{k+2-(\Omega+1)}{2}\,
\end{equation}}\\
{\small
\begin{equation}\label{schverticesformula}
\binom{k+2}{2}-\binom{r+2}{2}-\sum_{j=1}^{k-r}(r+j+1-j\cdot t)_+\,.
\end{equation}}
\begin{lem}\label{ell=1}
If\, $t>r+1$ and $k\geq r$ then the formulas (\ref{homverticesformula}) and (\ref{schverticesformula}) are equal.
\end{lem}
\begin{pf}
By (\ref{valueOmega}) we know that if $t>r+1$ one has $\Omega=r+1$. From the definition in (\ref{formulas}), we have \,$a=r+1$ and \,$b=t-(r+2)$. Thus, (\ref{homverticesformula}) reduces to
\[{
\begin{cases}
0&\text{if}\; k+1-r<2\quad(k=r)\\
\ST r+2&\text{if}\; k-r=1\\
\binom{k+2}{2}-\binom{r+2}{2}&\text{otherwise}.\label{caso1Hom}\\
\end{cases}}
\]
In each case this equals $\binom{k+2}{2}-\binom{r+2}{2}$. By Lemma \ref{relation-ti}, the relation (\ref{ineq}) is not satisfied by any value of $j$ in this case, so (\ref{schverticesformula}) also simplifies to that expression.
\end{pf}
\begin{lem}\label{ell>=2}
Let $t$ be an integer $\geq 2$. If \,$t\leq r+1$ and $k\ge r$ then the formulas (\ref{homverticesformula}) and (\ref{schverticesformula}) are equal.
\end{lem}
\begin{pf}
By Lemma \ref{relation-ti} we know that \,$\Omega=r+\ell$ for some
integer $\ell \geq 2$. Depending on the value of $\ell$ the binomials in (\ref{homverticesformula}) could be automatically zero. We need to consider three possible situations $k-r\geq\ell+1$, $k-r=1$ and $k-r<\ell$. The formula reduces to a different expression in each case, but it is not difficult to check that they are respectively equivalent to the expressions we get from (\ref{schverticesformula}).
\end{pf}
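The verification left to the reader in the last proof is easily done by machine. The following Python sketch (ours) compares the vertex terms (\ref{homverticesformula}) and (\ref{schverticesformula}) over a range of parameters:
\begin{verbatim}
from math import comb

def c2(s):
    return comb(s, 2) if s >= 2 else 0

def hom_term(t, r, k):          # the vertex term (homology bound)
    omega = (t * r) // (t - 1) + 1
    a = t * (r + 1) + (1 - t) * omega
    b = t - 1 - a
    return t * c2(k + 1 - r) - b * c2(k + 2 - omega) - a * c2(k + 1 - omega)

def sch_term(t, r, k):          # the vertex term (Schumaker's bound)
    return c2(k + 2) - c2(r + 2) - sum(max(r + j + 1 - j * t, 0)
                                       for j in range(1, k - r + 1))

assert all(hom_term(t, r, k) == sch_term(t, r, k)
           for r in range(6) for k in range(r, r + 12) for t in range(2, 12))
print("vertex terms agree on the tested range")
\end{verbatim}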
\begin{prop}\label{proplower}
The lower bound on $\dim C_k^r(\Delta)$ given in Theorem \ref{lowbHOM}
coincides with the lower bound in \cite[Theorem 3.1, p.~256]{Sch}.
\end{prop}
\begin{pf}
Since any interior vertex of $\Delta$ is contained in at least two edges with different slopes, we have $t_i\geq 2$ for every $i=1,2,\dots, f_0^0$. Collecting the vertices $\gamma_i$ with the same value $t_i$, the statement follows directly by applying Lemmas \ref{ell=1} and \ref{ell>=2} for those values of $t_i$.
\end{pf}
\begin{prop}\label{propupper}
If the vertices in $\Delta^0$ are numbered as in Theorem
\ref{SchTheorem}, then the upper bound on $\dim C_k^r(\Delta)$ given
in Theorem \ref{upbHOM} coincides with the bound
(\ref{upperboundSchumaker}) in \cite[Theorem 2.1, p.~252]{Sch}.
\end{prop}
\begin{pf}
This follows similarly to the previous proposition. By collecting the vertices in $\Delta^0$ with the same value $\tilde t_i$, we apply Lemmas \ref{ell=1} and \ref{ell>=2} with the values $\tilde t_i$. The only case that remains to be considered is $\tilde t_i=1$.
It corresponds to the third term in the formula in Theorem \ref{upbHOM}. Since
{\small
\begin{align*}
\binom{k+2-(r+1)}{2}=\binom{k+2}{2}-\binom{r+2}{2}-\sum_{j=1}^{k-r}(r+1)
\end{align*}}
the statement follows.
\end{pf}
We summarize the results in the following theorem.
Let us denote by $\textsc {lbh}$, $\textsc {ubh}$ and $\textsc{lbs}$, $\textsc{ubs}$ the respective lower and upper bounds obtained by Theorems \ref{lowbHOM} and \ref{upbHOM} and the ones obtained in \cite{Sch}.
\begin{thm}Let $\Delta$ be a connected, finite two dimensional simplicial complex, supported on a disk. Then,
\[\textsc{lbh}(\dim C_k^r(\Delta))= \textsc{lbs}(\dim C_k^r(\Delta))\]
\[\textsc{ubh}(\dim C_k^r(\Delta))\leq \textsc{ubs}(\dim C_k^r(\Delta))\,.\]
\end{thm}
\begin{pf}
This follows from Propositions \ref{proplower} and \ref{propupper}, and the fact that the formula in Theorem \ref{upbHOM} can be applied to any numbering on the interior vertices of $\Delta$.
\end{pf}
\section{Dimension formula for degree $k\geq 4r+1$}\label{exactdimension}
In this section we present an alternative proof for the dimension
formula of the spline space $C_k^r(\Delta)$ for $k\geq 4r+1$. The
proof is substantially shorter than the one presented in \cite{alfschu}.
We include the following notation: let $\Delta_i$ and
$\Delta_i^{\partial}$ be (respectively) the sets of $i$-dimensional
faces and $i$-dimensional boundary faces. Denote by $f_i(\Delta)$ and
$f_i^{\partial}(\Delta)$ the cardinalities of the preceding sets, respectively.
We begin by stating the following lemma which we will need later on.
\begin{lem}\cite[Lemma 3.3]{Local}\label{lema}
If $\Delta$ is a triangulated region in $\R^2$, then there exists a total order on $\Delta_0$ such that for every $\gamma$ in $\Delta^0_0$, there exist vertices $\gamma'$, $\gamma''$ adjacent to $\gamma$, with $\gamma\succ \gamma',\gamma''$, and such that $\overline{\gamma\gamma'}$, $\overline{\gamma\gamma''}$ have distinct slopes.
\end{lem}
For an ordering on $\Delta_0$ as in the previous lemma, we assign indices to the vertices in such a way that $\gamma_i\succ \gamma_j$ when\ $i> j$. The first indices $1,2,\dots ,f_0^{\partial}$ are assigned to the vertices lying on the boundary. To those interior vertices which are joined to the boundary by two or more edges of distinct slope we assign the indices $f_0^{\partial}+1,\dots,n$.
Let us recall the following notation and remarks from \cite{Local}.
For each interior vertex $\gamma$, and each $f\in\J(\gamma)$, let $f[\gamma]$ denote the corresponding element in $H_0(\J)$. Then $H_0(\J)$ is generated by $\{f[\gamma]\,\mid f\in\J(\gamma)\}$. By definition of $\J(\gamma)$, we know that
\begin{equation}\label{sum}
f[\gamma]=\sum_{\tau\ni \gamma}\ell_\tau^{r+1}f_\tau[\gamma]
\end{equation}
for some polynomials $f_\tau\in R$.
Notice that if $\tau$ is an edge whose vertices are $\gamma$ and $\gamma'$ then
\begin{equation}\label{opvertexrelation}
\ell_\tau^{r+1}f\,[\gamma]=\ell_\tau^{r+1}f\,[\gamma']
\end{equation}
in $H_0(\J)$; in particular $\ell_\tau^{r+1}f\,[\gamma]=0$ when $\tau$ is an edge connecting $\gamma$ to the boundary.
Here is another lemma that we need in order to prove that $H_{0} (\J)_{k}=0$ for degree $k\geq 4r+1$:
\begin{lem}\label{lem:5.3} Let $\ell_1,\ell_2$ and $\ell_3$ be three equations of
distinct lines through a point $p$ and $L$ the equation of a line that
does not contain the point $p$. Then for any polynomial $g$ of degree
$d \ge r+1$, there exist $u,v \in R$ of degree $d$ and $w\in R$
of degree $r-1$ such that
$$
\ell_{3}^{r+1} \, g = \ell_{1}^{r+1}\, u + \ell_{2}^{r+1}\, v + L^{d-r+1}\, \ell_{3}^{r+1}\, w,
$$
with $w=0$ in the case $r=0$.
\end{lem}
\begin{pf}
The case $r=0$ is direct from the linear dependency of $\ell_{1}, \ell_{2}$ and $\ell_{3}$.
Suppose that $r>0$. Since $\ell_1$, $\ell_2$ and $L$ are linearly independent, we can make the following change of coordinates:
\begin{align*}
\ell_1&= x\,;&\ell_3&=x+ay\quad\text{for some $a\neq 0$\,;}\\
\ell_2&=y\,;&L&=z\,.
\end{align*}
Let $g=x^i y^j z^k$ be a monomial such that $i+j+k=d$.
If $k\leq d-r$ then $i+j\ge r$, and the polynomial $\ell_3^{r+1}g$ lies
in the ideal $\langle x^{r+1}, y^{r+1}\rangle$.
If $k\geq d-r+1$ then $g$ is a multiple of $z^{d-r+1}=L^{d-r+1}$. Thus we can write
$$
\ell_{3}^{r+1} \, g = \ell_{1}^{r+1} \, u' + \ell_{2}^{r+1}\, v' + L^{d-r+1} \, w',
$$
for some polynomials $u', v'\in R$ of degree $d$ and \,$w'\in R$ of degree
$2\,r$. As $L^{d-r+1} w'$ is in the ideal $\langle \ell_{1}^{r+1}, \ell_{2}^{r+1},\ell_{3}^{r+1}\rangle$
and as $L$ is a non-zero divisor modulo this ideal, we deduce that
$w'= \ell_{1}^{r+1} \, u'' + \ell_{2}^{r+1}\, v'' + \ell_{3}^{r+1}\, w''$, with
$u'',v'',w'' \in R$ of degree $r-1$.
Collecting the coefficients of $\ell_{1}^{r+1}, \ell_{2}^{r+1}$, we obtain
the desired decomposition:
$$
\ell_{3}^{r+1} \, g = \ell_{1}^{r+1} \, (u' + L^{d-r+1}\,u'') + \ell_{2}^{r+1}\,
(v'+ L^{d-r+1}\, v'') + L^{d-r+1} \,\ell_{3}^{r+1}\, w''.
$$
\end{pf}
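The lemma can also be checked by computer algebra on sample data. The following sketch uses sympy's Gr\"obner basis routines (ours; the particular lines, the degree $d$ and the monomial $g$ are arbitrary choices, and only the ideal membership is tested, not the degree bounds on $u$, $v$, $w$):
\begin{verbatim}
from sympy import symbols, groebner, expand

x, y, z = symbols('x y z')
r, d = 1, 3                                # smoothness r, degree d >= r + 1
l1, l2, l3, L = x, y, x + 2*y, z           # lines through p; L misses p
I = groebner([l1**(r+1), l2**(r+1),
              expand(L**(d-r+1) * l3**(r+1))], x, y, z)
g = x * y * z                              # a monomial of degree d
print(I.contains(expand(l3**(r+1) * g)))   # expected: True
\end{verbatim}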
We use this lemma to prove the following result:\\
\begin{thm}\label{4r+1}
The dimension of $C_k^{r}(\Delta)$ when $k\geq 4r+1$ is given by the
lower bound formula of Theorem \ref{lowbHOM}.
\end{thm}
\begin{pf}
From (\ref{eq}), and the formulas for the dimension of the modules
(\ref{triang}), (\ref{edges}) and (\ref{vertices}), it suffices to
show that $H_0(\J)_k=0$ for $k\ge 4 r+1$. Equivalently, we need to show
that $f[\gamma]=0$ in $H_0(\mathcal{J})_k$ for all $\gamma$ and $f
\in\J(\gamma)_{k}$. Ordering the vertices as in Lemma \ref{lema}, we
consider the first interior vertex in the ordering, and denote it by
$\gamma$. Let $\tau_1, \tau_2, \dots, \tau_t$ be the edges (not
necessarily with different slopes) containing $\gamma$ and
$\omega_1$, $\omega_2$, \dots, $\omega_t$ be, respectively, the endpoints
of the $\tau_{i}$ distinct from $\gamma$. We number
first the edges $\tau_1$ and
$\tau_2$ that connect $\gamma$ to the boundary, and the remaining
edges counterclockwise, starting from $\tau_2$. Let us denote by
\,$\ell_1,\ell_2,\dots, \ell_t$ (respectively) equations of
these edges and by $L_{i}$ equations of the edges connecting $\omega_i$ to
$\omega_{i-1}$ for $i=2,\dots, t$.
\begin{figure}[!ht]
\scalebox{1.6}
{\hspace{1.5cm}
\begin{pspicture}(0,-1.549375)(5.490625,1.549375)
\psline[linewidth=0.015cm](2.6734376,-0.13375)(1.8734375,-1.13375)
\psline[linewidth=0.015cm](2.0734375,1.16625)(2.6734376,-0.13375)
\psline[linewidth=0.015cm](1.2734375,0.06625)(2.6734376,-0.13375)
\psline[linewidth=0.015cm](1.8734375,-1.13375)(1.2734375,0.06625)
\psline[linewidth=0.015cm](1.2734375,0.06625)(2.0734375,1.16625)
\psline[linewidth=0.015cm](2.6734376,-0.13375)(3.6734376,-1.13375)
\psline[linewidth=0.015cm,linecolor=MidnightBlue](1.8734375,-1.13375)(3.6734376,-1.13375)
\psline[linewidth=0.015cm](2.6734376,-0.13375)(3.7734375,0.76625)
\psline[linewidth=0.015cm](3.7734375,0.76625)(2.0734375,1.16625)
\psline[linewidth=0.015cm](3.6734376,-1.13375)(3.7734375,0.76625)
\psdots[dotsize=0.06](1.8734375,-1.13375)
\psdots[dotsize=0.06](3.6734376,-1.13375)
\psdots[dotsize=0.06](3.7734375,0.76625)
\psdots[dotsize=0.06](2.0734375,1.16625)
\psdots[dotsize=0.06](1.2734375,0.06625)
\psdots[dotsize=0.07](2.6734376,-0.13375)
\usefont{T1}{ppl}{m}{n}
\rput(2.3,-0.9){\tiny $\ell_1$}
\usefont{T1}{ppl}{m}{n}
\rput(3.25,-0.9){\tiny $\ell_2$}
\usefont{T1}{ppl}{m}{n}
\rput(3.1,0.46125){\tiny $\ell_3$}
\usefont{T1}{ppl}{m}{n}
\rput(2.15,0.4){\tiny $\ell_{i-1}$}
\usefont{T1}{ppl}{m}{n}
\rput(1.95875,-0.2){\tiny $\ell_i$}
\usefont{T1}{ppl}{m}{n}
\rput(3,1.15){\tiny $L_{i-1}$}
\usefont{T1}{ppl}{m}{n}
\rput(1.5,0.76125){\tiny $L_i$}
\usefont{T1}{ppl}{m}{n}
\rput(3.94,-0.15){\tiny $L_3$}
\usefont{T1}{ppl}{m}{n}
\rput(2.9,-0.15){\tiny $\gamma$}
\usefont{T1}{ppl}{m}{n}
\rput(4.02,0.85){\tiny $\omega_3$}
\usefont{T1}{ppl}{m}{n}
\rput(1.7,1.15){\tiny $\omega_{i-1}$}
\usefont{T1}{ppl}{m}{n}
\rput(1.08,0.16125){\tiny $\omega_i$}
\usefont{T1}{ppl}{m}{n}
\rput(1.70875,-1.33875){\tiny $\omega_1$}
\usefont{T1}{ppl}{m}{n}
\rput(3.70875,-1.33875){\tiny $\omega_2$}
\end{pspicture}
}
\caption{Notation in Theorem \ref{4r+1}.}\label{notation near gamma}
\end{figure}
We are going to prove by induction on $j>2$ that for any homogeneous polynomial $f$ of
degree $\ge 3 r$, we have
\begin{align*}
\bullet\;&\ell_{j}^{r+1}\,f\, [\omega_{j}] = \ell_{j}^{r+1} f[\gamma]= 0,\\
\bullet \;& L_{j}^{r+1}\,f [\omega_{j}] =0
\end{align*}
in $H_0(\J)$.
Let us prove it first for $j=3$. Let $f\in R_{k}$ with $k\geq 3r$.
By construction $\omega_2$ is on the boundary, thus we have
$L_{3}^{r+1}f\,[\omega_3]=L_{3}^{r+1}f\,[\omega_2]=0$.
By Lemma \ref{lem:5.3},
$$
\ell_3^{r+1}\,f = \ell_1^{r+1} u + \ell_2^{r+1} v + L_{3}^{2r+1} \ell_{3}^{r+1} w
$$
for some polynomials $u,v, w\in R$.
Then we have
\begin{eqnarray*}
{\ell_3^{r+1}\, f\, [\gamma]}
& = & \ell_1^{r+1} u\, [\gamma] + \ell_2^{r+1} v \, [\gamma]
+ L_{3}^{2\,r+1} \ell_{3}^{r+1} w \, [\gamma] \\
&=&
L_{3}^{2\,r+1} \ell_{3}^{r+1}\, w\, [\omega_{3}] = 0.
\end{eqnarray*}
Let us take now $i>2$ and assume that the induction hypothesis is true
for $3 \leq j<i$.
We consider first\, $\ell_i^{r+1}f\,[\omega_i]$ with $f$ homogeneous of degree
$\ge 3r$. By Lemma
\ref{lem:5.3} applied to $\ell_{1}, \ell_{2}, \ell_{3}$ and $L_{i}$, we have
$$
\ell_{i}^{r+1}\, f = \ell_{1}^{r+1} \, u + \ell_{2}^{r+1} \, v
+ L_{i}^{2 r+1} \, \ell_{i}^{r+1} w,
$$
for some polynomials $u,v,w\in R$. Then we have
\begin{eqnarray*}
\ell_{i}^{r+1}\, f \, [\gamma] &=& \ell_{1}^{r+1} \, u \, [\gamma]+
\ell_{2}^{r+1} \, v \, [\gamma] + L_{i}^{2 r+1} \, \ell_{i}^{r+1} w \,[\gamma]\\
&=& L_{i}^{2 r+1} \, \ell_{i}^{r+1} w \,[\omega_{i}]
= L_{i}^{2 r+1} \, \ell_{i}^{r+1} w \,[\omega_{i-1}].
\end{eqnarray*}
As $L_i^{2r+1}$ is in the ideal generated by $\ell_{i-1}^{r+1}$, $L_{i-1}^{r+1}$,
by the induction hypothesis, we have
$$
L_{i}^{2 r+1} \, \ell_{i}^{r+1} w \,[\omega_{i-1}]=0.
$$
This proves our claim for $\ell_i^{r+1}\,f\,[\omega_i]=0$.
Let us consider now
$L_i^{r+1}\,f\,[\omega_{i}]=L_i^{r+1}f\,[\omega_{i-1}]$ with $f$
homogeneous of degree $\geq 3r$.
By Lemma \ref{lem:5.3},
$$
L_{i}^{r+1}\, f = \ell_{i-1}^{r+1} \, u + L_{i-1}^{r+1} \, v
+ \ell_{i}^{r+1} \, L_{i}^{r+1}\, w,
$$
for some polynomials $u,v,w\in R$. We deduce that
\begin{eqnarray*}
L_{i}^{r+1}\, f\, [\omega_{i-1}]
&=& \ell_{i-1}^{r+1} \, u\, [\omega_{i-1}]
+ L_{i-1}^{r+1} \, v \, [\omega_{i-1}]
+ \ell_{i}^{r+1} \, L_{i}^{r+1}\, w \, [\omega_{i-1}] \\
&=&
\ell_{i}^{r+1} \, L_{i}^{r+1}\, w \, [\omega_{i}] =
\ell_{i}^{r+1} \, L_{i}^{r+1}\, w \, [\gamma] = 0
\end{eqnarray*}
by the induction hypothesis and the previous computation.
This concludes the induction proof and shows that for any $f \in \J (\gamma)_{k}$ with $k\geq 4r+1$,
we have
\begin{align*}
f\, [\gamma]&=\sum_{i=1}^t \ell_i^{r+1}f_i[\gamma]=\sum_{i=3}^t \ell_i^{r+1}f_i[\gamma]\\
& = 0.
\end{align*}
Therefore $H_{0} (\J)_{k}=0$ and the dimension of $C^{r}_{k} (\Delta)$ is
given by the lower bound when $k\geq 4r+1$.
\end{pf}
\section{Examples and remarks}\label{examples}
In this section we compute the dimension formula for some complexes. We prove that the dimension is in fact given by the corresponding lower bound of Theorem \ref{lowbHOM}. In the first example we show directly that $H_0(\J)_k=0$. In the second example we number the vertices in such a way that the upper bound we obtain agrees with the lower one. The subdivisions we consider are two of the so-called Powell-Sabin subdivisions (\cite{ps}).
\begin{example} Powell-Sabin 12-split.
\end{example}
Let $\Delta$ be the simplicial complex supported on a triangle $|\Delta|$ subdivided into twelve smaller triangles, as in Figure \ref{12split}.
\begin{figure}[ht!]
\centering
\scalebox{0.9}
{
\begin{pspicture}(0,-2.01)(6.01,2.01)
\psline[linewidth=0.010cm](0.6,-2.0)(3.0,2.0)
\psline[linewidth=0.010cm](3.0,2.0)(5.4,-2.0)
\psline[linewidth=0.010cm](5.4,-2.0)(0.6,-2.0)
\psline[linewidth=0.010cm](0.6,-2.0)(4.2,0.0)
\psline[linewidth=0.010cm](4.2,0.0)(1.8,0.0)
\psline[linewidth=0.010cm](1.8,0.0)(5.4,-2.0)
\psline[linewidth=0.010cm](3.0,2.0)(3.0,-2.0)
\psline[linewidth=0.010cm](3.0,-2.0)(4.2,0.0)
\psline[linewidth=0.010cm](3.0,-2.0)(1.8,0.0)
\usefont{T1}{ppl}{m}{n}
\rput(3.4,-0.67){\scriptsize $\gamma_0$}
\usefont{T1}{ppl}{m}{n}
\rput(2.1,-0.9){\scriptsize $\gamma_1$}
\usefont{T1}{ppl}{m}{n}
\rput(3.2,0.2){\scriptsize $\gamma_2$}
\usefont{T1}{ppl}{m}{n}
\rput(3.95,-0.9){\scriptsize $\gamma_3$}
\usefont{T1}{ppl}{m}{n}
\rput(1,1.5){\Large $\Delta$}
\usefont{T1}{ppl}{m}{n}
\rput(1.5,-1.3){\scriptsize $\tau_j$}
\usefont{T1}{ppl}{m}{n}
\rput(2.2,-1.6){\small $\sigma_i$}
\psdots[dotsize=0.1,linecolor=MidnightBlue](3.0,-0.65)
\psdots[dotsize=0.1,linecolor=RedOrange](2.39999,-0.984)
\psdots[dotsize=0.1,linecolor=RedOrange](3.609,-0.984)
\psdots[dotsize=0.1,linecolor=RedOrange](3.0,0.0)
\end{pspicture}
}
\caption{Powell-Sabin 12-split.}\label{12split}
\end{figure}
Let us note that by using any numbering of $\Delta_0^0$, the upper bound we get from Theorem \ref{upbHOM} equals the lower bound, leading easily to the dimension formula. On the other hand, it is not possible to find a numbering of the interior vertices of $\Delta$ with the condition required to apply Theorem \ref{SchTheorem}. Hence it is not possible to find an upper bound for this spline space by using the formula of Schumaker.
By the remarks in Section \ref{exactdimension}, it is easy to check that $ \ell_j^{r+1}f\,[\gamma_i]=0$ for every $\gamma_i$ and $\ell_j^{r+1}f\,\in \J(\gamma_i)$. Thus, for any $k$ and any $r\leq k$, the homology module $H_0(\J)_k$ is zero and the dimension of the spline space $C_k^r(\Delta)$ is given by the formula in Theorem \ref{lowbHOM}.
This example is an instance of a triangulation where all
edges are pseudoboundaries (in the terminology of \cite{family}), and the
computation also follows by Lemma 2.5 of \cite{family}.
\begin{example} Powell-Sabin 6-split.
\end{example}
Let $\Delta$ be the simplicial complex supported on a simply connected triangulated region $|\Delta|$ in $\R^2$. Assume the triangulation consists of $f_0$ vertices (interior and boundary) and $f_i^0$ $i$-dimensional interior faces ($i=0,1,2$), where $f_2^0$ is taken as the number of all the triangles $\sigma_i$ in the subdivision.
We refine this triangulation by subdividing each triangle into 6 triangles to get $\tilde\Delta$, as follows.
\begin{enumerate}
\item In each triangle $\sigma_i\in\Delta^0_2$ choose an interior point $\nu_i$, in such a way that if two triangles \,$\sigma_i$, $\sigma_j$ have an edge $\tau$ in common, then the line joining $\nu_i$ and $\nu_j$ intersects $\tau$ at an interior point $\mu_{ij}$.
\item Join each new point $\nu_i$ to the vertices of the triangle $\sigma_i$, and to the points $\mu_{ij}$ on the edges of $\sigma_i$.
\item For the triangles having an edge (or more) on the boundary, choose a point in the interior of each such edge and join it with $\nu_i$.
\end{enumerate}
\begin{figure}[!ht]
\scalebox{0.8}
{\begin{pspicture}(-2,10.063)(26.71,6.399)
\definecolor{RoyalPurple}{rgb}{0.28627450980392155,0.21176470588235294,0.3411764705882353}
\definecolor{color2746}{rgb}{0.06666666666666667,0.4627450980392157,0.2}
\definecolor{color2803}{rgb}{0.3843137254901961,0.6901960784313725,0.03137254901960784}
\psdots[dotsize=0.11,linecolor=gray](4.641875,8.7)
\psdots[dotsize=0.11,linecolor=gray](2.741875,7.8)
\psdots[dotsize=0.11,linecolor=gray](3.141875,6.6)
\psdots[dotsize=0.11,linecolor=gray](1.541875,5.9)
\psdots[dotsize=0.11,linecolor=gray](1.941875,4.8)
\psdots[dotsize=0.11,linecolor=gray](3.241875,4.4)
\psdots[dotsize=0.11,linecolor=gray](4.841875,5.0)
\psdots[dotsize=0.11,linecolor=gray](6.341875,4.2)
\psdots[dotsize=0.11,linecolor=gray](7.241875,5.5)
\psdots[dotsize=0.11,linecolor=gray](8.641875,6.2)
\psdots[dotsize=0.11,linecolor=gray](9.941875,5.8)
\psdots[dotsize=0.11,linecolor=gray](10.341875,4.5)
\psdots[dotsize=0.11,linecolor=gray](10.541875,7.4)
\psdots[dotsize=0.11,linecolor=gray](10.141875,8.3)
\psdots[dotsize=0.11,linecolor=gray](8.941875,8.2)
\psdots[dotsize=0.11,linecolor=gray](8.041875,9.0)
\psdots[dotsize=0.11,linecolor=gray](6.441875,8.0)
\psdots[dotsize=0.11,linecolor=gray](5.941875,6.9)
\psdots[dotsize=0.11,linecolor=gray](4.741875,6.4)
\psline[linewidth=0.010cm,linecolor=gray](27.741875,-9.6)(27.641874,-9.4)
\psline[linewidth=0.010cm,linecolor=gray](1.541875,5.9)(1.941875,4.8)
\psline[linewidth=0.010cm,linecolor=gray](2.741875,7.8)(4.641875,8.7)
\psline[linewidth=0.010cm,linecolor=gray](4.641875,8.7)(6.441875,8.0)
\psline[linewidth=0.010cm,linecolor=gray](6.441875,8.0)(8.041875,9.0)
\psline[linewidth=0.010cm,linecolor=gray](8.041875,9.0)(8.941875,8.2)
\psline[linewidth=0.010cm,linecolor=gray](8.941875,8.2)(10.141875,8.3)
\psline[linewidth=0.010cm,linecolor=gray](10.141875,8.3)(10.541875,7.4)
\psline[linewidth=0.010cm,linecolor=gray](10.541875,7.4)(9.941875,5.8)
\psline[linewidth=0.010cm,linecolor=gray](9.941875,5.8)(10.341875,4.5)
\psline[linewidth=0.010cm,linecolor=gray](1.941875,4.8)(3.241875,4.4)
\psline[linewidth=0.010cm,linecolor=gray](3.241875,4.4)(4.841875,5.0)
\psline[linewidth=0.010cm,linecolor=gray](4.841875,5.0)(6.341875,4.2)
\psline[linewidth=0.010cm,linecolor=gray](6.341875,4.2)(7.241875,5.5)
\psline[linewidth=0.010cm,linecolor=gray](7.241875,5.5)(8.641875,6.2)
\psline[linewidth=0.010cm,linecolor=gray](8.641875,6.2)(9.941875,5.8)
\psline[linewidth=0.010cm,linecolor=gray](2.741875,7.8)(3.141875,6.6)
\psline[linewidth=0.010cm,linecolor=gray](3.141875,6.6)(4.741875,6.4)
\psline[linewidth=0.010cm,linecolor=gray](4.741875,6.4)(5.941875,6.9)
\psline[linewidth=0.010cm,linecolor=gray](5.941875,6.9)(6.441875,8.0)
\psline[linewidth=0.010cm,linecolor=gray](5.941875,6.9)(7.241875,5.5)
\psline[linewidth=0.010cm,linecolor=gray](4.741875,6.4)(4.841875,5.0)
\psline[linewidth=0.010cm,linecolor=gray](1.541875,5.9)(3.141875,6.6)
\psline[linewidth=0.010cm,linecolor=gray](8.641875,6.2)(8.941875,8.2)
\psline[linewidth=0.010cm,linecolor=gray](5.941875,6.9)(7.441875,7.3)
\psline[linewidth=0.010cm,linecolor=gray](5.941875,6.9)(4.741875,7.5)
\psline[linewidth=0.010cm,linecolor=gray](5.941875,6.9)(6.041875,5.7)
\psline[linewidth=0.010cm,linecolor=gray](1.141875,6.6)(2.741875,7.8)
\psline[linewidth=0.010cm,linecolor=gray](2.741875,7.8)(2.541875,9.1)
\psline[linewidth=0.010cm,linecolor=gray](2.741875,7.8)(4.741875,7.5)
\psline[linewidth=0.010cm,linecolor=gray](4.741875,7.5)(4.641875,8.7)
\psline[linewidth=0.010cm,linecolor=gray](4.641875,8.7)(7.241875,9.6)
\psline[linewidth=0.010cm,linecolor=gray](4.641875,8.7)(2.541875,9.1)
\psline[linewidth=0.010cm,linecolor=gray](4.741875,7.5)(6.441875,8.0)
\psline[linewidth=0.010cm,linecolor=gray](6.441875,8.0)(7.241875,9.6)
\psline[linewidth=0.010cm,linecolor=gray](6.441875,8.0)(7.441875,7.3)
\psline[linewidth=0.010cm,linecolor=gray](7.441875,7.3)(8.041875,9.0)
\psline[linewidth=0.010cm,linecolor=gray](8.041875,9.0)(9.841875,9.5)
\psline[linewidth=0.010cm,linecolor=gray](7.241875,9.6)(8.041875,9.0)
\psline[linewidth=0.010cm,linecolor=gray](7.441875,7.3)(8.941875,8.2)
\psline[linewidth=0.010cm,linecolor=gray](8.941875,8.2)(9.841875,7.3)
\psline[linewidth=0.010cm,linecolor=gray](9.841875,7.3)(10.141875,8.3)
\psline[linewidth=0.010cm,linecolor=gray](10.141875,8.3)(10.641875,8.2)
\psline[linewidth=0.010cm,linecolor=gray](10.641875,8.2)(10.541875,7.4)
\psline[linewidth=0.010cm,linecolor=gray](10.541875,7.4)(9.841875,7.3)
\psline[linewidth=0.010cm,linecolor=gray](10.541875,7.4)(11.141875,6.6)
\psline[linewidth=0.010cm,linecolor=gray](11.141875,6.6)(9.941875,5.8)
\psline[linewidth=0.010cm,linecolor=gray](9.941875,5.8)(9.841875,7.3)
\psline[linewidth=0.010cm,linecolor=gray](9.841875,7.3)(8.641875,6.2)
\psline[linewidth=0.010cm,linecolor=gray](8.941875,8.2)(9.841875,9.5)
\psline[linewidth=0.010cm,linecolor=gray](9.841875,9.5)(10.141875,8.3)
\psline[linewidth=0.010cm,linecolor=gray](7.441875,7.3)(8.641875,6.2)
\psline[linewidth=0.010cm,linecolor=gray](7.241875,5.5)(7.441875,7.3)
\psline[linewidth=0.010cm,linecolor=gray](6.041875,5.7)(7.241875,5.5)
\psline[linewidth=0.010cm,linecolor=gray](1.141875,6.6)(3.141875,6.6)
\psline[linewidth=0.010cm,linecolor=gray](3.141875,6.6)(4.741875,7.5)
\psline[linewidth=0.010cm,linecolor=gray](3.141875,6.6)(3.341875,5.8)
\psline[linewidth=0.010cm,linecolor=gray](3.341875,5.8)(4.741875,6.4)
\psline[linewidth=0.010cm,linecolor=gray](4.741875,6.4)(4.741875,7.5)
\psline[linewidth=0.010cm,linecolor=gray](4.741875,6.4)(6.041875,5.7)
\psline[linewidth=0.010cm,linecolor=gray](6.041875,5.7)(4.841875,5.0)
\psline[linewidth=0.010cm,linecolor=gray](4.841875,5.0)(3.341875,5.8)
\psline[linewidth=0.010cm,linecolor=gray](4.841875,5.0)(5.241875,3.5)
\psline[linewidth=0.010cm,linecolor=gray](5.241875,3.5)(6.341875,4.2)
\psline[linewidth=0.010cm,linecolor=gray](6.041875,5.7)(6.341875,4.2)
\psline[linewidth=0.010cm,linecolor=gray](6.341875,4.2)(8.941875,3.5)
\psline[linewidth=0.010cm,linecolor=gray](8.641875,6.2)(8.941875,3.5)
\psline[linewidth=0.010cm,linecolor=gray](9.941875,5.8)(8.941875,3.5)
\psline[linewidth=0.010cm,linecolor=gray](10.341875,4.5)(8.941875,3.5)
\psline[linewidth=0.010cm,linecolor=gray](10.341875,4.5)(10.641875,4.1)
\psline[linewidth=0.010cm,linecolor=gray](10.341875,4.5)(11.141875,6.6)
\psline[linewidth=0.010cm,linecolor=gray](7.241875,5.5)(8.941875,3.5)
\psline[linewidth=0.010cm,linecolor=gray](3.241875,4.4)(1.741875,3.5)
\psline[linewidth=0.010cm,linecolor=gray](3.241875,4.4)(5.241875,3.5)
\psline[linewidth=0.010cm,linecolor=gray](3.241875,4.4)(3.341875,5.8)
\psline[linewidth=0.010cm,linecolor=gray](3.341875,5.8)(1.941875,4.8)
\psline[linewidth=0.010cm,linecolor=gray](1.941875,4.8)(1.741875,3.5)
\psline[linewidth=0.010cm,linecolor=gray](1.941875,4.8)(1.041875,4.9)
\psline[linewidth=0.010cm,linecolor=gray](1.041875,4.9)(1.541875,5.9)
\psline[linewidth=0.010cm,linecolor=gray](1.541875,5.9)(3.341875,5.8)
\psline[linewidth=0.010cm,linecolor=gray](1.541875,5.9)(1.141875,6.6)
\psline[linewidth=0.010cm,linecolor=gray](4.641875,8.7)(4.541875,9.3)
\psline[linewidth=0.010cm,linecolor=gray](2.741875,7.8)(1.941875,8.0)
\psline[linewidth=0.010cm,linecolor=gray](8.041875,9.0)(8.041875,9.6)
\psline[linewidth=0.010cm,linecolor=gray](10.141875,8.3)(10.441875,8.5)
\psline[linewidth=0.010cm,linecolor=gray](10.541875,7.4)(10.91,7.4)
\psline[linewidth=0.010cm,linecolor=gray](10.341875,4.5)(10.741875,4.5)
\psline[linewidth=0.010cm,linecolor=gray](10.341875,4.5)(10.141875,3.9)
\psline[linewidth=0.010cm,linecolor=gray](3.241875,4.4)(3.141875,3.5)
\psline[linewidth=0.010cm,linecolor=gray](1.541875,5.9)(1.09,5.8)
\psline[linewidth=0.010cm,linecolor=gray](1.341875,4.3)(1.941875,4.8)
\psline[linewidth=0.010cm,linecolor=gray](6.341875,4.2)(6.441875,3.5)
\psline[linewidth=0.040cm](8.941875,3.5)(11.141875,6.6)
\psline[linewidth=0.040cm](7.441875,7.3)(8.941875,3.5)
\psline[linewidth=0.040cm](8.941875,3.5)(9.841875,7.3)
\psline[linewidth=0.040cm](6.041875,5.7)(8.941875,3.5)
\psline[linewidth=0.040cm](6.041875,5.7)(5.241875,3.5)
\psline[linewidth=0.040cm](5.241875,3.5)(3.341875,5.8)
\psline[linewidth=0.040cm](1.741875,3.5)(3.341875,5.8)
\psline[linewidth=0.040cm](9.841875,7.3)(11.141875,6.6)
\psline[linewidth=0.040cm](7.441875,7.3)(9.841875,7.3)
\psline[linewidth=0.040cm](7.441875,7.3)(6.041875,5.7)
\psline[linewidth=0.040cm](3.341875,5.8)(6.041875,5.7)
\psline[linewidth=0.040cm](1.741875,3.5)(1.041875,4.9)
\psline[linewidth=0.040cm](1.041875,4.9)(3.341875,5.8)
\psline[linewidth=0.040cm](1.041875,4.9)(1.141875,6.6)
\psline[linewidth=0.040cm](1.141875,6.6)(3.341875,5.8)
\psline[linewidth=0.040cm](3.341875,5.8)(4.741875,7.5)
\psline[linewidth=0.040cm](4.741875,7.5)(1.141875,6.6)
\psline[linewidth=0.040cm](1.141875,6.6)(2.541875,9.1)
\psline[linewidth=0.040cm](2.541875,9.1)(4.741875,7.5)
\psline[linewidth=0.040cm](4.741875,7.5)(6.041875,5.7)
\psline[linewidth=0.040cm](4.741875,7.5)(7.441875,7.3)
\psline[linewidth=0.040cm](7.441875,7.3)(9.841875,9.5)
\psline[linewidth=0.040cm](9.841875,9.5)(9.841875,7.3)
\psline[linewidth=0.040cm](9.841875,7.3)(10.641875,8.2)
\psline[linewidth=0.040cm](11.141875,6.6)(10.641875,8.2)
\psline[linewidth=0.040cm](10.641875,8.2)(9.841875,9.5)
\psline[linewidth=0.040cm](7.441875,7.3)(7.241875,9.6)
\psline[linewidth=0.040cm](7.241875,9.6)(9.841875,9.5)
\psline[linewidth=0.040cm](4.741875,7.5)(7.241875,9.6)
\psline[linewidth=0.040cm](2.541875,9.1)(7.241875,9.6)
\psline[linewidth=0.040cm](1.741875,3.5)(5.241875,3.5)
\psline[linewidth=0.040cm](5.241875,3.5)(8.941875,3.5)
\psline[linewidth=0.040cm](8.941875,3.5)(10.641875,4.1)
\psline[linewidth=0.040cm](10.641875,4.1)(11.141875,6.6)
\usefont{T1}{ppl}{m}{n}
\rput(1.35,9.3){\Huge{$\tilde\Delta$}}
\end{pspicture}
}
\vspace{2cm}
\caption{Powell-Sabin 6-split of $\Delta$.}\label{6splitcomplete}
\end{figure}
We want to apply Corollary \ref{>r+1} and give a formula for $\dim C_k^1(\tilde\Delta)$.
Let us consider the following numbering of the vertices in $\tilde\Delta_0^0$. We first take a triangle $\sigma_i\in\Delta$ with (at least) one of its edges on the boundary, and denote by $\gamma_i$ the vertex of $\sigma_i$ that lies in the interior of $\Delta$. With the notation as above, we assign the index $1$ to the vertex $\nu_i$, the indices $2$ and $3$ to the vertices $\mu_{ij}$ on the edges of $\sigma_i$, and finally the index $4$ to the vertex $\gamma_i$. After this, we consider the edges of $\sigma_i$ as if they were all part of the boundary and iterate the previous process: we choose another triangle in $\Delta$ having (at least) one edge on the ``new'' boundary, and index its vertices in the same order as we did for the ones in $\sigma_i$. We continue this numbering until we have considered all the triangles in $\Delta$ and hence have indexed all the vertices in $\tilde\Delta^0_0$. Figure \ref{firstround} shows the triangles that could be considered first, and the different colors refer to the values of $\tilde t_i$ for the interior vertices in each case.
\begin{figure}[!ht]
\scalebox{0.8}
{\hspace{0.6cm}
\begin{pspicture}(0,-9.659375)(29.07375,9.694375)
\definecolor{RoyalPurple}{rgb}{0.28627450980392155,0.21176470588235294,0.3411764705882353}
\definecolor{color142}{rgb}{0.9098039215686274,0.1803921568627451,0.06666666666666667}
\definecolor{color82}{rgb}{0.06666666666666667,0.4627450980392157,0.2}
\psline[linewidth=0.03cm,linecolor=gray](8.76875,7.245625)(10.26875,3.445625)
\psline[linewidth=0.03cm,linecolor=gray](10.26875,3.445625)(11.16875,7.245625)
\psline[linewidth=0.03cm,linecolor=gray](8.76875,7.245625)(11.16875,7.245625)
\psline[linewidth=0.03cm,linecolor=gray](8.76875,7.245625)(7.36875,5.645625)
\psline[linewidth=0.03cm,linecolor=gray](4.66875,5.745625)(7.36875,5.645625)
\psline[linewidth=0.03cm,linecolor=gray](4.66875,5.745625)(6.06875,7.445625)
\psline[linewidth=0.03cm,linecolor=gray](6.06875,7.445625)(7.36875,5.645625)
\psline[linewidth=0.03cm,linecolor=gray](6.06875,7.445625)(8.76875,7.245625)
\psline[linewidth=0.02cm,linecolor=RoyalPurple](29.06875,-9.654375)(28.968748,-9.454375)
\psline[linewidth=0.02cm,linecolor=RoyalPurple](2.86875,5.845625)(3.26875,4.745625)
\psline[linewidth=0.02cm,linecolor=RoyalPurple](4.06875,7.745625)(5.96875,8.645625)
\psline[linewidth=0.02cm,linecolor=RoyalPurple](5.96875,8.645625)(7.03,8.25)
\psline[linewidth=0.01cm,linecolor=gray](7.03,8.25)(7.76875,7.945625)
\psdots[dotsize=0.1,linecolor=RoyalPurple](7.03,8.25)
\psline[linewidth=0.01cm,linecolor=gray](7.76875,7.945625)(8.66,8.494375)
\psline[linewidth=0.02cm,linecolor=RoyalPurple](8.66,8.494375)(9.36875,8.945625)
\psline[linewidth=0.02cm,linecolor=RoyalPurple](9.36875,8.945625)(10.015,8.394375)
\psline[linewidth=0.01cm,linecolor=gray](10.015,8.394375)(10.26875,8.145625)
\psline[linewidth=0.01cm,linecolor=gray](10.26875,8.145625)(11.18,8.23)
\psline[linewidth=0.02cm,linecolor=RoyalPurple](11.18,8.23)(11.46875,8.245625)
\psline[linewidth=0.02cm,linecolor=RoyalPurple](11.46875,8.245625)(11.86875,7.345625)
\psline[linewidth=0.01cm,linecolor=gray](11.73,6.95)(11.26875,5.745625)
\psline[linewidth=0.02cm,linecolor=RoyalPurple](11.73,6.95)(11.86875,7.345625)
\psline[linewidth=0.01cm,linecolor=gray](11.26875,5.745625)(11.47,5.125)
\psline[linewidth=0.02cm,linecolor=RoyalPurple](11.47,5.125)(11.66875,4.445625)
\psline[linewidth=0.02cm,linecolor=RoyalPurple](3.26875,4.745625)(4.56875,4.345625)
\psline[linewidth=0.02cm,linecolor=RoyalPurple](4.56875,4.345625)(5.535,4.699)
\psline[linewidth=0.01cm,linecolor=gray](5.535,4.699)(6.16875,4.945625)
\psline[linewidth=0.01cm,linecolor=gray](6.16875,4.945625)(6.96,4.53)
\psline[linewidth=0.02cm,linecolor=RoyalPurple](6.96,4.53)(7.66875,4.145625)
\psline[linewidth=0.02cm,linecolor=RoyalPurple](7.66875,4.145625)(8.26,4.98)
\psline[linewidth=0.01cm,linecolor=gray](8.26,4.98)(8.56875,5.445625)
\psline[linewidth=0.01cm,linecolor=gray](8.56875,5.445625)(9.96875,6.145625)
\psline[linewidth=0.01cm,linecolor=gray](9.96875,6.145625)(11.26875,5.745625)
\psline[linewidth=0.01cm,linecolor=gray](4.46875,6.545625)(6.06875,6.345625)
\psline[linewidth=0.01cm,linecolor=gray](6.06875,6.345625)(7.26875,6.845625)
\psline[linewidth=0.01cm,linecolor=gray](7.26875,6.845625)(7.76875,7.945625)
\psline[linewidth=0.01cm,linecolor=gray](7.26875,6.845625)(8.56875,5.445625)
\psline[linewidth=0.01cm,linecolor=gray](6.06875,6.345625)(6.16875,4.945625)
\psline[linewidth=0.01cm,linecolor=gray](3.56,6.15)(4.46875,6.545625)
\psline[linewidth=0.01cm,linecolor=gray](9.96875,6.145625)(10.26875,8.145625)
\psline[linewidth=0.01cm,linecolor=gray](7.26875,6.845625)(8.76875,7.245625)
\psline[linewidth=0.01cm,linecolor=gray](7.26875,6.845625)(6.06875,7.445625)
\psline[linewidth=0.01cm,linecolor=gray](7.26875,6.845625)(7.36875,5.645625)
\psline[linewidth=0.01cm,linecolor=gray](6.06875,7.445625)(7.76875,7.945625)
\psline[linewidth=0.01cm,linecolor=gray](4.46875,6.545625)(6.06875,7.445625)
\psline[linewidth=0.01cm,linecolor=gray](6.06875,6.345625)(6.06875,7.445625)
\psline[linewidth=0.01cm,linecolor=gray](8.56875,5.445625)(8.76875,7.245625)
\psline[linewidth=0.01cm,linecolor=gray](7.36875,5.645625)(8.56875,5.445625)
\psline[linewidth=0.01cm,linecolor=gray](8.56875,5.445625)(10.26875,3.445625)
\psline[linewidth=0.01cm,linecolor=gray](7.76875,7.945625)(8.76875,7.245625)
\psline[linewidth=0.01cm,linecolor=gray](8.76875,7.245625)(10.26875,8.145625)
\psline[linewidth=0.01cm,linecolor=gray](8.76875,7.245625)(9.96875,6.145625)
\psline[linewidth=0.01cm,linecolor=gray](7.76875,7.945625)(8.56875,9.545625)
\psline[linewidth=0.01cm,linecolor=gray](10.26875,8.145625)(11.16875,7.245625)
\psline[linewidth=0.01cm,linecolor=gray](10.26875,8.145625)(11.16875,9.445625)
\psline[linewidth=0.01cm,linecolor=gray](11.26875,5.745625)(11.16875,7.245625)
\psline[linewidth=0.01cm,linecolor=gray](11.16875,7.245625)(9.96875,6.145625)
\psline[linewidth=0.01cm,linecolor=gray](12.46875,6.545625)(11.26875,5.745625)
\psline[linewidth=0.01cm,linecolor=gray](11.26875,5.745625)(10.26875,3.445625)
\psline[linewidth=0.01cm,linecolor=gray](7.36875,5.645625)(6.16875,4.945625)
\psline[linewidth=0.01cm,linecolor=gray](6.16875,4.945625)(4.66875,5.745625)
\psline[linewidth=0.01cm,linecolor=gray](6.16875,4.945625)(6.56875,3.445625)
\psline[linewidth=0.01cm,linecolor=gray](4.66875,5.745625)(6.06875,6.345625)
\psline[linewidth=0.01cm,linecolor=gray](4.46875,6.545625)(4.66875,5.745625)
\psline[linewidth=0.01cm,linecolor=gray](9.96875,6.145625)(10.26875,3.445625)
\psline[linewidth=0.01cm,linecolor=gray](4.32,7)(4.46875,6.545625)
\psline[linewidth=0.02cm,linecolor=RoyalPurple](4.06875,7.745625)(4.32,7)
\psline[linewidth=0.02cm,linecolor=MidnightBlue](2.46875,6.545625)(4.06875,7.745625)
\psline[linewidth=0.02cm,linecolor=MidnightBlue](4.06875,7.745625)(3.86875,9.045625)
\psline[linewidth=0.02cm,linecolor=color82](4.06875,7.745625)(6.06875,7.445625)
\psline[linewidth=0.02cm,linecolor=color82](6.06875,7.445625)(5.96875,8.645625)
\psline[linewidth=0.02cm,linecolor=MidnightBlue](8.56875,9.545625)(9.36875,8.945625)
\psline[linewidth=0.04cm,linecolor=color82](6.06875,7.445625)(5.03,8.2)
\psline[linewidth=0.04cm,linecolor=RoyalPurple](3.86875,9.045625)(5.03,8.2)
\psline[linewidth=0.04cm,linecolor=color82](6.06875,7.445625)(4.32,7)
\psline[linewidth=0.04cm,linecolor=RoyalPurple](4.32,7)(2.46875,6.545625)
\psline[linewidth=0.04cm,linecolor=color82](6.06875,7.445625)(7.03,8.25)
\psline[linewidth=0.04cm,linecolor=RoyalPurple](7.03,8.25)(8.56875,9.545625)
\psline[linewidth=0.04cm,linecolor=RoyalPurple](8.66,8.494375)(8.56875,9.545625)
\psline[linewidth=0.04cm,linecolor=color82](8.76875,7.245625)(8.66,8.494375)
\psline[linewidth=0.04cm,linecolor=color82](8.76875,7.245625)(10.015,8.394375)
\psline[linewidth=0.04cm,linecolor=RoyalPurple](10.015,8.394375)(11.16875,9.445625)
\psline[linewidth=0.04cm,linecolor=color82](11.175,8.23)(11.16875,7.245625)
\psline[linewidth=0.04cm,linecolor=RoyalPurple](11.16875,9.445625)(11.175,8.23)
\psline[linewidth=0.04cm,linecolor=RoyalPurple](11.73,6.95)(12.46875,6.545625)
\psline[linewidth=0.04cm,linecolor=color82](11.16875,7.245625)(11.73,6.95)
\psline[linewidth=0.04cm,linecolor=RoyalPurple](11.6663,7.81)(11.96875,8.145625)
\psline[linewidth=0.04cm,linecolor=color82](11.16875,7.245625)(11.6663,7.81)
\psline[linewidth=0.04cm,linecolor=RoyalPurple](10.26875,3.445625)(12.46875,6.545625)
\psline[linewidth=0.04cm,linecolor=RoyalPurple](8.26,4.98)(10.26875,3.445625)
\psline[linewidth=0.04cm,linecolor=color82](7.36875,5.645625)(8.26,4.98)
\psline[linewidth=0.04cm,linecolor=RoyalPurple](6.96,4.53)(6.56875,3.445625)
\psline[linewidth=0.04cm,linecolor=color82](7.36875,5.645625)(6.96,4.53)
\psline[linewidth=0.02cm,linecolor=MidnightBlue](5.96875,8.645625)(8.56875,9.545625)
\psline[linewidth=0.02cm,linecolor=MidnightBlue](5.96875,8.645625)(3.86875,9.045625)
\psline[linewidth=0.02cm,linecolor=MidnightBlue](9.36875,8.945625)(11.16875,9.445625)
\psline[linewidth=0.02cm,linecolor=MidnightBlue](11.16875,9.445625)(11.46875,8.245625)
\psline[linewidth=0.02cm,linecolor=color82](11.16875,7.245625)(11.46875,8.245625)
\psline[linewidth=0.02cm,linecolor=color82](7.36875,5.645625)(7.66875,4.145625)
\psline[linewidth=0.02cm,linecolor=MidnightBlue](11.46875,8.245625)(11.96875,8.145625)
\psline[linewidth=0.02cm,linecolor=MidnightBlue](11.96875,8.145625)(11.86875,7.345625)
\psline[linewidth=0.02cm,linecolor=color82](11.86875,7.345625)(11.16875,7.245625)
\psline[linewidth=0.02cm,linecolor=MidnightBlue](11.86875,7.345625)(12.46875,6.545625)
\psline[linewidth=0.01cm,linecolor=gray](2.46875,6.545625)(4.46875,6.545625)
\psline[linewidth=0.01cm,linecolor=gray](6.06875,6.345625)(7.36875,5.645625)
\psline[linewidth=0.02cm,linecolor=MidnightBlue](6.56875,3.445625)(7.66875,4.145625)
\psline[linewidth=0.02cm,linecolor=MidnightBlue](7.66875,4.145625)(10.26875,3.445625)
\psline[linewidth=0.02cm,linecolor=MidnightBlue](11.66875,4.445625)(10.26875,3.445625)
\psline[linewidth=0.02cm,linecolor=MidnightBlue](11.66875,4.445625)(11.96875,4.045625)
\psline[linewidth=0.02cm,linecolor=MidnightBlue](11.66875,4.445625)(12.46875,6.545625)
\psline[linewidth=0.02cm,linecolor=MidnightBlue](4.56875,4.345625)(3.06875,3.445625)
\psline[linewidth=0.02cm,linecolor=MidnightBlue](4.56875,4.345625)(6.56875,3.445625)
\psline[linewidth=0.02cm,linecolor=color82](4.56875,4.345625)(4.66875,5.745625)
\psline[linewidth=0.02cm,linecolor=color82](4.66875,5.745625)(3.26875,4.745625)
\psline[linewidth=0.02cm,linecolor=MidnightBlue](3.26875,4.745625)(3.06875,3.445625)
\psline[linewidth=0.02cm,linecolor=MidnightBlue](3.26875,4.745625)(2.36875,4.845625)
\psline[linewidth=0.02cm,linecolor=MidnightBlue](2.36875,4.845625)(2.86875,5.845625)
\psline[linewidth=0.02cm,linecolor=color82](2.86875,5.845625)(4.66875,5.745625)
\psline[linewidth=0.02cm,linecolor=MidnightBlue](2.86875,5.845625)(2.46875,6.545625)
\psline[linewidth=0.02cm,linecolor=MidnightBlue](5.96875,8.645625)(5.86875,9.245625)
\psline[linewidth=0.02cm,linecolor=MidnightBlue](4.06875,7.745625)(3.26875,7.945625)
\psline[linewidth=0.02cm,linecolor=MidnightBlue](9.36875,8.945625)(9.36875,9.545625)
\psline[linewidth=0.02cm,linecolor=MidnightBlue](11.46875,8.245625)(11.76875,8.445625)
\psline[linewidth=0.02cm,linecolor=MidnightBlue](11.86875,7.345625)(12.24,7.345625)
\psline[linewidth=0.02cm,linecolor=MidnightBlue](11.66875,4.445625)(12.06875,4.445625)
\psline[linewidth=0.02cm,linecolor=MidnightBlue](11.66875,4.445625)(11.46875,3.845625)
\psline[linewidth=0.02cm,linecolor=MidnightBlue](4.56875,4.345625)(4.46875,3.445625)
\psline[linewidth=0.02cm,linecolor=MidnightBlue](2.86875,5.845625)(2.44,5.794375)
\psline[linewidth=0.02cm,linecolor=MidnightBlue](2.66875,4.245625)(3.26875,4.745625)
\psline[linewidth=0.02cm,linecolor=MidnightBlue](7.66875,4.145625)(7.76875,3.445625)
\psline[linewidth=0.02cm,linecolor=color82](8.76875,7.245625)(9.36875,8.945625)
\psline[linewidth=0.04cm,linecolor=color82](4.66875,5.745625)(3.56,6.15)
\psline[linewidth=0.04cm,linecolor=color82](4.66875,5.745625)(3.12,5.14)
\psline[linewidth=0.04cm,linecolor=color82](4.66875,5.745625)(3.85,4.56)
\psline[linewidth=0.04cm,linecolor=color82](4.66875,5.745625)(5.535,4.699)
\psdots[dotsize=0.1,linecolor=color82](4.66875,5.745625)
\usefont{T1}{ppl}{m}{n}
\rput(7.65,5.2){\small ${ 4}$}
\usefont{T1}{ppl}{m}{n}
\rput(7.2,4.7){\small ${ 2}$}
\usefont{T1}{ppl}{m}{n}
\rput(8.3,4.7){\small ${ 3}$}
\usefont{T1}{ppl}{m}{n}
\rput(7.85,3.9){\small ${1 }$}
\psline[linewidth=0.04cm,linecolor=RoyalPurple](6.56875,3.445625)(5.535,4.699)
\psline[linewidth=0.04cm,linecolor=RoyalPurple](3.06875,3.445625)(3.85,4.56)
\psline[linewidth=0.04cm,linecolor=RoyalPurple](2.46875,6.545625)(3.56,6.15)
\psline[linewidth=0.04cm,linecolor=RoyalPurple](2.36875,4.845625)(3.12,5.14)
\psdots[dotsize=0.1,linecolor=RoyalPurple](4.32,7)
\psdots[dotsize=0.1,linecolor=RoyalPurple](3.56,6.15)
\psdots[dotsize=0.1,linecolor=RoyalPurple](3.12,5.14)
\psdots[dotsize=0.1,linecolor=RoyalPurple](3.85,4.56)
\psdots[dotsize=0.1,linecolor=RoyalPurple](5.535,4.699)
\psline[linewidth=0.02cm,linecolor=RoyalPurple](2.86875,5.845625)(3.56,6.15)
\psdots[dotsize=0.1,linecolor=MidnightBlue](2.86875,5.845625)
\psdots[dotsize=0.1,linecolor=MidnightBlue](3.26875,4.745625)
\psdots[dotsize=0.1,linecolor=MidnightBlue](4.56875,4.345625)
\psdots[dotsize=0.1,linecolor=MidnightBlue](7.66875,4.145625)
\psdots[dotsize=0.1,linecolor=RoyalPurple](6.96,4.53)
\psdots[dotsize=0.1,linecolor=RoyalPurple](8.26,4.98)
\psdots[dotsize=0.1,linecolor=RoyalPurple](11.465,5.14)
\psdots[dotsize=0.1,linecolor=RoyalPurple](11.73,6.95)
\psdots[dotsize=0.1,linecolor=RoyalPurple](11.6663,7.81)
\psdots[dotsize=0.1,linecolor=RoyalPurple](11.175,8.23)
\psdots[dotsize=0.1,linecolor=RoyalPurple](10.015,8.394375)
\psdots[dotsize=0.1,linecolor=RoyalPurple](8.66,8.494375)
\psdots[dotsize=0.1,linecolor=RoyalPurple](5.03,8.2)
\psdots[dotsize=0.1,linecolor=color82](6.06875,7.445625)
\psdots[dotsize=0.1,linecolor=color82](8.76875,7.245625)
\psdots[dotsize=0.1,linecolor=color82](11.16875,7.245625)
\psdots[dotsize=0.1,linecolor=color82](7.36875,5.645625)
\psdots[dotsize=0.1,linecolor=MidnightBlue](4.06875,7.745625)
\psdots[dotsize=0.1,linecolor=MidnightBlue](5.96875,8.645625)
\psdots[dotsize=0.1,linecolor=MidnightBlue](11.66875,4.445625)
\psdots[dotsize=0.1,linecolor=MidnightBlue](11.86875,7.345625)
\psdots[dotsize=0.1,linecolor=MidnightBlue](11.46875,8.245625)
\psdots[dotsize=0.1,linecolor=MidnightBlue](9.36875,8.945625)
\psline[linewidth=0.04cm](3.06875,3.445625)(2.36875,4.845625)
\psline[linewidth=0.04cm](2.36875,4.845625)(2.46875,6.545625)
\psline[linewidth=0.04cm](2.46875,6.545625)(3.86875,9.045625)
\psline[linewidth=0.04cm](12.46875,6.545625)(11.96875,8.145625)
\psline[linewidth=0.04cm](11.96875,8.145625)(11.16875,9.445625)
\psline[linewidth=0.04cm](8.56875,9.545625)(11.16875,9.445625)
\psline[linewidth=0.04cm](3.86875,9.045625)(8.56875,9.545625)
\psline[linewidth=0.04cm](3.06875,3.445625)(6.56875,3.445625)
\psline[linewidth=0.04cm](6.56875,3.445625)(10.26875,3.445625)
\psline[linewidth=0.04cm](10.26875,3.445625)(11.96875,4.045625)
\psline[linewidth=0.04cm](11.96875,4.045625)(12.46875,6.545625)
\usefont{T1}{ppl}{m}{n}
\rput(2.60375,9.245625){\Huge $\tilde\Delta$}
\end{pspicture}
}
\vspace{-10.5cm}
\caption{Numbering on the vertices in $\tilde\Delta_0^0$.}\label{firstround}
\end{figure}
Notice that for this numbering, the values of $\tilde t_i$ corresponding to these vertices are either $\geq 3$, as is the case for the vertices $\nu_i$ and $\gamma_i$, or $\tilde t_i=t_i=2$, as happens for the vertices $\mu_{ij}$. Thus, we arrive at the following result.
\begin{prop}
For any $k\geq 3$:
\[\dim C_k^1(\tilde\Delta)=\binom{k+2}{2}+f^0_1 \binom{k-2}{2}+2f^0_0\binom{k-1}{2}+\binom{2k-1}{2}(f_0-2).\]
For $k=2$: \[\dim C_2^1(\tilde\Delta)=3f_0.\]
\end{prop}
\begin{pf}
This follows directly from the above remarks about the values of $\tilde t_i$, which give the counting, together with Corollary \ref{>r+1}.
\end{pf}
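The formula is straightforward to evaluate mechanically. The following small Python snippet (ours, not part of the paper) simply evaluates the closed formula of the proposition; the counts denoted $f_0$, $f^0_0$ and $f^0_1$ in the text are supplied by the caller.
\begin{verbatim}
from math import comb

def dim_spline_space(k, f0, f00, f01):
    """Evaluate the dimension formula of the proposition; f0, f00,
    f01 stand for f_0, f^0_0, f^0_1 as defined in the paper."""
    if k == 2:
        return 3 * f0
    if k >= 3:
        return (comb(k + 2, 2) + f01 * comb(k - 2, 2)
                + 2 * f00 * comb(k - 1, 2)
                + comb(2 * k - 1, 2) * (f0 - 2))
    raise ValueError("the proposition covers k >= 2")
\end{verbatim}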
For the numbering we defined on $\tilde\Delta_0^0$, it is not possible to apply Theorem \ref{SchTheorem} to find an exact value for the dimension. In fact, when the initial subdivision consists of more than one triangle, by using the numberings allowed in that theorem we get upper bounds which are strictly bigger than the actual dimension of the space.
\medskip
\noindent{}\textbf{Remark 1.\;}
In this paper we have confined our attention to triangular partitions, but it is easy to check that our proofs of the lower and upper bounds extend to rectilinear partitions. The construction also allows one to consider regions subdivided by curved boundaries, but then the ideals must be considered individually. The boundary of $|\Delta|$ itself can, of course, be curved.
\medskip
\noindent{}\textbf{Remark 2.\;} The resolution for ideals generated by powers of linear forms presented in \cite{gs} applies also when the powers of the linear forms are different. This makes it possible to apply the ideas we present in this paper to mixed splines, i.e., splines where the order of smoothness may differ on the various edges of the subdivision.
\medskip
\noindent{}\textbf{Remark 3.\;} The proof of the exact dimension formula when the degree $k$ of the spline space is $\geq 4r+1$ cannot be directly extended to spaces of polynomials of degree $k\geq 3r+1$. With some restrictions associated with the number of different slopes of the edges containing each vertex, it is possible to prove the result for this degree.
\medskip
\noindent{}\textbf{Remark 4.\;} We hope that the two methods we used in the examples to find the exact dimension, namely, showing that the lower and upper bounds coincide for a certain numbering of the vertices, or showing directly that $H_0(\J)=0$ by considering the equations of the edges, illustrate how to easily find the dimension of the spline space for many particular triangulations.
\medskip
\noindent{}\textbf{Remark 5.\;}There is a version of the upper bound
given by \cite{Sch} in \cite{LaiSchu}, where the constraint on the
numbering of the vertices is relaxed. However, for this new version of
the theorem, there are still examples for which the upper bound
formula does not give the correct result, see for instance the example
p. 243 of \cite{LaiSchu}.
\medskip
\noindent{}\textbf{Remark 6.\;} For the three-dimensional version of the problem, it is necessary to analyse ideals generated by powers of linear forms in three variables. The formula for the dimension, analogous to formula (\ref{eq}) in Section \ref{construction}, involves two homology modules, and in order to approximate the dimension, the dimensions of these two modules must be bounded. Some issues and positive results about this problem will appear in an upcoming paper.
| {
"timestamp": "2012-10-18T02:01:46",
"yymm": "1210",
"arxiv_id": "1210.4639",
"language": "en",
"url": "https://arxiv.org/abs/1210.4639",
"abstract": "The spline space $C_k^r(\\Delta)$ attached to a subdivided domain $\\Delta$ of $\\R^{d} $ is the vector space of functions of class $C^{r}$ which are polynomials of degree $\\le k$ on each piece of this subdivision. Classical splines on planar rectangular grids play an important role in Computer Aided Geometric Design, and spline spaces over arbitrary subdivisions of planar domains are now considered for isogeometric analysis applications. We address the problem of determining the dimension of the space of bivariate splines $C_k^r(\\Delta)$ for a triangulated region $\\Delta$ in the plane. Using the homological introduced by Billera (1988), we number the vertices and establish a formula for an upper bound on the dimension. There is no restriction on the ordering and we obtain more accurate approximations to the dimension than previous methods and furthermore, in certain cases even an exact value can be found. The construction makes also possible to get a short proof for the dimension formula when $k\\ge 4r+1$, and the same method we use in this proof yields the dimension straightaway for many other cases.",
"subjects": "Algebraic Geometry (math.AG)",
"title": "Homological techniques for the analysis of the dimension of triangular spline spaces",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771763033943,
"lm_q2_score": 0.8006920044739461,
"lm_q1q2_score": 0.7901045952635053
} |
https://arxiv.org/abs/1212.6093 | Strong edge-colorings for k-degenerate graphs | We prove that the strong chromatic index for each $k$-degenerate graph with maximum degree $\Delta$ is at most $(4k-2)\Delta-k(2k-1)+1$. | \section{Introduction}
A {\em strong edge-coloring} of a graph $G$ is an edge-coloring so that no edge can be adjacent to two edges with the same color. So in a strong edge-coloring, every color class gives an induced matching. The strong chromatic index $\chi_s'(G)$ is the minimum number of colors needed to color $E(G)$ strongly. This notion was introduced by Fouquet and Jolivet (1983, \cite{FJ83}). Erd\H{o}s and Ne\v{s}et\v{r}il during a seminar in Prague in 1985 proposed some open problems, one of which is the following
\begin{conjecture}[Erd\H{o}s and Ne\v{s}et\v{r}il, 1985]
If $G$ is a simple graph with maximum degree $\Delta$, then $\chi_s'(G)\le 5\Delta^2/4$ if $\Delta$ is even, and $\chi_s'(G)\le (5\Delta^2-2\Delta+1)/4$ if $\Delta$ is odd.
\end{conjecture}
This conjecture is true for $\Delta\le 3$ (\cite{A92, HHT93}). Cranston \cite{C06} showed that $\chi_s'(G)\le 22$ for $\Delta=4$. Chung, Gy\'arf\'as, Trotter, and Tuza (1990, \cite{CGTT90}) showed that the upper bounds are exactly the numbers of edges in $2K_2$-free graphs. Molloy and Reed \cite{MR97} proved that graphs with sufficiently large maximum degree $\Delta$ have strong chromatic index at most $1.998\Delta^2$. For more results see \cite{SSTM} (Chapter 6, problem 17).
A graph is {\em $k$-degenerate} if every subgraph has minimum degree at most $k$. Chang and Narayanan (2012, \cite{CN12}) recently proved that a $2$-degenerate graph with maximum degree $\Delta$ has strong chromatic index at most $10\Delta-10$. Luo and the author in \cite{LY12} improved the upper bound to $8\Delta-4$.
In~\cite{CN12}, the following conjecture was made
\begin{conjecture}[Chang and Narayanan, \cite{CN12}]
There exists an absolute constant $c$ such that for any $k$-degenerate graphs $G$ with maximum degree $\Delta$, $\chi_s'(G)\le ck^2\Delta$. Furthermore, the $k^2$ may be replaced by $k$.
\end{conjecture}
In this paper, we prove a stronger form of the conjecture. Unlike the priming processes in \cite{CN12, LY12}, we find a special ordering of the edges and, by using a greedy coloring, obtain the following result.
\begin{theorem}
The strong chromatic index for each $k$-degenerate graph with maximum degree $\Delta$ is at most $(4k-2)\Delta-k(2k-1)+1$.
\end{theorem}
Thus, $2$-degenerate graphs have strong chromatic index at most $6\Delta-5$.
\begin{proof}
By the definition of $k$-degenerate graphs, after the removal of all vertices of degree at most $k$, the remaining graph either has no edges or has new vertices of degree at most $k$; thus we have the following simple fact about $k$-degenerate graphs (see also \cite{CN12}).
\medskip
{\em Let $G$ be a $k$-degenerate graph. Then there exists $u\in V(G)$
so that $u$ is adjacent to at most $k$ vertices of degree more than $k$. Moreover, if $\Delta(G)>k$, then the vertex $u$ can be selected with degree more than $k$.}
\medskip
We call a vertex $u$
a {\em special vertex} if $u$ is adjacent to at most $k$ vertices of degree more than $k$. An edge is a {\em special edge} if it is incident to a special vertex and a vertex with degree at most $k$. The above fact implies that every $k$-degenerate graph with at least one edge has a special edge, and if $\Delta\le k$, then every vertex and every edge is special.
We order the edges of $G$ as follows. First we find in $G$ a special edge, put it at the beginning of the list, and then remove it from $G$. Repeat the above step in the remaining graph. When the process ends, we have an ordered list of the edges in $G$, say $e_1, e_2, \ldots, e_m$, where $m=|E(G)|$. So $e_m$ is the special edge we first chose and placed in the list.
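This ordering is easy to make algorithmic. The following Python sketch (ours, not from the paper; graphs are dictionaries mapping each vertex to the set of its neighbors) repeatedly locates a special edge, prepends it to the list, and deletes it, exactly as described above.
\begin{verbatim}
def find_special_edge(adj, k):
    """Return an edge (u, v) with u special and deg(v) <= k, or None."""
    for u in adj:
        # u is special if at most k of its neighbors have degree > k
        if sum(1 for w in adj[u] if len(adj[w]) > k) <= k:
            for v in adj[u]:
                if len(adj[v]) <= k:
                    return (u, v)
    return None

def special_edge_order(adj, k):
    """List e_1, ..., e_m such that e_i is a special edge of the
    graph G_i induced by the first i edges."""
    adj = {u: set(nbrs) for u, nbrs in adj.items()}  # work on a copy
    order = []
    edge = find_special_edge(adj, k)
    while edge is not None:     # a k-degenerate graph with edges
        u, v = edge             # always has a special edge
        adj[u].discard(v)
        adj[v].discard(u)
        order.insert(0, edge)   # the edge found last becomes e_1
        edge = find_special_edge(adj, k)
    return order
\end{verbatim}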
Let $G_i$ be the graph induced by the first $i$ edges in the list, $i=1,2,\ldots, m$. Then $e_i$ is a special edge in $G_i$.
We now count the edges of $G_i$ within distance one to $e_i$ in $G$. We may call the edges in $G_i$ blue edges and the edges in $G-G_i$ yellow edges. Let $u_i,v_i$ be the endpoints of $e_i$ with $u_i$ being a special vertex in $G_i$.
We first count the blue edges incident to $u_i$ and its neighbors. The vertex $u_i$ has three kinds of neighbors: the neighbors in $X_1$ sharing blue edges with $u_i$ and having degree more than $k$, the neighbors in $X_2$ sharing blue edges with $u_i$ and having degree at most $k$ (thus $v_i\in X_2$), and the neighbors in $X_3$ sharing yellow edges with $u_i$. By definition, $|X_1|\le k$, so at most $|X_1|\Delta+k(|X_2|-1)$ blue edges are incident to $X_1\cup (X_2-\{v_i\})$. For each vertex $u$ in $X_3$, $uu_i$ is a yellow edge in $G_i$ but will be a special edge in $G_j$ for some $j>i$. So either $u$ or $u_i$ has degree at most $k$ in $G_j$ (thus also in $G_i$), and if $u_i$ has degree at least $k$ in $G_m$ for some $m$, then the other endpoints of all yellow edges incident to $u_i$ should have degree at most $k-1$ in $G_m$, in order for the yellow edges to be special later. Then among vertices in $X_3$, at most $x=\max\{0,k-|X_1|-|X_2|\}$ vertices have degree more than $k$ in $G_i$, and all other vertices have degree at most $k-1$ in $G_i$. Therefore at most $x\Delta+(|X_3|-x)(k-1)$ blue edges are incident to $X_3$. Since $d(u_i)\le \Delta$, $|X_2|\le \Delta$, and $|X_1|+x\le k$, at most
$$|X_1|\Delta+k(|X_2|-1)+x\Delta+(|X_3|-x)(k-1)=(|X_1|+x)\Delta+(k-1)(d(u_i)-|X_1|-x-1)+|X_2|-1\le 2k\Delta-k^2$$
blue edges are within distance one to $e_i$ from the $u_i$ side (not including the edges incident to $v_i$).
We also count the blue edges incident to $v_i$ and its neighbors. Similarly, $v_i$ has two kinds of neighbors: the neighbors in $Y_1$ sharing blue edges with $v_i$, and the neighbors in $Y_2$ sharing yellow edges with $v_i$. From the fact that $e_i$ is a special edge, $|Y_1|\le k$, so at most $(|Y_1|-1)\Delta$ blue edges are incident to $Y_1-\{u_i\}$. For each vertex $v$ in $Y_2$, $vv_i$ is a yellow edge in $G_i$ but will be a special edge in $G_s$ for some $s>i$. Similar to above, at most $k-|Y_1|$ vertices in $Y_2$ have degree more than $k$ in $G_i$, and all other vertices in $Y_2$ have degree at most $k-1$ in $G_i$. So at most $(k-|Y_1|)(\Delta-1)+(|Y_2|-(k-|Y_1|))(k-1)$ blue edges are incident to $Y_2$. In total, at most
$$(|Y_1|-1)\Delta+(k-|Y_1|)(\Delta-1)+(|Y_2|-(k-|Y_1|))(k-1)\le (2k-2)\Delta-k(k-1)$$
blue edges are within distance one to $e_i$ from the $v_i$ side.
So in $G_i$, the number of blue edges within distance one to $e_i$ is at most
$$2k\Delta-k^2+(2k-2)\Delta-k(k-1)= (4k-2)\Delta-k(2k-1).$$
Now color the edges in the list one by one greedily. For each $i$, when it is the turn to color $e_i$, only the edges in $G_i$ (the blue edges) have been colored. Since there are at least $(4k-2)\Delta-k(2k-1)+1$ colors, we are able to color the edges so that edges within distance one get different colors.
\end{proof}
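The greedy step itself can be sketched in a few lines (again ours; \texttt{adj} is the adjacency dictionary of $G$ and \texttt{order} is the list $e_1,\dots,e_m$ produced by the ordering sketched earlier). Two edges conflict precisely when they are within distance one, i.e., when they share an endpoint or some edge of $G$ joins an endpoint of one to an endpoint of the other.
\begin{verbatim}
def greedy_strong_coloring(adj, order, num_colors):
    """Greedily color edges in the given order; num_colors should be
    at least (4k-2)*Delta - k*(2k-1) + 1 by the theorem."""
    color = {}
    for (u, v) in order:
        # endpoints of every edge within distance one of uv
        near = {u, v} | adj[u] | adj[v]
        used = {c for e, c in color.items() if near & set(e)}
        color[(u, v)] = next(c for c in range(num_colors)
                             if c not in used)
    return color
\end{verbatim}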
We note that the above result holds not only for simple graphs but also for multigraphs.
\section*{Acknowledgement}
The author would like to thank Rong Luo and Zixia Song for their encouragement and discussions.
| {
"timestamp": "2013-04-02T02:03:00",
"yymm": "1212",
"arxiv_id": "1212.6093",
"language": "en",
"url": "https://arxiv.org/abs/1212.6093",
"abstract": "We prove that the strong chromatic index for each $k$-degenerate graph with maximum degree $\\Delta$ is at most $(4k-2)\\Delta-k(2k-1)+1$.",
"subjects": "Combinatorics (math.CO)",
"title": "Strong edge-colorings for k-degenerate graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.986777180580855,
"lm_q2_score": 0.800691997339971,
"lm_q1q2_score": 0.79010459164879
} |
https://arxiv.org/abs/1610.09210 | Extremal regular graphs: independent sets and graph homomorphisms | This survey concerns regular graphs that are extremal with respect to the number of independent sets, and more generally, graph homomorphisms. More precisely, in the family of $d$-regular graphs, which graph $G$ maximizes/minimizes the quantity $i(G)^{1/v(G)}$, the number of independent sets in $G$ normalized exponentially by the size of $G$? What if $i(G)$ is replaced by some other graph parameter? We review existing techniques, highlight some exciting recent developments, and discuss open problems and conjectures for future research. | \section{Independent sets} \label{sec:ind}
An \emph{independent set} in a graph is a subset of vertices with no two adjacent. Many combinatorial problems can be reformulated in terms of independent sets by setting up a graph where edges represent forbidden relations.
A graph is \emph{$d$-regular} if all vertices have degree $d$. In the family of $d$-regular graphs of the same size, which graph has the greatest number of independent sets? This question was initially raised by Andrew Granville in connection with combinatorial number theory, and appeared first in print in a paper by Alon~\cite{Alon91}, who speculated that, at least when the number of vertices $n$ is divisible by $2d$, the maximum is attained by a disjoint union of complete bipartite graphs $K_{d,d}$. Some ten years later, Kahn~\cite{Kahn01} arrived at the same conjecture while studying a problem arising from statistical physics. Using a beautiful entropy argument, Kahn proved the conjecture under the additional assumption that the graph is already bipartite.
\begin{figure}[t]
\centering
\begin{tikzpicture}[scale=.6]
\begin{scope}
\draw (0,0) node[W] {} --
(1,0) node[W] {} --
(1,1) node[W]{} --
(0,1) node[W]{} -- cycle;
\end{scope}
\begin{scope}[shift={(2,0)}]
\draw (0,0) node[B] {} --
(1,0) node[W] {} --
(1,1) node[W]{} --
(0,1) node[W]{} -- cycle;
\end{scope}
\begin{scope}[shift={(4,0)}]
\draw (0,0) node[W] {} --
(1,0) node[B] {} --
(1,1) node[W]{} --
(0,1) node[W]{} -- cycle;
\end{scope}
\begin{scope}[shift={(6,0)}]
\draw (0,0) node[W] {} --
(1,0) node[W] {} --
(1,1) node[B]{} --
(0,1) node[W]{} -- cycle;
\end{scope}
\begin{scope}[shift={(8,0)}]
\draw (0,0) node[W] {} --
(1,0) node[W] {} --
(1,1) node[W]{} --
(0,1) node[B]{} -- cycle;
\end{scope}
\begin{scope}[shift={(10,0)}]
\draw (0,0) node[B] {} --
(1,0) node[W] {} --
(1,1) node[B]{} --
(0,1) node[W]{} -- cycle;
\end{scope}
\begin{scope}[shift={(12,0)}]
\draw (0,0) node[W] {} --
(1,0) node[B] {} --
(1,1) node[W] {} --
(0,1) node[B] {} -- cycle;
\end{scope}
\end{tikzpicture}
\caption{The independent sets of a 4-cycle: $i(C_4) =7$.} \label{fig:indcount}
\end{figure}
We write $I(G)$ to denote the set of independent sets in $G$, and $i(G) := |I(G)|$ the number of independent sets in $G$. See Figure~\ref{fig:indcount}.
\begin{theorem}[Kahn~\cite{Kahn01}] \label{thm:kahn}
If $G$ is a bipartite $n$-vertex $d$-regular graph, then
\[
i(G) \le i(K_{d,d})^{n/(2d)} = (2^{d+1} - 1)^{n/(2d)}.
\]
\end{theorem}
I showed that the bipartite requirement in Theorem~\ref{thm:kahn} can be dropped (as conjectured by Kahn).
\begin{theorem} [\cite{Zhao10}] \label{thm:zhao}
If $G$ is an $n$-vertex $d$-regular graph, then
\[
i(G) \le i(K_{d,d})^{n/(2d)} = (2^{d+1} - 1)^{n/(2d)}.
\]
\end{theorem}
Equality occurs when $n$ is divisible by $2d$ and $G$ is a disjoint union of $K_{d,d}$'s. We do not concern ourselves here with what happens when $n$ is not divisible by $2d$, as the extremal graphs are likely dependent on number theoretic conditions, and we do not know a clean set of examples. Alternatively, the problem can be phrased as maximizing $i(G)^{1/v(G)}$ over the set of $d$-regular graphs $G$, where $v(G)$ denotes the number of vertices of $G$. The above theorem says that this maximum is attained at $G = K_{d,d}$. Note that $i(G)^{1/v(G)}$ remains unchanged if $G$ is replaced by a disjoint union of copies of $G$.
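To see where the constant comes from, note that an independent set in $K_{d,d}$ cannot contain vertices from both sides, so it is a subset of one of the two sides, and
\[
i(K_{d,d}) = 2^d + 2^d - 1 = 2^{d+1} - 1,
\]
where the $-1$ accounts for the empty set being counted twice. For $d = 2$ this gives $i(K_{2,2}) = i(C_4) = 7$, in agreement with Figure~\ref{fig:indcount}.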
We provide an exposition of the proofs of these two theorems as well as a discussion of subsequent developments. Notably, Davies, Jenssen, Perkins, and Roberts~\cite{DJPR1} recently gave a new proof of the above theorems by introducing a powerful new technique, which has already had a number of surprising new consequences~\cite{DJPR2,DJPR3,PP}. The results have been partially extended to graph homomorphisms, though many intriguing open problems remain. We also discuss some recent work on the subject done by Luke Sernau~\cite{Ser} as an undergraduate student at Notre Dame.
\section{Graph homomorphisms} \label{sec:hom}
Given two graphs $G$ and $H$, a \emph{graph homomorphism} from $G$ to $H$ is a map of vertex sets $\phi \colon V(G) \to V(H)$ that sends every edge of $G$ to an edge of $H$, i.e., $\phi(u)\phi(v) \in E(H)$ whenever $uv \in E(G)$. Here $V(G)$ denotes the vertex set of $G$ and $E(G)$ the edge set. We use lower case letters for cardinalities: $v(G) := |V(G)|$ and $e(G) := |E(G)|$. Let
\[
\Hom(G,H) := \{ \phi \colon V(G) \to V(H) : \phi(u)\phi(v) \in E(H) \ \forall uv \in E(G)\}
\]
denote the set of graph homomorphisms from $G$ to $H$, and $\hom(G,H) := |\Hom(G,H)|$.
We usually use the letter $G$ for the source graph and $H$ for the target graph. It will be useful to allow the target graph $H$ to have loops (but not multiple edges), and we shall refer to such graphs as \emph{loop-graphs}. The source graph $G$ is usually simple (without loops). By \emph{graph} we usually mean a simple graph.
Graph homomorphisms generalize the notion of independent sets. They are equivalent to labeling the vertices of $G$ subject to certain constraints encoded by $H$.
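Since $\hom(G,H)$ is a count of constrained maps, it can be computed by brute force for small graphs. The following Python snippet (ours, purely for illustration; the encoding of $H$ is an assumption, not notation from this survey) enumerates all vertex maps and keeps those that send edges to edges; run on $G = C_4$ and the loop-graph with one looped and one unlooped vertex (see Example~\ref{ex:ind} below), it returns $7 = i(C_4)$, matching Figure~\ref{fig:indcount}.
\begin{verbatim}
from itertools import product

def hom(G_vertices, G_edges, H_vertices, H_adj):
    """Count maps V(G) -> V(H) sending every edge of G to an edge of H.
    H_adj: set of ordered pairs, closed under reversal; loops allowed."""
    count = 0
    for values in product(H_vertices, repeat=len(G_vertices)):
        phi = dict(zip(G_vertices, values))
        if all((phi[u], phi[v]) in H_adj for (u, v) in G_edges):
            count += 1
    return count

C4_vertices = [0, 1, 2, 3]
C4_edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
H_vertices = ["loop", "free"]
H_adj = {("loop", "loop"), ("loop", "free"), ("free", "loop")}
print(hom(C4_vertices, C4_edges, H_vertices, H_adj))  # prints 7
\end{verbatim}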
\begin{figure}
\centering
\begin{tikzpicture}[scale=.6, >=latex]
\begin{scope}[shift={(-8,0)}]
\node[P] (1) at (90:1) {};
\node[P] (2) at (162:1) {};
\node[P] (3) at (234:1) {};
\node[P] (4) at (306:1) {};
\node[P] (5) at (378:1) {};
\draw (1)--(2)--(3)--(4)--(5)--(1);
\end{scope}
\begin{scope}[shift={(-2,0)}, font={\footnotesize}]
\node[B] (b) at (1,0) {};
\node[W] (w) at (0,0) {};
\draw (b)--(w) edge[-,in=45,out=135,loop] ();
\end{scope}
\begin{scope}[-latex, shorten <=3pt, shorten >=3pt]
\draw (1) to[bend left] (b);
\draw (2) to[bend right=15] (w);
\draw (3) to[bend right=20] (b);
\draw (4) to[bend right=15] (w);
\draw (5) to[bend left=5] (w);
\end{scope}
\begin{scope}[shift={(3,0)}, font=\footnotesize]
\node[B] (1) at (90:1) {};
\node[W] (2) at (162:1) {};
\node[B] (3) at (234:1) {};
\node[W] (4) at (306:1) {};
\node[W] (5) at (378:1) {};
\draw (1)--(2)--(3)--(4)--(5)--(1);
\end{scope}
\node[inner sep=1em] (label-left) at (-5, -2) {graph homomorphism };
\node[inner sep=1em] (label-right) at (3,-2) {independent set };
\draw[<->] (label-left) to (label-right);
\end{tikzpicture}
\caption[Graph homomorphisms and independent sets]{Homomorphisms from $G$ to $\tikzHind$ correspond to independent sets of $G$.}
\label{fig:ind}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}[scale=.6, >=latex]
\begin{scope}[shift={(-8,0)}]
\node[P] (1) at (90:1) {};
\node[P] (2) at (162:1) {};
\node[P] (3) at (234:1) {};
\node[P] (4) at (306:1) {};
\node[P] (5) at (378:1) {};
\draw (1)--(2)--(3)--(4)--(5)--(1);
\end{scope}
\begin{scope}[shift={(-2,0)}]
\node[c1] (a) at (90:1) {};
\node[c2] (b) at (210:1) {};
\node[c3] (c) at (330:1) {};
\draw (a)--(b)--(c)--(a);
\end{scope}
\begin{scope}[-latex, shorten <=3pt, shorten >=3pt]
\draw (1) to[bend left=5] (b);
\draw (2) to[bend left=5] (a);
\draw (3) to[bend left=10] (b);
\draw (4) to[bend right=15] (c);
\draw (5) to[bend left=5] (a);
\end{scope}
\begin{scope}[shift={(3,0)}]
\node[c2] (1) at (90:1) {};
\node[c1] (2) at (162:1) {};
\node[c2] (3) at (234:1) {};
\node[c3] (4) at (306:1) {};
\node[c1] (5) at (378:1) {};
\draw (1)--(2)--(3)--(4)--(5)--(1);
\end{scope}
\node[inner sep=1em] (label-left) at (-5, -2) {graph homomorphism};
\node[inner sep=1em] (label-right) at (3,-2) {coloring};
\draw[<->] (label-left) to (label-right);
\end{tikzpicture}
\caption{Homomorphisms from $G$ to $K_q$ correspond to proper colorings of vertices of $G$ with $q$ colors.}
\label{fig:color}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}[scale=.6, baseline=.5cm]
\draw (0,0) grid (4,3);
\foreach \times in {0,...,4}{
\foreach \y in {0,...,3}{
\node[P] (v\times\y) at (\times,\y) {};
}
}
\foreach \c in {01,02,12,42}{
\node[c1] at (v\c) {};
}
\foreach \c in {21,23,33,30,40}{
\node[c2] at (v\c) {};
}
\end{tikzpicture}
\caption[A configuration for the Widom--Rowlinson model on a grid.]{A configuration for the Widom--Rowlinson model on a grid, corresponding to a homomorphism to $\tikzHwr$, where vertices of the grid that are mapped to the first vertex in $\tikzHwr$ are marked \tikz[baseline=-3pt]{\node[c1] {};} and those mapped to the third vertex are marked \tikz[baseline=-3pt]{\node[c2] {};}.}
\label{fig:WR}
\end{figure}
\begin{example}[Independent sets] \label{ex:ind}
Homomorphisms from $G$ to $\tikzHind$ correspond bijectively to independent sets in $G$. Indeed, a map of vertices from $G$ to $\tikzHind$ is a homomorphism if and only if the preimage of the non-looped vertex in $\tikzHind$ forms an independent set in $G$. So $\hom(G, \tikzHind) = i(G)$. See Figure~\ref{fig:ind}. In the statistical physics literature\footnote{See \cite{BW99} for the connection between the combinatorics of graph homomorphisms and Gibbs measures in statistical physics.
}, independent sets correspond to \emph{hard-core models}. For example, they can be used to represent configurations of non-overlapping spheres (``hard'' spheres) on a grid.
\end{example}
\begin{example}[Graph colorings] \label{ex:color}
When the target graph is the complete graph $K_q$ on $q$ vertices, a graph homomorphism from $G$ to $K_q$ corresponds to a coloring of the vertices of $G$ with $q$ colors so that no two adjacent vertices of $G$ receive the same color. Such colorings are called \emph{proper $q$-colorings}. See Figure~\ref{fig:color}. Thus $\hom(G,K_q)$ is the number of proper $q$-colorings of $G$. For a fixed $G$, the quantity $\hom(G,K_q)$ is a polynomial function in $q$, and it is called the \emph{chromatic polynomial} of $G$, a classic object in graph theory.
\end{example}
\begin{example}[Widom--Rowlinson model] \label{ex:WR}
A homomorphism from $G$ to $\tikzHwr$ corresponds to a partial coloring of the vertices of $G$ with red or blue, allowing vertices to be left uncolored, such that no red vertex is adjacent to a blue vertex. Such a coloring is known as a \emph{Widom--Rowlinson configuration}. See Figure~\ref{fig:WR}.
\end{example}
As graph homomorphisms generalize independent sets, one may wonder whether theorems in Section~\ref{sec:ind} generalize to graph homomorphisms. There have indeed been some interesting results in this direction, as well as several intriguing open problems.
It turns out, perhaps surprisingly, that Theorem~\ref{thm:kahn}, concerning the number of independent sets in a regular bipartite graph, extends to graph homomorphisms with an arbitrary target.
\begin{theorem}[Galvin and Tetali~\cite{GT04}] \label{thm:GT}
Let $G$ be a bipartite $d$-regular graph and $H$ a loop-graph. Then
\[
\hom(G,H)^{1/v(G)} \le \hom(K_{d,d}, H)^{1/(2d)}.
\]
\end{theorem}
Can the bipartite hypothesis above be dropped as in Theorem~\ref{thm:zhao}? The answer is no. Indeed, with $H=\tikzHtwoloops$ being two disjoint loops, $\hom(G,\tikzHtwoloops) = 2^{c(G)}$ where $c(G)$ is the number of connected components of $G$. In this case, $\hom(G,\tikzHtwoloops)^{1/v(G)}$ is maximized when the sizes of the components of $G$ are as small as possible (among $d$-regular graphs), i.e., when $G = K_{d+1}$.
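Quantitatively, $\hom(K_{d+1},\tikzHtwoloops)^{1/(d+1)} = 2^{1/(d+1)}$, while $\hom(K_{d,d},\tikzHtwoloops)^{1/(2d)} = 2^{1/(2d)}$; since $\frac{1}{d+1} > \frac{1}{2d}$ for all $d \ge 2$, the complete graph $K_{d+1}$ indeed beats $K_{d,d}$ for this target.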
The central problem of interest for the rest of this article is stated below. It has been solved for certain targets $H$, but it is open in general. The analogous minimization problem is also interesting, and will be discussed in Section~\ref{sec:min}.
\begin{problem} \label{prb:hom-max}
Fix a loop-graph $H$ and a positive integer $d$.
Determine the supremum of $\hom(G,H)^{1/v(G)}$ taken over all $d$-regular graphs $G$.
\end{problem}
We have already seen two cases where Problem~\ref{prb:hom-max} has been solved: when $H = \tikzHind$, the maximum is attained by $G = K_{d,d}$ (Theorem~\ref{thm:zhao}), and when $H = \tikzHtwoloops$, the maximum is attained by $G = K_{d+1}$. The latter example can be extended to $H$ being a disjoint union of complete loop-graphs. Another easy case is $H$ bipartite, as $\hom(G,H) = 0$ unless $G$ is bipartite, so the maximizer is $K_{d,d}$ by Theorem~\ref{thm:GT}.
I extended Theorem~\ref{thm:zhao} to solve Problem~\ref{prb:hom-max} for a certain family of $H$. We define a \emph{loop-threshold graph} to be a loop-graph whose vertices can be ordered so that its adjacency matrix has the property that whenever an entry is $1$, all entries to the left of it and above it are $1$ as well. An example of a loop-threshold graph, along with its adjacency matrix, is shown below.
\[
\begin{tikzpicture}[baseline=(current bounding box.center),font=\footnotesize]
\node[P,label=left:1] (1) at (0,0) {};
\node[P,label=left:2] (2) at (0,1) {};
\node[P,label=right:3] (3) at (1,1) {};
\node[P,label=right:4] (4) at (1,0) {};
\node[P,label=above:5] (5) at (2,0.5) {};
\draw (1)--(2) (1)--(3) (1)--(4);
\draw (1) edge[-,in=-45,out=-135,loop] ();
\draw (2) edge[-,in=45,out=135,loop] ();
\end{tikzpicture}
\qquad
\footnotesize
\begin{pmatrix}
1&1&1&1&0\\
1&1&0&0&0\\
1&0&0&0&0\\
1&0&0&0&0\\
0&0&0&0&0
\end{pmatrix}.
\]
Loop-threshold graphs generalize \tikzHind from Example~\ref{ex:ind}.
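The staircase condition is mechanical to verify for a given vertex ordering. Here is a small Python check (ours; note that it tests one fixed ordering, whereas the definition allows any reordering of the vertices); applied to the adjacency matrix displayed above, it returns \texttt{True}.
\begin{verbatim}
def is_staircase(A):
    """A: 0/1 adjacency matrix of a loop-graph in a fixed vertex
    order. True if every 1-entry has only 1-entries to its left
    and above it."""
    n = len(A)
    for i in range(n):
        for j in range(n):
            if A[i][j] == 1 and (0 in A[i][:j] or
                                 any(A[r][j] == 0 for r in range(i))):
                return False
    return True

A = [[1, 1, 1, 1, 0],
     [1, 1, 0, 0, 0],
     [1, 0, 0, 0, 0],
     [1, 0, 0, 0, 0],
     [0, 0, 0, 0, 0]]
print(is_staircase(A))  # True
\end{verbatim}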
The following result was obtained by extending the proof method of Theorem~\ref{thm:zhao}. It answers Problem~\ref{prb:hom-max} when the target $H$ is a loop-threshold graph.
\begin{theorem}[\cite{Zhao11}] \label{thm:threshold}
Let $G$ be a $d$-regular graph and $H$ a loop-threshold graph. Then
\[
\hom(G,H)^{1/v(G)} \le \hom(K_{d,d},H)^{1/(2d)}.
\]
\end{theorem}
In fact, the theorem was proved in \cite{Zhao11} for any $H$ that is a \emph{bipartite swapping target}, a class of loop-graphs that includes the loop-threshold graphs (see Section~\ref{sec:swap}).
Sernau~\cite{Ser} recently extended Theorem~\ref{thm:threshold} to an even larger family of $H$ (see Section~\ref{sec:prod}).
The most interesting open case of Problem \ref{prb:hom-max} is $H = K_q$, concerning the number of proper $q$-colorings of vertices of $G$ (Example~\ref{ex:color}).
\begin{conjecture} \label{conj:coloring}
For every $d$-regular graph $G$ and integer $q \ge 3$,
\[
\hom(G,K_q)^{1/v(G)} \le \hom(K_{d,d},K_q)^{1/(2d)}.
\]
\end{conjecture}
The conjecture was recently solved for $d=3$ by Davies, Jenssen, Perkins, and Roberts \cite{DJPR3} using a novel method they developed earlier. We will discuss the method in Section~\ref{sec:occup}. The conjecture remains open for all $d\ge 4$ and $q \ge 3$.
The above inequality is known to hold if $q$ is sufficiently large as a function of $G$ \cite{Zhao11} (the current best bound is $q > 2\binom{v(G)d/2}{4}$ \cite{Gal13}).
The first non-trivial case of Problem~\ref{prb:hom-max} where the maximizing $G$ is not $K_{d,d}$ was obtained recently by Cohen, Perkins, and Tetali \cite{CPT}.
\begin{theorem}[Cohen, Perkins, and Tetali~\cite{CPT}] \label{thm:wr}
For any $d$-regular graph $G$ we have
\[
\hom(G, \tikzHwr)^{1/v(G)} \le \hom(K_{d+1}, \tikzHwr)^{1/(d+1)}.
\]
\end{theorem}
Theorem~\ref{thm:wr} was initially proved \cite{CPT} using the occupancy fraction method, which will be discussed in Section~\ref{sec:occup}. Subsequently, a much shorter proof was given in \cite{CCPT} (also see Sernau \cite{Ser}).\footnote{Sernau also tackled Theorem~\ref{thm:wr}, obtaining an approximate result in a version of \cite{Ser} that predated \cite{CPT} and \cite{CCPT}. After the appearance of \cite{CCPT}, Sernau corrected an error (identified by Cohen) in \cite{Ser}, and the corrected version turned out to include Theorem~\ref{thm:wr} as a special case.} These methods can be used to prove that $K_{d+1}$ is the maximizer for a large family of target loop-graphs $H$ (see Section~\ref{sec:prod}).
There are weighted generalizations of these problems and results; for clarity, however, we defer discussing the weighted versions until Section~\ref{sec:occup}, where we will see that introducing weights leads to a powerful new differential method for proving the unweighted results.
\medskip
We conclude this section with some open problems. Galvin~\cite{Gal13} conjectured that in Problem~\ref{prb:hom-max}, the maximizing $G$ is always either $K_{d,d}$ or $K_{d+1}$, as with all the cases we have seen so far. However, Sernau~\cite{Ser} recently found a counterexample (a similar construction was independently found by Pat Devlin; see Section~\ref{sec:neither}). As it stands, there does not seem to be a clean conjecture concerning the solution to Problem~\ref{prb:hom-max} on determining the maximizing $G$. Sernau suggested the possibility that there is a finite list of maximizing $G$ for every $d$.
\begin{conjecture}
For every $d \ge 3$, there exists a finite set $\mathcal{G}_d$ of $d$-regular graphs such that for every loop-graph $H$ and every $d$-regular graph $G$ one has
\[
\hom(G,H)^{1/v(G)} \le \max_{G' \in \mathcal{G}_d} \hom(G', H)^{1/v(G')}.
\]
\end{conjecture}
It has been speculated that the maximizing $G$ perhaps always has between $d+1$ and $2d$ vertices (corresponding to $K_{d+1}$ and $K_{d,d}$ respectively).
Sernau suggested the possibility that for a fixed $H$, the maximizer is always one of $K_{d,d}$ and $K_{d+1}$ as long as $d$ is large enough.
\begin{conjecture}
Let $H$ be a fixed loop-graph. There is some $d_H$ such that for all $d \ge d_H$ and $d$-regular graph $G$,
\[
\hom(G,H)^{1/v(G)} \le \max\{\hom (K_{d+1},H)^{1/(d+1)}, \hom(K_{d,d}, H)^{1/(2d)}\}.
\]
\end{conjecture}
We do not know if the supremum in Problem~\ref{prb:hom-max} can always be attained.
\begin{question}
Fix $d \ge 3$ and a loop-graph $H$. Is the supremum of $\hom(G,H)^{1/v(G)}$ over all $d$-regular graphs $G$ always attained by some $G$?
\end{question}
It could be the case that the supremum is the limit coming from a sequence of graphs $G$ of increasing size instead of a single graph $G$ on finitely many vertices. This is indeed the case if we wish to \emph{minimize} $\hom(G, \tikzHwr)^{1/v(G)}$ over $d$-regular graphs $G$. Csikv\'ari~\cite{Csi16ar} recently showed that the infimum of $\hom(G, \tikzHwr)^{1/v(G)}$ is given by a limit of $d$-regular graphs $G$ with increasing girth (i.e., $G$ locally looks like a $d$-regular tree at every vertex).
\section{Projection inequalities} \label{sec:proj}
The original proofs of Theorems~\ref{thm:kahn} and \ref{thm:GT} use beautiful entropy arguments, with a key input being Shearer's entropy inequality \cite{CGFS86}. Unfortunately we will not cover the entropy arguments, as they would take us too far afield. See Galvin's lecture notes~\cite{Gal} for a nice exposition of the entropy method for counting problems. The first non-entropy proof of these two theorems was given in \cite{LZ15} using a variant of H\"older's inequality, which we describe in this section. We begin our discussion with a classical projection inequality. See Friedgut's \textsc{Monthly} article \cite{Fri04} concerning how the projection inequalities relate to entropy.
Let $P_{xy}$ denote the projection operator from $\mathbb{R}^3$ onto the $xy$-plane. Similarly define $P_{xz}$ and $P_{yz}$. Let $S$ be a body in $\mathbb{R}^3$ such that each of the three projections $P_{xy}(S)$, $P_{xz}(S)$, and $P_{yz}(S)$ has area $1$. What is the maximum volume of $S$? (This is not as obvious as it may first appear. Note that we are projecting onto the 2-D coordinate planes as opposed to the 1-D axes.)
The answer is $1$, attained when $S$ is an axes-parallel cube of side-length $1$. Indeed, equivalently (by re-scaling), we have
\begin{equation}\label{eq:vol-projection}
\vol(S)^2 \le \area(P_{xy}(S)) \area(P_{xz}(S)) \area(P_{yz}(S)).
\end{equation}
Such results were first obtained by Loomis and Whitney~\cite{LW49}. More generally, for any functions $f, g, h \colon \mathbb{R}^2 \to \mathbb{R}$ (assuming integrability conditions)
\begin{multline} \label{eq:lw-3}
\left( \int_{\mathbb{R}^3} f(x,y)g(x,z)h(y,z) \, dxdydz\right)^2
\\\le
\left( \int_{\mathbb{R}^2} f(x,y)^2 \, dxdy\right)
\left(\int_{\mathbb{R}^2} g(x,z)^2 \, dxdz\right)
\left( \int_{\mathbb{R}^2} h(y,z)^2 \, dydz\right).
\end{multline}
To see how \eqref{eq:lw-3} implies \eqref{eq:vol-projection}, take $f, g, h$ to be the indicator functions of the projections of $S$ onto the three coordinate planes, and observe that $1_{S}(x,y,z) \le f(x,y)g(x,z)h(y,z)$.
Let us prove \eqref{eq:lw-3}. In fact, $x, y, z$ can vary over any measurable space instead of $\mathbb{R}$. In our application the domains will be discrete, i.e., the integral will be a sum. It suffices to prove the inequality when $f, g, h$ are nonnegative. The proof is via three simple applications of the Cauchy--Schwarz inequality, to the variables $x$, $y$, $z$, one at a time in that order:
\begin{align*}
&\int f(x,y)g(x,z)h(y,z) \, dxdydz
\\
&\le
\int \left(\int f(x,y)^2 \, dx\right)^{1/2} \left(\int g(x,z)^2 \, dx\right)^{1/2} h(y,z) \, dydz
\\
&\le
\int \left(\int f(x,y)^2 \, dx dy\right)^{1/2} \left(\int g(x,z)^2 \, dx\right)^{1/2} \left(\int h(y,z)^2 \, dy\right)^{1/2} \, dz
\\
&\le
\left(\int f(x,y)^2 \, dx dy\right)^{1/2} \left(\int g(x,z)^2 \, dx dz\right)^{1/2} \left(\int h(y,z)^2 \, dydz\right)^{1/2}
\\
&= \|f\|_2 \|g\|_2 \|h\|_2,
\end{align*}
where
\[
\|f\|_p := \left( \int |f|^p \right)^{1/p}
\]
is the $L^p$ norm. This proves \eqref{eq:lw-3}. This inequality strengthens H\"older's inequality, since a direct application of H\"older's inequality would yield
\begin{equation}\label{eq:holder-3}
\int f g h \le \|f\|_{3}\|g\|_{3}\|h\|_{3}.
\end{equation}
What we have shown is that whenever each of the variables $x, y, z$ appears in the argument of exactly two of the three functions $f, g, h$, then the $L^3$ norms on the right-hand side of~\eqref{eq:holder-3} can be sharpened to $L^2$ norms (we always have $\| f\|_2 \le \|f\|_3$ by convexity).
The above proof easily generalizes to prove the following more general result \cite{Fin92} (also see \cite[Theorem~3.1]{LZ15}). It is also related to the Brascamp--Lieb inequality~\cite{BL76}.
\begin{theorem} \label{thm:holder-ext}
Let $A_1, \dots, A_m$ be subsets of $[n]:= \{1, 2, \dots, n\}$ such that each $i \in [n]$ appears in exactly $d$ of the sets $A_j$. Let $\Omega_i$ be a measure space for each $i \in [n]$. For each $j$, let $f_j \colon \prod_{i \in A_j} \Omega_i \to \mathbb{R}$ be a measurable function, and let $P_j$ denote the projection of $\Omega_1 \times \cdots \times \Omega_n$ onto the coordinates indexed by $A_j$. Then
\[
\int_{\Omega_1 \times \cdots \times \Omega_n} f_1(P_1(\mathbf{x})) \cdots f_m(P_m(\mathbf{x})) \, d\mathbf{x} \le \|f_1\|_{d} \cdots \|f_m\|_{d}.
\]
\end{theorem}
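For example, inequality \eqref{eq:lw-3} is the special case $n = m = 3$ and $d = 2$, with $A_1 = \{1,2\}$, $A_2 = \{1,3\}$, and $A_3 = \{2,3\}$: each index appears in exactly two of the sets, which is why $L^2$ norms appear there.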
Using this inequality, we now prove Theorem~\ref{thm:GT}.
\begin{proof}[Proof of Theorem~\ref{thm:GT}] \cite{LZ15}
Let $V(G) = U \cup W$ be a bipartition of $G$. Since $G$ is $d$-regular, $|U| = |W| = v(G)/2$. For any $z_1, \dots, z_d \in V(H)$, let
\[
f(z_1, \dots, z_d) := |\{z \in V(H) : z_1z, \dots, z_dz \in E(H)\}|
\]
denote the size of the common neighborhood of $z_1, \dots, z_d$ in $H$.
For any $\phi \colon U \to V(H)$, the number of ways to extend $\phi$ to a graph homomorphism from $G$ to $H$ can be determined by noting that for each $w \in W$, there are exactly $f(\phi(u) : u \in N(w))$ choices for its image $\phi(w)$, independently of the choices for other vertices in $W$. Therefore,
\[
\hom(G, H) = \sum_{\phi \colon U \to V(H)} \prod_{w \in W} f(\phi(u) : u \in N(w)).
\]
Since $G$ is $d$-regular, every $u \in U$ is contained in $N(w)$ for exactly $d$ different $w \in W$. Therefore, by applying Theorem~\ref{thm:holder-ext} with the counting measure on $V(H)$, we find that
\[
\hom(G, H)
\le \|f \|_{d}^{|W|}.
\]
Note that
\[
\|f\|_{d}^d = \sum_{z_1, \dots, z_d \in V(H)} f(z_1,\dots, z_d)^d = \hom(K_{d,d},H).
\]
Therefore,
\[
\hom(G, H) \le \hom(K_{d,d},H)^{|W|/d} = \hom(K_{d,d},H)^{v(G)/(2d)}. \qedhere
\]
\end{proof}
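As a sanity check on the identity $\|f\|_d^d = \hom(K_{d,d},H)$, take $H = \tikzHind$ with looped vertex $a$ and unlooped vertex $b$. Then $f(z_1,\dots,z_d) = 2$ when $z_1 = \cdots = z_d = a$ (the common neighborhood is $\{a,b\}$) and $f = 1$ otherwise (the common neighborhood is $\{a\}$), so
\[
\hom(K_{d,d},\tikzHind) = \|f\|_d^d = 2^d + (2^d - 1) = 2^{d+1} - 1 = i(K_{d,d}),
\]
recovering the constant in Theorem~\ref{thm:kahn}.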
\section{A bipartite swapping trick} \label{sec:swap}
In the previous section, we proved Theorem~\ref{thm:kahn} about the maximum number of independent sets in a bipartite $d$-regular graph $G$. Now we use it to deduce Theorem~\ref{thm:zhao}, showing that the bipartite hypothesis can be dropped. The proof follows \cite{Zhao10,Zhao11}. The idea is to transform $G$ into a bipartite graph, namely the \emph{bipartite double cover} $G \times K_2$, with vertex set $V(G) \times \{0,1\}$. The vertices of $G \times K_2$ are labeled $v_i$ for $v \in V(G)$ and $i \in \{0,1\}$. Its edges are $u_0v_1$ and $u_1v_0$ for all $uv \in E(G)$. See Figure~\ref{fig:swap}. This construction is a special case of the graph tensor product, which we define in the next section. Note that $G \times K_2$ is always a bipartite graph. The following key lemma shows that $G \times K_2$ always has at least as many independent sets as two disjoint copies of $G$.
\begin{lemma}[\cite{Zhao10}] \label{lem:indep-bip}
Let $G$ be any graph (not necessarily regular). Then
\[
i(G)^2 \le i(G \times K_2).
\]
\end{lemma}
Since $G \times K_2$ is bipartite and $d$-regular, Theorem~\ref{thm:kahn} implies
\[
i(G)^2 \le i(G \times K_2) \le (2^{d+1} - 1)^{n/d},
\]
so that Theorem~\ref{thm:zhao} follows immediately. See Figure~\ref{fig:swap} for an illustration of the following proof.
\begin{proof}[Proof of Lemma~\ref{lem:indep-bip}]
Let $2 G$ denote a disjoint union of two copies of $G$. Label its vertices by $v_i$ with $v \in V$ and $i \in \{0,1\}$ so that its edges are $u_iv_i$ with $uv \in E(G)$ and $i \in \{0,1\}$. We will give an injection $\phi \colon I(2 G) \to I(G \times K_2)$. Recall that $I(G)$ is the set of independent sets of $G$. The injection would imply $i(G)^2 = i(2G) \le i(G \times K_2)$ as desired.
Fix an arbitrary order on all subsets of $V(G)$.
Let $S$ be an independent set of $2G$. Let
\[
E_\mathrm{bad}(S) := \{uv \in E(G) : u_0, v_1 \in S\}.
\]
Note that $E_\mathrm{bad}(S)$ is a bipartite subgraph of $G$, since each edge of $E_\mathrm{bad}(S)$ has exactly one of its endpoints in $\{v \in V(G) : v_0 \in S\}$ (or else $S$ would not be independent). Let $A$ denote the first subset (in the previously fixed ordering) of $V(G)$ such that all edges in $E_\mathrm{bad}(S)$ have one vertex in $A$ and the other outside $A$. Define $\phi(S)$ to be the subset of $V(G) \times \{0,1\}$ obtained by ``swapping'' the pairs in $A$, i.e., for all $v \in A$, $v_i \in \phi(S)$ if and only if $v_{1-i} \in S$ for each $i \in \{0,1\}$, and for all $v \notin A$, $v_i \in \phi(S)$ if and only if $v_i \in S$ for each $i \in \{0,1\}$. It is not hard to verify that $\phi(S)$ is an independent set in $G \times K_2$. The swapping procedure fixes the ``bad'' edges.
It remains to verify that $\phi$ is an injection. For every $S \in I(2G)$, once we know $T = \phi(S)$, we can recover $S$ by first setting
\[
E'_\mathrm{bad}(T) = \{uv \in E(G) : u_i, v_i \in T \text{ for some } i \in \{0,1\} \},
\]
so that $E_\mathrm{bad}(S) = E_\mathrm{bad}'(T)$, and then finding $A$ as earlier and swapping the pairs of $A$ back. (Remark: it follows that $T \in I(G\times K_2)$ lies in the image of $\phi$ if and only if $E_\mathrm{bad}'(T)$ is bipartite.)
\end{proof}
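As a small illustration of the lemma, take $G = K_3$: its bipartite double cover is $K_3 \times K_2 = C_6$, and indeed $i(K_3)^2 = 4^2 = 16 \le 18 = i(C_6)$.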
\begin{figure}
\begin{tikzpicture}[scale=.8]
\begin{scope}
\begin{scope}[shift={(-.1,.2)}]
\node[W] (a) at (0,0) {};
\node[B] (b) at (3,0) {};
\node[W] (c) at (1,1) {};
\node[W] (d) at (2,1) {};
\node[B] (e) at (0,2) {};
\node[W] (f) at (3,2) {};
\end{scope}
\begin{scope}[shift={(.1,-.2)}]
\node[W] (a1) at (0,0) {};
\node[W] (b1) at (3,0) {};
\node[B] (c1) at (1,1) {};
\node[W] (d1) at (2,1) {};
\node[W] (e1) at (0,2) {};
\node[B] (f1) at (3,2) {};
\end{scope}
\draw (a)--(b)--(d)--(c)--(a)--(e)--(c); \draw (e)--(f)--(b); \draw (d)--(f);
\draw (a1)--(b1)--(d1)--(c1)--(a1)--(e1)--(c1); \draw (e1)--(f1)--(b1); \draw (d1)--(f1);
\node at (1.5,-.8) {$2G$};
\end{scope}
\begin{scope}[shift={(6,0)}]
\begin{scope}[shift={(-.1,.2)}]
\node[W] (a) at (0,0) {};
\node[B] (b) at (3,0) {};
\node[W] (c) at (1,1) {};
\node[W] (d) at (2,1) {};
\node[B] (e) at (0,2) {};
\node[W] (f) at (3,2) {};
\end{scope}
\begin{scope}[shift={(.1,-.2)}]
\node[W] (a1) at (0,0) {};
\node[W] (b1) at (3,0) {};
\node[B] (c1) at (1,1) {};
\node[W] (d1) at (2,1) {};
\node[W] (e1) at (0,2) {};
\node[B] (f1) at (3,2) {};
\end{scope}
\draw[dashed] (0,2) circle (.8);
\draw[dashed] (3,0) circle (.8);
\draw (a)--(b1)--(d)--(c1)--(a)--(e1)--(c);
\draw (e1)--(f)--(b1);
\draw (d1)--(f);
\draw (a1)--(b)--(d1)--(c)--(a1)--(e)--(c1);
\draw (f1)--(b);
\draw (d)--(f1);
\draw[ultra thick] (e)--(c1);
\draw[ultra thick] (b)--(f1);
\draw[ultra thick] (e)--(f1);
\node at (1.5,-.8) {$G \times K_2$};
\end{scope}
\begin{scope}[shift={(12,0)}]
\begin{scope}[shift={(-.1,.2)}]
\node[W] (a) at (0,0) {};
\node[W] (b) at (3,0) {};
\node[W] (c) at (1,1) {};
\node[W] (d) at (2,1) {};
\node[W] (e) at (0,2) {};
\node[W] (f) at (3,2) {};
\end{scope}
\begin{scope}[shift={(.1,-.2)}]
\node[W] (a1) at (0,0) {};
\node[B] (b1) at (3,0) {};
\node[B] (c1) at (1,1) {};
\node[W] (d1) at (2,1) {};
\node[B] (e1) at (0,2) {};
\node[B] (f1) at (3,2) {};
\end{scope}
\draw (a)--(b1)--(d)--(c1)--(a)--(e1)--(c); \draw (e1)--(f)--(b1); \draw (d1)--(f);
\draw (a1)--(b)--(d1)--(c)--(a1)--(e)--(c1); \draw (e)--(f1)--(b); \draw (d)--(f1);
\node at (1.5,-.8) {$G \times K_2$};
\end{scope}
\end{tikzpicture}
\caption{The bipartite swapping trick in the proof of Lemma~\ref{lem:indep-bip}: swapping the circled pairs of vertices (denoted $A$ in the proof) fixes the bad edges (bolded), transforming an independent set of $2G$ into an independent set of $G \times K_2$.}
\label{fig:swap}
\end{figure}
In \cite{Zhao11}, the above method was used to extend Theorem~\ref{thm:GT} to Theorem~\ref{thm:threshold}, and more generally, a wider family of target graphs defined below.
\begin{definition}
A loop-graph $H$ is a \emph{bipartite swapping target} if $H^\mathrm{bst}$ is bipartite, where $H^\mathrm{bst}$ is the auxiliary graph defined by taking $V(H^\mathrm{bst}) = V(H) \times V(H)$ and an edge between $(u,v)$ and $(u',v')$ if and only if
\[
uu',vv' \in E(H) \quad \text{and} \quad \bigl(uv' \notin E(H) \text{ or } u'v \notin E(H)\bigr).
\]
\end{definition}
\begin{theorem}[\cite{Zhao11}] \label{thm:bst} Let $H$ be a bipartite swapping target. Then $\hom(G, H)^2 \le \hom(G \times K_2, H)$ for all graphs $G$. Consequently, for $d$-regular graphs $G$, one has
\[
\hom(G, H)^{1/v(G)} \le \hom(K_{d,d},H)^{1/(2d)}.
\]
\end{theorem}
Sernau~\cite{Ser} extended the class of $H$ for which Theorem~\ref{thm:bst} holds by observing that this class is closed under taking tensor products. See Section~\ref{sec:closure-tensor}.
\begin{example} Let $H$ be $P_k$ (the path with $k$ vertices) with a single loop added at the $i$-th vertex. Then $H$ is a bipartite swapping target if $i \in \{1,2,k-1,k-2\}$. The bipartition of $H^\mathrm{bst}$ is indicated below by vertex colors.
\begin{center}
\input{fig-bst-loop-path}
\end{center}
On the other hand, $P_5$ with a loop added to the middle vertex is not a bipartite swapping target, as seen by the odd cycle highlighted below.
\begin{center}
\input{fig-non-bst}
\end{center}
Any $P_k$ with a single loop added at vertex $i \notin \{1,2,k-1,k-2\}$ cannot be a bipartite swapping target as it contains the above $H$ as an induced subgraph.
\end{example}
The above method does not extend to $H= K_q$, corresponding to the number of proper $q$-colorings. Nonetheless the analogous strengthening of Conjecture~\ref{conj:coloring} is conjectured to hold.
\begin{conjecture}[\cite{Zhao11}] \label{conj:color-bip}
For every graph $G$ and every $q \ge 3$,
\[
\hom(G, K_q)^2 \le \hom(G \times K_2, K_q).
\]
\end{conjecture}
Conjecture~\ref{conj:color-bip} implies Conjecture~\ref{conj:coloring}. It is known \cite{Zhao11} that Conjecture~\ref{conj:color-bip} holds when $q$ is sufficiently large as a function of $G$.
\section{Graph products and powers} \label{sec:prod}
We define several operations on (loop-)graphs.
\begin{itemize}
\item \emph{Tensor product} $G \times H$: its vertices are $V(G) \times V(H)$, with $(u,v)$ and $(u',v') \in V(G) \times V(H)$ adjacent in $G \times H$ if $uu' \in E(G)$ and $vv' \in E(H)$. This construction is also known as the \emph{categorical product}.
\item \emph{Exponentiation} $H^G$: its vertices are maps $f \colon V(G) \to V(H)$ (not necessarily homomorphisms), where $f$ and $f'$ are adjacent if $f(u)f'(v) \in E(H)$ whenever $uv \in E(G)$.
\item $G^\circ$: same as $G$ except that every vertex now has a loop.
\item $\ell(H)$: subgraph of $H$ induced by its looped vertices, or equivalently, delete all non-looped vertices from $H$.
\end{itemize}
\begin{example}
The tensor product $G \times K_2$ (here $K_2 = \tikzKtwo$) is the bipartite double cover used in the previous section (see Figure~\ref{fig:swap}).
\end{example}
\begin{example}
We have $\tikzHind \times \tikzHind = \tikzHindsquared$, with adjacency matrix
$\left(\begin{smallmatrix}
1 & 1 & 1 & 1 \\
1 & 0 & 1 & 0 \\
1 & 1 & 0 & 0 \\
1 & 0 & 0 & 0
\end{smallmatrix}\right)$,
which can be obtained by taking the adjacency matrix $\left(\begin{smallmatrix}
1 & 1 \\
1 & 0
\end{smallmatrix}\right)$
of $\tikzHind$ and then replacing each $1$ in the matrix by a copy of
$\left(\begin{smallmatrix}
1 & 1 \\
1 & 0
\end{smallmatrix}\right)$
and replacing each $0$ by a copy of
$\left(\begin{smallmatrix}
0 & 0 \\
0 & 0
\end{smallmatrix}\right)$. More generally, the adjacency matrix of a tensor product of (loop-)graphs is the matrix tensor product of the adjacency matrices of the graphs.
\end{example}
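The matrix identity is easy to reproduce numerically; a minimal sketch (assuming NumPy is available) computes the Kronecker product directly.
\begin{verbatim}
# The adjacency matrix of a tensor product of (loop-)graphs is the
# Kronecker product of the adjacency matrices.
import numpy as np

H_ind = np.array([[1, 1],
                  [1, 0]])        # a looped vertex joined to a plain one
print(np.kron(H_ind, H_ind))     # adjacency matrix of H_ind x H_ind
# [[1 1 1 1]
#  [1 0 1 0]
#  [1 1 0 0]
#  [1 0 0 0]]
\end{verbatim}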
\begin{example}\label{ex:wr-ind-power-loop}
For any loop-graph $H$, the graph $H^{K_2}$ has vertex set $V(H) \times V(H)$, with $(u,v)$ and $(u',v') \in V(H) \times V(H)$ adjacent if and only if $uv', u'v \in E(H)$. In particular, if $H_\mathrm{ind} = \tikzHind$, then $H_\mathrm{ind}^{K_2} = \tikzHwrdown$.
\end{example}
Here are a few easy yet key facts relating the above operations with graph homomorphisms. The proofs are left as exercises for the reader.
\begin{equation}
\label{eq:H-prod}
\hom(G, H_1 \times H_2) = \hom(G, H_1) \hom(G, H_2)
\end{equation}
\begin{equation}
\label{eq:H-power}
\hom(G \times G', H) = \hom(G, H^{G'})
\end{equation}
\begin{equation}
\label{eq:G-loop}
\hom(G^\circ, H) = \hom(G, \ell(H)).
\end{equation}
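These identities can be spot-checked by exhaustive enumeration on tiny graphs; below is a minimal Python sketch (standard library plus NumPy assumed) in which \texttt{hom} simply enumerates all vertex maps.
\begin{verbatim}
# Brute-force verification of (eq:H-prod) and (eq:H-power) on tiny
# graphs given by 0/1 adjacency matrices.
from itertools import product
import numpy as np

def hom(A_G, A_H):
    n, m = len(A_G), len(A_H)
    return sum(1 for f in product(range(m), repeat=n)
               if all(A_H[f[u]][f[v]]
                      for u in range(n) for v in range(n) if A_G[u][v]))

P3   = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]   # path on 3 vertices
Hind = [[1, 1], [1, 0]]
K2   = [[0, 1], [1, 0]]

# (eq:H-prod): hom(G, H1 x H2) = hom(G, H1) * hom(G, H2)
assert hom(P3, np.kron(Hind, K2).tolist()) == hom(P3, Hind) * hom(P3, K2)

# (eq:H-power) with G' = K2: hom(G x K2, H) = hom(G, H^{K2}), where
# H^{K2} is as in Example ex:wr-ind-power-loop: (u,v) ~ (u',v') iff
# uv', u'v in E(H); rows and columns are indexed by pairs (u,v).
H_pow = [[Hind[u][vv] and Hind[uu][v]
          for uu in range(2) for vv in range(2)]
         for u in range(2) for v in range(2)]
assert hom(np.kron(P3, K2).tolist(), Hind) == hom(P3, H_pow)
\end{verbatim}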
\subsection{$K_{d+1}$ as maximizer}
Now we prove Theorem~\ref{thm:wr} concerning the Widom--Rowlinson model. Recall it says that for any $d$-regular graph $G$, we have
\[
\hom(G, \tikzHwr)^{1/v(G)} \le \hom(K_{d+1}, \tikzHwr)^{1/(d+1)}.
\]
\begin{proof}[Proof of Theorem~\ref{thm:wr}; following \cite{CCPT}]
We have $\tikzHwr = \ell(H_\mathrm{ind}^{K_2})$ (Example~\ref{ex:wr-ind-power-loop}). For any graph $G$,
\begin{equation} \label{eq:wr-ind}
\hom(G,\tikzHwr) = \hom(G,\ell(H_\mathrm{ind}^{K_2})) = \hom(G^\circ, H_\mathrm{ind}^{K_2}) = \hom(G^\circ \times K_2, \tikzHind).
\end{equation}
When $G$ is $d$-regular, $G^\circ \times K_2$ is a $(d+1)$-regular bipartite graph, so Theorem~\ref{thm:GT} (or Theorem~\ref{thm:kahn}) implies that the above quantity is at most $\hom(K_{d+1,d+1},\tikzHind)^{v(G)/(d+1)}$. Since $K_{d+1,d+1} = K_{d+1}^\circ \times K_2$, we have by~\eqref{eq:wr-ind},
\begin{align*}
\hom(G,\tikzHwr)^{1/v(G)}
&= \hom(G^\circ \times K_2, \tikzHind)^{1/v(G)}
\\
&\le \hom(K_{d+1}^\circ \times K_2,\tikzHind)^{1/(d+1)}
\\
&
= \hom(K_{d+1}, \tikzHwr)^{1/(d+1)}. \qedhere
\end{align*}
\end{proof}
The above proof exploits the connection~\eqref{eq:wr-ind} between the hard-core model (independent sets) and the Widom--Rowlinson model. This relationship had been previously observed in \cite{BHW99}.
More generally, the above proof extends to give the following result.
\begin{theorem}[Sernau \cite{Ser}] \label{thm:Serneau-loop-power}
Let $H = \ell(A^B)$ where $A$ is any loop-graph and $B$ is a bipartite graph. For any $d$-regular graph $G$,
\[
\hom(G, H)^{1/v(G)} \le \hom(K_{d+1},H)^{1/(d+1)}.
\]
\end{theorem}
\begin{proof}
For every bipartite graph $B$, any product of the form $G \times B$ is bipartite, and furthermore $B \times K_2$ is two disjoint copies of $B$. For $A$ and $B$ as in the hypothesis of the theorem, we have, for any graph $G$,
\begin{align*}
\hom(G, \ell(A^B))
&= \hom(G^\circ, A^B)
= \hom(G^\circ \times B, A)
\\
&= \hom(G^\circ \times B \times K_2, A)^{1/2}
= \hom(G^\circ \times K_2, A^B)^{1/2}.
\end{align*}
Since $G$ is $d$-regular, $G^\circ \times K_2$ is a $(d+1)$-regular bipartite graph, so Theorem~\ref{thm:GT} implies that (recall $K_{d+1}^\circ \times K_2 \cong K_{d+1,d+1}$)
\begin{align*}
\hom(G, \ell(A^B))^{1/v(G)}
&= \hom(G^\circ \times K_2, A^B)^{1/(2v(G))}
\\
&\le \hom(K_{d+1}^\circ \times K_2, A^B)^{1/(2d+2)}
\\
&= \hom(K_{d+1},\ell(A^B))^{1/(d+1)}. \qedhere
\end{align*}
\end{proof}
One can extend Theorem~\ref{thm:Serneau-loop-power} by considering a bigraph version of Theorem~\ref{thm:GT}. A \emph{bigraph} $G$ is a bipartite graph $G$ along with a specified left/right-vertex bipartition $V(G) = V_L(G) \cup V_R(G)$. Given two bigraphs $G$ and $H$, a homomorphism $\phi$ from $G$ to $H$ is a graph homomorphism of the underlying graphs that respects the vertex bipartition, i.e., $\phi(V_L(G)) \subseteq V_L(H)$ and $\phi(V_R(G)) \subseteq V_R(H)$. Theorem~\ref{thm:GT}, with essentially the same proof, holds for bigraphs $G$ and $H$. The proof of Theorem~\ref{thm:Serneau-loop-power} can then be easily modified to establish the following result.
\begin{theorem} \label{thm:H-bigraph-hom}
Let $A$ and $B$ be two bigraphs. Let $H$ denote the loop-graph with vertices being bigraph homomorphisms from $B$ to $A$, such that $\phi, \phi' \in V(H)$ are adjacent if and only if $\phi(u)\phi'(v) \in E(A)$ whenever $uv \in E(B)$ (in particular, all vertices of $H$ are automatically looped). Then for any $d$-regular graph $G$, one has
\[
\hom(G,H)^{1/v(G)} \le \hom(K_{d+1}, H)^{1/(d+1)}.
\]
\end{theorem}
It may not be obvious which $H$ can arise in Theorem~\ref{thm:H-bigraph-hom}. The following special case, proved in \cite{CCPT} (prior to \cite{Ser}), provides a nice family of examples.
\begin{definition} \label{def:extended-line-graph}
The \emph{extended line graph} $\widetilde H$ of a graph $H$ has $V(\widetilde H) = E(H)$ and two edges $e$ and $f$ of $H$ are adjacent in $\widetilde H$ if
\begin{enumerate}
\item $e = f$, or
\item $e$ and $f$ share a common vertex, or
\item $e$ and $f$ are opposite edges of a $4$-cycle in $H$.
\end{enumerate}
\end{definition}
Note that every vertex of $\widetilde H$ is automatically looped. If $B$ is a bipartite graph, then the graph $H$ in Theorem~\ref{thm:H-bigraph-hom} that arises from $A = K_2$ and $B$ is precisely $\widetilde B$.
\begin{corollary}[\cite{CCPT}] \label{cor:H-line-graph}
Let $\widetilde H$ be the extended line graph of a bipartite graph $H$. For any $d$-regular graph $G$,
\[
\hom(G,\widetilde H)^{1/v(G)} \le \hom(K_{d+1}, \widetilde H)^{1/(d+1)}.
\]
\end{corollary}
For a simple graph $H$, let $H^\circ$ denote $H$ with a loop added at every vertex. Let $P_k$ denote the path of $k$ vertices, and $C_k$ the cycle with $k$ vertices.
\begin{example}
One has $\widetilde P_{k+1} = P_{k}^\circ$ for all $k$. Also, $\widetilde C_k = C_k^\circ$ for all $k \ne 4$.
\end{example}
\begin{corollary}[\cite{CCPT}]
Let $H = C_k^\circ$ with even $k \ge 6$ or $H = P_k^\circ$ for any $k \ge 1$. For any $d$-regular graph $G$,
\[
\hom(G, H)^{1/v(G)} \le \hom(K_{d+1},H)^{1/(d+1)}.
\]
\end{corollary}
\subsection{Closure under tensor products} \label{sec:closure-tensor}
Sernau~\cite{Ser} observed that, for any $d$, if $H_1$ and $H_2$ both have the property that $G = K_{d,d}$ maximizes $\hom(G,H_i)^{1/v(G)}$ over all $d$-regular graphs $G$, then $H_1 \times H_2$ has the same property, by \eqref{eq:H-prod}. In other words, the set of $H$ such that $G = K_{d,d}$ is the maximizer in Problem~\ref{prb:hom-max} is closed under tensor products. This observation enlarges the set of such $H$ previously obtained in Theorems~\ref{thm:threshold} and \ref{thm:bst}.
Similarly, the set of $H$ such that $G = K_{d+1}$ maximizes the expression $\hom(G,H)^{1/v(G)}$ over $d$-regular graphs $G$ is also closed under tensor products. However, the loop-graphs $H$ that arise in Theorem~\ref{thm:H-bigraph-hom} are already closed under taking tensor products, so no new cases are obtained via taking the tensor product closure.
\section{Neither $K_{d,d}$ nor $K_{d+1}$} \label{sec:neither}
So far in all cases of Problem~\ref{prb:hom-max} that we have considered, the maximizing $G$ is always either $K_{d,d}$ or $K_{d+1}$. It was conjectured~\cite{Gal13} that one of $K_{d,d}$ and $K_{d+1}$ always maximizes $\hom(G, H)^{1/v(G)}$ for every $H$. However, Sernau~\cite{Ser} showed that this is false (a similar construction was independently found by Pat Devlin).
Let $d \ge 4$ and let $G$ be a $d$-regular graph with $v(G) < 2d$ other than $K_{d+1}$. Brooks' theorem tells us that $G$ is $d$-colorable, so that $\hom(G, K_d) > 0$. It follows that for this $G$,
\begin{align*}
\hom(G, k K_d)^{1/v(G)}
&= k^{1/v(G)} \hom(G,K_d)^{1/v(G)}
\\
&> k^{1/(2d)} \hom(K_{d,d},K_d)^{1/(2d)} =
\hom(K_{d,d},k K_d)^{1/(2d)}
\end{align*}
for sufficiently large $k$ (as a function of $d$) since $v(G) < 2d$. Also,
\[
\hom(G, k K_d)^{1/v(G)} > 0 = \hom(K_{d+1},k K_d)^{1/(d+1)}.
\]
Therefore neither $G = K_{d,d}$ nor $G = K_{d+1}$ maximizes $\hom(G, k K_d)^{1/v(G)}$ over all $d$-regular graphs $G$.\footnote{If we wish the target graph $H$ to be connected, we can slightly modify the construction by connecting the disjoint copies by paths of length two, say.} However, we do not know which $G$ maximizes $\hom(G,k K_d)^{1/v(G)}$. For $d=3$, Csikv\'ari \cite{Csi-p} found a counterexample using a similar construction.
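The construction is easy to verify numerically for, say, $d = 4$ and $G$ the octahedron $K_{2,2,2}$ ($4$-regular on $6 < 2d$ vertices); a minimal Python sketch (standard library only; the choice of $G$ and of $k$ is ours, for illustration) follows. For connected $G$, $\hom(G, k K_d) = k \cdot \hom(G, K_d)$.
\begin{verbatim}
# Numeric instance of the construction above, with d = 4.
from itertools import product

def hom_count(n, edges, q):
    return sum(1 for c in product(range(q), repeat=n)
               if all(c[u] != c[v] for u, v in edges))

d = 4
oct_edges = [(u, v) for u in range(6) for v in range(u + 1, 6)
             if {u, v} not in ({0, 1}, {2, 3}, {4, 5})]  # K_{2,2,2}
K44 = [(i, j + 4) for i in range(4) for j in range(4)]
a = hom_count(6, oct_edges, d)    # hom(G, K_4) = 96 > 0
b = hom_count(8, K44, d)          # hom(K_{4,4}, K_4)
k = 10 ** 6                       # "sufficiently large" for this instance
assert (k * a) ** (1 / 6) > (k * b) ** (1 / 8)   # beats K_{d,d}
assert a > 0                      # beats K_{d+1}, where hom = 0
\end{verbatim}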
In general, we do not know which graphs $G$ (other than $K_{d+1}$ and $K_{d,d}$) can arise as maximizers for $\hom(G, H)^{1/v(G)}$ in Problem~\ref{prb:hom-max}. See the end of Section~\ref{sec:hom} for some open questions and conjectures.
\section{Occupancy fraction} \label{sec:occup}
The original proof of Theorem~\ref{thm:kahn} used the entropy method~\cite{Kahn01}. The proof in Section~\ref{sec:proj}, following~\cite{LZ15}, used a variant of H\"older's inequality, and is related to the original entropy method proof.
Recently, an elegant new proof of the result was found \cite{DJPR1} using a novel method, unrelated to previous proofs. We discuss this new technique in this section. It will be necessary to introduce weighted versions of the problems.
The \emph{independence polynomial} of a graph $G$ is defined by
\[
P_G(\lambda) := \sum_{I \in I(G)} \lambda^{|I|}.
\]
Recall that $I(G)$ is the set of independent sets of $G$. In particular, $P_G(1) = i(G)$. Theorem~\ref{thm:kahn}, which says that $i(G)^{1/v(G)} \le i(K_{d,d})^{1/(2d)}$ for $d$-regular bipartite $G$, extends to this weighted version of the number of independent sets \cite{GT04}\footnote{The case $\lambda \ge 1$ had been established earlier by Kahn~\cite{Kahn02}.}. The bipartite swapping trick in Section~\ref{sec:swap} also extends to the weighted setting~\cite{Zhao10}.
\begin{theorem}[\cite{GT04} for bipartite $G$; \cite{Zhao10} for general $G$] \label{thm:indep-poly}
If $G$ is a $d$-regular graph and $\lambda \ge 0$, then
\[
P_G(\lambda)^{1/v(G)} \le P_{K_{d,d}}(\lambda)^{1/(2d)}.
\]
\end{theorem}
The \emph{hard-core model} with \emph{fugacity} $\lambda$ on $G$ is defined as the probability distribution on independent sets of $G$ where an independent set $I$ is chosen with probability proportional to $\lambda^{|I|}$, i.e., with probability
\[
\Pr_\lambda[I] = \frac{\lambda^{|I|}}{P_G(\lambda)}.
\]
The \emph{occupancy fraction} of $I$ is the fraction of vertices of $G$ occupied by $I$. The expected occupancy fraction of a random independent set from the hard-core model is
\[
\alpha_G(\lambda) := \frac{1}{v(G)} \sum_{I \in I(G)} |I| \cdot \Pr_\lambda[I]
= \frac{\sum_{I \in I(G)} |I| \lambda^{|I|}}{v(G) P_G(\lambda)}
= \frac{\lambda P'_G(\lambda)}{v(G) P_G(\lambda)}.
\]
The occupancy fraction is an ``observable''---a quantity associated with each instance produced by the model.
It turns out that $K_{d,d}$ maximizes the occupancy fraction among all $d$-regular graphs.
\begin{theorem}[Davies, Jenssen, Perkins, and Roberts~\cite{DJPR1}] \label{thm:occupancy}
For all $d$-regular graphs $G$ and all $\lambda\ge 0$, we have
\begin{equation}\label{eq:thm-occupancy}
\alpha_G(\lambda) \le \alpha_{K_{d,d}}(\lambda) = \frac{\lambda(1+\lambda)^{d-1}}{2(1+\lambda)^d - 1}.
\end{equation}
\end{theorem}
Since the expected occupancy fraction is proportional to the logarithmic derivative of $P_G(\lambda)^{1/v(G)}$, the inequality for the expected occupancy fraction implies the corresponding inequality for the independence polynomial. Indeed, Theorem~\ref{thm:occupancy} implies Theorem~\ref{thm:indep-poly} (and hence Theorems~\ref{thm:kahn} and \ref{thm:zhao}) since
\[
\int_0^\lambda \frac{\alpha_G(t)}{t} \, dt
= \frac{1}{v(G)}\int_0^\lambda \frac{P'_G(t)}{P_G(t)} \, dt
= \frac{\log P_G(\lambda)}{v(G)}.
\]
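Both $\alpha_G(\lambda)$ and the closed form for $K_{d,d}$ are easy to compute by exhaustive enumeration on small graphs; here is a minimal Python sketch (standard library only) comparing $K_4$ with $K_{3,3}$ for $d = 3$.
\begin{verbatim}
# Brute-force expected occupancy fraction alpha_G(lambda).
from itertools import combinations

def occupancy_fraction(n, edges, lam):
    Z = W = 0.0
    for r in range(n + 1):
        for S in combinations(range(n), r):
            Sset = set(S)
            if all(not (e <= Sset) for e in edges):  # S independent
                Z += lam ** r
                W += r * lam ** r
    return W / (n * Z)

d, lam = 3, 1.0
K4  = (4, [frozenset(p) for p in combinations(range(4), 2)])
K33 = (6, [frozenset({i, j + 3}) for i in range(3) for j in range(3)])
assert occupancy_fraction(*K4, lam) <= occupancy_fraction(*K33, lam)
# closed form from Theorem thm:occupancy for K_{d,d}
closed = lam * (1 + lam) ** (d - 1) / (2 * (1 + lam) ** d - 1)
assert abs(occupancy_fraction(*K33, lam) - closed) < 1e-12
\end{verbatim}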
We reproduce here two proofs of Theorem~\ref{thm:occupancy}. They are both based on the following idea, introduced in \cite{DJPR1} for this problem. We draw a random independent set $I$ from the hard-core model and look at the neighborhood of a uniform random vertex $v \in V(G)$. The expected occupancy fraction is then the probability that $v \in I$. (It is helpful here that the occupancy fraction is an observable quantity.) We then analyze how the neighborhood of $v$ should look in relation to $I$. Since the graph is regular, a uniform random neighbor of $v$ is uniformly distributed in $V(G)$. By finding an appropriate set of constraints on the probabilities of seeing various neighborhood configurations of $v$, we can bound the probability that $v \in I$.
The first proof is given below under the additional simplifying assumption that $G$ is triangle-free (which includes all bipartite graphs and much more). See~\cite{DJPR1} for how to extend this proof to all regular graphs.
\begin{proof}[Proof of Theorem~\ref{thm:occupancy} for triangle-free $G$]
Let $I$ be an independent set of $G$ drawn according to the hard-core model with fugacity $\lambda$. For each $v \in V(G)$, let $p_v$ denote the probability that $v \in I$. We say $v \in V(G)$ is \emph{uncovered} if none of the neighbors of $v$ are in $I$, i.e., $N(v) \cap I = \emptyset$. If $v \in I$ then $v$ is necessarily uncovered. Conversely, conditioned on $v$ being uncovered, one has $v \in I$ with probability $\lambda/(1+\lambda)$. So the probability that $v$ is uncovered is $p_v(1+\lambda) / \lambda$.
Let $U_v$ denote the set of uncovered neighbors of $v$. Since $G$ is triangle-free, $U_v$ is an independent set. Conditioned on $U_v$ being the uncovered neighbors of $v$, the probability that $v$ is uncovered, which is equivalent to $U_v \cap I = \emptyset$, is exactly $(1+\lambda)^{-|U_v|}$. Hence
\begin{equation} \label{eq:p_v-ineq}
\frac{1+\lambda}{\lambda} p_v = \mathbb{E}[(1 + \lambda)^{-|U_v|}]
\le 1 - \frac{\mathbb{E}[|U_v|]}{d} \left( 1 - (1+\lambda)^{-d}\right),
\end{equation}
where the inequality follows from $0 \le |U_v| \le d$ and the convexity of the function $x \mapsto (1+\lambda)^{-x}$, so that $(1+\lambda)^{-x} \le 1 - \frac{x}{d}(1-(1+\lambda)^{-d})$ for all $0 \le x \le d$ by linear interpolation.
If $v$ is chosen from $V(G)$ uniformly at random, then $\mathbb{E}[p_v] = \alpha_G(\lambda)$ is the expected occupancy fraction. Similarly, $\mathbb{E}[|U_v|]/d$ is the probability that a random vertex is uncovered (here we use again that $G$ is $d$-regular), which equals $\mathbb{E}[p_v] \frac{1+ \lambda}{\lambda} = \alpha_G(\lambda) \frac{1+\lambda}{\lambda}$. Substituting this into \eqref{eq:p_v-ineq}, we obtain
\[
\frac{1+\lambda}{\lambda} \alpha_G(\lambda) \le 1 - \alpha_G(\lambda) \frac{1+\lambda}{\lambda} \left( 1 - (1+\lambda)^{-d}\right).
\]
Rearranging gives us \eqref{eq:thm-occupancy}.
\end{proof}
In \cite{DJPR1}, Theorem~\ref{thm:occupancy} was proved for all $d$-regular graphs $G$ by considering all graphs on $d$ vertices that could be induced by the neighborhood of a vertex in $G$ and using a linear program to constrain the probability distribution of the neighborhood profile of a random vertex. When $G$ is triangle-free, the neighborhood of a vertex is always an independent set, which significantly simplifies the situation. The following conjecture extends Theorem~\ref{thm:GT} to triangle-free graphs.
\begin{conjecture}[\cite{CCPT}]
Let $G$ be a triangle-free $d$-regular graph and $H$ a loop-graph. Then
\[
\hom(G,H)^{1/v(G)} \le \hom(K_{d,d},H)^{1/(2d)}.
\]
\end{conjecture}
Next we give an alternative proof of Theorem~\ref{thm:occupancy} due to Perkins~\cite{Perkins-pc}, based on a similar idea. In the following proof, we do not need to assume that $G$ is triangle-free. In the proof, we introduce an additional constraint, which allows us to obtain the result more quickly. This simplification seems to be somewhat specific to independent sets.
\begin{proof}[Second proof of Theorem~\ref{thm:occupancy}]
Let $I$ be an independent set of $G$ drawn according to the hard-core model with fugacity $\lambda$, and let $v$ be a uniform random vertex in $G$. Let $Y = \abs{I \cap N(v)}$ denote the number of neighbors of $v$ in $I$ (not including $v$ itself). Let $p_k = \mathbb{P}(Y = k)$. Since $Y \in \{0, 1, \dots, d\}$,
\begin{equation}\label{eq:occ-prob-sum}
p_0 + p_1 + \cdots + p_d = 1.
\end{equation}
However, not all vectors of probabilities $(p_0, \dots, p_d)$ are feasible. The art of the method is in finding additional constraints on the probability distribution.
As in the previous proof, since $v$ is uncovered if and only if $Y = 0$, we have
\[
\alpha_G(\lambda) = \mathbb{P}(v \in I) = \frac{\lambda}{1+\lambda} \mathbb{P}(Y = 0) = \frac{\lambda}{1+\lambda} p_0.
\]
On the other hand, since $G$ is $d$-regular, a uniform random neighbor of $v$ is also uniformly distributed in $V(G)$, so we have
\[
\alpha_G(\lambda) = \frac{1}{d} \mathbb{E}[Y] = \frac{1}{d}(p_1 + 2p_2 + \cdots + dp_d).
\]
Comparing the previous two relations, we obtain
\begin{equation}
\label{eq:occ-neighbor-relation}
\frac{\lambda}{1+\lambda} p_0 = \frac{1}{d}(p_1 + 2p_2 + \cdots + dp_d).
\end{equation}
Now, let us compare the probability that $v$ has $k$ versus $k-1$ neighbors. In an event where exactly $k$ neighbors of $v$ are occupied, we can remove any of the occupied neighbors from $I$, and obtain another independent set where $v$ has exactly $k-1$ neighbors. There are $k$ ways to remove an element, but we over-count by a factor of at most $d-k+1$. Also factoring in the weight multiplier, we obtain the inequality
\begin{equation} \label{eq:occ-descend}
(d-k+1) \lambda p_{k-1} \ge k p_k, \qquad \text{for } 2 \le k \le d.
\end{equation}
The constraints \eqref{eq:occ-prob-sum}, \eqref{eq:occ-neighbor-relation}, and \eqref{eq:occ-descend} together form a linear program with variables $p_0, \dots, p_d$.
Next we show that these linear constraints together imply $p_0 \le \frac{(1+\lambda)^d}{2(1+\lambda)^d - 1}$, which gives the desired bound on $\alpha_G(\lambda) = \frac{\lambda}{1+\lambda}p_0$. Equality is attained for the probability distribution $(p_0, \dots, p_d)$ arising from $G = K_{d,d}$.
To prove this claim, first we show that if $(p_0, \dots, p_d)$ achieves the maximum value of $p_0$ while satisfying the constraints \eqref{eq:occ-prob-sum}, \eqref{eq:occ-neighbor-relation}, and \eqref{eq:occ-descend}, then every inequality in \eqref{eq:occ-descend} must be an equality. Indeed, if we have $(d-k+1) \lambda p_{k-1} > k p_k$ for some $k$, then by increasing $p_0$ by $\epsilon$, decreasing $p_{k-1}$ by $(\frac{d\lambda}{1+\lambda} + k)\epsilon$, increasing $p_k$ by $(\frac{d\lambda}{1+\lambda} + k-1)\epsilon$, and leaving all other $p_i$'s fixed, we can maintain all constraints and increase $p_0$, provided $\epsilon > 0$ is sufficiently small. Thus, in the maximizing solution, equality occurs in \eqref{eq:occ-descend} for all $2 \le k \le d$. It can be checked that the vector $(p_0, \dots, p_d)$ arising from $G = K_{d,d}$ satisfies all the equality constraints, and it is the unique solution since we have a linear system of equations with full rank.
\end{proof}
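The linear program in the proof above is small enough to solve numerically. The following sketch (assuming SciPy is available; the encoding of the constraints is ours, not from \cite{DJPR1}) maximizes $p_0$ subject to \eqref{eq:occ-prob-sum}, \eqref{eq:occ-neighbor-relation}, and \eqref{eq:occ-descend}, and recovers $p_0 = (1+\lambda)^d/(2(1+\lambda)^d - 1)$ for $d = 3$, $\lambda = 1$.
\begin{verbatim}
# Solve the LP from the second proof and compare the optimal p_0
# with the closed form (1+lam)^d / (2(1+lam)^d - 1).
import numpy as np
from scipy.optimize import linprog

d, lam = 3, 1.0
c = np.zeros(d + 1); c[0] = -1.0          # maximize p_0
# equalities: sum p_k = 1 and (eq:occ-neighbor-relation)
A_eq = np.vstack([np.ones(d + 1),
                  [lam / (1 + lam)] + [-k / d for k in range(1, d + 1)]])
b_eq = np.array([1.0, 0.0])
# inequalities (eq:occ-descend): k p_k - (d-k+1) lam p_{k-1} <= 0
A_ub = np.zeros((d - 1, d + 1))
for row, k in enumerate(range(2, d + 1)):
    A_ub[row, k] = k
    A_ub[row, k - 1] = -(d - k + 1) * lam
b_ub = np.zeros(d - 1)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * (d + 1))
closed = (1 + lam) ** d / (2 * (1 + lam) ** d - 1)   # = 8/15 here
assert abs(res.x[0] - closed) < 1e-6
\end{verbatim}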
Conjecture~\ref{conj:coloring} about the number of colorings was recently proved~\cite{DJPR3} for 3-regular graphs using an extension of the above method. Instead of the independence polynomial and the hard-core model, one considers a continuous relaxation of proper colorings by using the Potts model. We sample a $q$-coloring of $G$, not necessarily proper, so that the coloring $\sigma$ is chosen with probability proportional to $e^{-\beta m(\sigma)}$, where $m(\sigma)$ is the number of monochromatic edges, and $\beta \in \mathbb{R}$ is called the \emph{inverse temperature}. A proper coloring corresponds to $\beta \to +\infty$. The \emph{partition function} (analogous to the independence polynomial) for this Potts model is
\[
Z_G^q (\beta) = \sum_{\sigma \colon V(G) \to [q]} e^{-\beta m(\sigma)}
\]
where $\sigma$ ranges over all $q$-colorings of $G$ (not necessarily proper). In the Potts model, the coloring $\sigma$ appears with probability $e^{-\beta m(\sigma)}/Z_G^q(\beta)$. The expected number of monochromatic edges of $\sigma$ is
\begin{align*}
U_G^q(\beta)
:= \frac{1}{v(G)} \mathbb{E}_\sigma[m(\sigma)]
&= \frac{1}{v(G) Z_G^q(\beta)} \sum_{\sigma\colon V(G) \to [q]} m(\sigma) e^{-\beta m(\sigma)}
\\
&= \frac{-1}{v(G)} \frac{d}{d\beta} (\log Z_G^q(\beta)).
\end{align*}
The above quantity is analogous to the occupancy fraction for the hard-core model (think $\lambda = e^{-\beta}$). Conjecture~\ref{conj:coloring} would follow from the next conjecture. (Note that the inequality sign is reversed since $U_G^q(\beta)$ is proportional to the negative logarithmic derivative of $Z_G^q(\beta)^{1/v(G)}$.)
\begin{conjecture} \label{conj:potts}
For every $d$-regular graph $G$ and integer $q \ge 3$, and any $\beta > 0$,
\[
U_G^q(\beta) \ge U_{K_{d,d}}^q(\beta).
\]
Consequently (by integrating over $\beta$ and noting $Z_G^q(0)^{1/v(G)} = q$),
\[
Z_G^q(\beta)^{1/v(G)} \le Z_{K_{d,d}}^q(\beta)^{1/(2d)}.
\]
\end{conjecture}
Conjecture~\ref{conj:potts} was proved for 3-regular graphs in \cite{DJPR3} using a variant of the method discussed in this section, by considering all configurations of the 2-step neighborhood of a uniform random vertex. The analysis is substantially more involved than the proofs we saw for independent sets.
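For small graphs, $Z_G^q(\beta)$ and $U_G^q(\beta)$ can be computed by direct enumeration; the minimal sketch below (standard library only) checks the $d = 3$ instance of Conjecture~\ref{conj:potts}, known by \cite{DJPR3}, on $K_4$ versus $K_{3,3}$.
\begin{verbatim}
# Brute-force Potts partition function and monochromatic-edge density.
from itertools import product, combinations
from math import exp

def potts(n, edges, q, beta):
    Z = M = 0.0
    for sigma in product(range(q), repeat=n):
        m = sum(1 for u, v in edges if sigma[u] == sigma[v])
        w = exp(-beta * m)
        Z += w
        M += m * w
    return Z, M / (n * Z)      # (Z_G^q(beta), U_G^q(beta))

q, beta = 3, 1.0
K4  = (4, [tuple(p) for p in combinations(range(4), 2)])
K33 = (6, [(i, j + 3) for i in range(3) for j in range(3)])
_, U_K4  = potts(*K4, q, beta)
_, U_K33 = potts(*K33, q, beta)
assert U_K4 >= U_K33    # consistent with Conjecture conj:potts for d = 3
\end{verbatim}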
\section{On the minimum number of independent sets and homomorphisms} \label{sec:min}
\subsection{Independent sets}
Having explored the maximum number of independent sets in a regular graph, let us turn to the natural opposite question. Which $d$-regular graph has the minimum number of independent sets? It turns out that the answer is a disjoint union of cliques.
\begin{theorem}[Cutler and Radcliffe~\cite{CR14}] \label{thm:ind-min}
For a $d$-regular graph $G$,
\[
i(G)^{1/v(G)} \ge i(K_{d+1})^{1/(d+1)} = (d+2)^{1/(d+1)}.
\]
\end{theorem}
In fact, a stronger result holds: a disjoint union of $K_{d+1}$'s minimizes the number of independent sets of every fixed size. We write $a G$ for a disjoint union of $a$ copies of $G$. Let $i_t(G)$ denote the number of independent sets of $G$ of size $t$.
\begin{theorem}[\cite{CR14}] \label{thm:ind-min-by-size}
Let $a$ and $d$ be positive integers. Let $G$ be a $d$-regular graph with $a(d+1)$ vertices. Then $i_t(G) \ge i_t(a K_{d+1})$ for every $0 \le t \le a(d+1)$.
\end{theorem}
\begin{proof}
Let us compare the number of sequences of $t$ vertices that form an independent set in $G$ and $a K_{d+1}$. In $a K_{d+1}$, we have $a(d+1)$ choices for the first vertex. Once the first vertex has been chosen, there are exactly $(a-1)(d+1)$ choices for the second vertex. More generally, for $1 \le j \le a$, once the first $j-1$ vertices have been chosen, there are exactly $(a+1-j)(d+1)$ choices for the $j$-th vertex.
On the other hand, in $G$, after the first $j-1$ vertices have been chosen, the union of these $j-1$ vertices along with their neighborhoods has cardinality at most $(j-1)(d+1)$, so there are at least $(a + 1 - j)(d+1)$ choices for the $j$-th vertex, at least as many as in $a K_{d+1}$. Since every independent set of size $t$ corresponds to exactly $t!$ such sequences in either graph, $i_t(G) \ge i_t(a K_{d+1})$ follows.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:ind-min}]
Theorem~\ref{thm:ind-min-by-size} implies that $i(G)^{1/v(G)} \ge i(K_{d+1})^{1/(d+1)}$ whenever $v(G)$ is divisible by $d+1$. When $v(G)$ is not divisible by $d+1$, we can apply the same inequality to a disjoint union of $(d+1)$ copies of $G$ to obtain $i(G)^{1/v(G)} = i((d+1)G)^{1/((d+1)v(G))} \ge i(K_{d+1})^{1/(d+1)}$.
\end{proof}
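Theorem~\ref{thm:ind-min-by-size} is easy to confirm on a small instance, e.g. the $3$-cube against $2 K_4$; a minimal Python sketch (standard library only):
\begin{verbatim}
# Check i_t(G) >= i_t(a K_{d+1}) for G the 3-cube (3-regular,
# 8 vertices) against 2 K_4, for every size t.
from itertools import combinations

def size_profile(n, edges):
    # i_t for t = 0, 1, ..., n
    return [sum(1 for S in combinations(range(n), t)
                if all(not (set(e) <= set(S)) for e in edges))
            for t in range(n + 1)]

cube = [(u, v) for u in range(8) for v in range(u + 1, 8)
        if bin(u ^ v).count("1") == 1]          # Hamming-distance-1 edges
twoK4 = [(u, v) for s in (0, 4) for u in range(s, s + 4)
         for v in range(u + 1, s + 4)]
assert all(a >= b for a, b in zip(size_profile(8, cube),
                                  size_profile(8, twoK4)))
\end{verbatim}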
The situation changes significantly if we require $G$ to be bipartite. In this case, the problem was solved very recently by Csikv\'ari~\cite{Csi16ar}, who showed that the infimum of $i(G)^{1/v(G)}$ over $d$-regular bipartite graphs $G$ is attained in the limit along a sequence of graphs $G$ of increasing girth, i.e., graphs that are locally tree-like. The limit of $i(G)^{1/v(G)}$ for a sequence of bipartite $d$-regular graphs $G$ of increasing girth was determined by Sly and Sun \cite{SS14} using sophisticated (rigorous) methods from statistical physics.
\subsection{Colorings}
Here is the minimum of $\hom(G,K_q)^{1/v(G)}$ over $d$-regular graphs $G$, due to Csikv\'ari~\cite{Csi-p}.
\begin{theorem} \label{thm:min-color}
For a $d$-regular graph $G$ and any $q \ge 2$,
\[
\hom(G, K_q)^{1/v(G)} \ge \hom(K_{d+1}, K_q)^{1/(d+1)}.
\]
\end{theorem}
\begin{proof}
Assume $q \ge d+1$, since otherwise the right-hand side is zero. Let $\sigma$ be a random permutation of $V(G)$. For each $u \in V(G)$, let $d_u^\sigma$ denote the number of neighbors of $u$ that appear before $u$ in the permutation $\sigma$. By coloring the vertices in the order of $\sigma$, there are at least $q - d_u^\sigma$ choices for the color of vertex $u$, so
\[
\hom(G, K_q) \ge \prod_{u \in V(G)} (q - d_u^\sigma).
\]
Taking the logarithm of both sides, we find that
\begin{equation} \label{eq:color-sigma-ineq}
\frac{1}{v(G)} \log \hom(G, K_q) \ge \frac{1}{v(G)} \sum_{u \in V(G)} \log(q - d_u^\sigma).
\end{equation}
For each $u \in V(G)$, the random variable $d_u^\sigma$ is uniformly distributed on $\{0, 1, \dots, d\}$, since the ordering of $\{u\} \cup N(u)$ under $\sigma$ is uniform. Therefore, the expected value of the right-hand side of \eqref{eq:color-sigma-ineq} is
\[
\frac{1}{d+1}(\log q + \log(q-1) + \cdots + \log(q-d)) = \frac{1}{d+1}\log \hom(K_{d+1},K_q),
\]
which proves the theorem.
\end{proof}
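As a quick numeric illustration of Theorem~\ref{thm:min-color} (with $d = 2$, $q = 3$, standard library only):
\begin{verbatim}
# hom(C_4, K_3)^{1/4} >= hom(K_3, K_3)^{1/3}, i.e. 18^{1/4} >= 6^{1/3}.
from itertools import product

def hom_to_Kq(n, edges, q):
    return sum(1 for c in product(range(q), repeat=n)
               if all(c[u] != c[v] for u, v in edges))

q = 3
C4 = (4, [(0, 1), (1, 2), (2, 3), (3, 0)])   # 2-regular, d = 2
K3 = (3, [(0, 1), (1, 2), (0, 2)])           # K_{d+1}
assert hom_to_Kq(*C4, q) ** (1 / 4) >= hom_to_Kq(*K3, q) ** (1 / 3)
\end{verbatim}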
What is the infimum of $\hom(G, K_q)^{1/v(G)}$ over \emph{bipartite} $d$-regular graphs $G$? The following inequality was proved by Csikv\'ari and Lin~\cite{CL}. For $q \ge d+1$, the constant in the inequality is best possible, as it is the limit for any sequence of $d$-regular graphs with increasing girth \cite{BG08}.
\begin{theorem}[\cite{CL}] \label{thm:min-color-bip}
For any $d$-regular bipartite graph $G$ and any $q \ge 2$,
\[
\hom(G,K_q)^{1/v(G)} \ge q (1-1/q)^{d/2}.
\]
\end{theorem}
\subsection{Widom--Rowlinson model} In the previous two cases, for independent sets and colorings, the minimizing $G$ is $K_{d+1}$, and if we restrict to bipartite $G$, the ``minimizing'' $G$ is locally tree-like. For the Widom--Rowlinson model, we saw in Theorem~\ref{thm:wr} that the quantity $\hom(G, \tikzHwr)^{1/v(G)}$ is maximized, over $d$-regular graphs $G$, by $G = K_{d+1}$. Csikv\'ari~\cite{Csi16ar} recently showed that $\hom(G, \tikzHwr)^{1/v(G)}$ is minimized, over $d$-regular graphs $G$, by a sequence of graphs $G$ with increasing girth, even without the bipartite assumption on $G$.
\section{Related results and further questions} \label{sec:related}
In this final section we mention some related results and problems. Also see the survey~\cite{Cut12} for a related discussion.
\subsection{Independent sets of fixed size}
We saw in Theorems~\ref{thm:kahn} and \ref{thm:zhao} that in the family of $d$-regular graphs on $n$ vertices, a disjoint union of $K_{d,d}$'s maximizes the number of independent sets. It is conjectured that the latter also maximizes the number of independent sets of every fixed size. Let $i_t(G)$ denote the number of independent sets of size $t$ in $G$. Recall that $k G$ denotes a disjoint union of $k$ copies of $G$.
\begin{conjecture}[\cite{Kahn01}] \label{conj:ind-fixed-size}
If $G$ is a $d$-regular graph with $2ad$ vertices, then $i_t(G) \le i_t(a K_{d,d})$ for every $t$.
\end{conjecture}
See \cite[Section 8]{DJPR1} for the current best bounds on this problem.
\subsection{Homomorphism with weights}
Theorem~\ref{thm:GT} holds more generally for weighted graph homomorphisms, allowing $H$ to have weights on its vertices and edges. The proof in Section~\ref{sec:prod} also extends to the weighted setting after small modifications. We refer to \cite{GT04} and \cite{LZ15} for details.
\subsection{Biregular graphs}
An $(a,b)$-biregular graph is a bipartite graph such that all vertices on one side of the bipartition have degree $a$, and all vertices on the other side have degree $b$. Theorems~\ref{thm:kahn} and \ref{thm:GT} extend to biregular graphs, stating that for any $(a,b)$-biregular graph $G$ and loop-graph $H$,
\[
\hom(G,H)^{1/v(G)} \le \hom(K_{b,a},H)^{1/(a+b)}.
\]
Both the entropy proof \cite{Kahn01,GT04} and the H\"older inequality proof \cite{LZ15} (Section~\ref{sec:prod}) extend to biregular graphs. The occupancy method proof \cite{DJPR1} for independent sets (Section~\ref{sec:occup}) also extends to the biregular setting, though one should use two different fugacity parameters for the two vertex parts.
\subsection{Graphs with given degree profile}
Kahn~\cite{Kahn01} made the following conjecture extending Theorem~\ref{thm:kahn} to irregular graphs. We write $d_u$ for the degree of vertex $u \in V(G)$.
\begin{conjecture}[\cite{Kahn01}] \label{conj:kahn-irreg}
For any graph $G$,
\[
i(G) \le \prod_{uv \in E(G)} i(K_{d_u,d_v})^{1/(d_u d_v)} = \prod_{uv \in E(G)} (2^{d_u} + 2^{d_v} - 1)^{1/(d_u d_v)}.
\]
\end{conjecture}
By the bipartite reduction in Section~\ref{sec:swap}, it suffices to prove the conjecture for bipartite graphs $G$. Galvin and I \cite{GZ11} proved Conjecture~\ref{conj:kahn-irreg} for all $G$ with maximum degree at most $5$.
The following conjecture, due to Galvin~\cite{Gal06}\footnote{A bipartite assumption on $G$ is missing in \cite[Conjecture 1.5]{Gal06}.}, extends Theorem~\ref{thm:GT} and the bipartite case of Conjecture~\ref{conj:kahn-irreg}.
\begin{conjecture}
For any bipartite graph $G$ and loop-graph $H$,
\[
\hom(G,H) \le \prod_{uv \in E(G)} \hom(K_{d_u,d_v},H)^{1/(d_u d_v)}.
\]
\end{conjecture}
\subsection{Graphs with additional local constraints}
We saw in Theorem~\ref{thm:zhao} and Theorem~\ref{thm:ind-min} that the maximum and minimum of $i(G)^{1/v(G)}$ among $d$-regular graphs $G$ are attained by $K_{d,d}$ and $K_{d+1}$ respectively. What if we impose additional ``local'' constraints to disallow $K_{d,d}$ and $K_{d+1}$? For example, consider the following.
\begin{itemize}
\item What is the infimum of $i(G)^{1/v(G)}$ among $d$-regular triangle-free graphs $G$?
\item What is the supremum of $i(G)^{1/v(G)}$ among $d$-regular graphs $G$ that do not contain any cycles of length 4?
\end{itemize}
These two questions were recently answered by Perarnau and Perkins~\cite{PP}.
\begin{theorem} \label{thm:ind-girth}
(a) Among 3-regular triangle-free graphs $G$, the quantity $i(G)^{1/v(G)}$ is minimized when $G$ is the Petersen graph.
(b) Among 3-regular graphs $G$ without cycles of length 4, the quantity $i(G)^{1/v(G)}$ is maximized when $G$ is the Heawood graph.
\end{theorem}
\begin{center}
\begin{tabular}{ccc}
\begin{tikzpicture}[scale=.6,P/.style={draw, circle, black, fill, inner sep = 0pt, minimum width = 5pt}]
\foreach \x in {0,1,2,3,4}{
\node[P] (a\x) at ($(\x * 360 / 5 + 90:1)$) {};
\node[P] (b\x) at ($(\x * 360 / 5 + 90:2)$) {};
\draw (a\x)--(b\x);
}
\draw (b0)--(b1)--(b2)--(b3)--(b4)--(b0);
\draw (a0)--(a2)--(a4)--(a1)--(a3)--(a0);
\end{tikzpicture}
& \hspace{6em} &
\begin{tikzpicture}[scale=.6,P/.style={draw, circle, black, fill, inner sep = 0pt, minimum width = 5pt}]
\foreach \x in {0,...,13}{
\pgfmathsetmacro{\theta}{(\x-.5) * 360 / 14 + 90}
\node[P] (\x) at ({\theta}:2) {};
}
}
\draw (0)--(1)--(2)--(3)--(4)--(5)--(6)--(7)--(8)--(9)--(10)--(11)--(12)--(13)--(0);
\draw (1)--(6) (2)--(11) (3)--(8) (4)--(13) (5)--(10) (7)--(12) (9)--(0);
\end{tikzpicture}
\\
Petersen graph & & Heawood graph
\end{tabular}
\end{center}
Theorem~\ref{thm:ind-girth} was proved using the occupancy method discussed in Section~\ref{sec:occup}. The following general problem is very much open.
\begin{problem} \label{prb:local-constraints}
Let $d \ge 3$ be an integer and $\mathcal{F}$ be a finite list of graphs. Determine the infimum and supremum of $i(G)^{1/v(G)}$ among $d$-regular graphs $G$ that do not contain any element of $\mathcal{F}$ as an induced subgraph.
\end{problem}
We pose the following (fairly bold) conjecture that the extrema are always attained by finite graphs.
\begin{conjecture}[Local constraints imply bounded extrema]
Let $d \ge 3$ be an integer and $\mathcal{F}$ be a finite list of graphs. Let $\mathcal{G}_d(\mathcal{F})$ denote the set of finite $d$-regular graphs that do not contain any element of $\mathcal{F}$ as an induced subgraph. Then there exist $G_{\min}, G_{\max} \in \mathcal{G}_d(\mathcal{F})$ such that for all $G \in \mathcal{G}_d(\mathcal{F})$,
\[
i(G_{\min})^{1/v(G_{\min})} \le i(G)^{1/v(G)} \le i(G_{\max})^{1/v(G_{\max})}.
\]
\end{conjecture}
It would be interesting to know which graphs can arise as extremal graphs in this manner. On the other hand, imposing bipartiteness induces a very different behavior (Section~\ref{sec:min}). See \cite{CR16ar,DJPR2,PP} for discussions of related results and conjectures.
\subsection{Graphs with a given number of vertices and edges}
Let $V(G) =\{1, 2, \dots, n\}$. Let $L_{n,m}$ denote the graph on $n$ vertices obtained by including the first $m$ edges in lexicographic order, i.e., $12, 13, \dots, 1n, 23,24,\dots$. Recall that $i_t(G)$ is the number of independent sets of size $t$ in $G$. The following result is a consequence of the Kruskal--Katona theorem \cite{Kru63,Kat68}.
\begin{theorem}
For any graph $G$ with $n$ vertices and $m$ edges, and positive integer $t$, one has $i_t(G) \le i_t(L_{n,m})$.
\end{theorem}
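A minimal exhaustive check of this consequence of Kruskal--Katona, over all graphs with $n = 5$ vertices and $m = 4$ edges (standard library only):
\begin{verbatim}
# i_t(G) <= i_t(L_{n,m}) for every G with n vertices and m edges.
from itertools import combinations

def i_t(n, edges, t):
    return sum(1 for S in combinations(range(n), t)
               if all(not (set(e) <= set(S)) for e in edges))

n, m = 5, 4
all_pairs = list(combinations(range(n), 2))   # already in lex order
L = all_pairs[:m]                             # the graph L_{n,m}
for G in combinations(all_pairs, m):          # all graphs with m edges
    assert all(i_t(n, G, t) <= i_t(n, L, t) for t in range(n + 1))
\end{verbatim}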
Reiher's clique density theorem~\cite{Rei16} solves the corresponding minimization problem, which is significantly more difficult. See \cite{LPS10,MN15} and their references for results and conjectures on the analogous problem of maximizing the number of proper $q$-colorings in a graph with a given number of vertices and edges, and \cite{CK,CR11,CR14jgt} for graph homomorphisms.
\subsection{Minimum degree condition}
What if we relax the $d$-regular condition in Theorem~\ref{thm:zhao} to minimum degree $d$? The following result was conjectured by Galvin~\cite{Gal11} and proved by Cutler and Radcliffe~\cite{CR14}.
\begin{theorem}[\cite{CR14}]
Let $\delta \le n/2$. Let $G$ be an $n$-vertex graph with minimum degree at least $\delta$. Then $i(G) \le i(K_{\delta,n-\delta})$.
\end{theorem}
More generally, for any $\delta < n$, write $n = a(n-\delta)+b$ with $a$ and $b$ nonnegative integers and $b < n-\delta$, one has $i(G) \le i(\overline{a K_{n-\delta} \cup K_b}) = a(2^{n-\delta} - 1) + 2^b$ for any graph $G$ on $n$ vertices with minimum degree at least $\delta$. Here $\overline{G}$ denotes the edge-complement of $G$.
The following strengthening was conjectured by Engbers and Galvin~\cite{EG14}. It was proved by Alexander, Cutler, and Mink~\cite{ACM12} for bipartite graphs, and proved by Gan, Loh, and Sudakov~\cite{GLS15} in general. Recall that $i_t(G)$ is the number of independent sets of size $t$ in $G$.
\begin{theorem}[\cite{GLS15}]
Let $\delta \le n/2$ and $t \ge 3$. Let $G$ be an $n$-vertex graph with minimum degree at least $\delta$. Then $i_t(G) \le i_t(K_{\delta,n-\delta})$.
\end{theorem}
Note that this claim is false for $t = 2$.
See \cite{CK,CR11,CR14jgt,Eng15} for discussions on the analogous problem of maximizing the number of homomorphisms into a fixed $H$.
\subsection{Matchings}
Let $m(G)$ denote the number of matchings in a graph $G$, $m_t(G)$ the number of matchings with $t$ edges in $G$, and $pm(G) := m_{v(G)/2}(G)$ the number of perfect matchings in $G$.
The following upper bounds on $m(G)$ and $pm(G)$ bear a curious resemblance to Theorems~\ref{thm:kahn} and \ref{thm:zhao}. The quantity $m(G)$ for matchings can be viewed as analogous to $i(G)$ for independent sets.
For the number of perfect matchings, the bipartite case was conjectured in 1963 by Minc~\cite{Minc63} and proved by Br\`egman~\cite{Bre73} a decade later. Many different proofs have been given since then. The non-bipartite extension is due to Kahn and Lov\'asz (unpublished). See \cite{Gal}, which includes a statement allowing irregular $G$.
\begin{theorem}\label{thm:pm}
For any $d$-regular graph $G$,
\[
pm(G)^{1/v(G)} \le pm(K_{d,d})^{1/(2d)} = (d!)^{1/(2d)}.
\]
\end{theorem}
The occupancy method was used in \cite{DJPR1} to give an alternative proof of Theorem~\ref{thm:pm}, along with a new upper bound on $m(G)$, as well as a weighted extension analogous to Theorem~\ref{thm:indep-poly}. Define the \emph{matching polynomial}
\[
M_G(\lambda) := \sum_{M \in \mathcal{M}(G)} \lambda^{|M|}
\]
where $\mathcal{M}(G)$ is the set of matchings in $G$, and $|M|$ is the number of edges in the matching $M$.
\begin{theorem}[\cite{DJPR1}] \label{thm:matching-weighted}
For any $d$-regular graph $G$ and $\lambda \ge 0$,
\[
M_G(\lambda)^{1/v(G)} \le M_{K_{d,d}}(\lambda)^{1/(2d)}.
\]
In particular, setting $\lambda = 1$ yields
\[
m(G)^{1/v(G)} \le m(K_{d,d})^{1/(2d)}.
\]
\end{theorem}
In fact, an edge occupancy fraction result analogous to Theorem~\ref{thm:occupancy} holds. We refer to \cite[Theorem~3]{DJPR1} for the exact statement.
Note that letting $\lambda \to \infty$ in Theorem~\ref{thm:matching-weighted} recovers Theorem~\ref{thm:pm}, since the dominant term in $M_G(\lambda)$ is $pm(G)\lambda^{v(G)/2}$.
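These matching counts are small enough to enumerate directly; a minimal Python sketch (standard library only) checks Theorem~\ref{thm:matching-weighted} at $\lambda = 1$ and the value $pm(K_{3,3}) = 3!$.
\begin{verbatim}
# Brute-force matching counts for d = 3: K_4 versus K_{3,3}.
from itertools import combinations

def num_matchings(edges, t=None):
    sizes = range(len(edges) + 1) if t is None else [t]
    total = 0
    for s in sizes:
        for M in combinations(edges, s):
            verts = [v for e in M for v in e]
            if len(verts) == len(set(verts)):   # pairwise disjoint edges
                total += 1
    return total

K4  = list(combinations(range(4), 2))
K33 = [(i, j + 3) for i in range(3) for j in range(3)]
# m(G)^{1/v(G)} <= m(K_{d,d})^{1/(2d)}: 10^{1/4} <= 34^{1/6}
assert num_matchings(K4) ** (1 / 4) <= num_matchings(K33) ** (1 / 6)
assert num_matchings(K33, t=3) == 6            # pm(K_{3,3}) = 3!
\end{verbatim}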
The following matching analog of Conjecture~\ref{conj:ind-fixed-size} remains open. See \cite{CGT09,DJPR1} for discussion.
\begin{conjecture}[\cite{FKM08}] \label{conj:mat-fixed-size}
If $G$ is a $2ad$-vertex $d$-regular graph and $t \ge 0$, then $m_t(G) \le m_t(a K_{d,d})$.
\end{conjecture}
The infimum of $pm(G)^{1/v(G)}$ for $d$-regular bipartite graphs $G$ is well understood. The infimum is attained in the limit by random $d$-regular bipartite graphs $G$.
\begin{theorem}[Voorhoeve~\cite{Voo79} for $d=3$ and Schrijver~\cite{Sch98} for all $d$]
If $G$ is a $d$-regular bipartite graph on $2n$ vertices, then
\[
pm(G)^{1/n} \ge \frac{(d-1)^{d-1}}{d^{d-2}}.
\]
\end{theorem}
See \cite{LS10} for an exposition. The corresponding minimization problem for $m(G)$, and more generally for $m_t(G)$ and $M_G(\lambda)$, was solved by Gurvits~\cite{Gur} and extended by Csikv\'ari \cite{Csi}.
\section*{Acknowledgments}
I am grateful to Joe Gallian for the REU opportunity in 2009 where I began working on this problem (resulting in \cite{Zhao10}). I thank P\'eter Csikv\'ari, David Galvin, Joonkyung Lee, Will Perkins, and Prasad Tetali for carefully reading a draft of this paper and providing helpful comments. I also thank the anonymous reviewers for suggestions that improved the exposition of the paper.
\bibliographystyle{amsplain_mod2}
| {
"timestamp": "2017-04-11T02:08:19",
"yymm": "1610",
"arxiv_id": "1610.09210",
"language": "en",
"url": "https://arxiv.org/abs/1610.09210",
"abstract": "This survey concerns regular graphs that are extremal with respect to the number of independent sets, and more generally, graph homomorphisms. More precisely, in the family of of $d$-regular graphs, which graph $G$ maximizes/minimizes the quantity $i(G)^{1/v(G)}$, the number of independent sets in $G$ normalized exponentially by the size of $G$? What if $i(G)$ is replaced by some other graph parameter? We review existing techniques, highlight some exciting recent developments, and discuss open problems and conjectures for future research.",
"subjects": "Combinatorics (math.CO)",
"title": "Extremal regular graphs: independent sets and graph homomorphisms",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771813585751,
"lm_q2_score": 0.8006919949619792,
"lm_q1q2_score": 0.7901045899249562
} |
https://arxiv.org/abs/1309.4025 | On stable lattices and the diagonal group | Inspired by work of McMullen, we show that any orbit for the action of the diagonal group on the space of lattices, accumulates on a stable lattice. We use this to settle a conjecture of Ramharter about Mordell's constant, get new proofs of Minkowski's conjecture in dimensions up to seven, and answer a question of Harder on the volume of stable lattices. | \section{Introduction}
Let $n \geq 2$ be an integer, let $G {\, \stackrel{\mathrm{def}}{=}\, } \operatorname{SL}_n({\mathbb{R}}), \, \Gamma {\, \stackrel{\mathrm{def}}{=}\, }
\operatorname{SL}_n({\mathbb{Z}})$, let $A \subset G$ be the subgroup of positive diagonal
matrices and let ${\mathcal{L}_n} {\, \stackrel{\mathrm{def}}{=}\, } G/\Gamma$ be the space of unimodular
lattices in ${\mathbb{R}}^n$. The purpose of this paper is to present a
dynamical result regarding the action of $A$ on ${\mathcal{L}_n}$, and to present
some consequences in the geometry of numbers.
A lattice $x \in {\mathcal{L}_n}$ is called {\em stable} if for any
subgroup $\Lambda \subset x$, the covolume of $\Lambda$ in
${\rm span}(\Lambda)$ is at least 1. In particular the length of the
shortest nonzero vector in $x$ is at least 1. Stable lattices have
also been called `semistable', they were introduced in a broad
algebro-geometric context by Harder, Narasimhan and Stuhler
\cite{Stuhler, Harder}, and were used to develop a
reduction theory for the study of the topology of locally symmetric
spaces. See Grayson \cite{Grayson} for a clear exposition.
\begin{theorem}\Name{thm: main}
For any $x \in {\mathcal{L}_n}$, the orbit-closure $\overline{Ax}$ contains a
stable lattice.
\end{theorem}
Theorem \ref{thm: main} is inspired by
a breakthrough result of McMullen \cite{McMullenMinkowski}. Recall that a lattice in ${\mathcal{L}_n}$ is
called {\em well-rounded} if its shortest nonzero vectors span ${\mathbb{R}}^n$.
In connection with his work on
Minkowski's conjecture, McMullen showed that the closure of any bounded
$A$-orbit in ${\mathcal{L}_n}$ contains a well-rounded lattice. The
set of well-rounded lattices neither contains, nor is contained in,
the set of stable lattices,
but the proof of Theorem \ref{thm:
main} closely follows McMullen's strategy.
We apply Theorem \ref{thm: main} to two problems in the geometry of
numbers.
Let $x \in {\mathcal{L}_n}$ be a unimodular lattice. By a {\em symmetric box} in
${\mathbb{R}}^n$ we mean a set of the form $ [-a_1, a_1] \times \cdots \times
[-a_n, a_n]$, and we say that a symmetric box is
{\em admissible} for $x$ if it contains no nonzero points of
$x$ in its interior. The {\em Mordell constant} of $x$
is defined to be
\eq{eq: defn const}{
\kappa(x) {\, \stackrel{\mathrm{def}}{=}\, } \frac{1}{2^n} \sup_{{\mathcal{B}}} \Vol({\mathcal{B}}),
}
where the supremum is taken over admissible symmetric boxes ${\mathcal{B}}$, and
where $\Vol({\mathcal{B}})$ denotes the volume of ${\mathcal{B}}$.
We also write
\eq{eq: defn kappan}{\kappa_n {\, \stackrel{\mathrm{def}}{=}\, } \inf\{\kappa(x): x \in {\mathcal{L}_n}\}.
}
The infimum in this definition is in fact a minimum, and, as with many
problems in the geometry of numbers it is of interest to compute the
constants $\kappa_n$ and identify the lattices realizing the
minimum. However this appears to be a very difficult problem, which so
far has only been solved for $n=2,3$, the latter in a difficult paper
of Ramharter \cite{Ramharter_dim3}. It is also of interest to provide
bounds on the asymptotics of $\kappa_n$, and in \cite{Ramharter_conjecture},
Ramharter conjectured that $\limsup_{n \to \infty} \kappa_n^{1/(n\log n)}>0$. As a simple corollary of Theorem \ref{thm: main}, we validate
Ramharter's conjecture, with an explicit bound:
\begin{cor}\Name{cor: Ramharter conj}
For all $n \geq 2,$
\eq{eq: our bound}{\kappa_n \geq n^{-n/2}.
}
In particular
$$
\kappa_n^{1/(n\log n)} \geq n^{-1/(2\log n)} \longrightarrow_{n \to \infty}
\frac{1}{\sqrt{e}}.
$$
\end{cor}
We remark that Corollary \ref{cor: Ramharter conj} could also be
derived from McMullen's results and a theorem of Birch and
Swinnerton-Dyer. In \S \ref{sec: rankin
bounds} we show that the bound \equ{eq: our bound} is not
optimal and explain how to obtain better bounds, for all $n$ which are
not divisible by 4. We refer the reader to
\cite{gruber} for more information on the possible values of
$\kappa(x), x \in {\mathcal{L}_n}$.
Our second application concerns Minkowski's conjecture\footnote{It is not clear to us whether
Minkowski actually made this conjecture.},
which posits that for any unimodular lattice $x$, one has
\eq{eq: Minkowski conj}{
\sup_{u \in {\mathbb{R}}^n} \, \inf_{v \in x} |N(u-v)| \leq \frac{1}{2^n},
}
where $N(u_1, \ldots, u_n) {\, \stackrel{\mathrm{def}}{=}\, } \prod_{j=1}^n u_j.$
Minkowski solved the question for $n=2$ and several
authors resolved the cases $n \leq 5$.
In \cite{McMullenMinkowski}, McMullen settled the case
$n=6$. In fact, using his theorem on the $A$-action on ${\mathcal{L}_n}$, McMullen
showed that in arbitrary dimension $n$,
Minkowski's conjecture is implied by the statement that any well-rounded
lattice $x \subset {\mathbb{R}}^d$ with $d \leq n$ satisfies
\eq{eq: covrad}{\covrad(x) \leq \frac{\sqrt{d}}{2},}
where $\covrad(x) {\, \stackrel{\mathrm{def}}{=}\, } \max_{u \in {\mathbb{R}}^d} \min_{v \in x} \|u-v\|$
and $\| \cdot \|$ is the Euclidean norm on ${\mathbb{R}}^d$. At the time of
writing \cite{McMullenMinkowski}, \equ{eq:
covrad} was known to hold for well-rounded lattices in dimension at most $6$, and in recent work of
Hans-Gill, Raka, Sehmi and Leetika \cite{hans-gill1, hans-gill2, leetika}, \equ{eq: covrad} has
been proved for well-rounded lattices in dimensions $n=7,8,9$, thus settling Minkowski's question in
those cases.
Our work gives two new approaches to Minkowski's conjecture. A direct
application of Theorem \ref{thm: main} (see Corollary
\ref{cor: for Minkowski 1}) shows that it follows in
dimension $n$, from the assertion that for any stable $x \in {\mathcal{L}_n}$, \equ{eq: covrad} holds. Note
that we do not require \equ{eq: covrad} in dimensions less than
$n$. Using the strategy of Woods and Hans-Gill et al, in Theorem
\ref{thm: use of KZ diagonal} we define a compact subset $\mathrm{KZS} \subset {\mathbb{R}}^n$ and a
collection
of $2^{n-1}$ subsets $\{ \mathcal{W}(\mathcal{I})\}$ of $ {\mathbb{R}}^n$. We show that the
assertion $\mathrm{KZS} \subset \bigcup_{\mathcal{I}} \mathcal{W}(\mathcal{I})$ implies
Minkowski's conjecture in dimension
$n$.
Secondly, an induction using the naturality of stable lattices,
leads to the following sufficient condition:
\begin{cor}\Name{cor: Minkowski}
Suppose that for some dimension $n$, for all $d\leq n$, any stable
lattice $x \in \mathcal{L}_{d}$ which is a local maximum of the function covrad, satisfies
\equ{eq: covrad}. Then \equ{eq: Minkowski conj} holds for any $x \in {\mathcal{L}_n}$.
\end{cor}
The local maxima of the function covrad have been
studied in depth in recent work of Dutour-Sikiri\'c, Sch\"urmann
and Vallentin \cite{mathieu}, who characterized them and showed that there
are finitely many in each dimension.
These two approaches give two new
proofs of Minkowski's Conjecture in dimensions $n \leq 7$.
A natural question is to what extent stable lattices are typical
in ${\mathcal{L}_n}$. The definition of stability may appear at first sight to be very
restrictive. Nevertheless in \S \ref{sec: volume computation} we show that as $n\to \infty$,
the probability that a random lattice is stable tends to 1 (where the
probability is taken
with respect to the
natural $G$-invariant measure on ${\mathcal{L}_n}$). This answers a question of
G. Harder. In fact a stronger statement is true, see Proposition
\ref{prop: strengthening volume}. For the results of \S \ref{sec: volume computation}
we use Siegel's approach to measure volumes \cite{SiegelFormula} and
rely on computations of Thunder \cite{Thunder}.
A significant difference between Theorem \ref{thm:
main} and McMullen's work on well-rounded lattices, is that we do
not need to assume that the orbit $Ax$ is bounded.
One may wonder whether McMullen's result is valid without the
hypothesis that $Ax$ is bounded, namely is it true that the closure of
any $A$-orbit in ${\mathcal{L}_n}$ (or even the orbit itself) contains a
well-rounded lattice? We answer these
questions affirmatively for {\em closed} orbits in \S \ref{sec: closed
orbits}. To this end we use results
of Tomanov and the second-named author \cite{TW}, as well as a
covering result (which we learned from Michael Levin) generalizing one of the results of
\cite{McMullenMinkowski}; this topological result appears to be
well-known to experts, but in order to keep this paper self-contained,
we give the proof in the appendix. For
another perspective on this and related questions, see \cite{PS}.
\subsection{Acknowledgements} Our work was inspired by Curt McMullen's
breakthrough paper \cite{McMullenMinkowski}
and many of our arguments are adaptations of
arguments appearing in \cite{McMullenMinkowski}.
We are also grateful to Curt McMullen for additional insightful remarks, and in
particular for the suggestion to study the set of stable lattices in
connection with the $A$-action on ${\mathcal{L}_n}$. We thank Michael Levin for
useful discussions on topological questions and for agreeing to
include the proof of Theorem \ref{thm: covering} in the appendix. We also thank Mathieu
Dutour-Sikiri\'c, Rajinder Hans-Gill, G\"unter Harder, Gregory Minton and
Gerhard Ramharter for useful discussions. The authors' work was
supported by ERC starter grant DLGAPS 279893 and ISF grant 190/08.
\section{Orbit closures and stable lattices}
Given a lattice $x\in {\mathcal{L}_n}$ and a subgroup $\Lambda \subset x$, we denote by
$r(\Lambda)$ the rank of $\Lambda$ and by $\av{\Lambda}$ the covolume of $\Lambda$
in the linear subspace ${\rm span}(\Lambda)$.
Let
\begin{align}\label{alpha}
\nonumber \mathcal{V}(x)&\overset{\on{def}}{=}\set{\av{\Lambda}^{\frac{1}{r(\Lambda)}}: \Lambda \subset
x
}, \\
\alpha(x)&\overset{\on{def}}{=}\min\mathcal{V}(x).
\end{align}
Since we may take $\Lambda = x$ we have
$\alpha(x) \leq 1$ for all $x \in {\mathcal{L}_n}$, and $x$ is stable precisely if $\alpha(x)=1$.
Observe that $\mathcal{V}(x)$
is a countable discrete subset of the positive reals, and hence the
minimum in
\eqref{alpha} is attained.
Also note that the function $\alpha$ is a variant of the `length of the shortest
vector' function; it is continuous, and the sets $\{x: \alpha(x) \geq \varepsilon\}$
form an exhaustion of ${\mathcal{L}_n}$ by compact sets.
We begin by explaining the strategy for proving Theorem \ref{thm:
main}, which is identical to the one used by McMullen.
For a lattice $x\in {\mathcal{L}_n}$ and $\varepsilon>0$ we define an open cover
$\mathcal{U}^{x,\varepsilon}=\set{U^{x,\varepsilon}_k}_{k=1}^n$
of the diagonal group $A$, where if $a\in U^{x,\varepsilon}_k$ then $\alpha(ax)$
is `almost attained' by a subgroup of rank $k$. In particular,
if $a\in U^{x,\varepsilon}_n$ then $ax$ is `almost stable'.
The main point is to show that for any $\varepsilon>0$, $U^{x,\varepsilon}_n \neq
\varnothing$; for then, taking $\varepsilon_j \to 0$ and $a_j \in A$ such
that $a_j\in U_n^{x,\varepsilon_j}$, we find (passing to a subsequence) that
$a_jx$ converges to a stable lattice.
In order to establish that
$U_n^{x,\varepsilon}\ne\varnothing$, we apply a topological result of McMullen
(Theorem~\ref{topological input}) regarding open covers
which is reminiscent of the classical result of Lebesgue
that asserts that in any open cover of Euclidean $n$-space by bounded balls,
some point must be covered at least $n+1$ times. We will work to
show that the
cover $\mathcal{U}^{x,\varepsilon}$ satisfies the assumptions of
Theorem~\ref{topological input}. We will be able to verify these assumptions
when the orbit $Ax$ is bounded. In~\S\ref{sec: reduction to compact orbits} we reduce the proof of
Theorem~\ref{thm: main} to this case.
\subsection{Reduction to bounded orbits}\Name{sec: reduction to compact orbits}
Using a result of Birch and Swinnerton-Dyer, we
will now show that it suffices to prove
Theorem~\ref{thm: main}
under the assumption that the orbit $Ax\subset {\mathcal{L}_n}$ is bounded; that is,
that $\overline{Ax}$ is compact.
In this subsection we will denote $A,G$ by $A_n, G_n$ as various dimensions will appear.
For a matrix $g\in G_n$ we denote by $\br{g}\in {\mathcal{L}_n}$ the corresponding lattice. If
\eq{block form}{
g=\mat{g_1&*&\dots&*\\ 0& g_2&\dots&\vdots \\ \vdots& &\ddots&* \\ 0&\dots&0&g_k}
}
where $g_i\in G_{n_i}$ for each $i$, then we say that $g$ is in
\textit{upper triangular block form}
and refer to the $g_i$'s as the \textit{diagonal blocks}. Note
that in this definition, we insist that
each $g_i$ is of determinant one.
\begin{lemma}\Name{lem: block stable is stable}
Let $x=\br{g}\in {\mathcal{L}_n}$ where $g$ is in upper triangular block form as
in~\eqref{block form} and for each $1\le i\le k$, $\br{g_i}$ is
a stable lattice in $\mathcal{L}_{n_i}$. Then $x$ is stable.
\end{lemma}
\begin{proof}
By induction, it suffices to prove the Lemma for $k=2$. Let us
denote the standard basis of ${\mathbb{R}}^n$ by $\mathbf{e}_1, \ldots, \mathbf{e}_n$, let
us write $n=n_1+n_2$,
$V_1 {\, \stackrel{\mathrm{def}}{=}\, }\on{span}\set{\mathbf{e}_1, \ldots, \mathbf{e}_{n_1}}$,
$V_2 {\, \stackrel{\mathrm{def}}{=}\, } \on{span}\set{\mathbf{e}_{n_1+1}, \ldots, \mathbf{e}_n}$, and let $\pi: {\mathbb{R}}^n
\to V_2$ be the natural projection. By construction we have $x \cap
V_1 = [g_1], \pi(x) = [g_2]$.
Let $\Lambda \subset x$ be a subgroup, write
$\Lambda_1 {\, \stackrel{\mathrm{def}}{=}\, } \Lambda\cap V_1$ and choose a direct complement
$\Lambda_2 \subset \Lambda$, that is
$$\Lambda=\Lambda_1+\Lambda_2, \ \ \Lambda_1 \cap \Lambda_2 = \{0\}.$$
We claim that
\eq{eq: claim 1}{\av{\Lambda}=\av{\Lambda_1}\cdot\av{\pi(\Lambda_2)}.}
To see this we recall that one may compute
$|\Lambda|$ via the Gram-Schmidt process. Namely, one begins with a basis $v_1, v_2, \ldots$
of $\Lambda$ and successively defines $u_1=v_1$ and, for $j\ge 2$, takes $u_j$ to be the
orthogonal projection of $v_j$ onto ${\rm span} (v_1, \ldots,
v_{j-1})^\perp$. In these terms, $|\Lambda| = \prod_j \|u_j\|$. Since $\pi$ is an orthogonal
projection and $\Lambda \cap V_1$ is in ${\rm Ker} \pi$, \equ{eq: claim 1} is clear from the
above description.
The discrete subgroup
$\Lambda_1$, when viewed as a subgroup of $\br{g_1}\in \mathcal{L}_{n_1}$, satisfies
$\av{\Lambda_1}\ge 1$ because $\br{g_1}$ is assumed to be
stable. Similarly $\pi(\Lambda_2) \subset [g_2] \in \mathcal{L}_{n_2}$
satisfies $\av{\pi(\Lambda_2)}\ge 1$, hence
$\av{\Lambda}\ge 1$.
\end{proof}
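The covolume computations above are easy to experiment with numerically.
The following minimal sketch (in Python with NumPy; an illustration only,
not part of the argument, and the sample subgroup is an arbitrary choice)
computes $\av{\Lambda}$ both as the product of the Gram-Schmidt norms and
via the Gram matrix, as in the proof of \equ{eq: claim 1}.
\begin{verbatim}
# Illustration only: covolume of the discrete subgroup spanned by the
# rows of V, computed in two equivalent ways.
import numpy as np

def covolume_gram_schmidt(V):
    # u_j = orthogonal projection of v_j onto span(v_1,...,v_{j-1})^perp;
    # the covolume is the product of the norms ||u_j||.
    U = []
    for v in V:
        u = v.astype(float).copy()
        for w in U:
            u -= (u @ w) / (w @ w) * w
        U.append(u)
    return float(np.prod([np.linalg.norm(u) for u in U]))

def covolume_gram_matrix(V):
    # equivalently, sqrt(det(V V^T)), the Gram-matrix formula
    return float(np.sqrt(np.linalg.det(V @ V.T)))

V = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, 3.0]])  # a rank-2 subgroup of R^3
print(covolume_gram_schmidt(V), covolume_gram_matrix(V))  # both ~ 6.7823
\end{verbatim}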
\begin{lemma}\Name{lem: compt red}
Let $x\in {\mathcal{L}_n}$ and assume that $\overline{Ax}$ contains a lattice
$\br{g}$ with $g$ of upper triangular block form
as in~\eqref{block form}. For each $1\le i\le k$,
suppose $\br{h_i}\in\overline{A_{n_i}\br{g_i}}\subset \mathcal{L}_{n_i}$. Then there
exists a lattice $\br{h}\in\overline{Ax}$ such that $h$ has the
form~\eqref{block form} with $h_i$ as its diagonal blocks.
\end{lemma}
\begin{proof}
Let $\Omega$ be the set of all lattices $[g]$ with $g$ of the fixed upper
triangular block form as in~\eqref{block form}. Then $\Omega$ is a closed subset of
${\mathcal{L}_n}$ and there is a projection
$$\tau: \Omega \to \mathcal{L}_{n_1} \times
\cdots \times \mathcal{L}_{n_k}, \ \ \tau(\br{g}) =
(\br{g_1},\dots,\br{g_k}).$$
The map $\tau$ has compact fibers and is equivariant with respect to the action
of $\widetilde{ A} {\, \stackrel{\mathrm{def}}{=}\, } A_{n_1} \times \cdots \times A_{n_k}$.
By assumption, there is a sequence $\tilde{a}_j = \left(a^{(j)}_1,
\ldots, a^{(j)}_k\right), \ a^{(j)}_i \in A_{n_i}$ in
$\widetilde{ A}$ such that $a^{(j)}_i [g_i] \to [h_i]$ for each $i$; after passing to
a subsequence, $\tilde{a}_j [g] \to [h]$ where $h$ has the required
properties. Since $\overline{Ax} \supset \overline {A[g]}$, the claim
follows.
\end{proof}
\begin{lemma}\label{BSD}
Let $x\in {\mathcal{L}_n}$. Then there is $[g] \in \overline{Ax}$ such that, up to
a possible permutation of the coordinates,
$g$ is of upper triangular block form as in~\eqref{block form} and
each $A_{n_i}\br{g_i}\subset \mathcal{L}_{n_i}$ is bounded.
\end{lemma}
\begin{proof}
If the orbit $Ax$ is bounded there is nothing to prove. According to
Birch and Swinnerton-Dyer \cite{BirchSD}, if $Ax$ is
unbounded
then $\overline{Ax}$ contains a lattice with a
representative as in~\eqref{block form} (up to a possible permutation
of the coordinates) with $k=2$. Now the claim follows using
induction and appealing to
Lemma~\ref{lem: compt red}.
\end{proof}
\begin{proposition}\label{copt red prop}
It is enough to establish Theorem~\ref{thm: main} for
lattices having a bounded $A$-orbit.
\end{proposition}
\begin{proof}
Let $x\in {\mathcal{L}_n}$ be arbitrary. By Lemma~\ref{BSD}, $\overline{A x}$
contains a lattice $\br{g}$ with $g$ of upper triangular block form
(up to a possible permutation of the coordinates)
with diagonal blocks representing lattices with bounded orbits under
the corresponding diagonal groups. Assuming Theorem~\ref{thm: main}
for lattices having bounded orbits, and applying Lemma~\ref{lem: compt
red}, we may assume that the diagonal blocks of $g$ represent
stable lattices. By Lemma~\ref{lem: block stable is stable}, $\br{g}$ is
stable as well.
\end{proof}
\subsection{Some technical preparations}
We now discuss the subgroups of a lattice $x\in {\mathcal{L}_n}$ which
almost attain the minimum $\alpha(x)$ in~\eqref{alpha}.
\begin{definition}\label{bn}
Given a lattice $x\in {\mathcal{L}_n}$ and $\delta>0$, let
\begin{align*}
\on{Min}_{\delta}(x)&\overset{\on{def}}{=}\set{\Lambda \subset x:\av{\Lambda}^{\frac{1}{r(\Lambda)}}<(1+\delta)\alpha(x)},\\
\tb{V}_{\delta}(x)&\overset{\on{def}}{=}\on{span}\on{Min}_{\delta}(x),\\
\dim_\delta(x)&\overset{\on{def}}{=}\dim\tb{V}_{\delta}(x).
\end{align*}
\end{definition}
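To illustrate Definition \ref{bn}, return to the lattice
$x_s=\on{diag}(e^s,e^{-s}){\mathbb{Z}}^2$ with $s>0$ considered above, for which
$\alpha(x_s)=e^{-s}$. Every subgroup in $\on{Min}_\delta(x_s)$ contained in
${\mathbb{R}}\mathbf{e}_2$ spans the same line, the full lattice belongs to
$\on{Min}_\delta(x_s)$ if and only if $1<(1+\delta)e^{-s}$, that is
$\delta>e^s-1$, and rank-one subgroups not contained in ${\mathbb{R}}\mathbf{e}_2$ appear
only for $\delta>e^{2s}-1$. Consequently $\dim_\delta(x_s)=1$ for
$\delta<e^s-1$ and $\dim_\delta(x_s)=2$ for $\delta>e^s-1$.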
We will need the following technical statement.
\begin{lemma}\label{for the inradius}
For any $\rho>0$
there exists a
neighborhood of the identity $W\subset
G$ with the
following property. Suppose $ 2\rho \leq \delta_0 \leq
n+1$ and suppose $x\in {\mathcal{L}_n}$
is such that
$\dim_{\delta_0-\rho}(x)=\dim_{\delta_0+\rho}(x)$.
Then for any $g\in W$ and any
$\delta\in \left(\delta_0-\frac{\rho}{2},\delta_0+\frac{\rho}{2} \right)$ we have
\begin{equation}\label{eq 1806}
\tb{V}_{\delta}(gx)=g\tb{V}_{\delta_0}(x).
\end{equation}
In particular, there is $1 \leq k \leq n$ such that
for any $g\in W$ and any $\delta\in
\left(\delta_0-\frac{\rho}{2},\delta_0+\frac{\rho}{2} \right)$,
$\dim_\delta(gx)=k$.
\end{lemma}
\begin{proof}
Let $c>1$ be chosen close enough to 1 so that for $2\rho \leq \delta_0
\leq n+1$ we have
\eq{eq: defn c}{c^2\left(1+\delta_0+\frac{\rho}{2} \right) < 1+\delta_0 +\rho \ \ \text{and
} \ \frac{1+\delta_0-\frac{\rho}{2}}{c^2} >
1+\delta_0-\rho.}
Let $W$ be a small enough neighborhood of the identity
in $G$, so that for any discrete subgroup $\Lambda \subset \mathbb{R}^n$ we have
\begin{equation}\label{22.2.2}
g\in W \ \ \implies \ \ c^{-1}\av{\Lambda}^\frac{1}{r(\Lambda)}\le
\av{g\Lambda}^\frac{1}{r(g\Lambda)}\le c\av{\Lambda}^\frac{1}{r(\Lambda)}.
\end{equation}
Such a neighborhood exists since the linear action of $G$
on $\bigoplus_{k=1}^n\bigwedge^k {\mathbb{R}}^n$ is continuous, and since
we can write $|\Lambda| = \|v_1 \wedge \cdots \wedge v_r\|$
where $v_1, \ldots, v_r$ is a generating set for $\Lambda$.
It follows from~\eqref{22.2.2} that for any $x\in {\mathcal{L}_n}$ and $g\in W$ we have
\eq{22.2.3}{
c^{-1}\alpha(x)\le \alpha(gx)\le c\alpha(x).
}
Let $\delta\in \left(\delta_0-\frac{\rho}{2}, \delta_0+\frac{\rho}{2}
\right)$ and $g\in W$. We will show below that
%
\begin{equation}\label{22.2.1'}
g\on{Min}_{\delta_0-\rho}(x)\subset
\on{Min}_{\delta}(gx)\subset
g\on{Min}_{\delta_0+\rho}(x).
\end{equation}
Note first that
\equ{22.2.1'} implies the assertion of the Lemma; indeed, since
$\tb{V}_{\delta_1}(x) \subset \tb{V}_{\delta_2}(x)$ for $\delta_1 < \delta_2$, and since we assumed
that
$\dim_{\delta_0-\rho}(x)=\dim_{\delta_0+\rho}(x)$,
we see that
$\tb{V}_{\delta_0}(x)=\tb{V}_{\delta}(x)$ for $\delta_0-\rho \leq \delta
\leq \delta_0+\rho$. So the
subspaces spanned by the two outer sets in \eqref{22.2.1'}
are both equal to $g\tb{V}_{\delta_0}(x)$, and \eqref{eq 1806}
follows.
It remains to prove~\eqref{22.2.1'}. Let
$\Lambda\in\on{Min}_{\delta_0-\rho}(x)$. Then we find
\[
\begin{split}
\av{g\Lambda}^{\frac{1}{r(g\Lambda)}} & \stackrel{\eqref{22.2.2}}{\leq}
c\av{\Lambda}^{\frac{1}{r(\Lambda)}} \leq c(1+\delta_0 -\rho) \alpha(x) \\ &
\stackrel{\equ{eq: defn c}}{\le}
c^{-1}\left(1+\delta_0-\frac{\rho}{2} \right)\alpha(x) \stackrel{\equ{22.2.3}}{<}(1+\delta)\alpha(gx).
\end{split}\]
By definition this means that $g\Lambda\in\on{Min}_{\delta}(gx)$ which
establishes the first inclusion in \eqref{22.2.1'}. The second
inclusion is similar and is left to the reader.
\end{proof}
\subsection{The cover of $A$}
Let $x\in {\mathcal{L}_n}$ and let $\varepsilon>0$ be given. Define
$\mathcal{U}^{x,\varepsilon}=\left\{U^{x,\varepsilon}_i \right\}_{i=1}^n$ where
\begin{equation}\label{the cover}
U^{x,\varepsilon}_k\overset{\on{def}}{=}\set{a\in A: \on{dim}_\delta(ax)=k\textrm{ for $\delta$
in a neighborhood of }k\varepsilon}.
\end{equation}
\begin{theorem}\Name{order of cover}
Let $x\in {\mathcal{L}_n}$ be such that $Ax$ is bounded. Then for any $\varepsilon \in (0,1)$,
$U^{x,\varepsilon}_n\neq \varnothing.$
\end{theorem}
In this subsection we will reduce the proof of Theorem \ref{thm: main}
to Theorem \ref{order of cover}. This will be done via the following
statement,
which could be interpreted as saying that a
lattice satisfying $\dim_{\delta}(x)=n$ for small $\delta$
is `almost stable'.
\begin{lemma}\label{not growing lemma}
For each $n$, there exists a positive function $\psi(\delta)$ with
$\psi(\delta) \to_{\delta\to 0}0$, such that for any $x\in {\mathcal{L}_n}$,
\eq{eq: lemma first part}{\set{\Lambda_i}_{i=1}^\ell\subset\on{Min}_{\delta}(x) \ \implies \
\Lambda_1+\dots+\Lambda_\ell\in\on{Min}_{\psi(\delta)}(x).
}
In particular, if $\dim_\delta(x)=n$ then $\alpha(x)\ge (1+\psi(\delta))^{-1}$.
\end{lemma}
\begin{proof}
Let $\Lambda,\Lambda'$ be two discrete subgroups of $\mathbb{R}^n$.
The following inequality is straightforward to
prove via the Gram-Schmidt procedure for computing $|\Lambda|$:
\begin{equation}\label{volume formula}
\av{\Lambda+\Lambda'}\le\frac{\av{\Lambda}\cdot\av{\Lambda'}}{\av{\Lambda\cap\Lambda'}}.
\end{equation}
Here we adopt the convention that $\av{\Lambda\cap\Lambda'}=1$ when
$\Lambda\cap\Lambda'=\set{0}$.
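As a simple instance of~\eqref{volume formula} (included only for
illustration), take $\Lambda={\mathbb{Z}}\mathbf{e}_1$ and $\Lambda'={\mathbb{Z}}(\mathbf{e}_1+\mathbf{e}_2)$ in
${\mathbb{R}}^2$: then $\Lambda\cap\Lambda'=\set{0}$, $\av{\Lambda}=1$,
$\av{\Lambda'}=\sqrt{2}$, and $\av{\Lambda+\Lambda'}=\av{{\mathbb{Z}}^2}=1\le\sqrt{2}$, the
strict inequality reflecting the non-orthogonality of the two summands.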
Let $x\in {\mathcal{L}_n}$ and let $\set{\Lambda_i}_{i=1}^\ell\subset
\on{Min}_{\delta}(x)$. Assume first that
$\ell\le n$. We prove by induction on $\ell$ the existence of a
function $\psi_\ell(\delta)\overset{\delta\to0}{\longrightarrow}0$
for which
$\Lambda_1+\dots+\Lambda_\ell\in\on{Min}_{\psi_\ell(\delta)}(x)$. For
$\ell=1$ one can trivially pick $\psi_1(\delta)=\delta$.
Assuming the existence of $\psi_{\ell-1}$, set $\Lambda=\Lambda_1$,
$\Lambda'=\Lambda_2+\dots+\Lambda_\ell$, $\alpha=\alpha(x)$ and note
that $r(\Lambda+\Lambda')=r(\Lambda)+r(\Lambda')-r(\Lambda\cap\Lambda')$. We deduce
from~\eqref{volume formula} and the definitions that
\begin{align}\label{eneq2007}
\nonumber \av{\Lambda+\Lambda'}&\le
\frac{\av{\Lambda}\cdot\av{\Lambda'}}{\av{\Lambda\cap\Lambda'}}
\le
\frac{\pa{(1+\delta)\alpha}^{r(\Lambda)}\pa{(1+\psi_{\ell-1}(\delta))\alpha}^{r(\Lambda')}}{\alpha^{r(\Lambda\cap\Lambda')}}\\
&=(1+\delta)^{r(\Lambda)}(1+\psi_{\ell-1}(\delta))^{r(\Lambda')}\alpha^{r(\Lambda+\Lambda')}.
\end{align}
Hence, if we set
$$\psi_\ell(\delta)\overset{\on{def}}{=} \max
\pa{(1+\delta)^{r(\Lambda)}(1+\psi_{\ell-1}(\delta))^{r(\Lambda')}}^{\frac{1}{r(\Lambda+\Lambda')}}-1,$$
where
the maximum is taken over all possible values of $r(\Lambda), r(\Lambda'),
r(\Lambda+\Lambda')$ then $\psi_\ell(\delta)\longrightarrow_{\delta\to 0}0$
and~\eqref{eneq2007} implies that
$\Lambda+\Lambda'\in\on{Min}_{\psi_\ell(\delta)}(x)$ as desired. We take
$\psi(\delta)\overset{\on{def}}{=}\max_{\ell=1}^n\psi_\ell(\delta).$
Now if $\ell >n$ one can find indices
$1\le i_1<i_2<\dots<i_m\le \ell$ with $m\le n$ such that
$r(\sum_{i=1}^\ell\Lambda_i)=r(\sum_{j=1}^m\Lambda_{i_j})$ and in
particular,
$\sum_{j=1}^m\Lambda_{i_j}$ is of finite index in $\sum_{i=1}^\ell\Lambda_i$. From the first part of the
argument we see that $\sum_{j=1}^m\Lambda_{i_j}\in \on{Min}_{\psi(\delta)}(x)$ and as the covolume of
$\sum_{i=1}^\ell\Lambda_i$ is not larger than that of $\sum_{j=1}^m\Lambda_{i_j}$ we deduce that
$\sum_{i=1}^\ell\Lambda_i \in\on{Min}_{\psi(\delta)}(x)$ as well.
To verify the last assertion, note that
when $\dim_\delta(x)=n$, \equ{eq: lemma first part} implies the
existence of a finite index subgroup $x'$
of $x$ belonging to $\on{Min}_{\psi(\delta)}(x)$. In particular,
$1\leq \av{x'}^{\frac{1}{n}}\le(1+\psi(\delta))\alpha(x)$ as desired.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm: main} assuming
Theorem~\ref{order of cover}]
By Proposition~\ref{copt red prop} we may assume that $Ax$ is
bounded. Let $\varepsilon_j \in (0,1)$ be such that $\varepsilon_j \to_j 0$. By
Theorem~\ref{order of cover} we know that
$U^{x,\varepsilon_j}_n
\neq \varnothing$. This means there is a sequence $a_j\in A$ such that
$\dim_{\delta_j}(a_jx)=n$
where $\delta_j=n\varepsilon_j\to 0$.
The sequence $\set{a_jx}$ is
bounded, and hence has limit points, so passing to a subsequence we
let $x' {\, \stackrel{\mathrm{def}}{=}\, } \lim a_jx.$
By Lemma~\ref{not growing lemma} we have
$$1\ge
\limsup_j\alpha(a_jx)\ge \liminf_j \alpha(a_j x)\ge \lim_j (1+\psi(\delta_j))^{-1}=1,$$
which shows that $\lim_j\alpha(a_j x)=1$.
The function $\alpha$ is continuous on ${\mathcal{L}_n}$ and therefore $\alpha(x')=1$,
i.e. $x' \in \overline{Ax}$ is stable.
\end{proof}
\section{Covers of Euclidean space}\Name{establishing
topological input}
In this section we will prove Theorem \ref{order of cover}, thus
completing the proof of Theorem \ref{thm: main}. Our main
tool will be McMullen's
Theorem~\ref{topological input}. Before stating it we introduce some
terminology. We fix an invariant metric on $A$; in the definitions below, $R>0$ and $k \in \{0, \ldots, n-1\}$.
\begin{definition}\Name{def: almost affine}
We say that a subset $U\subset A$ is $(R,k)$-\textit{almost affine} if it is
contained in an $R$-neighborhood of a coset of a connected $k$-dimensional
subgroup of $A$.
\end{definition}
\begin{definition}\Name{def: inradius}
An open cover $\mathcal{U}$ of $A$ is said to have \textit{inradius} $r>0$ if
for any $a\in A$ there exists $U\in\mathcal{U}$ such that $B_r(a)\subset
U$, where $B_r(a)$ denotes the ball in $A$ of radius $r$ around $a$.
\end{definition}
\begin{theorem}[Theorem 5.1 of~\cite{McMullenMinkowski}]\Name{topological input}
Let $\mathcal{U}$ be an open cover of $A$ with inradius $r>0$ and let
$R>0$. Suppose that for any $1\le k\le n-1$,
every connected component $V$ of the intersection of
$k$ distinct elements of $\mathcal{U}$ is
$(R,(n-1-k))$-almost affine. Then there is
a point in $A$ which belongs to
at least $n$ distinct elements of $\mathcal{U}$. In particular, there are at
least $n$ distinct non-empty sets in $\mathcal{U}$.
\end{theorem}
The hypotheses of McMullen's theorem were slightly weaker but the
version above is sufficient for our purposes.
We give a different proof of
Theorem \ref{topological input} in this paper; namely it follows from
the more general Theorem
\ref{thm: covering}, which is proved
in Appendix \ref{appendix: Levin}.
\subsection{Verifying the hypotheses of Theorem \ref{topological
input} }
Below we fix a compact set $K\subset {\mathcal{L}_n}$ and a lattice $x$ for which
$Ax\subset K$. Furthermore, we fix $\varepsilon>0$ and denote
the collection $\mathcal{U}^{x, \varepsilon}$ defined in~\eqref{the cover} by
$\mathcal{U}=\set{U_i}_{i=1}^n$.
\begin{lemma}\Name{lem: positive inradius}
The collection $\mathcal{U}$ forms an open cover of $A$ with positive inradius.
\end{lemma}
\begin{proof}
The fact that the sets $U_i\subset A$ are open follows readily from
the requirement in~\eqref{the cover} that $\on{dim}_\delta$ is constant
in a neighborhood of $\delta=k\varepsilon$.
Given $a\in A$, let $1\le k_0\le n$ be the minimal number $k$ for which
$\dim_{(k+\frac{1}{2})\varepsilon}(ax)\le k$
(this inequality holds trivially for $k=n$).
From the minimality of $k_0$, and since $\delta\mapsto\dim_\delta(ax)$ is nondecreasing, we conclude that
$\dim_\delta(ax)=k_0$ for
any
$\delta\in \left[\pa{k_0-\frac{1}{2}}\varepsilon,\pa{k_0+\frac{1}{2}}\varepsilon \right]$.
This
shows that $a\in U_{k_0}$ so
$\mathcal{U}$ is indeed a cover of $A$.
We now show that the cover has positive inradius.
Let $W \subset G$ be the open neighborhood of the identity obtained
from
Lemma~\ref{for the inradius} for $\rho {\, \stackrel{\mathrm{def}}{=}\, } \frac{\varepsilon}{2}$.
Taking $\delta_0 {\, \stackrel{\mathrm{def}}{=}\, } k_0 \varepsilon$ we find that
for any
$g\in W$,
$\delta\in \pa{\pa{k_0-\frac{1}{4}}\varepsilon,\pa{k_0+\frac{1}{4}}\varepsilon}$ we
have that $\dim_\delta(gax)=k_0$. This shows that
$(W\cap A)a\subset U_{k_0}$. Since $W\cap A$ is an open neighborhood
of the identity in $A$ and the metric on $A$ is invariant under
translation by elements of $A$, there exists $r>0$
(independent of $k_0$ and $a$) so that $B_r(a)\subset U_{k_0}$. In
other words, the inradius of $\mathcal{U}$ is positive as desired.
\end{proof}
The following will be used for verifying the second hypothesis of
Theorem~\ref{topological input}.
\begin{lemma}\Name{flat things}
There exists $R>0$ such that any connected component of $U_k$ is $(R,k-1)$-almost affine.
\end{lemma}
\begin{definition}\label{cv}
For a discrete subgroup $\Lambda \subset \mathbb{R}^n$ of rank $k$,
let $$c(\Lambda)\overset{\on{def}}{=}\inf\set{\av{a\Lambda}^{1/k}:a\in A},$$ and say that
$\Lambda$ is {\it incompressible} if $c(\Lambda)>0$.
\end{definition}
Lemma~\ref{flat things} follows from:
\begin{theorem}[{\cite[Theorem 6.1]{McMullenMinkowski}}]\Name{finite
distance from a group}
For any positive $c,C$ there exists $R>0$ such that if
$\Lambda \subset \mathbb{R}^n$ is an incompressible discrete subgroup of rank
$k$ with $c(\Lambda)\ge c$ then
$\set{a\in A: \av{a\Lambda}^{1/k}\le C}$ is $(R,j)$-almost affine for some $j\le
\gcd(k,n)-1$.
\end{theorem}
\begin{proof}[Proof of Lemma~\ref{flat things}]
We first claim that there exists $c>0$ such that
for any discrete subgroup $\Lambda \subset x$ we have that $c(\Lambda)\ge
c$. To see this, recall that $Ax$ is contained in a compact subset
$K$, and hence by Mahler's compactness criterion, there is a positive
lower bound on
the length of any non-zero vector
belonging to a lattice in $K$.
On the other
hand, Minkowski's convex body theorem shows that the shortest nonzero
vector in a discrete subgroup $\Lambda \subset {\mathbb{R}}^n$ is bounded above
by a constant multiple of $|\Lambda|^{1/r(\Lambda)}$. This implies the
claim.
In light of Theorem \ref{finite
distance from a group}, it suffices to show that there is $C>0$
such that if $V\subset U_k$ is a
connected component, then there exists
$\Lambda \subset x$ such that $V\subset\set{a\in A: \av{a\Lambda}^{1/k}\le C}$.
For any $1\le k\le n$, write $\tb{gr}_k$ for the Grassmannian of
$k$-dimensional subspaces of $\mathbb{R}^n$.
Define
$$
\mathcal{M}:U_k\to \tb{gr}_k, \ \
\mathcal{M}(a)\overset{\on{def}}{=} a^{-1}\tb{V}_{k\varepsilon}(ax).$$
Observe that $\mathcal{M}$ is locally constant on $U_k$. Indeed, by
definition of $U_k$,
for $a_0\in U_k$ there exists
$0<\rho< \frac{\varepsilon}{2}$ such that
$\dim_\delta(a_0x)=k$ for any $\delta\in(k\varepsilon-\rho,k\varepsilon+\rho)$. Applying
Lemma~\ref{for the inradius}
for the lattice $a_0x$ with $\rho$ and $\delta_0=k\varepsilon$
we see
that for any $a$ in a neighborhood
of the identity in $A$,
\begin{align*}
\mathcal{M}(aa_0)&=a_0^{-1}a^{-1}\tb{V}_{k\varepsilon}(aa_0x)
=a_0^{-1}\tb{V}_{k\varepsilon}(a_0x)=\mathcal{M}(a_0).
\end{align*}
Now let $\Lambda {\, \stackrel{\mathrm{def}}{=}\, } x\cap \mathcal{M}(a)$ where $a\in V$; $\Lambda$ is well-defined,
i.e.\ independent of the choice of $a\in V$, since $\mathcal{M}$ is locally
constant and $V$ is connected.
Then
for $a \in V$,
\begin{align*}
a\Lambda&=a(x\cap\mathcal{M}(a)) =a(x\cap a^{-1}\tb{V}_{k\varepsilon}(ax))=ax\cap\tb{V}_{k\varepsilon}(ax).
\end{align*}
By Lemma~\ref{not growing lemma} we have that
\begin{align*}
\av{a\Lambda}^{1/k}=\av{
ax\cap \tb{V}_{k\varepsilon}(ax)}^{1/k}<(1+\psi(k\varepsilon))\alpha(ax).
\end{align*}
Since $\alpha(ax) \leq 1$ we may take $C {\, \stackrel{\mathrm{def}}{=}\, } 1+\psi(k\varepsilon)$ to
complete the proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{order of cover}]
Assume by contradiction that $Ax$ is bounded but $U_n^{x, \varepsilon} =
\varnothing$ for some $\varepsilon \in (0,1)$. Then by Lemma \ref{lem: positive inradius},
$$\mathcal{U} {\, \stackrel{\mathrm{def}}{=}\, } \left \{U_1,
\ldots, U_{n-1} \right \}, \text{ where } U_j {\, \stackrel{\mathrm{def}}{=}\, } U_j^{x, \varepsilon}, $$
is a cover of $A$ of positive inradius. Moreover, if $V$ is a
connected component of $U_{j_1} \cap \cdots \cap U_{j_k}$ with $j_1 < \cdots < j_k \leq n-1$,
then $V \subset U_{j_1}$ and $j_1 \leq n-k$; hence $V$ lies in a connected
component of $U_{j_1}$, which by Lemma \ref{flat things} is
$(R,j_1-1)$-almost affine, with $j_1-1 \leq n-1-k$. Thus
the hypotheses of Theorem~\ref{topological input} are satisfied.
We
deduce that $\mathcal{U} = \left\{U_1, \ldots, U_{n-1} \right \}$
contains at least $n$ distinct non-empty sets, which is impossible.
\end{proof}
\section{Bounds on Mordell's constant}\Name{sec: rankin bounds}
In analogy with~\eqref{alpha} we define for
any $x\in {\mathcal{L}_n}$ and $1\le k\le n$,
\begin{align}\Name{eq: k quantities}
\mathcal{V}_k(x)&\overset{\on{def}}{=}\set{\av{\Lambda}^{1/r(\Lambda)}:\Lambda \subset x, r(\Lambda)=k},\\
\alpha_k(x)&\overset{\on{def}}{=}\min\mathcal{V}_k(x).
\end{align}
The following is clearly a consequence of Theorem \ref{thm: main}:
\begin{cor}\Name{cor: Euclidean}
For any $x \in {\mathcal{L}_n}$, any $\varepsilon>0$ and any $k \in \{1, \ldots, n\}$ there is $a \in
A$ such that $\alpha_k(ax) \geq 1-\varepsilon$.
\end{cor}
As the lattice $x = {\mathbb{Z}}^n$ shows, the constant 1 appearing in
this corollary cannot be improved for any $k$. Note also that the case
$k=1$ of
Corollary \ref{cor: Euclidean}, although not stated explicitly in
\cite{McMullenMinkowski}, could be derived easily from McMullen's results in
conjunction with \cite{BirchSD}.
\begin{proof}[Proof of Corollary \ref{cor: Ramharter conj}]
Since the $A$-action maps a symmetric box $\mathcal{B}$ to a
symmetric box of the same volume, the function $\kappa : {\mathcal{L}_n} \to {\mathbb{R}}$
in \equ{eq: defn const} is $A$-invariant. By the case $k=1$ of
Corollary \ref{cor: Euclidean}, for any $\varepsilon>0$ and any $x \in {\mathcal{L}_n}$
there is $a \in A$ such that $ax$ does not contain nonzero vectors of
Euclidean length at most $1-\varepsilon$, and hence does not contain nonzero vectors
in the cube $\left [-\left(\frac{1}{\sqrt{n}} - \varepsilon\right),
\left(\frac{1}{\sqrt{n}} - \varepsilon\right) \right ]^n$, since any vector in this
cube has Euclidean length at most $\sqrt{n}\left(\frac{1}{\sqrt{n}} -
\varepsilon\right) = 1-\varepsilon\sqrt{n} \leq 1-\varepsilon$. Letting
$\varepsilon\to 0$, this implies that
$\kappa(x) \geq \left(\frac{1}{\sqrt{n}} \right)^n$, as claimed.
\end{proof}
We do not know whether the bound $\kappa_n \geq n^{-n/2}$ is
asymptotically optimal. However, it is not optimal for any fixed
dimension $n$:
\begin{proposition} \Name{prop: not optimal}
For any $n$, $\kappa_n > n^{-n/2}$.
\end{proposition}
\begin{proof}
It is clear from the definition of the functions $\kappa$ and
$\alpha_k$ that if $x_j \to x_0$ in ${\mathcal{L}_n}$, then
$$\kappa(x_0) \leq \liminf_j \kappa(x_j) \ \ \text{and } \
\alpha_k(x_0) \geq \limsup_j \alpha_k(x_j).$$
A simple compactness argument implies that the
infimum in \equ{eq: defn kappan} is attained, that is, there is
$x \in {\mathcal{L}_n}$ such that $\kappa_n = \kappa(x)$; moreover, for any $x_0
\in \overline{Ax}, \kappa(x_0) = \kappa(x)=\kappa_n$. Using the case $k=1$ of Corollary
\ref{cor: Euclidean} together with the semicontinuity of $\alpha_1$ noted above, we let $x_0$ be a lattice in
$\overline{Ax}$ with $\alpha_1(x_0)
\geq 1$. That is, $x_0$ contains no nonzero vectors in the open unit
Euclidean ball, so the open cube
$C {\, \stackrel{\mathrm{def}}{=}\, } \left( -\frac{1}{\sqrt{n}}, \frac{1}{\sqrt{n}}\right)^n$ is
admissible. Moreover, the only possible vectors in $x_0$ on $\partial
\, C$ are on the corners of $C$, so there is $\varepsilon>0$ such that the box
$C' {\, \stackrel{\mathrm{def}}{=}\, } \left( -\frac{1}{\sqrt{n}}, \frac{1}{\sqrt{n}}\right)^{n-1} \times
\left(-\left(\frac{1}{\sqrt{n}} + \varepsilon \right) ,
\frac{1}{\sqrt{n}}+ \varepsilon\right)$ is also admissible. Taking closed
boxes $\mathcal{B} \subset C'$ with volume arbitrarily close to that of $C'$,
we see that
$$
\kappa_n = \kappa(x_0) \geq \frac{\Vol(C')}{2^n} > n^{-n/2}.
$$
\end{proof}
Our next goal is Corollary \ref{cor: 1 mod 4}, which gives an explicit
lower bound on $\kappa_n$
improving \equ{eq: our bound} for $n$ congruent to 1 mod 4. To obtain our bound
we treat separately lattices with bounded or unbounded
$A$-orbits. If $Ax$ is unbounded we bound $\kappa(x)$ by using an inductive
procedure and the work of Birch and Swinnerton-Dyer, as in \S
\ref{sec: reduction to compact orbits}. In the bounded case we use arguments of
McMullen and known
upper bounds for Hadamard's determinant problem. Our method
applies with minor modifications whenever $n$ is not divisible by
4.
We begin with an analogue of Lemma \ref{lem: block stable is stable}.
\begin{lemma}\Name{lem: bound on kappa in blocks}
Suppose $x = [g] \in {\mathcal{L}_n}$ with $g$ in upper triangular block form as
in \equ{block form}. Then $\kappa(x) \geq \prod_1^k \kappa
([g_i])$. In particular $\kappa(x) \geq \prod_1^k n_i^{-n_i/2}.$
\end{lemma}
\begin{proof}
By induction, it suffices to prove the Lemma in case $k=2$. In this
case there is a direct sum decomposition ${\mathbb{R}}^n = V_1 \oplus V_2$ where
the $V_i$ are spanned by standard basis vectors, and if we write $\pi:
{\mathbb{R}}^n \to V_2$ for the corresponding projection, then $[g_1] = x \cap
V_1, [g_2] = \pi(x)$. Write $\kappa^{(i)} {\, \stackrel{\mathrm{def}}{=}\, } \kappa([g_i])$. Then for
$\varepsilon>0$,
there are symmetric boxes $\mathcal{B}_i \subset V_i$
such that $\mathcal{B}_i$ is admissible for $[g_i]$ and
$$\Vol(\mathcal{B}_i) \geq 2^{n_i}\left(\kappa^{(i)} - \varepsilon\right).$$
We claim that $\mathcal{B} {\, \stackrel{\mathrm{def}}{=}\, } \mathcal{B}_1 \times \mathcal{B}_2$ is
admissible for $x$. To see this, suppose $u \in x \cap
\mathcal{B}$. Since $\pi(u) \in \mathcal{B}_2$ and $\mathcal{B}_2$
is admissible for $\pi(x) = [g_2]$ we must have
$\pi(u) =0$, i.e. $u \in x \cap V_1 = [g_1]$; since
$\mathcal{B}_1$ is admissible for $[g_1]$ we must have $u=0$.
This implies
$$\kappa(x) \geq 2^{-n} \Vol(\mathcal{B}) = 2^{-n_1} \Vol(\mathcal{B}_1)
\cdot 2^{-n_2}
\Vol(\mathcal{B}_2) \geq (\kappa^{(1)} -
\varepsilon)(\kappa^{(2)}-\varepsilon),$$
and the result follows taking $\varepsilon \to 0$.
\end{proof}
\begin{corollary}\Name{cor: kappa unbounded orbits}
If $x \in {\mathcal{L}_n}$ is such that $Ax$ is unbounded then
\eq{eq: kappa star}{
\kappa(x) \geq
(n-1)^{-(n-1)/2}.
}
\end{corollary}
\begin{proof}
If $Ax$ is unbounded then by \cite{BirchSD}, up to a permutation of
the axes, there is $x' \in
\overline{Ax}$ so that $x' = [g]$ is in upper triangular form, with $k
\geq 2$ blocks. Let the
corresponding parameters as in \equ{block form} be $n= n_1+ \cdots
+n_k$. Since $\kappa(x)
\geq \kappa(x')$, by Lemma \ref{lem: bound on kappa in blocks} it
suffices to prove
that
\eq{eq: suffices to prove that}{
\prod_{i=1}^k \frac{1}{n_i^{n_i/2}} \geq \frac{1}{(n-1)^{\frac{n-1}{2}}}.
}
It is easy to check that for $j=1, \ldots, n-1$,
$$
j^{\frac{j}{2}}(n-j)^{\frac{n-j}{2}} \leq (n-1)^{\frac{n-1}{2}},
$$ and the case $k=2$ of \equ{eq: suffices to prove that} follows.
By induction on $k$ one then shows that
$
\prod_{i=1}^k n_i^{-n_i/2} \geq (n-k+1)^{-\frac{n-k+1}{2}}
$
and this implies \equ{eq: suffices to prove that} for all $k\geq 2$.
\end{proof}
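The elementary inequality invoked in the last proof can also be confirmed
numerically; the following sketch (an illustration only, with an arbitrary
range of $n$; it is not part of the argument) checks it in logarithmic form
to avoid overflow.
\begin{verbatim}
# Illustration only: check j^(j/2) (n-j)^((n-j)/2) <= (n-1)^((n-1)/2)
# for j = 1, ..., n-1, working with logarithms.
import math

def lhs_log(n, j):
    return 0.5 * (j * math.log(j) + (n - j) * math.log(n - j))

for n in range(2, 200):
    bound = 0.5 * (n - 1) * math.log(n - 1)
    assert all(lhs_log(n, j) <= bound + 1e-9 for j in range(1, n))
print("inequality verified for 2 <= n < 200")
\end{verbatim}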
To treat the bounded orbits we will use known bounds on the Hadamard
determinant problem, which we now
recall. Let
\eq{eq: defn hn}{
h_n {\, \stackrel{\mathrm{def}}{=}\, } \sup \left \{ |\det (a_{ij})|: \forall i,j \in \{1, \ldots, n\},
|a_{ij}| \leq 1 \right \}.
}
Hadamard showed that $h_n \leq n^{n/2}$ and proved that this bound is
not optimal unless $n$ is equal to 1, 2 or is a multiple of 4. Explicit
upper bounds for $n$ not divisible by 4 have been obtained
by Barba, Ehlich and Wojtas (see \cite{brenner, wiki_hadamard}).
\begin{proposition}\Name{prop: improving using Hadamard}
If $x \in {\mathcal{L}_n}$ has a bounded $A$-orbit then $\kappa(x) \geq
\frac{1}{h_n}$.
\end{proposition}
\begin{proof}[Sketch of proof]
Let $\varepsilon>0$. There is $p <\infty$ such that the $L^p$ norm and the
$L^\infty$ norm on ${\mathbb{R}}^n$ are $(1+\varepsilon)$-bi-Lipschitz equivalent; i.e.\ for any $v
\in {\mathbb{R}}^n$,
\eq{eq: bilipschitz}{
\frac{\|v\|_p}{1+\varepsilon} \leq \|v\|_{\infty} \leq (1+\varepsilon) \|v\|_p.
}
In \cite{McMullenMinkowski}, McMullen showed that the closure of any
bounded $A$-orbit contains a well-rounded lattice, i.e. a lattice
whose shortest nonzero vectors span ${\mathbb{R}}^n$. In McMullen's paper, the
length of the shortest vectors was measured using the Euclidean
norm, but {\em McMullen's arguments apply equally well to the shortest
vectors with respect to the $L^p$ norm}. Thus there is $a \in A$ and
vectors $v_1, \ldots, v_n \in ax$ spanning ${\mathbb{R}}^n$, such that for $i=1, \ldots, n$,
$$ \|v_i\|_p \in [r, (1+\varepsilon)r].
$$
Here $r$ is the length, with respect to
the $L^p$-norm, of the shortest nonzero vector of $ax$. Using the two
sides of \equ{eq: bilipschitz} we find that the symmetric cube of
half-side $r/(1+\varepsilon)$ is admissible for $ax$, and that
the $L^\infty$ norm of each $v_i$ is at most $(1+\varepsilon)^2 r$. Let $M$ be
the matrix whose columns are the $v_i$. Since the $v_i$ span ${\mathbb{R}}^n$,
$\det M \neq 0$, and since the $v_i$ lie in the unimodular lattice $ax$,
$\det M$ is a nonzero integer multiple of its covolume, so $|\det M| \geq
1$. Recalling \equ{eq: defn hn} and rescaling the entries of $M$ by $(1+\varepsilon)^2 r$, we find that
$$1 \leq |\det M| \leq \left((1+\varepsilon)^2 r\right)^{n} h_n,
$$
and by definition of $\kappa$ we find
$$
\kappa(x) = \kappa(ax) \geq \left(\frac{r}{1+\varepsilon} \right)^n.
$$
Putting these together and letting $\varepsilon \to 0$ we see that
$\kappa(x) \geq \frac{1}{h_n}$, as claimed.
\end{proof}
\begin{corollary}\Name{cor: 1 mod 4}
If $n \geq 5$ is congruent to 1 mod 4, then
\eq{eq: better bound}{
\kappa_n \geq \frac{1}{\sqrt{2n-1}(n-1)^{(n-1)/2}}.
}
\end{corollary}
\begin{proof}
The right hand side of
\equ{eq: better bound} is clearly smaller than the right hand side of
\equ{eq: kappa star}. Now the
claim follows from Corollary \ref{cor: kappa unbounded orbits} and
Proposition \ref{prop: improving using Hadamard}, using Barba's bound
\eq{eq: Barba}{
h_n \leq \sqrt{2n-1}(n-1)^{(n-1)/2}.
}
\end{proof}
The same argument applies in the other cases in which $n$ is
sufficiently large and is not divisible by 4, since in these cases
there are explicit upper bounds for the numbers $h_n$ which could be
used in place of \equ{eq: Barba}.
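To get a feeling for the size of the improvement, one may compare the two
lower bounds numerically; a minimal sketch (illustration only, with sample
dimensions chosen arbitrarily):
\begin{verbatim}
# Illustration only: compare the general bound n^(-n/2) with the bound
# of (eq: better bound) for n congruent to 1 mod 4.
import math

for n in (5, 9, 13, 17):
    general = n ** (-n / 2)
    improved = 1.0 / (math.sqrt(2 * n - 1) * (n - 1) ** ((n - 1) / 2))
    print(n, general, improved, improved / general)
\end{verbatim}
For instance, for $n=5$ the two bounds are $5^{-5/2}\approx 0.0179$ and
$1/48\approx 0.0208$.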
\section{Two strategies for Minkowski's conjecture}\Name{sec: MC}
We begin by recalling the well-known Davenport-Remak strategy
for proving Minkowski's conjecture. The function
$N(u) = \prod_1^n u_i$ is clearly $A$-invariant, and
it follows that the quantity
$$\widetilde{ N}(x) {\, \stackrel{\mathrm{def}}{=}\, } \sup_{u \in {\mathbb{R}}^n} \inf_{v
\in x} |N(u-v)|$$
appearing in \equ{eq: Minkowski conj} is
$A$-invariant. Moreover, it is easy to show that if $x_j \to x$ in
${\mathcal{L}_n}$ then $\widetilde{ N}(x) \geq \limsup_j \widetilde{ N}(x_j)$. Therefore, in order
to show the estimate \equ{eq: Minkowski conj} for $x' \in {\mathcal{L}_n}$, it is
enough to show it for some $x \in \overline{Ax'}$. Suppose that $x$
satisfies \equ{eq: covrad} with $d=n$; that is for every $u \in {\mathbb{R}}^n$ there is $v
\in x$ such that $\|u-v\| \leq \frac{\sqrt{n}}{2}$.
Then applying
the inequality of arithmetic and geometric means one finds
$$\prod_1^n \left(|u_i-v_i|^2 \right)^{\frac1n} \leq \frac{1}{n} \sum_1^n |u_i-v_i|^2 \leq \frac{1}{4}
$$
which implies $|N(u-v)| \leq \frac{1}{2^n}$.
The upshot is that in order to prove Minkowski's conjecture, it is
enough to prove that for every $x' \in {\mathcal{L}_n}$ there is $x \in
\overline{Ax'}$ satisfying \equ{eq: covrad}. So in light of Theorem
\ref{thm: main} we obtain:
\begin{corollary}\Name{cor: for Minkowski 1}
If all stable lattices in ${\mathcal{L}_n}$ satisfy \equ{eq: covrad}, then
Minkowski's conjecture is true in dimension $n$.
\end{corollary}
In the next two subsections, we outline two strategies for
establishing that all stable lattices satisfy
\equ{eq: covrad}.
Both strategies yield affirmative answers in dimensions $n
\leq 7$, thus providing new proofs of Minkowski's conjecture in these
dimensions.
\subsection{Using Korkine-Zolotarev reduction}
Korkine-Zolotarev reduction is a classical method for
choosing a basis $v_1, \ldots, v_n$ of a lattice $x \in {\mathcal{L}_n}$. Namely
one takes for $v_1$ a shortest nonzero vector of $x$ and
denotes its length by $A_1$. Then, proceeding inductively, for $v_i$ one takes
a vector whose projection onto $({\rm span}(v_1, \ldots, v_{i-1}))^\perp$ is
shortest (among those with nonzero projection), and denotes the length
of this
projection by $A_i$. In case there is more
than one shortest vector the process
is not uniquely defined. Nevertheless we call $A_1, \ldots, A_n$ the
{\em diagonal KZ coefficients of $x$} (with the understanding that
these may be multiply defined for some measure zero subset of ${\mathcal{L}_n}$). Since $x$
is unimodular we always have
\eq{eq: det one}{\prod A_i
=1.}
Korkine and Zolotarev proved the bounds
\eq{eq: KZ bounds}{
A_{i+1}^2 \geq \frac34 A_i^2, \ \ A_{i+2}^2 \geq \frac23 A_i^2.
}
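When a Korkine-Zolotarev reduced basis is already at hand, the diagonal
coefficients $A_i$ are simply the successive Gram-Schmidt norms, so
\equ{eq: det one} is the statement that their product equals the covolume.
The following minimal numerical sketch (illustration only; it assumes its
input rows already form a KZ-reduced basis and does not perform the
reduction itself) extracts the $A_i$ and checks \equ{eq: det one}.
\begin{verbatim}
# Illustration only: diagonal coefficients A_i as successive Gram-Schmidt
# norms of an (assumed KZ-reduced) basis; their product equals |det|,
# which is 1 for a unimodular lattice.
import numpy as np

def diagonal_coefficients(B):
    Q, R = np.linalg.qr(B.T)     # |diag(R)| carries the Gram-Schmidt norms
    return np.abs(np.diag(R))

B = np.array([[1.0, 0.0, 0.0],   # rows: a KZ-reduced basis of Z^3
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])
A_i = diagonal_coefficients(B)
print(A_i, np.prod(A_i))         # A_i = (1, 1, 1), product = 1 = |det B|
\end{verbatim}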
A method introduced by Woods \cite{Woods_n=4} and developed further in
\cite{hans-gill1} leads to an upper bound on
$\covrad(x)$ in terms of the diagonal KZ coefficients.
The method relies on the following estimate. Below
$\gamma_n {\, \stackrel{\mathrm{def}}{=}\, } \sup_{x \in {\mathcal{L}_n}} \alpha_1(x)$, where $\alpha_1$
is defined via \equ{eq: k quantities}, that is, $\gamma^2_n$ is the
so-called Hermite constant.
\begin{lemma}[Woods]\Name{lemma of Woods}
Suppose that $x$ is a lattice in ${\mathbb{R}}^n$ of covolume $d$, let $A_1$ denote the
length of a shortest nonzero vector of $x$, and suppose
that $2 A_1^n \geq
d \gamma_{n+1}^{n+1}$. Then
$$
\covrad^2(x) \leq A_1^2 -\frac{A_1^{2n+2}}{d^2 \gamma_{n+1}^{2n+2}}.
$$
\end{lemma}
Woods also used the following observation:
\begin{lemma}\Name{first Woods lemma}
Let $x$ be a lattice in ${\mathbb{R}}^n$, let $\Lambda$ be a subgroup, and let
$\Lambda'$ denote the projection of $x$ onto $({\rm span}
\Lambda)^{\perp}$. Then
$$
\covrad^2(x) \leq \covrad^2(\Lambda) +\covrad^2(\Lambda').
$$
\end{lemma}
As a consequence of Lemmas \ref{lemma of Woods} and \ref{first Woods
lemma}, we obtain:
\begin{proposition}\Name{prop: combining Woods}
Suppose $A_1, \ldots, A_n$ are diagonal KZ coefficients of $x \in
{\mathcal{L}_n}$ and suppose $n_1, \dots, n_k$ are positive integers with $n = n_1 + \cdots
+ n_k$.
Set
\eq{eq: defn mi di}{m_i {\, \stackrel{\mathrm{def}}{=}\, } n_1 + \cdots
+ n_i \ \text{and } d_i {\, \stackrel{\mathrm{def}}{=}\, } \prod_{j=m_{i-1}+1}^{m_{i}} A_j.}
If
\eq{eq: assumption of woods}{
2A_{m_{i-1}+1} \geq d_i \gamma_{n_i+1}^{n_i+1}
}
for each $i$, then
\eq{eq: consequence}{
\covrad^2(x ) \leq \sum_{i=1}^k \left(A^2_{m_{i-1}+1} -
\frac{A^{2n_i+2}_{m_{i-1}+1}}{d_i^2 \gamma_{n_i+1}^{2n_i+2}} \right)
}
\end{proposition}
\begin{proof}
Let $v_1, \ldots, v_n$ be a basis of $x$
obtained by the Korkine-Zolotarev reduction process. Let
$\Lambda_1$ be the subgroup of $x$ generated by $v_1, \ldots,
v_{n_1}$, and for $i=2, \ldots, k$ let
$\Lambda_i$ be the projection onto
$(\bigoplus_1^{i-1} \Lambda_j)^{\perp}$ of the subgroup of $x$
generated by $v_{m_{i-1}+1}, \ldots, v_{m_{i}}$. This is a lattice of
rank $n_i$, and arguing as in the proof of \equ{eq: claim 1} we
see that it has covolume $d_i$. The assumption
\equ{eq: assumption of woods} says that we may apply Lemma \ref{lemma
of Woods}
to each $\Lambda_i$. We obtain
$$\covrad^2(\Lambda_i) \leq
A^2_{m_{i-1}+1} - \frac{A^{2n_i+2}_{m_{i-1}+1}}{d_i^2
\gamma_{n_i+1}^{2n_i+2}}
$$ for each $i$, and we combine these estimates using Lemma
\ref{first Woods lemma} and an obvious induction.
\end{proof}
\begin{remark}
Note that it is an open question to determine the numbers $\gamma_n$;
however, if we have a bound $\tilde{\gamma}_n \geq \gamma_n$ we may
substitute it into Proposition \ref{prop: combining Woods} in place of $\gamma_n$, as this
only makes the requirement \equ{eq: assumption of woods} stricter and
the conclusion \equ{eq: consequence}
weaker.
\end{remark}
Our goal is to apply this method to the problem of bounding the covering
radius of stable lattices. We note:
\begin{proposition}\Name{prop: KZ of stable}
If $x$ is stable then we have the inequalities
\eq{eq: defn KZ stable}{
A_1 \geq 1, \ \ A_1 A_2 \geq 1, \ \ \ldots \ \ A_1 \cdots A_{n-1} \geq 1.
}
\end{proposition}
\begin{proof}
In the above terms, the number $A_1 \cdots A_i$ is equal to
$|\Lambda|$ where $\Lambda$ is the subgroup of $x$ generated by $v_1,
\ldots, v_i$; since $x$ is stable, this covolume is at least 1.
\end{proof}
This motivates the following:
\begin{definition}
We say that an $n$-tuple of positive real numbers $A_1, \ldots, A_n$
is {\em KZ stable} if the inequalities \equ{eq: det one}, \equ{eq: KZ
bounds}, \equ{eq: defn KZ stable} are
satisfied. We denote the set of KZ stable $n$-tuples by
$\mathrm{KZS}$.
\end{definition}
Note that $\mathrm{KZS}$ is a compact subset of
${\mathbb{R}}^n$. Recall that a {\em composition of $n$} is an ordered $k$-tuple $(n_1,
\ldots, n_k)$ of positive integers, such that $n=n_1+\ldots +n_k$. As
an immediate application of Corollary \ref{cor: for
Minkowski 1} and
Propositions \ref{prop: combining Woods} and \ref{prop: KZ of stable}
we obtain:
\begin{theorem}\Name{thm: use of KZ diagonal}
For each composition $\mathcal{I} {\, \stackrel{\mathrm{def}}{=}\, } (n_1, \ldots, n_k)$ of $n$, define $m_i, d_i$ by
\equ{eq: defn mi di} and let $\mathcal{W}(\mathcal{I})$ denote
the set
$$\left\{(A_1, \ldots, A_n): \forall
i, \,
\equ{eq: assumption of woods} \text{ holds, and } \sum_{i=1}^k \left(A^2_{m_{i-1}+1} -
\frac{A^{2n_i+2}_{m_{i-1}+1}}{d_i^2 \gamma_{n_i+1}^{2n_i+2}} \right) \leq
\frac{n}{4} \right \}.
$$
If
\eq{eq: covering suffices}{
\mathrm{KZS} \subset \bigcup_{\mathcal{I}} \mathcal{W}(\mathcal{I})
}
then Minkowski's conjecture holds in
dimension $n$.
\end{theorem}
Rajinder Hans-Gill has informed the authors that using
arguments as in \cite{hans-gill1, hans-gill2}, it is possible to
verify \equ{eq: covering suffices}
in dimensions up to 7, thus
reproving Minkowski's conjecture in these dimensions.
\subsection{Local maxima of covrad}
The aim of this subsection is to prove Corollary \ref{cor: Minkowski},
which shows that in
order to establish that all stable lattices in ${\mathbb{R}}^n$ satisfy the covering
radius bound \equ{eq: covrad}, it suffices to check this on a finite
list of lattices in each dimension $d \leq n$.
The function $\covrad : {\mathcal{L}_n} \to {\mathbb{R}}$ may have local maxima, in the
usual sense; that is, lattices $x \in {\mathcal{L}_n}$ for which there is a
neighborhood $\mathcal{U}$ of $x$ in ${\mathcal{L}_n}$
such that for all $x' \in \mathcal{U}$ we have $\covrad(x') \leq
\covrad(x)$. Dutour-Sikiri\'c, Sch\"urmann and Vallentin
\cite{mathieu} gave a geometric characterization of lattices
which are local maxima of the function
$\covrad$, and showed that there are finitely many in each dimension.
Corollary \ref{cor: Minkowski} asserts that Minkowski's conjecture
would follow if all local maxima of covrad satisfy the bound \equ{eq:
covrad}.
\begin{proof}[Proof of Corollary \ref{cor: Minkowski}]
We prove by induction on $n$ that any stable lattice satisfies the
bound \equ{eq: covrad} and apply Corollary \ref{cor: for Minkowski 1}.
Let $\mathcal{S}$ denote the set of stable lattices in ${\mathcal{L}_n}$. It is compact
so the function $\covrad$ attains a maximum on $\mathcal{S}$, and it suffices
to show that this maximum is at most $\frac{\sqrt{n}}{2}$. Let $x \in \mathcal{S}$
be a point at which the maximum is attained. If $x$ is an interior
point of $\mathcal{S}$ then necessarily $x$ is a
local maximum for $\covrad$ and the required bound holds by
hypothesis. Otherwise, there is a sequence $x_j \to x$ such that
$x_j \in {\mathcal{L}_n} \setminus \mathcal{S}$; thus each $x_j$ contains a discrete subgroup
$\Lambda_j$ with $|\Lambda_j| <1$ and $r(\Lambda_j) <n$. Passing to a subsequence we may
assume that $r(\Lambda_j)=k<n$ is the same for all $j$ and that
$\Lambda_j$ converges to a discrete subgroup $\Lambda$ of $x$. Then
$\av{\Lambda}\le 1$, and since $x$ is stable we must have $|\Lambda|=1$. Let $\pi: {\mathbb{R}}^n \to ({\rm span}
\Lambda)^{\perp}$ be the orthogonal projection and let
$\Lambda' {\, \stackrel{\mathrm{def}}{=}\, } \pi(x)$.
It suffices to show that both $\Lambda$
and $\Lambda'$ are stable. Indeed, if this holds then by the induction
hypothesis, both $\Lambda$
and $\Lambda'$ satisfy \equ{eq:
covrad} in their respective dimensions $k, n-k$, and by Lemma \ref{first
Woods lemma}, so does $x$. To see that $\Lambda$ is stable, note
that any subgroup $\Lambda_0 \subset \Lambda$ is also a subgroup of
$x$, and since $x$ is stable, it satisfies $|\Lambda_0| \geq 1$. To
see that $\Lambda'$ is stable, note that if $\Lambda_0 \subset
\Lambda'$ then $\widetilde{\Lambda_0} {\, \stackrel{\mathrm{def}}{=}\, } x \cap \pi^{-1}(\Lambda_0)$ is
a discrete subgroup of $x$ so satisfies $|\widetilde{\Lambda_0}| \geq
1$. Since $|\Lambda|=1$ and $\pi$ is orthogonal, we argue as in the
proof of \equ{eq: claim 1} to obtain
$$1 \leq |\widetilde{\Lambda_0}| = |\Lambda | \cdot |\Lambda_0| =
|\Lambda_0|,$$
so $\Lambda'$ is also stable, as required.
\end{proof}
In \cite{mathieu}, it was shown that there is a unique local maximum
for covrad in dimension 1, none in dimensions 2--5, and a unique one in
dimension 6. Local maxima of covrad in dimension 7 are classified in
the manuscript \cite{mathieu2}; there are 2 such lattices. Thus
in total, in dimensions $n \leq 7$ there are 4 local maxima of the
function covrad. We
were informed by Mathieu Dutour-Sikiri\'c that these lattices all satisfy the covering radius bound
\equ{eq: covrad}. Thus Corollary \ref{cor: Minkowski} yields another proof of
Minkowski's conjecture, in dimensions $n \leq 7$. In \cite[\S
7]{mathieu}, an infinite list of lattices (denoted there by $[L_n,
Q_n]$) is defined. The list consists of one lattice in each
dimension $n \geq 6$, each of which is a local maximum for the
function covrad, and satisfies the bound \equ{eq:
covrad}. It is expected that for each $n$, this lattice has the largest covering
radius among all
local maxima in dimension $n$. In light of Corollary \ref{cor: Minkowski}, the validity
of the latter assertion would imply Minkowski's conjecture in all
dimensions.
\section{A volume computation}
\Name{sec: volume computation}
The goal of this section is the following.
\begin{theorem}\Name{vol est theorem}
Let $m$ denote the $G$-invariant probability measure on
${\mathcal{L}_n}$ derived from Haar measure on $G$, and let $\mathcal{S}^{(n)} $
denote the subset of stable lattices in ${\mathcal{L}_n}$. Then $m\left(\mathcal{S}^{(n)} \right)\longrightarrow 1$ as $n \to \infty$.
\end{theorem}
Recalling the notation \equ{eq: k quantities}, for $k=1, \ldots, n-1$,
let
$$
\mathcal{S}^{(n)}_k(t) \overset{\on{def}}{=}\set{x\in {\mathcal{L}_n}: \alpha_k(x)\ge t}, \ \ \ \mathcal{S}^{(n)}_k
{\, \stackrel{\mathrm{def}}{=}\, } \mathcal{S}^{(n)}_k(1).
$$
It is clear that
$\mathcal{S}^{(n)}=\bigcap_{k=1}^{n-1}\mathcal{S}^{(n)}_k$.
In order to prove Theorem~\ref{vol est theorem} it is enough to prove
that
\eq{1312}{
\max_{k=1, \ldots, n-1} m\left({\mathcal{L}_n} \smallsetminus \mathcal{S}^{(n)}_k \right)
= o\left(\frac1n \right),
}
as this implies
\begin{align*}
m\left(\mathcal{S}^{(n)}\right )&= 1-m\left({\mathcal{L}_n}\smallsetminus \cap_{k=1}^{n-1}
\mathcal{S}^{(n)}_k \right)=1-m\left( \cup_{k=1}^{n-1} \left({\mathcal{L}_n}\smallsetminus
\mathcal{S}^{(n)}_k \right) \right)\\
&\ge 1-\sum_{k=1}^{n-1}m \left({\mathcal{L}_n} \smallsetminus
\mathcal{S}^{(n)}_k\right)=1-(n-1)o\left(\frac1n\right)\overset{n\to\infty}{\longrightarrow}1.
\end{align*}
We will actually prove a bound which is stronger than \equ{1312}, namely:
\begin{proposition}\Name{prop: strengthening volume} There is $C_1>0$
such that if we set
\eq{eq: choice of t}{
t_k=t(n,k) {\, \stackrel{\mathrm{def}}{=}\, } \left(\frac{n}{C_1} \right)^{\frac{k(n-k)}{2n} },
}
then
$$ \max_{k=1, \ldots, n-1} m\left({\mathcal{L}_n} \smallsetminus
\mathcal{S}^{(n)}_k\left(t_k\right) \right) =o
\left( \frac1n \right).
$$
In particular, $m\left(\bigcap_{k=1}^{n-1}
\mathcal{S}^{(n)}_k\left(t_k\right ) \right) \to_{n \to \infty} 1.$
\end{proposition}
Let
$$\gamma_{n,k} {\, \stackrel{\mathrm{def}}{=}\, } \sup_{x \in {\mathcal{L}_n}} \alpha_k(x).
$$
Recall that {\em Rankin's constants}, or the {\em generalized Hermite
constants}, are defined as $\gamma_{n,k}^2$ (note that our notation
differs from the traditional one by a square root).
Thunder \cite{Thunder} computed upper and lower bounds on
$\gamma_{n,k}$ and in particular established the growth
rate of $\gamma_{n,k}$. The numbers
$t(n,k)$ have the same growth rate. Thus
Proposition \ref{prop: strengthening volume} should
be interpreted as saying that the lattices in ${\mathcal{L}_n}$ for which the
value of each $\alpha_k$ is
close to the maximum possible value occupy almost all of the measure of ${\mathcal{L}_n}$.
The proof of Proposition \ref{prop: strengthening volume} relies on
Thunder's work, which in turn was based on a variant of Siegel's
formula~\cite{SiegelFormula} which relates the Lebesgue measure
on $\mathbb{R}^n$ and the measure $m$ on ${\mathcal{L}_n}$. We now review Siegel's
method and Thunder's results.
In the sequel we consider $n \geq 2$ and $k \in \{1,
\ldots, n-1\}$ as fixed and omit, unless there is risk of confusion,
the symbols $n$ and $k$ from the notation.
Consider the (set-valued) map $\Phi=\Phi^{(n)}_k$ that assigns to
each lattice $x\in {\mathcal{L}_n}$ the following subset
of $\wedge^k\mathbb{R}^n$:
$$\Phi (x)\overset{\on{def}}{=}\set{\pm w_\Lambda:\Lambda \subset x\textrm{ a
primitive subgroup with } r(\Lambda)=k},$$
where $w_\Lambda\overset{\on{def}}{=} v_1\wedge\dots\wedge v_k$ and $\set{v_i}_1^k$
forms a basis for $\Lambda$ (note that $w_\Lambda$ is well defined up to
sign, and $\Phi(x)$ contains both possible choices).
Let $$\mathscr{V} = \mathscr{V}^{(n)}_k \overset{\on{def}}{=}
\set{v_1\wedge\dots\wedge v_k: v_i\in\mathbb{R}^n} \setminus \{0\}$$
be the variety of pure tensors in $\wedge^k\mathbb{R}^n$.
For any compactly supported bounded Riemann integrable function $f$
on $\mathscr{V}$ set
\eq{eq: finite sum}{\hat{f}: {\mathcal{L}_n} \to {\mathbb{R}}, \ \ \ \
\hat{f}(x)\overset{\on{def}}{=}\sum_{w\in\Phi (x)}f(w).}
Then it is known (see \cite{Weil}) that the (finite) sum \equ{eq:
finite sum}
defines a function in $L^1({\mathcal{L}_n}, m)$.
Let $\theta = \theta^{(n)}_k$ denote the Radon measure on $\mathscr{V}$
defined by
\begin{equation}\label{1420}
\int_{\mathscr{V}} f d\theta \overset{\on{def}}{=}\int_{{\mathcal{L}_n}} \hat{f} \, dm, \text{ for
\ } f\in C_c(\mathscr{V}).
\end{equation}
In this section we write $G=G_n {\, \stackrel{\mathrm{def}}{=}\, } \operatorname{SL}_n({\mathbb{R}})$.
There is a natural transitive action of $G_n$ on
$\mathscr{V}
$ and the stabilizer of
$e_1\wedge\dots\wedge e_k$ is the subgroup
$$H= H^{(n)}_k {\, \stackrel{\mathrm{def}}{=}\, } \left\{ \smallmat{A&B\\0&D} \in G:
A \in G_{k} , D \in G_{n-k} \right \}. $$
We therefore obtain an identification $\mathscr{V}\simeq G/H$ and view
$\theta$ as a measure on $G/H$.
It is well-known
(see e.g.~\cite{Raghunathans_book}) that up to a proportionality
constant there exists a unique $G$-invariant measure
$m_{G/H}$ on $G/H$; moreover, given Haar
measures $m_{G}, m_{H}$ on $G$ and $H$ respectively, there is a
unique normalization of $m_{G/H}$ such that
for any $f\in L^1(G,m_G)$
\eq{1440}{
\int_G f \, dm_G =\int_{G/H}\int_{H} f(gh) dm_{H}(h)dm_{G/H}(gH).
}
We choose the Haar measure $m_G$ so that it
descends to our probability measure $m$ on ${\mathcal{L}_n}$; similarly, we
choose the Haar measure $m_{H}$ so that the periodic orbit
$H\mathbb{Z}^n \subset {\mathcal{L}_n}$ has volume 1. These choices of Haar measures
determine our measure $m_{G/H}$ unequivocally.
It is clear from the defining formula~\eqref{1420} that $\theta$ is
$G$-invariant and therefore
the two measures $m_{G/H}, \theta$ are proportional. In fact (see
\cite{SiegelFormula} for the case $k=1$ and \cite{Weil} for the general case),
\eq{eq: Siegel normalization}{m_{G/H} = \theta.
}
For $t>0$,
let $\chi=\chi_t:\mathscr{V}\to\mathbb{R}$ be the restriction to $\mathscr{V}$ of the
characteristic function of the ball of radius $t$ around the origin, in $\wedge^k\mathbb{R}^n$.
Note that
$\hat{\chi}(x)=0$ if and only if $x\in \mathcal{S}^{(n)}_k(t)$ and furthermore,
$\hat{\chi}(x)\ge 1$ if $x\in {\mathcal{L}_n}\smallsetminus \mathcal{S}^{(n)}_k(t)$.
It follows that
\eq{eq: using chi}{m\left({\mathcal{L}_n}\smallsetminus
\mathcal{S}_k^{(n)}(t)\right)\le\int_{{\mathcal{L}_n}}\widehat{(\chi_t)} dm =\int_{\mathscr{V}}\chi_t
d\theta.
}
Let $V_j$ denote the volume of the Euclidean unit ball in
$\mathbb{R}^j$ and let $\zeta$ denote the Riemann zeta function. We adopt
the unconventional convention $\zeta(1)=1$, which will make our
formulae simpler.
For $j \geq 1$, define
$$
R(j) {\, \stackrel{\mathrm{def}}{=}\, }
\frac{j^2 V_j}{ \zeta(j)}
$$
and
$$B( n,k)\overset{\on{def}}{=} \frac{\prod_{j=1}^nR(j)}{\prod_{j=1}^k R(j)\prod_{j=1}^{n-k}R(j)}.$$
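It is convenient to note two immediate consequences of this definition: the symmetry $B(n,k)=B(n,n-k)$, and, after cancelling the factor $\prod_{j=1}^{n-k}R(j)$, the telescoped form
$$B(n,k)=\frac{\prod_{j=n-k+1}^{n}R(j)}{\prod_{j=1}^{k}R(j)}=\prod_{j=1}^{k}\frac{R(n-k+j)}{R(j)},$$
which is the starting point of the estimate in Lemma~\ref{lem: bound on Bin} below.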
The following calculation was carried out in~\cite{Thunder}.
\begin{theorem}[Thunder]\label{Thunder}
For $t>0$, we have
$$\int_{\mathscr{V} } \chi_t \, dm_{G/H}
=B( n,k)\frac{ t^n}{n}.$$
\end{theorem}
We will need to bound $B( n,k)$.
\begin{lemma}\Name{lem: bound on Bin}
There is $C> 0$ so that for all large enough $n$ and
all $k=1, \ldots, n-1$,
\eq{eq: bound on Bin}{B( n,k)\leq
\left(\frac{C}{n}\right)^{\frac{k(n-k)}{2}}.
}
\end{lemma}
\begin{proof}In this proof $c_0, c_1, \ldots$ are constants independent of $n, k, j$.
Because of the symmetry $B(n,k)=B(n,n-k)$ it is
enough to prove \equ{eq: bound on Bin} with $k\leq \frac{n}{2}.$
Using
the formula
$V_j=\frac{\pi^{j/2}}{\Gamma\left(\frac{j}{2}+1\right)}$ we obtain
\begin{align*}
B(n,k)&=\prod_{j=1}^k\frac{R(n-k+j)}{R(j)}
=\prod_{j=1}^k\frac{\zeta(j)(n-k+j)^2\frac{\pi^{(n-k+j)/2}}{\Gamma(\frac{n-k+j}{2}+1)}}
{\zeta(n-k+j)j^2\frac{\pi^{j/2}}{\Gamma(\frac{j}{2}+1)}}\\
&=\prod_{j=1}^k \frac{\zeta(j)}{\zeta(n-k+j)}\cdot\pa{\frac{n-k+j}{j}}^2\cdot\pi^{\frac{n-k}{2}}\cdot
\frac{\Gamma(\frac{j}{2}+1)}{\Gamma(\frac{n-k+j}{2}+1)}.
\end{align*}
Note that $\zeta(s) \geq 1$ is a decreasing function of $s>1$, so
(recalling our convention $\zeta(1)=1$)
$\frac{\zeta(j)}{\zeta(n-k+j)} \leq c_0 {\, \stackrel{\mathrm{def}}{=}\, } \zeta(2)$.
It follows that for all large enough $n$ and
for any $1\le j\le k, $
\eq{eq: estimate first part}{
\frac{\zeta(j)}{\zeta(n-k+j)}\cdot\pa{\frac{n-k+j}{j}}^2\cdot\pi^{\frac{n-k}{2}}\le c_0
n^2\pi^{\frac{n-k}{2}}\le 4^{\frac{n-k}{2}}.
}
According to Stirling's formula, there are positive constants $c_1,
c_2$ such that for all $x \geq 2$,
$$
c_1 \sqrt{\frac{2\pi}{x}}\left(\frac{x}{e} \right)^x \leq \Gamma(x)
\leq c_2 \sqrt{\frac{2\pi}{x}}\left(\frac{x}{e} \right)^x.
$$
We set $u {\, \stackrel{\mathrm{def}}{=}\, } \frac{j}{2}+1$ and $v {\, \stackrel{\mathrm{def}}{=}\, } \frac{n-k}{2} $, so that
$u+v \geq \frac{n-1}{4}$,
and obtain
\eq{eq: estimate second part}{
\begin{split}
\frac{\Gamma(\frac{j}{2}+1)}{\Gamma(\frac{n-k+j}{2}+1)} & =
\frac{\Gamma(u)}{\Gamma(u+v)} \leq \frac{c_2}{c_1}
\sqrt{\frac{u+v}{u}}\frac{u^u}{(u+v)^{u+v}} \frac{e^{u+v}}{e^u} \\
& \leq c_3 e^v \frac{u^{u-1/2}}{(u+v)^{u+v-1/2}} = c_3
\left(\frac{e}{u+v}\right)^v \frac{1}{\left(1+\frac{v}{u}
\right)^{u-1/2}}
\\
& \leq c_3 \left( \frac{4e}{n-1} \right)^{\frac{n-k}{2}}.
\end{split}
}
Using \equ{eq: estimate first part} and \equ{eq: estimate
second part} we obtain
$$
B( n,k) \leq \left[c_3 4^{\frac{n-k}{2}}
\left(\frac{4e}{n-1}\right)^{\frac{n-k}{2}} \right]^k = \left[ c_3
\left(\frac{16e}{n-1} \right)^{\frac{n-k}{2}} \right]^k.
$$
So taking $C > 16c_3 e$
we obtain \equ{eq: bound on
Bin} for all large enough $n$.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop: strengthening volume}]
Let $C$ be as in Lemma \ref{lem: bound on Bin} and let $C_1>C$.
Then by \equ{eq: using chi}, \equ{eq: Siegel normalization} and
Theorem~\ref{Thunder}, for all sufficiently large $n$ we have
\[\begin{split}
m\left({\mathcal{L}_n} \smallsetminus \mathcal{S}^{(n)}_k(t_k) \right) & \leq
B(n,k) \frac{t_k^n}{n} \\
& \leq \frac1n \left(\frac{C}{n} \right)^{\frac{k(n-k)}{2}}
\left(\frac{n}{C_1} \right)^{\frac{k(n-k)}{2}} = \frac1n \left(\frac{C}{C_1} \right)^{\frac{k(n-k)}{2}}.
\end{split}
\]
Multiplying by $n$ and taking the maximum over $k$ we obtain
$$
n \, \max_{k=1, \ldots, n-1} m\left({\mathcal{L}_n} \smallsetminus
\mathcal{S}^{(n)}_k(t_k) \right) \leq \left(\frac{C}{C_1}
\right)^{\frac{n-1}{2}} \to_{n\to \infty} 0.
$$
\end{proof}
\section{Closed $A$-orbits and well-rounded lattices} \Name{sec:
closed orbits}
It is an immediate consequence of Theorem \ref{thm: main} that any
closed $A$-orbit contains a stable lattice. The purpose of this
section is to show that the same is true for the set of well-rounded
lattices.
Note that this was proved by McMullen for {\em compact} orbits, but
for general closed orbits it does not follow from his results. Our
proof relies on previous work of Tomanov and the second-named author
\cite{TW}, on \cite{gruber}, and on a covering result (communicated to
the authors by Michael Levin), whose proof is given in the appendix to
this paper.
\begin{theorem}\Name{thm: closed orbits}
For any $n$, any closed orbit $Ax \subset {\mathcal{L}_n}$ contains a well-rounded lattice.
\end{theorem}
We will require the following topological result which generalizes
Theorem \ref{topological input}.
Let $s,t$ be
natural numbers, and let $\Delta$ denote the
$s$-dimensional simplex, which we think of concretely as $\mathrm{conv} (\mathbf{e}_1,
\ldots, \mathbf{e}_{s+1})$.
We will discuss covers of $M {\, \stackrel{\mathrm{def}}{=}\, } \Delta \times {\mathbb{R}}^t $, and give conditions
guaranteeing that such a cover must cover a point at least $s+t+1$
times.
For
$j=1, \ldots, s+1$ let $F_j$ be the face
of $\Delta$ opposite to $\mathbf{e}_j$, that is $F_j = \mathrm{conv} (\mathbf{e}_i: i
\neq j)$. Also let $M_j {\, \stackrel{\mathrm{def}}{=}\, } F_j \times {\mathbb{R}}^t $ be the corresponding
subset of $M$.
\begin{theorem}
\Name{thm: covering}
Suppose that $\mathcal{U}$ is a cover of $M$
by open sets
satisfying the following conditions:
\begin{enumerate}[(i)]
\item\Name{09301}
For any connected component $U$ of any element of $\mathcal{U}$ there exists $j$
such that $U \cap M_j =
\varnothing.$
\item\Name{09302}
There is $R$ so that for
any connected component $U$ of the intersection of $k \leq s+t$ distinct
elements of $\mathcal{U}$,
the projection of $U$ to ${\mathbb{R}}^t$ is $(R, s+t-k)$-almost
affine.
\end{enumerate}
Then
there is a point of $M $ which is covered at least
$s+t+1$ times.
\end{theorem}
Note that hypothesis~\eqref{09302} is trivially satisfied when $k \leq s $,
since any subset of ${\mathbb{R}}^t$ is $(1, t)$-almost affine.
Note also that Theorem \ref{topological input} is the case $s=0$ of this
statement. We give the proof of Theorem \ref{thm: covering} in the appendix.
We will need some preparations in order to deduce Theorem~\ref{thm:
closed orbits} from Theorem~\ref{thm: covering}. For $1\le d\le n$,
let $$\tb{I}^n_d\overset{\on{def}}{=}\set{(i_1,\dots,i_d): 1\le i_1<\dots<i_d\le n}$$
denote the collection of multi-indices of length $d$ and
for $J = (i_1, \ldots, i_d)\in\tb{I}^n_d$ let
$e_J {\, \stackrel{\mathrm{def}}{=}\, } e_{i_1} \wedge \cdots \wedge e_{i_d}.
$
We equip $\bigwedge_1^d\mathbb{R}^n$ with the inner product with
respect to which $\{e_J\}$ is an orthonormal basis, and denote by $\mathcal{E}_{d,n}$
the quotient of $\bigwedge_1^d\mathbb{R}^n$ by the equivalence relation $w
\sim -w$. Note that the product of an element of $\mathcal{E}_{d,n}$ with a
positive scalar is well-defined. We will (somewhat imprecisely) refer to elements of $\mathcal{E}_{d,n}$
as vectors. Given a subspace $L \subset \mathbb{R}^n$ with $\dim L= d$,
we denote by
$w_L\in \mathcal{E}_{d,n}$ the image of a vector of norm one in
$ \bigwedge_1^d L.$
If $\Lambda \subset \mathbb{R}^n$ is a discrete subgroup of rank $d$, we
denote by $w_\Lambda\in \mathcal{E}_{d,n}$
the image of the vector
$v_1\wedge\dots\wedge v_d,$ where $\set{v_i}_1^d$ forms a basis for
$\Lambda$. The reader may verify that these vectors are well-defined and
satisfy $w_{\Lambda} = |\Lambda| w_L$ where $L = {\rm span} \Lambda$.
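For example, for $\Lambda=2\mathbb{Z}\mathbf{e}_1\oplus 3\mathbb{Z}\mathbf{e}_2$ we may take $v_1=2\mathbf{e}_1,\ v_2=3\mathbf{e}_2$, so that $w_\Lambda$ is the image of $v_1\wedge v_2=6\,\mathbf{e}_1\wedge\mathbf{e}_2$; here $L={\rm span}(\mathbf{e}_1,\mathbf{e}_2)$, $w_L$ is the image of $\mathbf{e}_1\wedge\mathbf{e}_2$, and $|\Lambda|=6$.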
We denote the natural action of $ G$ on $\mathcal{E}_{d,n}$ arising from the
$d$-th exterior power of the linear action on $\mathbb{R}^n$, by $(g, w)
\mapsto gw$.
Given a subspace $L \subset \mathbb{R}^n$ and a discrete subgroup $\Lambda$ we set
$$A_L\overset{\on{def}}{=}\set{a\in A:
aw_L=w_L} \text{ and } A_\Lambda {\, \stackrel{\mathrm{def}}{=}\, } \{a \in A: aw_\Lambda = w_\Lambda\}.$$
Note that the requirement
$aw_L=w_L$ is equivalent to saying that $aL=L$ and $\det(a|_L)=1$.
Given a flag
\begin{equation}\label{flag}
\crly{F}=\set{ 0 \varsubsetneq L_1\varsubsetneq\dots\varsubsetneq L_k\varsubsetneq \mathbb{R}^n}
\end{equation}
(not necessarily full), let
$A_{\crly{F}}\overset{\on{def}}{=} \bigcap_i A_{L_i}.$
The {\em support} of an element $w \in \mathcal{E}_{d,n}$ is the subset of
$\tb{I}^n_d$ for which the corresponding coefficients of an element of
$\bigwedge^d\mathbb{R}^n$ representing $w$ are nonzero, and we write
$\on{supp}(L)$ or $\on{supp}(\Lambda)$ for the supports of $w_L$ and
$w_{\Lambda}$. For $J = \set{i_1<\dots<i_d } \in \tb{I}^n_d$, set $\mathbb{R}^J
{\, \stackrel{\mathrm{def}}{=}\, } {\rm span} (e_{i_j})$ and define the multiplicative characters
$$ \chi_J: A \to {\mathbb{R}}^*, \ \chi_J(a) \overset{\on{def}}{=} \det
(a|_{\mathbb{R}^J}).$$
Then
for any subspace $L\subset \mathbb{R}^n$,
\eq{1636}{A_L=\bigcap_{J \in \supp (L)} {\rm Ker} \chi_J}
(and similarly for discrete subgroups $\Lambda$).
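For example, if $L=\mathbb{R}^J$ for some $J\in\tb{I}^n_d$, then $\on{supp}(L)=\set{J}$ and \equ{1636} reads $A_L={\rm Ker}\,\chi_J$, a subgroup of co-dimension one in $A$; for subspaces in general position the support is all of $\tb{I}^n_d$ and $A_L$ is correspondingly smaller.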
As in \S \ref{establishing
topological input} we fix an invariant metric on $A$. In order to
verify hypothesis \eqref{09302} of Theorem \ref{thm: covering}, we will need the following
lemmas (cf. \cite[Theorem 6.1]{McMullenMinkowski}):
\begin{lemma}\label{bdd dist from stab}
Let $T \subset A$ be a closed subgroup and let $x\in\mathcal{L}_n$ be a
lattice with a
compact $T$-orbit. Then for any $C>0$ there exists
$R>0$
such that for any collection $\set{\Lambda_i}$ of subgroups of $x$, there exists $b\in A$ such that
\begin{equation}\label{2052}
\left \{a\in T:\forall i\; \norm{aw_{\Lambda_i}}\le C \right\} \subset R
\text{-neighborhood of } b\pa{\bigcap_iA_{\Lambda_i}}.
\end{equation}
\end{lemma}
\begin{proof}
We will identify $A$ with its Lie algebra $\mathfrak{a}$ via the
exponential map, and think of the subgroups $A_\Lambda$ as
subspaces.
By \equ{1636} only finitely many subspaces arise as $A_\Lambda$.
In particular, given a collection of discrete subgroups
$\set{\Lambda_i}$, the angles between the spaces they span (if nonzero) are bounded
below. Therefore
there exists a function $\psi:\mathbb{R}\to\mathbb{R}$ with $\psi(R)\to_{R\to\infty}\infty$, such that
\begin{align}\label{336}
&\set{a\in A: \forall\; J\in\cup_i \on{supp}(w_{\Lambda_i}),\; \psi(R)^{-1}\le \chi_J(a)\le \psi(R)}\subset\\
\nonumber &\set{a\in A: d(a,\cap_iA_{\Lambda_i})\le R}.
\end{align}
Since $Tx$ is compact, there exists a compact subset
$\Omega\subset T$
such that for any $a\in T$ there exists $b = b(a) \in T$ satisfying $bx=x$ and $b^{-1}a\in\Omega$.
It follows that there exists $M \geq 1$ such that:
\begin{enumerate}[(I)]
\item\label{1708} for any subspace $L$, $||bw_L||\le M||aw_L||$.
\item\label{1747} for any multi-index $J$, $\chi_J(ba^{-1})\le M$.
\end{enumerate}
Given $C>0$, let $C' {\, \stackrel{\mathrm{def}}{=}\, } MC$ and consider the finite set
$$\crly{S}\overset{\on{def}}{=} \set{\Lambda \subset x: ||w_{\Lambda}||\le C'}.$$
For any $\Lambda\in\crly{S}$ write
$w_{\Lambda}=\sum_{J\in\on{supp}(w_{\Lambda})} \alpha_J(\Lambda) e_J.$
Let $\varepsilon>0$ be small enough so that
\begin{align*}
\varepsilon&<\min\set{\av{\alpha_J(\Lambda)}:\Lambda\in\crly{S}, J\in\on{supp}(w_{\Lambda})},
\end{align*}
and choose $R$ large enough so that $\psi(R)>MC'/\varepsilon$. We claim that
for any $\set{\Lambda_i}\subset \crly{S}$,
\begin{equation}\label{2053}
\set{a\in T:\forall i\; \norm{aw_{\Lambda_i}}\le C}\subset \set{a\in T: d(a, \cap_i A_{\Lambda_i})\le R}.
\end{equation}
To prove this claim, suppose $a$ is an element on the left hand side
of~\eqref{2053}.
By~\eqref{336}
it is enough to show that for any $J\in\cup_i\on{supp}(\Lambda_i)$ we have
$\psi(R)^{-1}\le \chi_J(a)\le\psi(R)$.
Since the coefficient of $e_J$ in the
expansion of $aw_{\Lambda_i}$ is $\chi_J(a)\alpha_J(\Lambda_i)$ and since
$||aw_{\Lambda_i}||\le C$, we have
$$\chi_J(a)\le
\frac{C}{|\alpha_J(\Lambda_i)|}\le\frac{C}{\varepsilon}\le \psi(R).$$
On the other hand, letting $b = b(a)$
we have $b\Lambda_i\in\crly{S}$ from \eqref{1708}, and
\begin{align*}
\varepsilon\le |\alpha_J(b\Lambda_i)| & =\chi_J(b) |\alpha_J(\Lambda_i)| \ \Longrightarrow \ \chi_J(b^{-1})\le C'/\varepsilon \\
&\overset{\textrm{\eqref{1747}}}{\Longrightarrow} \ \chi_J(a^{-1})=\chi_J(a^{-1}b)\chi_J(b^{-1})\le MC'/\varepsilon\le\psi(R),
\end{align*}
which completes the proof of \eqref{2053}.
Let $\set{\Lambda_i}$ be any collection of subgroups of $x$
and assume that the set on the left hand side of~\eqref{2052} is non-empty. That is, there
exists $a_0\in T$ such that for all $i$, $||a_0w_{\Lambda_i}||\le C$.
Let $b = b(a_0)\in T$, and set $\Lambda'_i {\, \stackrel{\mathrm{def}}{=}\, } b\Lambda_i$. It follows that $\set{\Lambda'_i}\subset\crly{S}$
and so
\begin{align*}
\set{a\in T:\forall i\norm{aw_{\Lambda_i}}\le C}
&=b\set{a\in T: \forall i \norm{aw_{\Lambda'_i}}\le C}\\
&\stackrel{\eqref{2053}}{\subset} b \set{a\in T: d(a,\cap_i A_{\Lambda_i'})\le R}\\
&= \set{a\in T: d(a, b\pa{\cap_i A_{\Lambda_i}})\le R},
\end{align*}
where in the last equality we used the fact that
$A_{\Lambda_i'}=A_{\Lambda_i}$ because $A$ is commutative.
\end{proof}
\begin{lemma}\Name{lem: flag}
Let $\crly{F}$ be a flag as in \eqref{flag} and let
$A_{\crly{F}}$ be its stabilizer. Then $A_{\crly{F}}$ is of
co-dimension
$\ge k$ in $A$.
\end{lemma}
\begin{proof}
Given a nested sequence of multi-indices
$J_1\varsubsetneq\dots\varsubsetneq J_k$ it is clear that the subgroup
$$
\bigcap_{i=1}^k {\rm Ker} \chi_{J_i}
$$
is of co-dimension $k$ in $A$. In light of \eqref{1636},
it suffices to prove the following claim:
\quad\\
\noindent \textit{Let $\crly{F}$ be a flag as
in~\eqref{flag} with $d_i\overset{\on{def}}{=} \dim L_i$. Then there is a nested
sequence
of multi-indices $J_i\in\tb{I}^n_{d_i}$ such that $J_i\in\on{supp}(L_i)$.}
\quad\\
In proving the claim we will assume with no loss of generality that
the flag is complete. Let $v_1,\dots, v_n$ be a basis of $\mathbb{R}^n$ such
that $L_i=\on{span}\set{v_j}_{j=1}^i$ for
$i=1,\dots, n-1.$
Let $T$ be the $n\times n$ matrix whose columns are $v_1, \dots, v_n$.
Given a multi-index $J$ of length $\av{J}$, we denote by $T_J$
the square matrix of dimension $\av{J}$ obtained
from $T$ by deleting the last $n-\av{J}$ columns and the rows
corresponding to the indices not in $J$. Note that with this
notation, possibly after replacing some of the $v_i$'s by their scalar
multiples, each $w_{L_d}$ is the image in
$\mathcal{E}_{d,n}$ of
\begin{equation}\label{1159}
v_1\wedge\dots\wedge v_d=\sum_{J\in \tb{I}^n_d} (\det T_J) e_J.
\end{equation}
In particular, $J\in\on{supp}(L_d)$ if and only if $\det T_J\neq 0$.
We construct the nested sequence
$J_d$ by reverse induction on $d =n, \ldots, 1$. Let $J_n=\set{1,\dots, n}$, so that
$T=T_{J_n}$.
Suppose we are given multi-indices
$J_{n}\supset\dots\supset J_{d+1}$ such that
$J_i\in\on{supp}(w_{L_i})$
for $i=n,\dots, d+1$. We now want to define a multi-index
$J_d\in\on{supp}(w_{L_d})$ which is
contained in $J_{d+1}$. By~\eqref{1159}, $\det T_{J_{d+1}}\neq 0$.
Expanding along the last column expresses
$\det T_{J_{d+1}}$
as a linear combination of $\set{\det T_J:J\subset J_{d+1},
\av{J}=d}$. We
conclude that there must exist at least one multi-index $J_d\subset
J_{d+1}$ for which $\det T_{J_d}\ne 0$. In turn, by~\eqref{1159}
this means that $J_d\in\on{supp}(w_{L_d})$. This finishes the proof
of the claim.
\end{proof}
The following notation is analogous to Definition \ref{bn}.
Given a lattice $x\in {\mathcal{L}_n}$ and $\delta>0$ let
\begin{align*}
\on{Min}^*_{\delta}(x)&\overset{\on{def}}{=}\set{v \in x \setminus \{0\} : \|v\|<(1+\delta)\alpha_1(x)}.\\
\tb{V}^*_{\delta}(x)&\overset{\on{def}}{=}\on{span}\on{Min}^*_{\delta}(x).\\
\dim^*_\delta(x)&\overset{\on{def}}{=}\dim\tb{V}^*_{\delta}(x).
\end{align*}
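For example, for $x=\mathbb{Z}^n$ we have $\alpha_1(x)=1$, and for all sufficiently small $\delta>0$ the set $\on{Min}^*_{\delta}(\mathbb{Z}^n)$ consists precisely of the vectors $\pm\mathbf{e}_1,\dots,\pm\mathbf{e}_n$, so that $\tb{V}^*_{\delta}(\mathbb{Z}^n)={\mathbb{R}}^n$ and $\dim^*_\delta(\mathbb{Z}^n)=n$, reflecting the fact that $\mathbb{Z}^n$ is well-rounded.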
Finally, for $\varepsilon>0$, let $\mathcal{U}^{(\varepsilon)}=\set{U_j^{(\varepsilon)}}_{j=1}^n$
be the collection of open subsets of $A$ defined by
\eq{eq: cover}{ U_j = U_j^{(\varepsilon)} {\, \stackrel{\mathrm{def}}{=}\, } \{a \in A: \text{for all }
\delta \text{ in a neighborhood of }
j\varepsilon, \, \dim^*_{\delta} (ax) =j \}. }
Similarly to the discussion in Lemma \ref{lem: positive inradius} we see that
$\mathcal{U}^{(\varepsilon)}$ is an open cover of $A$.
\begin{proof}[Proof of Theorem \ref{thm: closed orbits}]
The strategy of proof is very similar to that of Theorem~\ref{thm: main}. We consider
covers $\mathcal{U}^{(\varepsilon)}$ of $A$ and use Theorem~\ref{thm: covering}
to deduce that $U_n^{(\varepsilon)}$ is non-empty.
The first step towards applying Theorem~\ref{thm: covering} is to find a decomposition
$A\simeq\mathbb{R}^{n-1}=\mathbb{R}^s\times\mathbb{R}^t$
and a simplex $\Delta\subset \mathbb{R}^s$, so that the restriction of the cover to
$\Delta\times\mathbb{R}^t$ satisfies the two hypotheses of Theorem~\ref{thm: covering}.
According to \cite{TW, gruber}, there is a
decomposition $A = T_1
\times T_2$ and a direct sum decomposition ${\mathbb{R}}^n = \bigoplus_1^{d} V_i$
such that the following hold:
\begin{itemize}
\item
Each $V_i$ is spanned by some of the standard basis vectors.
\item
$T_1$ is the group of linear transformations
which act on each $V_i$ by a homothety, preserving Lebesgue measure on
${\mathbb{R}}^n$. In particular $s {\, \stackrel{\mathrm{def}}{=}\, } \dim T_1 = d-1$.
\item
$T_2$ is the group of diagonal (with respect to the standard basis)
matrices whose restriction to each $V_i$ has determinant 1.
\item $T_2 x$ is compact and $T_1 x$ is divergent;
i.e. $Ax \cong T_1 \times T_2/(T_2)_x$, where
$(T_2)_x {\, \stackrel{\mathrm{def}}{=}\, } \{a \in T_2: ax= x\}$.
\item
Setting $\Lambda_i {\, \stackrel{\mathrm{def}}{=}\, } V_i \cap x$, each $\Lambda_i$ is a
lattice in $V_i$, so that $\bigoplus \Lambda_i$ is of finite index in
$x$.
\end{itemize}
For $a \in T_1$ we write $\chi_i(a)$ for the number
satisfying $av = e^{\chi_i(a)}v$ for all $v \in V_i$. Thus each $\chi_i$
is a homomorphism from $T_1$ to the additive group of real
numbers. The mapping $a \mapsto \bigoplus_i \chi_i(a)
\mathrm{Id}_{V_i}$, where $\mathrm{Id}_{V_i}$ is the identity map on
$V_i$, is nothing but the logarithmic map of $T_1$ and it endows
$T_1$ with the structure of a vector space. In particular we can discuss the
convex hull of subsets of $T_1$.
For each $\rho$ we let
$$\Delta_\rho {\, \stackrel{\mathrm{def}}{=}\, } \{a \in T_1: \max_i \chi_i(a) \leq \rho\}.$$
Then
$\Delta_\rho = \conv (b_1, \ldots, b_d)$ where $b_i$ is the diagonal
matrix acting on each $V_j, j \neq i$ by multiplication by $e^\rho$, and
contracting $V_i$ by the appropriate constant ensuring that $\det b_i
=1$.
Let $P_i : {\mathbb{R}}^n \to
V_i$ be the natural projection associated with the decomposition ${\mathbb{R}}^n =
\bigoplus V_i$. Each $P_i(x)$ is of finite index in $\Lambda_i$ and
hence discrete in $V_i$. Moreover, the orbit $T_2 x$ is compact, so
for each $a \in T_2$ there is $a'$ belonging to a bounded subset of $T_2$
such that $ax=a'x$. This implies that there is
$\eta>0 $ such that for any $i$ and any $a \in T_2$, if $v \in ax$ and $P_i(v) \neq 0$
then $\| P_i(v)\| \geq \eta$. Let $C>0$ be large enough so that
$\alpha_1(x') \leq C$ for any $x' \in {\mathcal{L}_n}$. Let
$\rho$ be large enough so that
\eq{eq: choice of R}{e^\rho\eta >
2C. }
We restrict the covers $\mathcal{U}^{(\varepsilon)}$
(where
$\varepsilon \in (0,1/n)$) to $\Delta_\rho \times
T_2$ and apply Theorem \ref{thm: covering} with $t {\, \stackrel{\mathrm{def}}{=}\, } \dim T_2 = n-d$.
If we show that the
hypotheses of Theorem \ref{thm: covering} are
satisfied for each
cover $\mathcal{U}^{(\varepsilon)},$ we will obtain $U_n^{(\varepsilon)} \neq \varnothing.$ Then, taking $\varepsilon_j
\to 0$ and applying a
compactness argument, we find a well-rounded lattice in
$(\Delta_\rho \times T_2)x$.
Let $U$ be a connected subset of
$U_k^{(\varepsilon)} \in \mathcal{U}^{(\varepsilon)}$. Repeating the arguments proving Lemma
\ref{flat things}, or
appealing to \cite[\S7]{McMullenMinkowski}, we see that the
$k$-dimensional subspace
$L\overset{\on{def}}{=} a^{-1}\tb{V}^*_{k\varepsilon} (ax)$
as well as the discrete subgroup $\Lambda\overset{\on{def}}{=} L\cap x$
are independent of the choice of $a \in U$.
By definition of $U_k^{(\varepsilon)}$, for any $a \in U$, $a \Lambda$ contains $k$ vectors $v_i = v_i(a),
i=1, \ldots,
k$ which span $aL$
and satisfy \eq{eq: vi satisfy}{
\|v_i\| \in [r, (1+k\varepsilon)r ], \ \ \text{where \ } r {\, \stackrel{\mathrm{def}}{=}\, } \alpha_1(ax).
}
In order to
verify hypothesis~\eqref{09301} of Theorem \ref{thm:
covering}, we need to show that there
is at least one $j$ for which $U \cap M_j =
\varnothing$. Let $P_1, \ldots, P_d$ be the projections above. Since
${\rm Ker} P_1 \cap \cdots \cap {\rm Ker} P_d = \{0\}$ and $\dim L = k \geq 1$, it suffices to show that
whenever $U \cap M_j \neq \varnothing$, $L \subset {\rm Ker} P_j$.
The face $F_j$ of $\Delta_\rho$ consists of those elements $a_1 \in T_1$
which expand vectors in $V_j$ by a factor of
$e^\rho$. If $U \cap M_j \neq \varnothing$ then there is $a \in T_2, a_1
\in F_j$ so that $a_1a \in U$. Now \equ{eq: choice of R}, \equ{eq: vi
satisfy} and the choice of $\eta$ and $C$ ensure that
the vectors $v_i = v_i(a_1a)$ satisfy $P_j(v_i)=0$. Therefore $L \subset
{\rm Ker} P_j$.
It remains to
verify hypothesis~\eqref{09302} of Theorem \ref{thm: covering}.
Let $U$ be a connected subset of an intersection $U_{i_1}\cap\dots\cap
U_{i_k}\cap(\Delta_\rho\times T_2)$ and let
$L_{i_j}\overset{\on{def}}{=} a^{-1}\tb{V}^*_{i_j\varepsilon} (ax)$ and $\Lambda_{i_j}\overset{\on{def}}{=} L_{i_j}\cap x$.
As remarked above, $L_{i_j},\Lambda_{i_j}$ are independent of $a\in U$.
By the definition of the $L_{i_j}$'s we have that $L_{i_j}\varsubsetneq L_{i_{j+1}}$ and so they form
a flag $\crly{F}$ as in~\eqref{flag}. Lemma~\ref{lem: flag} applies and we deduce that
\begin{equation}\label{1640}
A_{\crly{F}}=\cap_{j=1}^k A_{L_{i_j}} \textrm{ is of co-dimension}\ge k\textrm{ in }A.
\end{equation}
For each $a\in U$ and each $j$ let $\set{v^{(j)}_\ell(a)}\subset a\Lambda_{i_j}$ be the vectors spanning
$aL_{i_j}$ which satisfy~\eqref{eq: vi satisfy}. Let
$u^{(j)}_\ell(a)\overset{\on{def}}{=} a^{-1} v^{(j)}_\ell(a)\in\Lambda_{i_j}$. Observe that:
\begin{enumerate}[(a)]
\item\label{02281} $\on{span}_{\mathbb{Z}}\set{u^{(j)}_\ell(a)}$ is of finite
index in $\Lambda_{i_j}$ and in particular,
$u^{(j)}_1(a)\wedge\dots\wedge u^{(j)}_{i_j}(a)$ is an integer
multiple of $\pm w_{\Lambda_{i_j}}$. As a consequence $||aw_{\Lambda_{i_j}}||\le
||v^{(j)}_1(a)\wedge\dots\wedge v^{(j)}_{i_j}(a)||$.
\item\label{02282} Because of~\eqref{eq: vi satisfy} we have that $ ||v^{(j)}_1(a)\wedge\dots\wedge v^{(j)}_{i_j}(a)||< C$ for some constant $C$ depending only on $n$ (enlarging the earlier choice of $C$ if necessary).
\end{enumerate}
It follows from~\eqref{02281},\eqref{02282} and Lemma~\ref{bdd dist from stab}
that there exist $R>0$ and an element $b\in T_2$ so that
$$U\subset \Delta_\rho \times \set{a\in T_2:\forall i_j,
||aw_{\Lambda_{i_j}}||<C}\subset T_1\times \set{a\in T_2:
d(a,bA_{\crly{F}})\le R}.$$
By~\eqref{1640} we deduce that
if $p_2 : A \to T_2$ is the projection
associated with the
decomposition $A = T_1 \times T_2$, then $p_2(U)$ is $(R',s+t-k)$-almost affine, where $R'$ depends only on $R$ and $\rho$. This concludes the proof.
\end{proof}
| {
"timestamp": "2013-09-17T02:13:07",
"yymm": "1309",
"arxiv_id": "1309.4025",
"language": "en",
"url": "https://arxiv.org/abs/1309.4025",
"abstract": "Inspired by work of McMullen, we show that any orbit for the action of the diagonal group on the space of lattices, accumulates on a stable lattice. We use this to settle a conjecture of Ramharter about Mordell's constant, get new proofs of Minkowski's conjecture in dimensions up to seven, and answer a question of Harder on the volume of stable lattices.",
"subjects": "Dynamical Systems (math.DS); Number Theory (math.NT)",
"title": "On stable lattices and the diagonal group",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9748211561049159,
"lm_q2_score": 0.810478913248044,
"lm_q1q2_score": 0.7900719912111142
} |
https://arxiv.org/abs/1805.11278 | Partition problems in high dimensional boxes | Alon, Bohman, Holzman and Kleitman proved that any partition of a $d$-dimensional discrete box into proper sub-boxes must consist of at least $2^d$ sub-boxes. Recently, Leader, Milićević and Tan considered the question of how many odd-sized proper boxes are needed to partition a $d$-dimensional box of odd size, and they asked whether the trivial construction consisting of $3^d$ boxes is best possible. We show that approximately $2.93^d$ boxes are enough, and consider some natural generalisations. |
\section{Introduction}
The following lovely problem, due to Kearnes and Kiss~\cite[][Problem 5.5]{kearneskiss}, was presented at the open problem session at the August 1999 meeting at MIT that was held to celebrate Daniel Kleitman's 65th birthday~\cite{saks}. A set of the form $$A=A_1 \times A_2 \times \ldots \times A_d,$$ where $A_1, A_2,\ldots,A_d$ are finite sets with $|A_i|\geq 2$ will be called here a \emph{$d$-dimensional discrete box}. A set of the form $B=B_1\times B_2\times\ldots\times B_d$, where $B_i\subseteq A_i$ for all $i\in [d]$, is a \emph{sub-box} of $A$. Such a sub-box $B$ is said to be \emph{proper} if $B_i \neq A_i$ for every $i$. The question of Kearnes and Kiss was as follows: can the box $A=A_1 \times A_2 \times \ldots \times A_d$ be partitioned into fewer than $2^d$ proper sub-boxes?
Within a day, Alon, Bohman, Holzman and Kleitman solved~\cite{alonbohman} this problem. Their eventual distillation of the proof, which we present in Section~\ref{sec:setup}, is a ``proof from the book''.
\begin{theorem}[\label{thm:alon}Alon, Bohman, Holzman, Kleitman~\cite{alonbohman}]
Let $A$ be a $d$-dimensional discrete box, and let $\{B^1,B^2,\ldots,B^m\}$ be a partition of $A$ into proper sub-boxes. Then $m\geq 2^d$.
\end{theorem}
The following interesting question was recently posed by Leader, Mili\'{c}evi\'{c} and Tan~\cite{leader}. Say that the $d$-dimensional box $A=A_1\times A_2\times \ldots \times A_d$ is \emph{odd} if each $|A_i|$ is odd (and finite). Similarly, say that the sub-box $B=B_1\times B_2\times \ldots \times B_d$ is \emph{odd} if $|B_i|$ is odd for all $i$. It is easy to see that given a $d$-dimensional odd box $A$, there exists a partition of $A$ into $3^d$ odd proper sub-boxes, by partitioning each side into three odd parts and taking all possible products.
\begin{question}[\label{qu:leader}Leader, Mili\'{c}evi\'{c}, Tan~\cite{leader}]
Let $A$ be a $d$-dimensional odd box, and let $\{B^1,B^2,\ldots,B^m\}$ be a partition of $A$ into odd proper sub-boxes. Does it follow that $m\geq 3^d$?
\end{question}
Our first result is that the answer to this question is `no':
\begin{theorem}\label{thm:main}
Let $d\in\mathbb{Z}^+$ be divisible by $3$. Then there exists a partition of $[5]^d$ into $25^{d/3}\leq 2.93^d$ odd proper sub-boxes.
\end{theorem}
The proof is based on an example which shows that it is possible to partition $[5]^3$ into $25$ odd proper sub-boxes, see Figure~\ref{fig:25oddbox}. We originally found examples with the help of a computer, but the example presented here was found by hand, keeping in mind certain properties of the examples provided by the computer. The solution is not unique.
\begin{figure}[h]
\caption{25 odd boxes partitioning $[5]^3$}
\input{boxespic2.tex}
\centering
\label{fig:25oddbox}
\end{figure}
The situation changes, however, if we require the odd boxes in our partition to be products of intervals. Say that the box $B=B_1\times B_2\times\ldots\times B_d$ is a \emph{brick} if for each $i\in\{1,2,\ldots,d\}$ there exist integers $i_0,i_1$ with $i_0\leq i_1$, such that $B_i=\{i_0,i_0+1,\ldots,i_1\}$. As examples, consider the following two boxes:
\begin{itemize}
\item The set $B=\{2,3,4\}\times \{4\}\times \{1,6,7\}$ is an odd proper sub-box of $[7]^3$ but it is not a brick, as $\{1,6,7\}$ does not have the required form.
\item The set $B=\{2,3,4\}\times \{3,4\}$ is a proper brick contained in $[5]^2$. However it is not odd, as $|\{3,4\}|=2$.
\end{itemize}
Our next result shows that the answer to Question~\ref{qu:leader} is `yes' under the additional assumption that the sub-boxes are in fact proper, odd bricks.
\begin{theorem}\label{prop:bricks}
Let $n\geq 2$ be odd, and let $d\geq 1$ be an arbitrary integer. Let $\{B^1,B^2,\ldots,B^m\}$ be a partition of $[n]^d$ into proper, odd bricks. Then $m\geq 3^d$.
\end{theorem}
There are a number of natural generalisations of this question. In this paper we shall consider a weakening of the parity constraint. A key property enforced by a partition into odd, proper boxes is that any axis-parallel line through $[n]^d$ intersects at least 3 distinct sub-boxes: such a line meets each sub-box in a set of odd size (or not at all), these sizes sum to the odd number $n$, so the line meets an odd number of sub-boxes, and properness rules out meeting only one. As a result, the most obvious construction involves dividing each dimension into 3 parts and taking the resulting $3^d$ sub-boxes. It is therefore natural to pose the following question, which we refer to as the \emph{$k$-piercing} problem.
\begin{question}[$k$-piercing]\label{qu:kpiercingbox}
Let $n\ge k$ and $d\ge 1$ be integers. Let $\{B^1,B^2,\dots, B^m\}$ be a partition of $[n]^d$ into proper boxes with the property that every axis-parallel line intersects at least $k$ distinct $B^i$ (we call this the $k$-piercing property). How small can $m$ be?
\end{question}
This question can obviously be phrased in a continuous setting, replacing $[n]$ with the interval $[0,1]$ and eliminating $n$ altogether. For simplicity we shall not do this, but instead we will generally present bounds on $m$ as a function of $k$ and $d$ only by considering $n$ large enough (for most of our results it is sufficient to take $n>3k$).
The 2-piercing problem corresponds precisely to the original problem of Kearnes and Kiss, and so the bound $m\ge 2^d$ holds. However, Theorem~\ref{thm:main} tells us that $m< 3^d$ when $k=3$. In fact the easy observation that $3^d$ cannot be a lower bound follows from a simple 2-dimensional construction shown in Figure~\ref{fig:2dimkp}.
\begin{figure}
\caption{8 bricks in two dimensions satisfying the $3$-piercing property.}
\begin{center}
\begin{tikzpicture}
\draw [-] (0,0) -- (4,0);
\draw [-] (0,0) -- (0,4);
\draw [-] (0,4) -- (4,4);
\draw [-] (4,0) -- (4,4);
\draw [-] (2,0) -- (2,4);
\draw [-] (0,2) -- (4,2);
\draw [-] (1,0) -- (1,2);
\draw [-] (3,2) -- (3,4);
\draw [-] (2,1) -- (4,1);
\draw [-] (0,3) -- (2,3);
\end{tikzpicture}
\end{center}
\label{fig:2dimkp}
\end{figure}
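To make the $3$-piercing property of Figure~\ref{fig:2dimkp} completely concrete, the following short Python check (ours, purely illustrative; the brick coordinates are our own discretisation of the picture onto the grid $\{0,1,2,3\}^2$) verifies that the $8$ bricks form a partition into proper bricks and that every axis-parallel line meets at least $3$ of them.
\begin{verbatim}
import itertools

# A discretisation of the 8-brick picture on {0,1,2,3}^2 (our reading
# of Figure fig:2dimkp; each brick is a pair of coordinate intervals).
bricks = [
    ({0}, {0, 1}), ({1}, {0, 1}),    # bottom-left quadrant, split vertically
    ({0, 1}, {2}), ({0, 1}, {3}),    # top-left quadrant, split horizontally
    ({2, 3}, {0}), ({2, 3}, {1}),    # bottom-right quadrant, split horizontally
    ({2}, {2, 3}), ({3}, {2, 3}),    # top-right quadrant, split vertically
]
n, k = 4, 3

# every point lies in exactly one brick, and every brick is proper
for p in itertools.product(range(n), repeat=2):
    assert sum(p[0] in B[0] and p[1] in B[1] for B in bricks) == 1
assert all(len(B[0]) < n and len(B[1]) < n for B in bricks)

# every axis-parallel line meets at least k = 3 distinct bricks
for i in range(n):
    assert len({j for j, B in enumerate(bricks) if i in B[0]}) >= k  # column x=i
    assert len({j for j, B in enumerate(bricks) if i in B[1]}) >= k  # row y=i
print("8 bricks, 3-piercing: verified")
\end{verbatim}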
Our later results will concentrate on the $k$-piercing problem. We show, perhaps surprisingly, that $m$ is bounded by $c^dk$ for some $c$ which is independent of $k$.
\begin{theorem}\label{thm:kpiercingbox}
Let $k \ge 2$ and $d\ge 1$ be integers. For $n$ large enough there exists a partition $\{B^1,\dots,B^m\}$ of $[n]^d$ into proper boxes having the $k$-piercing property with $m\le 15^{d/2}k$.
\end{theorem}
Recall that the answer to Question~\ref{qu:leader} changes fundamentally when boxes are replaced with bricks, with the trivial construction becoming best possible. In light of this, we also consider the special case of Question \ref{qu:kpiercingbox} when all the boxes are assumed to be bricks. We obtain a similar result, even under this additional restriction.
\begin{theorem}\label{thm:kpiercingbrick}
Let $k \ge 2$ and $d\ge 1$ be integers. For $n$ large enough there exists a partition $\{B^1,\dots,B^m\}$ of $[n]^d$ into proper bricks having the $k$-piercing property with $m\le 3.92^{d}k$.
\end{theorem}
Both proofs involve building an intermediate partition coming from a low-dimensional example and then solving a smaller instance of the same problem within each part. It seems almost certain that better examples exist, and in fact it is not out of the question that $m=(2+o(1))^d$ for every fixed $k$, in both regimes.
For the lower bounds, there is a simple inclusion-exclusion argument which shows $m \ge d2^dk,$ but this only applies for bricks. With boxes, lower bounds are difficult to obtain, as neither the argument mentioned above nor the one used to prove Theorem \ref{thm:alon} seems to extend to this problem. In fact, we fail to obtain any lower bound of the form $(1+\eps)^dk$ for any $\eps>0$. Such a bound almost certainly holds, and this presents a very interesting open problem.
In this setting even the $2$-dimensional case is not easy to resolve. The upper bound of $m\le 4k-4$ follows from the left image in Figure~\ref{fig:firstkp} and is easily seen to be tight in the case of bricks. With the aim of showing that this is best possible even for boxes, we introduce a graph theory question of an extremal flavour and solve it asymptotically. This gives the following result.
\begin{proposition}\label{prop:2dim}
Let $\{B^1,\dots,B^m\}$ be a minimal partition of $[n]^2$ into proper sub-boxes satisfying the $k$-piercing property. Then, assuming $n \ge 2k-2$, we have $m=(4+o_k(1))k$.
\end{proposition}
This short paper is organized as follows. In Section~\ref{sec:setup} we give some set-up and preliminary observations. In Section~\ref{sec:proofs} we prove Theorem~\ref{thm:main} and Theorem~\ref{prop:bricks}. In Section~\ref{sec:piercing} we consider the $k$-piercing problem and present our results, including Theorem~\ref{thm:kpiercingbox}, Theorem \ref{thm:kpiercingbrick} and Proposition~\ref{prop:2dim}. A selection of open questions are given in Section~\ref{sec:concl}.
Before beginning with the set-up for our investigations, we draw attention to other variants of the problem which have been considered in the literature, including geometrical results concerning the minimal partitions obtained in Theorem~\ref{thm:alon}~\cite{Krzysztof2} and extensions of these ideas in the context of cube tiling~\cite{Krzysztof}.
\section{Set-up and previous results}\label{sec:setup}
We begin this section by giving the proof of Alon, Bohman, Holzman and Kleitman of Theorem~\ref{thm:alon}, as presented in~\cite{saks}.
\begin{proof}[Proof of Theorem~\ref{thm:alon}]
Let $A=A_1\times A_2\times\ldots\times A_d$ be a $d$-dimensional discrete box and let $\{B^1,B^2,\ldots,B^m\}$ be a partition of $A$ into proper sub-boxes, where $B^j=B^j_1\times B^j_2\times\ldots\times B^j_d$ for all $j$. Select sets $R_i$, $i\in\{1,2,\ldots,d\}$, independently, uniformly at random amongst all odd-sized subsets of $A_i$, and let $R:=R_1\times R_2\times\ldots\times R_d$.
For $j\in\{1,2,\ldots,m\}$, let $X_j$ be the indicator function of the event that $|B^j\cap R|$ is odd, and set $X=\sum_{j=1}^m X_j$. Then we have that the expectation of $X_j$ satisfies
$$\mathbb{E}(X_j)=\mathbb{P}\left(|B^j\cap R|\text{ is odd}\right) = \prod_{i=1}^d\mathbb{P}\left(|B^j_i\cap R_i|\text{ is odd}\right)=2^{-d},$$where we have used the observation that half of the odd cardinality subsets of $A_i$ intersect $B^j_i$ in an odd number of elements. By linearity of expectation we have $\mathbb{E}(X)=m2^{-d}$. Note also that
$$X\equiv \sum_j X_j \equiv \sum_j |B^j\cap R|\equiv |R|\equiv 1 \text{ mod }2.$$ Hence $X\geq 1$ with probability $1$, implying that $\mathbb{E}(X)\geq 1$ and so $m\geq 2^d$ as claimed.
\end{proof}
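The parity identity at the heart of this argument is easy to verify by brute force in small cases. The following short Python script (ours, purely illustrative) checks, for the partition of $[4]^2$ into its four quadrants, that $X=\sum_j X_j$ is odd for every choice of $R$ and that each $X_j$ equals $1$ with probability exactly $2^{-d}$ (here $d=2$).
\begin{verbatim}
import itertools

def odd_subsets(n):
    # all odd-sized subsets of {0,...,n-1}
    for r in range(1, n + 1, 2):
        for s in itertools.combinations(range(n), r):
            yield set(s)

# partition of [4]^2 into its 2^2 proper sub-boxes (the four quadrants)
boxes = [({0, 1}, {0, 1}), ({0, 1}, {2, 3}),
         ({2, 3}, {0, 1}), ({2, 3}, {2, 3})]

total = 0
odd_hits = [0] * len(boxes)
for R1 in odd_subsets(4):
    for R2 in odd_subsets(4):
        total += 1
        X = 0
        for j, (B1, B2) in enumerate(boxes):
            # |B^j intersect R| = |B1 intersect R1| * |B2 intersect R2|
            if (len(B1 & R1) * len(B2 & R2)) % 2 == 1:
                odd_hits[j] += 1
                X += 1
        assert X % 2 == 1             # X has the parity of |R|, which is odd
assert all(4 * h == total for h in odd_hits)  # P(X_j = 1) = 2^{-2}
print("checked", total, "choices of R")
\end{verbatim}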
Let $f_{\text{odd}}(n,d)$ denote the minimum number of odd proper sub-boxes required to partition the box $[n]^d$. Note that it is easily seen from Theorem~\ref{thm:alon} that whenever $n\geq 2$ is even we have $f_{\text{odd}}(n,d) = 2^d$. Hence we will always assume that the first argument of $f_{\text{odd}}$ is odd. Using this notation, Theorem~\ref{thm:main} simply states that if $d\geq 3$ is divisible by $3$ then $f_{\text{odd}}(5,d)\leq 25^{d/3}$.
Note first that if $m\geq n$ are odd integers, and $\mathcal{B}$ is a partition of $[n]^d$ into odd proper sub-boxes, then one can obtain a partition of $[m]^d$ into $|\mathcal{B}|$ odd proper sub-boxes by identifying the element $\{n\}$ with the interval $\{n,n+1,\ldots,m\}$. Hence if $2<n\leq m$ are odd integers and $d\geq 1$ then
\begin{equation}\label{eq:monotone}
f_{\text{odd}}(n,d)\geq f_{\text{odd}}(m,d).
\end{equation}
Note that if $\mathcal{B}_1$ and $\mathcal{B}_2$ are partitions of $[n]^{d_1}$ and $[n]^{d_2}$ respectively into odd boxes, then $\mathcal{B}_1\times \mathcal{B}_2$ is a partition of $[n]^{d_1+d_2}$ into $|\mathcal{B}_1|\cdot |\mathcal{B}_2|$ odd boxes. Hence the function $f_{\text{odd}}$ satisfies
\begin{equation}\label{eq:submult}
f_{\text{odd}}(n,d_1+d_2)\leq f_{\text{odd}}(n,d_1)\cdot f_{\text{odd}}(n,d_2)
\end{equation} for all $n\geq 2$ and $d_1,d_2\geq 1$. Since by Theorem~\ref{thm:alon} we have that $f_{\text{odd}}(n,d)\geq 2^d$ for all $n,d$, Fekete's lemma~\cite{fekete} can be applied. It follows that for every $n\geq 2$, there exists a nonnegative constant $\alpha_n$ depending only on $n$, such that $f_{\text{odd}}(n,d)=\left(\alpha_n+o_d(1)\right)^d$, where $o_d(1)\rightarrow 0$ as $d\rightarrow \infty$.
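Explicitly, setting $a_d:=\log f_{\text{odd}}(n,d)$, inequality~\eqref{eq:submult} states that the sequence $(a_d)$ is subadditive, so Fekete's lemma gives that $a_d/d$ converges to $\inf_d a_d/d$; in other words, $f_{\text{odd}}(n,d)^{1/d}\rightarrow \alpha_n=\inf_d f_{\text{odd}}(n,d)^{1/d}\geq 2$.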
By inequality~\eqref{eq:monotone} the sequence $(\alpha_n)_{n\in\mathbb{N}}$ is monotone decreasing. An interesting open question is whether the limit of the sequence on the odd integers is equal to two or not -- see Section~\ref{sec:concl} for more details.
Note that these considerations apply equally to the $k$-piercing problem, showing that for fixed $k$ the minimum number of boxes in a partition with the $k$-piercing property is at least $(\beta_{k,n}+o_{d}(1))^d$ for some monotone decreasing sequence $(\beta_{k,n})_{n\in \mathbb{N}}$. Letting $\beta_k=\lim_{n\to \infty}\beta_{k,n}$, Theorem~\ref{thm:kpiercingbox} shows that $\beta_k\le 15^{1/2}$ for all $k$. Similarly, one can define $\gamma_k$ for the case of bricks, in which case Theorem~\ref{thm:kpiercingbrick} implies $\gamma_k \le 3.92.$
Let us denote by $p_{\text{box}}(n,d,k)$ the answer to Question~\ref{qu:kpiercingbox}
and by $p_{\text{brick}}(n,d,k)$ the answer to the same question, but restricted to bricks. Let $p_{\text{box}}(d,k)=\lim_{n\rightarrow\infty}p_{\text{box}}(n,d,k)$ and $p_{\text{brick}}(d,k)=\lim_{n\rightarrow\infty}p_{\text{brick}}(n,d,k)$, which both exist by the above observations.
As any brick is a box, we know that $p_{\text{box}}(d,k) \le p_{\text{brick}}(d,k).$ Note that with the above definitions $p_{\text{box}}(d,k)=(\beta_k+o_d(1))^d$ and $p_{\text{brick}}(d,k)=(\gamma_k+o_d(1))^d.$
The case of $k=2$ is resolved completely by Theorem~\ref{thm:alon} as there is a trivial partition into $2^d$ bricks, by splitting the original box into two along each dimension, implying $p_{\text{brick}}(d,2) \le 2^d$. On the other hand, a partition being $2$-piercing is equivalent to it consisting only of proper boxes, so Theorem~\ref{thm:alon} implies that $2^d \le p_{\text{box}}(d,2).$ In particular, this implies the very surprising result that for $k=2$ the answer is the same for boxes and bricks: $p_{\text{box}}(d,2) = p_{\text{brick}}(d,2) =2^d.$
\section{Partitioning into odd boxes}\label{sec:proofs}
We start with proving the upper bound, given by Theorem~\ref{thm:main}.
\begin{proof}[Proof of Theorem~\ref{thm:main}]
Note that by inequality~\eqref{eq:submult}, it suffices to show that $f_{\text{odd}}(5,3)\leq 25$. That is, we seek a partition of $[5]^3$ into $25$ proper odd boxes. This partition can be seen in Figure~\ref{fig:25oddbox}. The list of the coordinates of the $25$ boxes can be found in the appendix.
This solution was found by phrasing the problem as an integer program, with one (Boolean) variable for every possible odd sub-box, and one constraint per coordinate saying that the sum of variables that correspond to boxes which contain this point is one. We then used Gurobi~\cite{gurobi}, a commercially available solver, to find the counterexample.
\end{proof}
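For the interested reader, the integer program described above is straightforward to set up. The following sketch (our own reconstruction, not code from the paper) uses Gurobi's Python interface \texttt{gurobipy}; all variable and model names are illustrative.
\begin{verbatim}
import itertools
import gurobipy as gp
from gurobipy import GRB

n, d = 5, 3
# all odd-sized proper subsets of {0,...,4}: sizes 1 and 3
sides = [frozenset(s) for r in (1, 3)
         for s in itertools.combinations(range(n), r)]
# all odd proper sub-boxes of [5]^3, as triples of sides
boxes = list(itertools.product(sides, repeat=d))

model = gp.Model("odd-box-partition")
x = model.addVars(len(boxes), vtype=GRB.BINARY)
# exact cover: every point of [5]^3 lies in exactly one chosen box
for p in itertools.product(range(n), repeat=d):
    model.addConstr(
        gp.quicksum(x[i] for i, B in enumerate(boxes)
                    if all(p[j] in B[j] for j in range(d))) == 1)
model.setObjective(x.sum(), GRB.MINIMIZE)
model.optimize()
print("boxes used:", int(model.objVal))  # a 25-box solution exists
\end{verbatim}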
We now turn to lower bounds, starting with the easy observation that for each fixed $n$ we have $\alpha_n>2$.
\begin{proposition}\label{prop:oddlower}
Let $n>2$ be odd, and $d\geq 1$. Then we have the lower bound $$f_{\text{odd}}(n,d)\geq \left(2+\frac{1}{2^{n-2}-1}\right)^d.$$
\end{proposition}
\begin{proof}
The proof of Proposition~\ref{prop:oddlower} is a trivial modification of the proof of Alon, Bohman, Holzman and Kleitman of Theorem~\ref{thm:alon}. We simply take the sets $R_i$ to be uniformly chosen at random amongst \emph{proper}, odd-sized subsets of $[n]$. That is, $R_i$ is a uniformly random element of the set $\{S\subseteq [n] : S\neq [n] \text{ and }|S| \text{ is odd}\}$. Define $X_j, X$ and $R$ as in the proof of Theorem~\ref{thm:alon} and note that
$$\mathbb{E}(X_j)=\mathbb{P}\left(|B^j\cap R|\text{ is odd}\right)=\left(\frac{2^{n-2}-1}{2^{n-1}-1}\right)^d.$$ As before we have that $X\geq 1$ with probability $1$, hence $\mathbb{E}(X)=m\mathbb{E}(X_j)\geq 1$. Rearranging gives $$m\geq \left(\frac{2^{n-1}-1}{2^{n-2}-1}\right)^d=\left(2+\frac{1}{2^{n-2}-1}\right)^d,$$ as required.
\end{proof}
Note that Proposition~\ref{prop:oddlower} simply says that $\alpha_n\geq 2+\frac{1}{2^{n-2}-1}$ for all odd $n$, but this sequence of lower bounds on the $\alpha_n$-s converges to two.
We will now consider the case where the members of our partition are proper, odd bricks. The idea behind the proof of Theorem~\ref{prop:bricks} is to remove the `top' and `bottom' layers of a partition and prove that the number of remaining bricks has to be large, since their projection onto the first $d-1$ coordinates forms a partition of a $(d-1)$-dimensional odd box. While this is not quite true, this proof method can be made to work by considering a stronger induction hypothesis.
\begin{proof}[Proof of Theorem~\ref{prop:bricks}]
Let $n\geq 2$ be odd and let $d\geq 1$ be an arbitrary integer. We will prove the stronger claim that if $\mathcal{B}=\{B^1,B^2,\ldots,B^m\}$ is a set of odd proper bricks that cover every element of $[n]^d$ an odd number of times, then $m\geq 3^d$. The proof goes by induction on $d$. (For the base case $d=1$, summing the multiplicities over the points of $[n]$ modulo $2$ gives $m\equiv n\equiv 1 \pmod 2$, since each brick has odd size and each point is covered an odd number of times; as a single proper brick cannot cover $[n]$, we get $m\geq 3$.)
Let $n,d,\mathcal{B}$ be given. For any brick $B \in \mathcal{B}$, write $B=B_1 \times \cdots \times B_d$, where the $B_i$ are odd-length intervals. Let $\mathcal{C},\mathcal{D}\subset\mathcal{B}$ be defined as
$$\mathcal{C}=\left\{B^i : B^i\cap\left( \underbrace{[n] \times [n] \times \ldots \times [n]}_{d-1}\times \{1\}\right)\neq \emptyset\right\},$$
and
$$\mathcal{D}=\left\{B^i : B^i\cap\left( \underbrace{[n] \times [n] \times \ldots \times [n]}_{d-1}\times \{n\}\right)\neq \emptyset\right\}.$$
Note that $\mathcal{C}\cap \mathcal{D}=\emptyset$, as all the $B^i$-s are proper bricks. Moreover, as the elements of $\mathcal{C}$ cover every point of $[n]^{d-1}\times\{1\}$ an odd number of times, by induction we have $|\mathcal{C}|\geq 3^{d-1}$, and similarly $|\mathcal{D}|\geq 3^{d-1}$. It remains to show that $|\mathcal{B}\setminus (\mathcal{C}\cup\mathcal{D})|\geq 3^{d-1}$.
For every point $(i_1,i_2,\ldots,i_d)\in [n]^d$ and any family of bricks $\mathcal{E}$, denote by $x_{i_1,i_2,\ldots,i_d}(\mathcal{E})$ the number of bricks in $\mathcal{E}$ that contain $\{i_1\}\times\{i_2\}\times\ldots\times\{i_d\}$, and note that by assumptions $x_{i_1,i_2,\ldots,i_d}(\mathcal{B})$ is odd for all choices of the $i_j$-s.
For all $(i_1,i_2,\ldots,i_{d-1})\in[n]^{d-1}$ define the quantity
$$y_{i_1,i_2,\ldots,i_{d-1}}=\sum_{j=1}^n x_{i_1,i_2,\ldots,i_{d-1},j}(\mathcal{B}\setminus (\mathcal{C}\cup \mathcal{D})),$$
and note that $y_{i_1,i_2,\ldots,i_{d-1}}$ is odd for all choices of $i_1,\ldots,i_{d-1}$. Indeed, as $\mathcal{C}\cap \mathcal{D}=\emptyset$
\begin{align*}
y_{i_1,i_2,\ldots,i_{d-1}}
& =\sum_{j=1}^n x_{i_1,i_2,\ldots,i_{d-1},j}(\mathcal{B})-\sum_{j=1}^n x_{i_1,i_2,\ldots,i_{d-1},j}(\mathcal{C})-\sum_{j=1}^n x_{i_1,i_2,\ldots,i_{d-1},j}(\mathcal{D})\\
& =\sum_{j=1}^n x_{i_1,i_2,\ldots,i_{d-1},j}(\mathcal{B})-\sum_{C \in \mathcal{C}}\sum_{j=1}^n \mathbbm{1} ((i_1,i_2,\ldots,i_{d-1},j) \in C)-\sum_{D \in \mathcal{D}}\sum_{j=1}^n \mathbbm{1} ((i_1,i_2,\ldots,i_{d-1},j) \in D) \\
& = \sum_{j=1}^n x_{i_1,i_2,\ldots,i_{d-1},j}(\mathcal{B})-\sum_{C \in \mathcal{C}}|C_d|-\sum_{D \in \mathcal{D}}|D_d|,
\end{align*}
where $\mathbbm{1}(\cdot)$ denotes the indicator function of an event.
Now as $C_d,D_d$ are odd-size intervals, each term in the above sums is odd, so the total residue mod $2$ is $n-|\mathcal{C}|-|\mathcal{D}|$. This quantity is odd: counting incidences on the face $[n]^{d-1}\times\{1\}$ modulo $2$, each brick of $\mathcal{C}$ meets this face in an odd number of points and each of the $n^{d-1}$ points is covered an odd number of times, so $|\mathcal{C}|\equiv n^{d-1}\equiv 1 \pmod 2$, and similarly $|\mathcal{D}|$ is odd; as $n$ is odd, $n-|\mathcal{C}|-|\mathcal{D}|$ is odd.
Consider the projection of the bricks in $\mathcal{B}\setminus(\mathcal{C}\cup\mathcal{D})$ onto the first $d-1$ coordinates and note that it induces an odd cover of a $(d-1)$-dimensional odd cube, as follows. For any brick $B\in \mathcal{B}\setminus(\mathcal{C}\cup\mathcal{D})$ define $\pi(B):= B_1\times B_2\times \ldots \times B_{d-1}$ to be the projection of the box $B$ onto the first $d-1$ coordinates. For all $(i_1,i_2,\ldots,i_{d-1})\in [n]^{d-1}$ define the quantity
$$z_{i_1,i_2,\ldots,i_{d-1}}=\sum_{B\in\mathcal{B}\setminus(\mathcal{C}\cup\mathcal{D})}\mathbbm{1}\left((i_1, i_2,\ldots,i_{d-1})\in \pi(B)\right).$$
Observe that
$$z_{i_1,i_2,\ldots,i_{d-1}}\equiv y_{i_1,i_2,\ldots,i_{d-1}} \text{ mod }2$$
for all choices of coordinates, and hence all the $z_{i_1,i_2,\ldots,i_{d-1}}$-s are odd. Since the bricks
$$\{\pi(B): B\in\mathcal{B}\setminus(\mathcal{C}\cup\mathcal{D})\}$$
form a cover of $[n]^{d-1}$ with each point covered $z_{i_1,i_2,\ldots,i_{d-1}}$ times, it follows by induction that $|\mathcal{B}\setminus(\mathcal{C}\cup\mathcal{D})|\geq 3^{d-1}$ and the proof is complete.
\end{proof}
\section{Piercing}\label{sec:piercing}
In this section we will consider piercing problems related to Question~\ref{qu:kpiercingbox}. We start by giving some simple bounds, derived by generalising the arguments used for $k=2,$ which illustrate various difficulties that arise. In the following subsections we give various improvements to these bounds.
In the case of bricks, observe that a single brick of the partition that does not contain a corner vertex can be incident to at most one edge of the original cube, as otherwise it would not be proper and would thus fail the $k$-piercing property (even for $k=2$). Also, for each edge there need to be at least $k$ bricks incident to it. Combining these two observations we deduce that there must be at least $d2^{d-1}(k-2)$ different non-corner bricks, as there are $d2^{d-1}$ edges. Including the additional $2^d$ corner bricks, this implies that there are at least $d2^{d-1}(k-2)+2^d$ different bricks.
On the other hand, generalising the partition used for $k=2$ by splitting the original cube into $k$ parts along each dimension yields a $k$-piercing partition into $k^d$ bricks. So we have shown the following two easy bounds:
\begin{equation} \label{eq:brick-trivial}
d2^{d-1}(k-2)+2^d \le p_{\text{brick}}(d,k) \le k^d.
\end{equation}
In the case of boxes, the lower bound no longer applies, as almost all the bricks counted as different above might become parts of a single box. The same kind of argument only gives $p_{\text{box}}(d,k) \ge d(k-1)+1$ by fixing a corner and counting all the boxes incident to an edge containing this corner, which need to be different. Furthermore, it is not clear how to exploit the $k$-piercing property in the argument used in Theorem~\ref{thm:alon} for $k>2$. However Theorem~\ref{thm:alon} is directly applicable in the case $k=2$, which gives a lower bound of $2^d$ which then holds for all $k\ge 2$. From the other direction, it is also not clear how one could exploit the freedom afforded by using boxes instead of bricks when trying to find a partition, and in fact when $k=2$ this turns out not to be possible. We can, however, reuse the bound for bricks to obtain the following simple bounds:
\begin{equation}\label{eq:box-trivial}
\max (d(k-1)+1,2^d) \le p_{\text{box}}(d,k) \le k^d.
\end{equation}
Note that the lower bound for $p_{\text{box}}$ highlights a disconnect between our methods for dealing with the two most extreme regimes: firstly the case of $k$ fixed and $d\to \infty$, in which the lower bound is $2^d$, and secondly the case of $d$ fixed and $k\to\infty$, in which the bound of $d(k-1)+1$ is relevant. We shall give our results in terms of both $k$ and $d$ so that they apply generally, and indeed the upper bounds we shall describe are the best we know across all regimes. Our lower bound efforts, however, are most relevant for the latter scenario (when $d$ is small compared to $k$).
In the following subsections we will describe our various improvements to the above bounds. In the first subsection we will discuss upper bounds on $p_{\text{brick}}(d,k)$ and $p_{\text{box}}(d,k)$ and in the second subsection we discuss lower bounds.
\subsection{Upper bounds for the $k$-piercing problem}
In this section we will present the proof of Theorems~\ref{thm:kpiercingbox} and \ref{thm:kpiercingbrick}, giving a major improvement over the upper bound in \eqref{eq:brick-trivial} and \eqref{eq:box-trivial}. We begin by presenting a simple partition into at most $4^dk$ bricks that satisfies the $k$-piercing property. This construction is so simple and natural that one might imagine that it could be best possible. This is not the case, however, and we will go on to present two different approaches for obtaining improvements in the base of the exponent, one of which is specific for boxes and gives a slightly better bound.
We define $f_d(a_1,\dots,a_d)$ to be the minimum size of a partition of $[n]^d$ into bricks so that every line in dimension $i$ hits at least $a_i$ of them (we refer to this as the $(a_1,\dots,a_d)$-piercing condition). In the first two dimensions, we split $[n]^d$ into 4 quadrants. In the top left and bottom right quadrants we place a construction satisfying the $(1,k-1,k,\dots,k)$-piercing condition. In the bottom left and top right quadrants we place a construction satisfying the $(k-1,1,k,\dots,k)$-piercing condition. This is shown in Figure~\ref{fig:firstkp}. This gives a construction satisfying the $k$-piercing condition, and observing that $f_d(1,k-1,k,\dots,k)\le f_{d-1}(k-1,k,\dots,k) \le f_{d-1}(k,\dots,k)$ gives the following bound for $d\ge 2$:
\[f_d(k,\dots,k)\le 4f_{d-1}(k,\dots,k).\]
Combining this with the fact that $f_1(k)=k$ and unrolling the recursion, we find that $f_d(k,\dots,k)\le 4^{d-1}k\le 4^d k$.
\begin{figure}
\caption{On the left we see a $k$-piercing configuration in two dimensions with $4(k-1)$ bricks. On the right, we use this idea to give a $k$-piercing construction with $k4^d$ boxes. In the first two dimensions we divide the cube into quadrants and then place optimal constructions in each quadrant satisfying the piercing conditions shown.
}
\begin{center}
\begin{tikzpicture}
\draw [-] (0,0) -- (6,0);
\draw [-] (0,0) -- (0,6);
\draw [-] (0,6) -- (6,6);
\draw [-] (6,0) -- (6,6);
\draw [-] (8,0) -- (14,0);
\draw [-] (8,0) -- (8,6);
\draw [-] (8,6) -- (14,6);
\draw [-] (14,0) -- (14,6);
\draw [very thin] (9.5,1.5) node[] {\scalebox{0.8}{$\Spvek{1;k-1;k;\vdots;k}$}};
\draw [very thin] (9.5,4.5) node[] {\scalebox{0.8}{$\Spvek{k-1;1;k;\vdots;k}$}};
\draw [very thin] (12.5,4.5) node[] {\scalebox{0.8}{$\Spvek{1;k-1;k;\vdots;k}$}};
\draw [very thin] (12.5,1.5) node[] {\scalebox{0.8}{$\Spvek{k-1;1;k;\vdots;k}$}};
\draw [very thin] (3,7.5) node[] {a) \,\, $p_{\text{brick}}(2,k)\le 4(k-1)$};
\draw [very thin] (11,7.5) node[] {b) \,\, $p_{\text{brick}}(d,k)\le 4p_{\text{brick}}(d-1,k)$};
\draw [-] (11,0) -- (11,6);
\draw [-] (8,3) -- (14,3);
\draw [-] (3,0) -- (3,6);
\draw [-] (0,3) -- (6,3);
\draw [-] (0,0.5) -- (3,0.5);
\draw [-] (0,1) -- (3,1);
\draw [-] (0,2.5) -- (3,2.5);
\draw [very thin] (1.5,1.85) node[] {\scalebox{1.5}{\vdots}};
\draw [-] (3.5,0) -- (3.5,3);
\draw [-] (4,0) -- (4,3);
\draw [-] (5.5,0) -- (5.5,3);
\draw [very thin] (4.85,1.5) node[] {\scalebox{1.5}{\dots}};
\draw [-] (0.5,3) -- (0.5,6);
\draw [-] (1,3) -- (1,6);
\draw [-] (2.5,3) -- (2.5,6);
\draw [very thin] (1.85,4.5) node[] {\scalebox{1.5}{\dots}};
\draw [-] (3,3.5) -- (6,3.5);
\draw [-] (3,4) -- (6,4);
\draw [-] (3,5.5) -- (6,5.5);
\draw [very thin] (4.5,4.85) node[] {\scalebox{1.5}{\vdots}};
\draw [decorate,decoration={brace,amplitude=10pt,raise=4pt},yshift=0pt]
(0,0) -- (0,3) node [black,midway,xshift=-1cm] {\footnotesize
$k-1$};
\draw [decorate,decoration={brace,amplitude=10pt,raise=4pt},yshift=0pt]
(6,0) -- (3,0) node [black,midway,yshift=-0.8cm] {\footnotesize
$k-1$};
\draw [decorate,decoration={brace,amplitude=10pt,raise=4pt},yshift=0pt]
(6,6) -- (6,3) node [black,midway,xshift=1cm] {\footnotesize
$k-1$};
\draw [decorate,decoration={brace,amplitude=10pt,raise=4pt},yshift=0pt]
(0,6) -- (3,6) node [black,midway,yshift=0.8cm] {\footnotesize
$k-1$};
\end{tikzpicture}
\end{center}
\label{fig:firstkp}
\end{figure}
In particular this shows:
\begin{equation}\label{eq:4tothed}
p_{\text{box}}(d,k) \le p_{\text{brick}}(d,k) \le 4^dk
\end{equation}
So, in the notation introduced in Section~\ref{sec:setup}, we have $\beta_k \le \gamma_k \le 4.$
One may wonder if these bounds are tight, and the construction described above is essentially best possible (at least in the case of bricks). We will now show that this is not the case, and give two different approaches for improving the base of the exponent further. In both of the following subsections we will reuse the general idea of splitting the cube along a couple of dimensions. In the following subsection we work with bricks and prove Theorem~\ref{thm:kpiercingbrick}, and in the subsequent subsection we exploit a simple observation which holds for boxes but not for bricks to get an even better bound.
\subsubsection{Bricks}
In some sense the more surprising part of the result \eqref{eq:4tothed} is the fact that for a fixed dimension $d$ both $p_{\text{box}}(d,k)$ and $p_{\text{brick}}(d,k)$ are linear in $k$, since using sub-multiplicative inequalities such as \eqref{eq:submult} one can never obtain bounds linear in $k$. The idea of finding a small example and then using these inequalities, as was done in the previous section for $f_{\text{odd}}$, can only ever give something interesting when $k$ is rather small. However, the idea behind the argument giving \eqref{eq:4tothed} is to use small examples in a different manner. The following observation gives a more general view of this idea.
Suppose we are given a partition of $[n]^d$ into bricks $A_1,\ldots, A_m$, together with an assignment to each $A_i$ of a $d$-tuple $(a_{i,1},\ldots,a_{i,d})$ of positive integers, such that for any line in the $j$-th dimension the sum of the $a_{i,j}$, with $i$ ranging over the bricks crossed by this line, is at least $k$. Whenever we have such a partition we obtain that $f_d(k,\ldots,k) \le \sum_{i=1}^m f_d(a_{i,1},\ldots,a_{i,d})$, as we can solve the corresponding subproblem within each brick of the partition. We will call such a partition \emph{intermediate}.
The natural goal is to find small examples of intermediate partitions. For example, given a $k$-piercing example for small $d,$ if we can group several bricks into sets $A_i$ to obtain an intermediate partition then we obtain an upper bound on $f_d(k,\ldots,k).$ For instance, in the proof of \eqref{eq:4tothed}, we used the example on the left of Figure~\ref{fig:firstkp} which gives a natural grouping into $4$ bricks, yielding the intermediate example on the right of this figure.
The following lemma gives a way of obtaining, from an intermediate partition in $d$ dimensions, a new intermediate partition in $d+1$ dimensions in a slightly better way than the trivial approach of stacking two copies on top of one another.
\begin{lemma}\label{lem:proper-subbox}
Let $A_1,\ldots,A_m$ be an intermediate partition of $[n]^d.$ Let $X$ and $Y$ be corners of the cube such that the largest proper sub-brick containing $X$ covers w.l.o.g. $A_1, \ldots, A_s$ and let $A_r$ be the brick containing corner $Y.$ Then
\begin{align*}
f_{d+1}(k,\ldots,k)\le & \sum_{i=1}^s f_{d+1}(a_{i,1},\ldots,a_{i,d},1)+\sum_{i=s+1}^m f_{d+1}(a_{i,1},\ldots,a_{i,d},k-1)+\\
& \sum_{i=1,i \neq r}^mf_{d+1}(a_{i,1},\ldots,a_{i,d},1)+f_{d+1}(a_{r,1},\ldots,a_{r,d},k-1).
\end{align*}
\end{lemma}
\begin{proof}
We split the cube into two parts along the $(d+1)$-st dimension. We use the given partition for both parts, but with the top part rotated in such a way that $Y$ corresponds to $X.$ We then rescale the top partition in such a way that $A_r$ covers all of $A_1, \ldots, A_s$ in the original partition (note that this may require a minor increase in the $n$ we use). We assign $k-1$ to the last dimension of $A_r$ in the top part and of all the bricks in the lower part except $A_1, \ldots, A_s$; we assign $1$ to the remaining bricks. This yields a new intermediate partition in $d+1$ dimensions: along the first $d$ dimensions all the lines satisfy the condition because we started with an intermediate partition, and along the $(d+1)$-st, if a line passes through any of $A_1, \ldots, A_s$ of the lower part it also passes through $A_r$ of the upper part, so the sum is at least $1+(k-1)$; otherwise it passes through some $A_i,$ $i \ge s+1,$ in the lower part and something in the upper part, again giving a sum of at least $(k-1)+1.$ The inequality now follows from the above observation.
\end{proof}
We now apply this lemma to the $5$-part intermediate partition, derived from the one given in Figure~\ref{fig:firstkp}, and given in Figure~\ref{fig:secondkp}. We obtain the $3$-dimensional intermediate partition shown in Figure~\ref{fig:thirdkp}.
\begin{figure}
\caption{The intermediate partition in 2 dimensions, to which we apply Lemma~\ref{lem:proper-subbox}. $X$ is denoted by a red circle, $Y$ by a blue circle, the parts $A_1,\ldots,A_s$ are shaded red and $A_r$ is shaded blue.}
\begin{center}
\begin{tikzpicture}
\draw [-] (0,0) -- (6.25,0);
\draw [-] (0,0) -- (0,5);
\draw [-] (0,5) -- (6.25,5);
\draw [-] (4,0) -- (4,2.5);
\draw [-] (6.25,0) -- (6.25,5);
\draw [-] (2.5,0) -- (2.5,5);
\draw [-] (0,2.5) -- (6.25,2.5);
\draw[thick] (0,0) circle(0.3)[red];
\draw[thick] (6.25,0) circle(0.3)[blue];
\fill[red,opacity=0.2] (0,0) rectangle (4,2.5);
\fill[blue,opacity=0.2] (4,0) rectangle (6.25,2.5);
\draw [very thin] (1.25,1.25) node[] {{$\Spvek{1;k-1}$}};
\draw [very thin] (1.25,3.75) node[] {{$\Spvek{k-1;1}$}};
\draw [very thin] (3.25,1.25) node[] {\footnotesize{$\Spvek{k-2;1}$}};
\draw [very thin] (4.375,3.75) node[] {{$\Spvek{1;k-1}$}};
\draw [very thin] (5.125,1.25) node[] {\footnotesize{$\Spvek{1;1}$}};
\end{tikzpicture}
\end{center}
\label{fig:secondkp}
\end{figure}
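The line-sum condition defining an intermediate partition is easy to check mechanically. The following minimal Python sketch (illustrative only, not part of the argument) encodes the five bricks of Figure~\ref{fig:secondkp}, with geometry and labels read off the picture, and verifies the condition for a range of values of $k$:
\begin{verbatim}
def bricks(k):
    # (x0, x1, y0, y1, (a1, a2)), read off Figure fig:secondkp
    return [
        (0.0, 2.5,  0.0, 2.5, (1,     k - 1)),
        (0.0, 2.5,  2.5, 5.0, (k - 1, 1)),
        (2.5, 4.0,  0.0, 2.5, (k - 2, 1)),
        (2.5, 6.25, 2.5, 5.0, (1,     k - 1)),
        (4.0, 6.25, 0.0, 2.5, (1,     1)),
    ]

def is_intermediate(bs, k):
    xs = sorted({b[0] for b in bs})   # one sample x per vertical slab
    ys = sorted({b[2] for b in bs})   # one sample y per horizontal slab
    ok_v = all(sum(a2 for x0, x1, y0, y1, (a1, a2) in bs
                   if x0 <= x < x1) >= k for x in xs)
    ok_h = all(sum(a1 for x0, x1, y0, y1, (a1, a2) in bs
                   if y0 <= y < y1) >= k for y in ys)
    return ok_v and ok_h

assert all(is_intermediate(bricks(k), k) for k in range(3, 20))
\end{verbatim}
Every vertical line collects a total of exactly $k$ from the second labels and every horizontal line exactly $k$ from the first labels, as required.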
\begin{figure}
\caption{The intermediate partition in 3 dimensions, provided by the above lemma.}
\begin{center}
\begin{tikzpicture}
\draw [-] (0,0) -- (6.25,0);
\draw [-] (0,0) -- (0,5);
\draw [-] (0,5) -- (6.25,5);
\draw [-] (3.75,0) -- (3.75,2.5);
\draw [-] (6.25,0) -- (6.25,5);
\draw[thick] (0,0) circle(0.3)[red];
\draw [-] (2.5,0) -- (2.5,5);
\draw [-] (0,2.5) -- (6.25,2.5);
\fill[red,opacity=0.2] (0,0) rectangle (3.75,2.5);
\draw [very thin] (3.125,-1) node[] {Bottom Layer};
\draw [very thin] (1.25,1.25) node[] {\footnotesize{$\Spvek{1;k-1;1}$}};
\draw [very thin] (1.25,3.75) node[] {\footnotesize{$\Spvek{k-1;1;k-1}$}};
\draw [very thin] (3.125,1.25) node[] {\tiny{$\Spvek{k-2;1;1}$}};
\draw [very thin] (4.375,3.75) node[] {\footnotesize{$\Spvek{1;k-1;k-1}$}};
\draw [very thin] (5,1.25) node[] {\tiny{$\Spvek{1;1;k-1}$}};
\draw [-] (8,0) -- (14.25,0);
\draw [-] (8,0) -- (8,5);
\draw [-] (8,5) -- (14.25,5);
\draw [-] (11.75,0) -- (11.75,2.5);
\draw [-] (14.25,0) -- (14.25,5);
\draw [-] (13,0) -- (13,5);
\draw [-] (8,2.5) -- (14.25,2.5);
\draw[thick] (8,0) circle(0.3)[blue];
\fill[blue,opacity=0.2] (8,0) rectangle (11.75,2.5);
\draw [very thin] (11.125,-1) node[] {Top Layer};
\draw [very thin] (9.875,1.25) node[] {\footnotesize{$\Spvek{1;1;k-1}$}};
\draw [very thin] (10.5,3.75) node[] {\footnotesize{$\Spvek{1;k-1;1}$}};
\draw [very thin] (12.375,1.25) node[] {\tiny{$\Spvek{k-2;1;1}$}};
\draw [very thin] (13.625,3.75) node[] {\tiny{$\Spvek{k-1;1;1}$}};
\draw [very thin] (13.625,1.25) node[] {\tiny{$\Spvek{1;k-1;1}$}};
\end{tikzpicture}
\end{center}
\label{fig:thirdkp}
\end{figure}
In particular, this implies:
$$f_d(k,\ldots,k) \le 2f_d(1,k-1,k-1,k,\ldots,k)+6f_d(1,1,k-1,k,\ldots,k)+2f_d(1,1,k-2,k,\ldots,k).$$
Unfortunately, this bound still only implies $f_d(k,\ldots,k) \le (4+o_d(1))^dk$. Modifying this partition slightly, however, we consider Figure~\ref{fig:fourthkp} and apply the lemma once again. This does achieve an improvement in the base of the exponential term.
\begin{figure}
\caption{The intermediate partition in 3 dimensions, to which we apply Lemma~\ref{lem:proper-subbox}. $X$ is denoted by a red circle, $Y$ by a blue circle, the parts $A_1,\ldots,A_s$ are shaded red and $A_r$ is shaded blue.}
\begin{center}
\begin{tikzpicture}
\draw [-] (0,0) -- (7.5,0);
\draw [-] (0,0) -- (0,5);
\draw [-] (0,5) -- (7.5,5);
\draw [-] (3.75,0) -- (3.75,2.5);
\draw [-] (7.5,0) -- (7.5,5);
\draw [-] (1.25,2.5) -- (1.25,5);
\draw[thick] (7.5,5) circle(0.3)[red];
\draw [-] (2.5,0) -- (2.5,5);
\draw [-] (0,2.5) -- (7.5,2.5);
\fill[red,opacity=0.2] (1.25,2.5) rectangle (7.5,5);
\draw [very thin] (3.75,-1) node[] {Bottom Layer};
\draw [very thin] (1.25,1.25) node[] {\footnotesize{$\Spvek{1;k-1;1}$}};
\draw [very thin] (5,3.75) node[] {\footnotesize{$\Spvek{1;k-1;k-1}$}};
\draw [very thin] (3.125,1.25) node[] {\tiny{$\Spvek{k-2;1;1}$}};
\draw [very thin] (1.875,3.75) node[] {\tiny{$\Spvek{k-2;1;k-1}$}};
\draw [very thin] (0.625,3.75) node[] {\tiny{$\Spvek{1;1;k-1}$}};
\draw [very thin] (5.625,1.25) node[] {\footnotesize{$\Spvek{1;1;k-1}$}};
\draw [-] (8,0) -- (15.5,0);
\draw [-] (8,0) -- (8,5);
\draw [-] (8,5) -- (15.5,5);
\draw [-] (11.75,0) -- (11.75,2.5);
\draw [-] (15.5,0) -- (15.5,5);
\draw [-] (14.25,2.5) -- (14.25,5);
\draw [-] (13,0) -- (13,5);
\draw [-] (8,2.5) -- (15.5,2.5);
\draw[thick] (15.5,5) circle(0.3)[blue];
\fill[blue,opacity=0.2] (14.25,2.5) rectangle (15.5,5);
\draw [very thin] (11.75,-1) node[] {Top Layer};
\draw [very thin] (9.875,1.25) node[] {\footnotesize{$\Spvek{1;1;k-1}$}};
\draw [very thin] (10.5,3.75) node[] {\footnotesize{$\Spvek{1;k-1;1}$}};
\draw [very thin] (12.375,1.25) node[] {\tiny{$\Spvek{k-2;1;1}$}};
\draw [very thin] (13.625,3.75) node[] {\tiny{$\Spvek{k-2;1;1}$}};
\draw [very thin] (14.875,3.75) node[] {\tiny{$\Spvek{1;1;1}$}};
\draw [very thin] (14.25,1.25) node[] {\footnotesize{$\Spvek{1;k-1;1}$}};
\end{tikzpicture}
\end{center}
\label{fig:fourthkp}
\end{figure}
In particular, we find:
\begin{align*}f_d(k,\ldots,k) \le & 8f_d(1,1,k-1,k-1,k,\ldots,k)+5f_d(1,1,k-2,k-1,k,\ldots,k)+\\
& 8f_d(1,1,1,k-1,k,\ldots,k)+3f_d(1,1,1,k-2,k,\ldots,k).
\end{align*}
This already suffices to give an example with at most about $3.97^dk$ bricks. However, since the red bricks have large piercing values in all but one dimension, it turns out that a further manual step can be made before applying Lemma~\ref{lem:proper-subbox}. In particular, using the partition given in Figure~\ref{fig:fifthkp} we obtain the following slight improvement:
\begin{align*}f_d(k,\ldots,k) \le & 10f_d(1,1,k-1,k-1,k,\ldots,k)+3f_d(1,1,k-2,k-1,k,\ldots,k)+\\
& 6f_d(1,1,1,k-1,k,\ldots,k)+3f_d(1,1,1,k-2,k,\ldots,k).
\end{align*}
\begin{figure}
\caption{An intermediate partition in 4 dimensions. The third and fourth coordinates vary between the rectangles horizontally and vertically, respectively.}
\begin{center}
\begin{tikzpicture}
\draw [very thin] (0,0) -- (8,0) -- (8,5) -- (0,5) -- (0,0);
\draw [very thin] (9,0) -- (17,0) -- (17,5) -- (9,5) -- (9,0);
\draw [very thin] (0,6) -- (8,6) -- (8,11) -- (0,11) -- (0,6);
\draw [very thin] (9,6) -- (17,6) -- (17,11) -- (9,11) -- (9,6);
\draw [very thin] (0,2.5) -- (8,2.5);
\draw [very thin] (0,8.5) -- (8,8.5);
\draw [very thin] (9,2.5) -- (17,2.5);
\draw [very thin] (9,8.5) -- (17,8.5);
\draw [very thin] (10,0) -- (10,5);
\draw [very thin] (11,0) -- (11,2.5);
\draw [very thin] (2,0) -- (2,2.5);
\draw [very thin] (3,0) -- (3,5);
\draw [very thin] (4,2.5) -- (4,5);
\draw [very thin] (4,8.5) -- (4,11);
\draw [very thin] (5,6) -- (5,11);
\draw [very thin] (6,6) -- (6,8.5);
\draw [very thin] (5,6) -- (5,11);
\draw [very thin] (15,6) -- (15,8.5);
\draw [very thin] (16,6) -- (16,11);
\draw[thick] (8,5) circle(0.3)[blue];
\draw[thick] (8,11) circle(0.3)[red];
\fill[red,opacity=0.2] (4,8.5) rectangle (8,11);
\fill[blue,opacity=0.2] (4,2.5) rectangle (8,5);
\draw [very thin] (2.5,1.25) node[] {\tiny{\scalebox{0.8}{$\Spvek{k-2;1;1;1}$}}};
\draw [very thin] (1,1.25) node[] {\tiny{\scalebox{1}{$\Spvek{1;1;k-1;1}$}}};
\draw [very thin] (5.5,1.25) node[] {\footnotesize{\scalebox{1}{$\Spvek{1;k-1;1;1}$}}};
\draw [very thin] (1.5,3.75) node[] {\footnotesize{\scalebox{1}{$\Spvek{1;k-1;1;1}$}}};
\draw [very thin] (3.5,3.75) node[] {\tiny{\scalebox{0.8}{$\Spvek{k-2;1;1;1}$}}};
\draw [very thin] (6,3.75) node[] {\footnotesize{\scalebox{1}{$\Spvek{1;1;1;k-1}$}}};
\draw [very thin] (9.5,3.75) node[] {\tiny{\scalebox{0.8}{$\Spvek{k-1;1;k-1;1}$}}};
\draw [very thin] (9.5,1.25) node[] {\tiny{\scalebox{0.8}{$\Spvek{1;k-1;1;1}$}}};
\draw [very thin] (10.5,1.25) node[] {\tiny{\scalebox{0.8}{$\Spvek{k-2;1;1;1}$}}};
\draw [very thin] (13.5,3.75) node[] {\footnotesize{\scalebox{1}{$\Spvek{1;k-1;k-1;1}$}}};
\draw [very thin] (14,1.25) node[] {\footnotesize{\scalebox{1}{$\Spvek{1;1;k-1;1}$}}};
\draw [very thin] (2.5,7.25) node[] {\footnotesize{\scalebox{1}{$\Spvek{1;k-1;1;k-1}$}}};
\draw [very thin] (2,9.75) node[] {\footnotesize{\scalebox{1}{$\Spvek{1;k-1;1;k-1}$}}};
\draw [very thin] (4.5,9.75) node[] {\tiny{\scalebox{0.8}{$\Spvek{k-2;1;k-1;1}$}}};
\draw [very thin] (6.5,9.75) node[] {\footnotesize{\scalebox{1}{$\Spvek{1;k-1;k-1;1}$}}};
\draw [very thin] (5.5,7.25) node[] {\tiny{\scalebox{0.8}{$\Spvek{k-2;1;1;k-1}$}}};
\draw [very thin] (7,7.25) node[] {\tiny{\scalebox{1}{$\Spvek{1;1;k-1;k-1}$}}};
\draw [very thin] (12.5,9.75) node[] {\footnotesize{\scalebox{1}{$\Spvek{1;k-1;1;k-1}$}}};
\draw [very thin] (16.5,9.75) node[] {\tiny{\scalebox{0.8}{$\Spvek{k-1;1;1;k-1}$}}};
\draw [very thin] (12,7.25) node[] {\footnotesize{\scalebox{1}{$\Spvek{1;1;k-1;k-1}$}}};
\draw [very thin] (15.5,7.25) node[] {\tiny{\scalebox{0.8}{$\Spvek{k-2;1;1;k-1}$}}};
\draw [very thin] (16.5,7.25) node[] {\tiny{\scalebox{0.8}{$\Spvek{1;k-1;1;k-1}$}}};
\end{tikzpicture}
\end{center}
\label{fig:fifthkp}
\end{figure}
This inequality implies
$$f_d(k,\ldots,k) \le 13f_{d-2}(k,\ldots,k)+9f_{d-3}(k,\ldots,k)$$
which in turn implies $f_d(k,\ldots,k) \le x_0^dk$ where $x_0$ is the largest root of $x^3-13x-9,$ $x_0 \approx 3.91.$ In particular, this shows that $\gamma_k\le \beta_k\le x_0.$
For small values of $k$ the above inequality actually implies a somewhat stronger result, provided we take more care with the $k-1$ and $k-2$ terms. For example, for $k=3$ we get:
\begin{align*}f_d(3,\ldots,3) \le & 10f_d(1,1,2,2,3,\ldots,3)+9f_d(1,1,1,2,3,\ldots,3)+
3f_d(1,1,1,1,3,\ldots,3)\\
\le & (10\cdot 4+9 \cdot 2+ 3)f_{d-4}(3,\ldots,3)=61f_{d-4}(3,\ldots,3)
\end{align*}
where we repeatedly used $f_d(2,a_1,\ldots,a_{d-1}) \le 2f_d(1,a_1,\ldots,a_{d-1})=2f_{d-1}(a_1,\ldots,a_{d-1}),$ which follows by taking two identical copies of the $(d-1)$-dimensional example. This inequality implies $\gamma_3 \le \beta_3 \le \sqrt[4]{61}\approx 2.79.$
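Both numerical claims are easy to confirm; the following minimal Python check (ours, purely arithmetical) computes the relevant root and fourth root:
\begin{verbatim}
import numpy as np

# largest root of x^3 - 13x - 9, from f_d <= 13 f_{d-2} + 9 f_{d-3}
x0 = max(r.real for r in np.roots([1, 0, -13, -9]))
print(round(x0, 3))                      # ~3.915

# the k = 3 count and the resulting base
count = 10 * 4 + 9 * 2 + 3               # = 61
print(count, round(count ** 0.25, 3))    # 61, ~2.795
\end{verbatim}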
\subsubsection{Boxes}
It is highly unclear how one could use the additional freedom afforded by using boxes instead of bricks. The only way we found exploits the fact that it is possible to cover a square using only three boxes, as shown in Figure~\ref{fig:cover}. This allows us to obtain better examples using boxes than the ones using bricks described above.
\begin{figure}
\caption{A square can be covered by $3$ boxes, but not with $3$ bricks.}
\begin{center}
\begin{tikzpicture}
\draw [] (0,0) rectangle (4,4);
\draw [] (0,1) -- (4,1);
\draw [] (0,3) -- (4,3);
\draw [] (1,0) -- (1,4);
\draw [] (3,0) -- (3,4);
\draw[pattern=vertical lines, pattern color=red] (0,1) rectangle (3,4);
\draw[pattern=north east lines, pattern color=green] (0,0) rectangle (1,1);
\draw[pattern=north east lines, pattern color=green] (3,0) rectangle (4,1);
\draw[pattern=north east lines, pattern color=green] (0,3) rectangle (1,4);
\draw[pattern=north east lines, pattern color=green] (3,3) rectangle (4,4);
\draw[pattern=horizontal lines, pattern color=blue] (1,0) rectangle (4,3);
\end{tikzpicture}
\end{center}
\label{fig:cover}
\end{figure}
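For $n=4$ the three boxes in the picture are $\{1,2,3\}\times\{2,3,4\}$ (red), $\{2,3,4\}\times\{1,2,3\}$ (blue) and $\{1,4\}\times\{1,4\}$ (green); the covering property is confirmed by the following minimal Python sketch (ours):
\begin{verbatim}
from itertools import product

red   = ({1, 2, 3}, {2, 3, 4})
blue  = ({2, 3, 4}, {1, 2, 3})
green = ({1, 4},    {1, 4})    # a box but not a brick: {1,4} is no interval

assert all(any(x in bx and y in by for bx, by in (red, blue, green))
           for x, y in product(range(1, 5), repeat=2))
\end{verbatim}
The green box is exactly the kind of set that is unavailable when one is restricted to bricks.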
We will reuse the intermediate partition given in Figure~\ref{fig:secondkp} to obtain a new intermediate $3$-dimensional partition. It consists of three copies of the former stacked on top of each other, arranged so that in each layer the copy of $A_r$ incident to the vertex $Y$ is stretched to form one of the three boxes used to cover a square in Figure~\ref{fig:cover}, and is divided into $k-1$ copies of itself along the third dimension, as shown in Figure~\ref{fig:sixthkp}. This implies that
$$f_d(k,\ldots,k) \le 9f_d(1,1,k-1,k,\ldots,k)+6f_d(1,1,k-2,k,\ldots,k).$$
\begin{proof}[Proof of Theorem~\ref{thm:kpiercingbox}]
The above inequality directly implies $f_d(k,\ldots,k) \le 15 f_{d-2}(k,\ldots,k),$ showing $\gamma_k \le \sqrt{15} \approx 3.87$ and thereby proving Theorem~\ref{thm:kpiercingbox}.
\end{proof}
\begin{figure}
\caption{An intermediate partition based on the above observation. The blue box is labelled $\protect\tvect{1}{1}{k-1}$, red and orange are labelled $\protect\tvect{k-1}{1}{1}$, green is labelled $\protect\tvect{1}{k-1}{1}$ and yellow is labelled $\protect\tvect{1}{k-1}{1}$. All other boxes are in fact bricks.}
\begin{center}
\begin{tikzpicture}
\draw [very thin] (0,0) rectangle (4.5,4.5);
\draw [very thin] (5.5,0) rectangle (10,4.5);
\draw [very thin] (11,0) rectangle (15.5,4.5);
\draw [very thin] (0.9,0) -- (0.9,4.5);
\draw [very thin] (0,3) -- (4.5,3);
\draw [very thin] (1.8,0) -- (1.8,3);
\draw [very thin] (5.5,1.5) -- (10,1.5);
\draw [very thin] (9.1,0) -- (9.1,4.5);
\draw [very thin] (8.2,1.5) -- (8.2,4.5);
\draw [very thin] (12.8,0) -- (12.8,4.5);
\draw [very thin] (13.7,0) -- (13.7,4.5);
\draw [very thin] (11,1.5) -- (15.5,1.5);
\draw [very thin] (13.7,3) -- (15.5,3);
\draw [very thin] (11,2.25) -- (15.5,2.25);
\draw [very thin] (11,3) -- (12.8,3);
\fill[blue,opacity=0.2] (11,0) rectangle (12.8,1.5);
\fill[blue,opacity=0.2] (11,3) rectangle (12.8,4.5);
\fill[blue,opacity=0.2] (13.7,0) rectangle (15.5,1.5);
\fill[blue,opacity=0.2] (13.7,3) rectangle (15.5,4.5);
\fill[red,opacity=0.2] (11,1.5) rectangle (12.8,2.25);
\fill[red,opacity=0.2] (13.7,1.5) rectangle (15.5,2.25);
\fill[green,opacity=0.2] (11,2.25) rectangle (12.8,3);
\fill[green,opacity=0.2] (13.7,2.25) rectangle (15.5,3);
\fill[yellow,opacity=0.2] (12.8,1.5) rectangle (13.7,2.25);
\fill[orange,opacity=0.2] (12.8,0) rectangle (13.7,1.5);
\fill[orange,opacity=0.2] (12.8,2.25) rectangle (13.7,4.5);
\draw [very thin] (0.45,3.75) node[] {\tiny{\scalebox{0.8}{$\Spvek{k-1;1;1}$}}};
\draw [very thin] (2.7,3.75) node[] {\tiny{\scalebox{0.9}{$\Spvek{1;k-1;1}$}}};
\draw [very thin] (0.45,1.5) node[] {\tiny{\scalebox{0.8}{$\Spvek{1;k-1;1}$}}};
\draw [very thin] (1.35,1.5) node[] {\tiny{\scalebox{0.8}{$\Spvek{k-2;1;1}$}}};
\draw [very thin] (3.15,1.5) node[] {\footnotesize{\scalebox{1}{$\Spvek{1;1;k-1}$}}};
\draw [very thin] (6.85,3) node[] {\footnotesize{\scalebox{1}{$\Spvek{1;1;k-1}$}}};
\draw [very thin] (7.3,0.75) node[] {\tiny{\scalebox{1}{$\Spvek{1;k-1;1}$}}};
\draw [very thin] (9.55,0.75) node[] {\tiny{\scalebox{0.8}{$\Spvek{k-1;1;1}$}}};
\draw [very thin] (9.55,3) node[] {\tiny{\scalebox{0.8}{$\Spvek{1;k-1;1}$}}};
\draw [very thin] (8.65,3) node[] {\tiny{\scalebox{0.8}{$\Spvek{k-2;1;1}$}}};
\end{tikzpicture}
\end{center}
\label{fig:sixthkp}
\end{figure}
\subsection{Lower bounds}\label{subseq:2dim}
The lower bound on $p_{\text{brick}}$ given in \eqref{eq:brick-trivial} seemed to have a good chance of actually being the truth. For example, it is tight for all $k$ in two dimensions, as the left image in Figure~\ref{fig:firstkp} shows that $p_{\text{brick}}(2,k) \le 4(k-1),$ which matches the lower bound. In higher dimensions it satisfies the recursive lower bound obtained by the inclusion-exclusion principle through analysing the number of bricks touching faces of dimensions from $0$ to $d-1$; for example, the proof of the lower bound used only faces of dimension $0$ (corners) and dimension $1$ (edges). It turns out, however, that $21 \le p_{\text{brick}}(3,3),$ showing that the bound is not always tight. In fact, exploiting this fact and the aforementioned inclusion-exclusion inequality, one can obtain a lower bound, for $k=3,$ which is better than \eqref{eq:brick-trivial} by a constant factor. We omit further details, as both parts of the argument are quite cumbersome and result in only a very weak improvement.
The case of boxes seems more difficult, even in $2$ dimensions. We conjecture that $p_{\text{box}}(2,k) = 4(k-1)$ ($=p_{\text{brick}}(2,k)$) and show that this is in fact asymptotically correct, as $k$ gets large. To this end we consider the following reduction. Given a partition of $[n]^2$ with the $k$-piercing property we construct an auxiliary graph with one vertex for each box. We colour the edge between two vertices red if there exists a vertical line intersecting both boxes and blue if there exists a horizontal line intersecting both boxes. Since the $k$-piercing constraint requires that any line intersects at least $k$ boxes, we see that every vertex in our auxiliary graph is contained both in a clique of at least $k$ vertices with all edges coloured red and in a clique of at least $k$ vertices with all edges coloured blue. We therefore formulate the following question, which we find interesting in its own right.
\begin{question}\label{qu:graphqn}
Let $k\ge 1$ be an integer. What is the minimal $N$ such that we can colour the edges of a graph on $N$ vertices red and blue such that every vertex belongs to a monochromatic $K_k$ of each colour?
\end{question}
Note that, by the above reduction, the answer to this question is a lower bound for $p_{\text{box}}(2,k).$ We conjecture that this minimal $N$ equals $4(k-1)$. A construction arising from the example in the left image of Figure~\ref{fig:firstkp}, which matches this bound, can be seen in Figure~\ref{fig:graph}. However, we were only able to prove an asymptotic result.
\begin{figure}
\caption{A graph in which every vertex is contained in a red $K_k$ and a blue $K_k$.}
\label{fig:graph}
\begin{center}
\begin{tikzpicture}
\draw [-,name path = u] (0,-1) -- (5,-1);
\draw [-,name path = v] (0,1) -- (5,1);
\draw [-, name path = w] (0,2) -- (5,2);
\draw [-, name path = x] (0,4) -- (5,4);
\draw [-, name path = w] (-1,0) -- (-1,3);
\draw [-, name path = x] (1,0) -- (1,3);
\draw [-, name path = w] (4,0) -- (4,3);
\draw [-, name path = x] (6,0) -- (6,3);
\draw[thick] (0cm,0cm) circle(1cm)[fill=red,opacity=0.4];
\draw[thick] (0cm,3cm) circle(1cm)[fill=blue,opacity=0.4];
\draw[thick] (5cm,0cm) circle(1cm)[fill=blue,opacity=0.4];
\draw[thick] (5cm,3cm) circle(1cm)[fill=red,opacity=0.4];
\draw[thick, name path = A] (0cm,0cm) circle(1cm)[];
\draw[thick, name path = B] (0cm,3cm) circle(1cm)[];
\draw[thick, name path = C] (5cm,0cm) circle(1cm)[];
\draw[thick, name path = D] (5cm,3cm) circle(1cm)[];
\draw [very thin] (0,0) node[] {red $K_{k-1}$};
\draw [very thin] (5,3) node[] {red $K_{k-1}$};
\draw [very thin] (5,0) node[] {blue $K_{k-1}$};
\draw [very thin] (0,3) node[] {blue $K_{k-1}$};
\draw [very thin] (2.5,0) node[] {red edges};
\draw [very thin] (2.5,3) node[] {red edges};
\draw [very thin] (0,1.5) node[] {blue edges};
\draw [very thin] (5,1.5) node[] {blue edges};
\draw[fill=red,opacity=0.4]
([shift={(-90:1cm)}]0,0) arc (-90:90:1cm)
--
([shift={(-90+180:1cm)}]5,0) arc (-90+180:90+180:1cm)
-- cycle;
\draw[fill=red,opacity=0.4]
([shift={(-90:1cm)}]0,3) arc (-90:90:1cm)
--
([shift={(-90+180:1cm)}]5,3) arc (-90+180:90+180:1cm)
-- cycle;
\draw[fill=blue,opacity=0.4]
([shift={(0:1cm)}]0,0) arc (0:180:1cm)
--
([shift={(-180:1cm)}]0,3) arc (-180:0:1cm)
-- cycle;
\draw[fill=blue,opacity=0.4]
([shift={(0:1cm)}]5,0) arc (0:180:1cm)
--
([shift={(-180:1cm)}]5,3) arc (-180:0:1cm)
-- cycle;
\end{tikzpicture}
\end{center}
\end{figure}
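The defining property of this construction can be checked directly; the following minimal Python sketch (ours) builds the colouring on $4(k-1)$ vertices, with red cliques $A, D$, blue cliques $B, C$, red edges drawn horizontally and blue edges vertically as in the figure, and exhibits the required cliques for every vertex:
\begin{verbatim}
from itertools import combinations

k = 6
g = k - 1
A, B = range(0, g), range(g, 2 * g)
C, D = range(2 * g, 3 * g), range(3 * g, 4 * g)

colour = {}
def paint(U, V, col):
    for u in U:
        for v in V:
            if u != v:
                colour[frozenset((u, v))] = col

paint(A, A, 'r'); paint(D, D, 'r'); paint(A, C, 'r'); paint(B, D, 'r')
paint(B, B, 'b'); paint(C, C, 'b'); paint(A, B, 'b'); paint(C, D, 'b')

def mono(s, col):
    return all(colour.get(frozenset(e)) == col for e in combinations(s, 2))

for v in range(4 * g):
    if v in A:   red, blue = set(A) | {C[0]}, set(B) | {v}
    elif v in C: red, blue = set(A) | {v},    set(C) | {D[0]}
    elif v in B: red, blue = set(D) | {v},    set(B) | {A[0]}
    else:        red, blue = set(D) | {B[0]}, set(C) | {v}
    assert len(red) == k and mono(red, 'r')
    assert len(blue) == k and mono(blue, 'b')
\end{verbatim}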
\begin{proposition}\label{prop:graph}
In Question~\ref{qu:graphqn} we have $N\ge (4+o_k(1))k$.
\end{proposition}
\begin{proof}
Let $R$ be the vertex set of a largest red clique and $B$ the vertex set of a largest blue clique in the graph. Note that $|R \cap B| \le 1,$ as each edge can only have one colour. Define $A_0 =R \setminus B$ and $B_0=B \setminus R.$ Let $a_0=|A_0| \ge k-1$ and $b_0=|B_0|\ge k-1.$
In general, let $R$ and $B$ be the vertex sets of a largest red and blue clique on $G \setminus ( A_0 \cup \ldots \cup A_{i-1} \cup B_0 \cup \ldots \cup B_{i-1}),$ respectively. As before, $|R \cap B| \le 1$ and we define $A_i =R \setminus B$ and $B_i=B \setminus R.$ Let $a_i=|A_i|$ and $b_i=|B_i|.$
Any vertex $v$ in $A_0 \cup \ldots \cup A_{i-1}$ belongs to a blue $k$-clique. This clique can have at most one vertex in each of $A_0,A_1, \ldots,A_{i-1},$ one of which is $v$ itself. Similarly, by the choice of $B_i$ we know this clique can have at most $b_i+1$ vertices outside of $A_0 \cup \ldots \cup A_{i-1} \cup B_0 \cup \ldots \cup B_{i-1}.$ This implies that $v$ has blue degree at least $k-1-(i-1)-(b_i+1)=k-i-b_i-1$ towards $B_0 \cup \ldots \cup B_{i-1}.$ An analogous argument shows that any $w \in B_0 \cup \ldots \cup B_{i-1}$ has red degree at least $k-i-a_i-1$ towards $A_0 \cup \ldots \cup A_{i-1}.$
In particular, letting $A=a_0+\ldots+a_{i-1}$ and $B=b_0+\ldots+b_{i-1}$, this implies that
$$AB \ge A(k-i-b_i-1)+B(k-i-a_i-1).$$
Now define $c_{i-1}$ by $A+B=c_{i-1}(k-1);$ note that there are then at least $c_{i-1}(k-1)+a_i+b_i$ vertices in $G.$
Rearranging the previous inequality, we get $$AB+Ab_i+Ba_i \ge (k-i-1)c_{i-1}(k-1).$$
For a fixed $c_{i-1}$ the left-hand side is maximised for $A=c_{i-1}(k-1)/2-(a_i-b_i)/2$ and $B=c_{i-1}(k-1)/2+(a_i-b_i)/2.$ This gives
$$c_{i-1}^2(k-1)^2/4-(a_i-b_i)^2/4+(a_i+b_i)c_{i-1}(k-1)/2+(a_i-b_i)^2/2 \ge (k-i-1)c_{i-1}(k-1)$$
$$\Rightarrow c_{i-1}^2(k-1)^2+(a_i-b_i)^2+2(a_i+b_i)c_{i-1}(k-1)\ge 4(k-i-1)c_{i-1}(k-1)$$
$$\Rightarrow c_{i-1}^2(k-1)^2+(a_i+b_i)^2+2(a_i+b_i)c_{i-1}(k-1)\ge 4(k-i-1)c_{i-1}(k-1)$$
$$\Rightarrow (c_{i-1}(k-1)+a_i+b_i)^2\ge 4(k-i-1)c_{i-1}(k-1).$$
Since $c_i(k-1)=c_{i-1}(k-1)+a_i+b_i,$ we get
$$c_i \ge 2 \sqrt{\frac{k-i-1}{k-1}c_{i-1}}\ge 2^{1+1/2+\ldots+1/2^{i}}\left(\frac{k-i-1}{k-1} \right)^{1/2+1/4+\ldots+1/2^{i}}$$
$$=4 \times 2^{-1/2^{i}}\left(1-i/(k-1)\right)^{1-1/2^{i}}.$$
Choosing $i=\mathcal{O}(\log(k))$ gives the result.
\end{proof}
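The rate at which the recursion approaches $4$ can be seen by iterating it numerically; the following minimal Python sketch (ours) starts from $c_0 = 2$, which is valid since $a_0,b_0 \ge k-1$:
\begin{verbatim}
import math

def c_bound(k, steps):
    c = 2.0                                  # c_0 >= 2
    for i in range(1, steps + 1):
        c = 2 * math.sqrt((k - i - 1) / (k - 1) * c)
    return c

for k in (10**3, 10**6, 10**9):
    i = 2 * int(math.log2(k))                # i = O(log k) iterations
    print(k, round(c_bound(k, i), 4))        # tends to 4 as k grows
\end{verbatim}
Since the graph contains at least $c_i(k-1)$ vertices, this exhibits the claimed bound $N \ge (4+o_k(1))k$.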
Proposition~\ref{prop:2dim} follows immediately from this result, by the above reduction.
Note that Question~\ref{qu:graphqn} generalises naturally to $t$ colours. The proof of Proposition~\ref{prop:graph} can easily be modified to give a lower bound of $(2t+o_k(1))k$ for this generalisation, and the construction in Figure~\ref{fig:graph} can also be modified to give an upper bound of $2t(k-1)$. The lower bound for this question applies to the $k$-piercing question, giving a lower bound of $(2d+o_d(1))k$ in $d$ dimensions, which beats the trivial bound of $d(k-1)$ from the start of the section; this bound is not particularly strong, however, so we omit the full details. It seems that in two dimensions Question~\ref{qu:graphqn} captures the difficulty of the $k$-piercing problem, while the generalised version does not fully capture the difficulties of the higher dimensional piercing problem.
With this in mind we consider the following reduction. Given a $k$-piercing partition in $d$ dimensions, consider the complete graph $K_n$ whose vertices are the boxes. We colour an edge between two boxes with colour $i$ if both boxes are intersected by some $(d-1)$-dimensional hyperplane orthogonal to the $i$-th coordinate axis. This gives a colouring in $d$ colours such that every edge gets at most $d-1$ colours.
Furthermore, every vertex is a part of a monochromatic $K_t$ in each colour, where $t=p_{\text{box}}(d-1,k).$ We shall use this to give the following lower bound.
\begin{theorem}
$$p_{\text{box}}(d,k) \ge e^{\frac{\sqrt{d}}{4}}(k-1).$$
\end{theorem}
\begin{proof}
We consider the complement of the colouring of the $K_n$ described in the previous paragraph. In the complement each edge gets assigned only the colours it was not assigned in the above colouring. As each edge had at most $d-1$ colours, the new colouring assigns at least one colour to each edge. Furthermore, for every vertex $v$ and every colour $c$, $v$ belongs to a set of size $t$ within which there is no edge of colour $c.$
We claim that this implies that for each colour there are at most $(n-t)^2$ edges of this colour. To see this, note that there needs to exist an independent set of size $t$ in this colour and each of the remaining $n-t$ vertices can be incident to at most $n-t$ edges of this colour.
As our new colouring needed to cover all the possible edges at least once, this implies that
\begin{align*}
d & \ge\frac{n(n-1)}{2(n-t)^2} \\
\implies n-1 & \ge \left(1+\frac{1}{\sqrt{2d}-1}\right)(t-1) \\
\implies p_{\text{box}}(d,k)-1 & \ge \left(1+\frac{1}{\sqrt{2d}-1}\right)(p_{\text{box}}(d-1,k)-1).
\end{align*}
This gives
\begin{align*}
p_{\text{box}}(d,k) & \ge \prod_{i=2}^{d}\left(1+\frac{1}{\sqrt{2i}-1}\right)(k-1)+1 \\
& \ge e^{\sum_{i=2}^{d} \frac{1}{2\sqrt{2i}}}(k-1) \\
& \ge e^{\frac{1}{2\sqrt{2}}\sum_{i=2}^{d} \frac{1}{\sqrt{i}}}(k-1) \\
& \ge e^{\frac{\sqrt{d}}{4}}(k-1)
\end{align*}
as claimed.
\end{proof}
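The final chain of estimates can also be confirmed numerically; a minimal Python sketch (ours):
\begin{verbatim}
import math

def growth(d):
    # prod_{i=2}^{d} (1 + 1/(sqrt(2i) - 1))
    p = 1.0
    for i in range(2, d + 1):
        p *= 1 + 1 / (math.sqrt(2 * i) - 1)
    return p

for d in (2, 5, 10, 50, 100):
    assert growth(d) >= math.exp(math.sqrt(d) / 4)
    print(d, round(growth(d), 2), round(math.exp(math.sqrt(d) / 4), 2))
\end{verbatim}
In fact the product grows noticeably faster than $e^{\sqrt{d}/4}$, so the stated bound is convenient rather than sharp.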
\section{Conclusion and open problems}\label{sec:concl}
There are a large number of very interesting questions that remain in this area, and we shall now list just a few.
It remains, of course, to determine the asymptotics of $f_{\text{odd}}$. The most important question seems to be the following.
\begin{question}\label{qu:finalfo}
Is $f_{\text{odd}}(n,d)=(2+o(1))^d$ as $n,d\rightarrow \infty$?
\end{question}
One may also consider the original question of Kearnes and Kiss with a relaxation of the condition that the boxes partition $[n]^d$. In their paper~\cite{leader}, Leader, Mili\'{c}evi\'{c} and Tan ask how many proper boxes are required to form a double cover of $[n]^d$, and specifically whether at least $2^d$ are required. A natural construction involves taking three copies of a partition of $[n]^{d-1}$ and taking the products of these with the sets $\{1,2\}$, $\{2,\dots,n\}$ and $\{1,3,4,\ldots,n\}$ respectively, giving a double cover of size $(3/2)2^d$. We can show that this construction is not best possible (a simulated annealing approach found a double cover of size 11 in $[3]^3$ and Gurobi did even better by finding a construction of size 21 in $[3]^4$), but we have not been able to beat $2^d$ and the question remains open.
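The double-cover property of this construction is easy to verify for small parameters. The following minimal Python sketch (ours) uses the trivial corner partition of $[n]^{d-1}$ as the base partition, which suffices for the check:
\begin{verbatim}
from itertools import product

n, d = 3, 3
# trivial partition of [n]^(d-1) into 2^(d-1) proper boxes
base = list(product([{1}, set(range(2, n + 1))], repeat=d - 1))
# each element of [n] lies in exactly two of these three sets
last = [{1, 2}, set(range(2, n + 1)), {1} | set(range(3, n + 1))]

cover = [box + (s,) for box in base for s in last]
for point in product(range(1, n + 1), repeat=d):
    hits = sum(all(x in side for x, side in zip(point, box))
               for box in cover)
    assert hits == 2              # every point is covered exactly twice
print(len(cover))                 # (3/2) * 2^d = 12 boxes
\end{verbatim}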
Regarding the $k$-piercing problem, there are several possible angles. Again, the most important question concerns improving the lower bound.
\begin{question}\label{qu:finalkp}
Does there exist an $\varepsilon>0$ such that for a fixed $k$ we have $p_{\text{box}}(d,k) \ge (2+\varepsilon)^d$?
\end{question}
The analogous question for $p_{\text{brick}}$ would be a natural first step, interesting in its own right.
Along similar lines is the regime where $d$ is fixed and $k$ is allowed to grow. As discussed in Section~\ref{sec:piercing}, the bound for this problem is always linear in $k$, but finding the constant of linearity seems hard.
\begin{question}\label{qu:finalkp2}
Let $d$ be fixed so that $p_{\text{box}}(d,k)=(C_d+o_k(1))k$. How does $C_d$ grow with $d$? Must $C_d$ be exponential in $d$?
\end{question}
As noted in Section~\ref{sec:piercing}, we are only able to show that $ e^{\frac{\sqrt{d}}{4}}(k-1)\le C_d\le 15^{d/2}$. Proposition~\ref{prop:2dim} shows that $C_2=4$, but finding $C_3$ is already beyond our methods. Answering this question would directly extend Theorem~\ref{thm:alon} and therefore probably requires some interesting new ideas.
To finish, we shall describe one last problem which is of particular interest. We observe that in the $k$-piercing problem the requirement that the boxes $B_i$ partition $[n]^d$ can be dropped without trivialising the question, provided that we maintain the constraint that the $B_i$ are disjoint. In particular, we could ask the following question.
\begin{question}\label{qu:nopartition}
Let $n\ge k$ and $d\ge 1$ be integers. Let $\{B^1,B^2,\dots, B^m\}$ be a collection of disjoint proper boxes in $[n]^d$ with the $k$-piercing property. What lower bounds can be shown for $m$? In particular, do we have $m\ge 2^d$?
\end{question}
When $k=2$ this generalises the original question of Kearnes and Kiss; however, the proof of Theorem~\ref{thm:alon} relies on the $B_i$ forming a partition, and so the same idea cannot be used. Indeed, the authors know of no approach that gives a bound better than $(1+o(1))^d$ for this question, although computer search finds no examples with $m<2^d$.
\section*{Acknowledgements}
We thank Imre Leader and Bhargav Narayanan for useful conversations about this project.
| {
"timestamp": "2018-07-03T02:06:40",
"yymm": "1805",
"arxiv_id": "1805.11278",
"language": "en",
"url": "https://arxiv.org/abs/1805.11278",
"abstract": "Alon, Bohman, Holzman and Kleitman proved that any partition of a $d$-dimensional discrete box into proper sub-boxes must consist of at least $2^d$ sub-boxes. Recently, Leader, Milićević and Tan considered the question of how many odd-sized proper boxes are needed to partition a $d$-dimensional box of odd size, and they asked whether the trivial construction consisting of $3^d$ boxes is best possible. We show that approximately $2.93^d$ boxes are enough, and consider some natural generalisations.",
"subjects": "Combinatorics (math.CO)",
"title": "Partition problems in high dimensional boxes",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9898303434461162,
"lm_q2_score": 0.7981867873410141,
"lm_q1q2_score": 0.7900695018479081
} |
https://arxiv.org/abs/1809.04221 | Constrained optimization as ecological dynamics with applications to random quadratic programming in high dimensions | Quadratic programming (QP) is a common and important constrained optimization problem. Here, we derive a surprising duality between constrained optimization with inequality constraints -- of which QP is a special case -- and consumer resource models describing ecological dynamics. Combining this duality with a recent `cavity solution', we analyze high-dimensional, random QP where the optimization function and constraints are drawn randomly. Our theory shows remarkable agreement with numerics and points to a deep connection between optimization, dynamical systems, and ecology. | \section*{Optimization as ecological dynamics}
We begin by deriving the duality between constrained optimization and ecological dynamics. Consider an optimization problem of the form
\begin{equation}
\begin{aligned}
& \underset{\mathbf R}{\text{minimize}}
& & f({\mathbf R}) \\
& \text{subject to}
& & g_i({\mathbf R}) \leq 0, \; i = 1, \ldots, S.\\
&&& R_\alpha \geq 0, \; \alpha=1, \ldots, M.
\end{aligned}
\end{equation}
where the variables being optimized ${\mathbf R}=(R_1, R_2, \ldots, R_M)$ are constrained to be non-negative. We can introduce a `generalized' Lagrange multiplier $\lambda_i$ for each of the $S$ inequality constraints in our optimization problem. In terms of the $\lambda_i$, we can write a set of conditions collectively known as the Karush-Kuhn-Tucker (KKT) conditions that must be satisfied at any local optimum ${\mathbf R}_\mathrm{min}$ of our problem \cite{boyd2004convex,bertsekas1999nonlinear, bishop:2006:PRML}. We note that for this reason, in the optimization literature the $\lambda_i$ are often called KKT-multipliers rather than Lagrange multipliers. The KKT conditions are:\\\\
\indent {\it Stationarity:} $\nabla_{\mathbf{R}}f({\mathbf R}_\mathrm{min}) +\sum_{j} \lambda_j \nabla_{\mathbf{R}} g_j({\mathbf R}_\mathrm{min})=0$\\
\indent {\it Primal feasibility:} $g_i({\mathbf R}_\mathrm{min}) \leq 0$ \\
\indent {\it Dual feasibility:} $\lambda_i \ge 0$ \\
\indent {\it Complementary slackness:} $\lambda_i\, g_i({\mathbf R}_\mathrm{min})=0$,\\\\
where the last three conditions must hold for all $i=1,\ldots,S$. The KKT conditions have a straightforward and intuitive explanation. At the optimum ${\mathbf R}_\mathrm{min}$, either $g_i({\mathbf R}_\mathrm{min})=0$ and the constraint is active (with $\lambda_i \ge 0$), or $g_i({\mathbf R}_\mathrm{min}) < 0$ and the constraint is inactive ($\lambda_i=0$). In our problem, the KKT conditions must be supplemented with the additional requirement of positivity $R_\alpha \ge 0$.
One can easily show that the four KKT conditions and positivity are also satisfied by the steady states of the following set of differential equations restricted
to the space $\lambda_i, R_\alpha \ge 0$:
\begin{eqnarray}
{d \lambda_i \over dt}&=&\lambda_i g_i({\mathbf R}) \nonumber \\
{d R_{\alpha} \over dt} &=& [-\partial_{R_\alpha} f({\mathbf R}) -\sum_{j} \lambda_j \partial_{R_\alpha} g_j({\mathbf R})] R_\alpha
\label{GCRM1}
\end{eqnarray}
The first of these equations just describes exponential growth of a ``species'' $i$ with a resource-dependent ``growth rate'' $g_i({\mathbf R})$. Species with $g_i({\mathbf R}_\mathrm{min}) < 0$ correspond to constraints that are inactive and go extinct in the ecosystem (i.e.\ $\lambda_{i \, \mathrm{min}} =0$), whereas species with $g_i({\mathbf R}_\mathrm{min})=0$ survive at steady state and correspond to active constraints with $\lambda_{i \, \mathrm{min}} \neq 0$ (see Figure \ref{fig:figure1} for a simple two-dimensional example). The second equation in (\ref{GCRM1}) performs a ``generalized gradient descent'' on the optimization function $f(\mathbf{R}) +\sum_j \lambda_j g_j(\mathbf{R})$ (note the extra factor of $R_\alpha$ in our dynamics compared to the usual gradient descent equations). In the context of ecology, these equations describe the dynamics of a set of resources $ \{ R_\alpha \}$ produced at a rate $-\partial_{R_\alpha} f({\mathbf R}) R_\alpha$ and consumed by individuals of species $j$ at a rate $ \lambda_j \partial_{R_\alpha} g_j({\mathbf R})R_\alpha$.
This suggests a simple dictionary for constructing systems dual to optimization problems with inequality constraints (see Figure \ref{fig:figure1}). The variables are resources whose dynamics are governed by the gradient of the function being optimized. Each inequality is associated with a species through its corresponding Lagrange (KKT) multiplier. Species that survive in the ecosystem correspond to active constraints whereas species that go extinct correspond to inactive constraints. The steady-state values of the resource and species abundances correspond to the local optimum $\mathbf{R}_\mathrm{min}$ and the Lagrange multipliers at the optimum $\{ \lambda_{j \mathrm{\,min}} \}$, respectively. Finally, the values $f({\mathbf R}_\mathrm{min})$ are closely related to Lyapunov functions known to exist in the literature for specific choices of resource dynamics \cite{macarthur1970species, chesson_macarthurs_1990, tikhonov2017collective}.
\begin{figure}
\includegraphics[width=1.0\linewidth]{Figure1RQP.pdf}
\caption{{\bf Constrained optimization with inequality constraints is dual to an ecological dynamical system described by a generalized consumer
resource model (MCRM)}. The variables to be optimized (hexagons) and Lagrange multipliers (ovals) are mapped to resources
and species respectively. Species must consume resources to grow. (Bottom Left) A quadratic programming (QP) problem with two inequality constraints where the unconstrained optimum differs from the constrained optimum. (Bottom Right) Dynamics for MacArthur's Consumer Resource Model that is dual to this QP problem. The steady-state resource/species abundances correspond to the value of variables/Lagrange multipliers at the QP optimum. For this reason, species corresponding to inactive constraints go extinct. }
\label{fig:figure1}
\end{figure}
\section*{Ecological duals of Quadratic Programming (QP)}
For the rest of the paper, we focus on QP where the optimization function is quadratic, $f(\mathbf{R})= {1 \over 2} \mathbf{R}^T Q \mathbf{R} + \mathbf{b}^T \mathbf{R}$, with $Q$ a positive semi-definite matrix, and linear inequality constraints. By going to the eigenbasis of $Q$, we can always rewrite the QP problem as minimizing a square distance
\begin{equation}
\begin{aligned}
& \underset{\mathbf R}{\text{minimize}}
& & \frac{1}{2} ||\mathbf{R}-\mathbf{K}||^2 \\
& \text{subject to}
& & \sum_{\alpha} c_{i \alpha}R_\alpha \leq m_i, \; i = 1, \ldots, S.\\
&&& R_\alpha \geq 0, \; \alpha=1, \ldots, M.
\label{QPeq}
\end{aligned}
\end{equation}
Using (\ref{GCRM1}), we can construct the dual ecological model:
\begin{eqnarray}
{d \lambda_i \over dt}&=&\lambda_i (\sum_\alpha c_{i \alpha} R_\alpha -m_i) \nonumber \\
{d R_{\alpha} \over dt} &=& R_\alpha (K_\alpha -R_\alpha) -\sum_{j} \lambda_j c_{j \alpha} R_\alpha.
\label{GCRM2}
\end{eqnarray}
This is the famous MacArthur Consumer Resource Model (MCRM), which was first introduced by Robert MacArthur and Richard Levins in their seminal papers \cite{macarthur_limiting_1967, macarthur1970species} and has played an extremely important role in theoretical ecology \cite{chesson2000mechanisms, tilman_resource_1982}.
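The duality can be checked numerically on a small instance: integrating (\ref{GCRM2}) to steady state should recover the optimum of (\ref{QPeq}). A minimal Python sketch (ours; all parameter values are arbitrary, and we use SciPy here in place of a dedicated QP solver):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize, LinearConstraint

rng = np.random.default_rng(0)
M, S = 4, 3
K = rng.uniform(1, 2, M)            # carrying capacities / targets
m = rng.uniform(0.5, 1.5, S)        # maintenance costs / thresholds
c = rng.uniform(0, 1, (S, M))       # consumer preferences / constraints

def mcrm(t, z):
    lam, R = z[:S], z[S:]
    dlam = lam * (c @ R - m)              # species growth
    dR = R * (K - R) - R * (c.T @ lam)    # resource dynamics
    return np.concatenate([dlam, dR])

z0 = np.full(S + M, 0.1)
R_ss = solve_ivp(mcrm, (0, 2000), z0, rtol=1e-8).y[S:, -1]

qp = minimize(lambda R: 0.5 * np.sum((R - K) ** 2), np.zeros(M),
              constraints=[LinearConstraint(c, -np.inf, m)],
              bounds=[(0, None)] * M)
print(np.round(R_ss, 4), np.round(qp.x, 4))  # should agree at steady state
\end{verbatim}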
In optimization problems, one often works with the Lagrangian dual of an optimization problem. We show in the appendix that the dual to (\ref{QPeq}) is just
\begin{equation}
\begin{aligned}
& \underset{ \lambda_i}{\text{maximize}}
& & \sum_i \lambda_i [\kappa_i -\frac{1}{2} \sum_j \alpha_{ij} \lambda_j]\\
& \text{subject to}
&& \lambda_i \geq 0,
\label{DualQPeq}
\end{aligned}
\end{equation}
with $\kappa_i = \sum_{\alpha} c_{i \alpha}K_\alpha - m_i$, $\alpha_{ij}=\sum_{\alpha}c_{i \alpha}c_{j \alpha}$, and the sums restricted to those $\alpha$ for which $R_{\alpha \mathrm{\, min}} \neq 0$. It is once again straightforward to check that the local minima of this problem are in one-to-one correspondence with the steady states of the Generalized Lotka-Volterra Equations (GLVs) of the form:
\begin{equation}
{d \lambda_i \over dt}= \lambda_i (\kappa_i -\sum_j \alpha_{ij} \lambda_j)
\label{GLV}
\end{equation}
As with the primal problem, the species in the GLV have a natural interpretation as Lagrange multipliers enforcing inequality constraints. This GLV can also be directly obtained from the MCRM in (\ref{GCRM2}) in the limit where the resource dynamics are extremely fast by setting ${dR_\alpha \over dt}=0$ in the second equation and plugging in the steady-state resource abundances into the first equation \cite{macarthur1970species, chesson_macarthurs_1990} (see Appendix). This shows the Lagrangian dual of QP maps to a dynamical system described by a GLV -- which itself can be derived from the MCRM which is the dynamical dual to the primal optimization problem!
\section*{Random Quadratic Programming (RQP)}
Recently, the MCRM was analyzed in the high-dimensional limit where the number of resources and species in the regional species pool is large ($S,M \gg 1$). In this limit, the resource dynamics were found to be extremely complex, with many resources deviating significantly from their unperturbed values and a large fraction of species in the regional pool going extinct \cite{advani2018statistical}. In terms of the corresponding optimization problem, this suggests that $f({\mathbf R}_\mathrm{min})$ will generically be far from zero and many of the constraints will be inactive.
To better understand this, we analyzed random quadratic programming (RQP) problems in high dimension. In RQP, the parameters in (\ref{QPeq}) are drawn from random distributions (see Figure \ref{fig:fig2}A). We focus on the case where the $K_\alpha$ and $m_i$ are independent random normal variables drawn from Gaussians with means $K$ and $m$ and variances $\sigma_K^2$ and $\sigma_m^2$, respectively. The elements of the constraint matrix $c_{i \alpha}$ are also drawn from Gaussians with mean $\mu_c/M$ and variance $\sigma_c^2/M$ \footnote{We note that this scaling is slightly different from that in \cite{advani2018statistical}, where the elements were chosen to scale with $S$ rather than $M$. This choice does not change the results, but leads to slightly different expressions}. This scaling with $M$ is necessary to ensure that the sum that appears in the inequality constraints in (\ref{QPeq}) has a good thermodynamic limit when $M,S \rightarrow \infty$ with $M/S=\gamma$ held fixed.
We are especially interested in understanding the statistical properties of solutions to the RQP (see Fig. \ref{fig:fig2}A). Among the quantities we examine are the expectation value of the optimized function at the minimum, $ \langle f({\mathbf R}_\mathrm{min})\rangle/M$, the fraction of active constraints, $S^*/S$, the fraction of variables that are non-zero at the optimum, $M^*/M$, as well as the first two moments of $R_{\alpha \,\mathrm{min}}$ and $\lambda_{j \,\mathrm{min}}$ (see Appendix for details).
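A minimal Python sketch (ours) of such a measurement for a single draw; the paper's numerics use CVXOPT, while here we use SciPy, and all parameter values are for illustration only:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize, LinearConstraint

def rqp_stats(M, S, seed, mu_c=1.0, sig_c=1.0, K0=1.0, sig_K=1.0,
              m0=1.0, sig_m=0.1):
    rng = np.random.default_rng(seed)
    K = rng.normal(K0, sig_K, M)
    m = rng.normal(m0, sig_m, S)
    c = rng.normal(mu_c / M, sig_c / np.sqrt(M), (S, M))
    res = minimize(lambda R: 0.5 * np.sum((R - K) ** 2),
                   np.maximum(K, 0),
                   constraints=[LinearConstraint(c, -np.inf, m)],
                   bounds=[(0, None)] * M)
    R = res.x
    active = np.isclose(c @ R, m, atol=1e-6)   # tolerance is heuristic
    return res.fun / M, np.mean(R > 1e-6), np.mean(active)

print(rqp_stats(M=100, S=50, seed=0))   # f/M, M*/M, S*/S for one sample
\end{verbatim}
Averaging over seeds reproduces the kind of curves shown in Figure~\ref{fig:fig2}.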
\begin{figure}[t]
\includegraphics[width=1.0\linewidth]{Fig2RQPfinal.pdf}
\caption{{\bf Random Quadratic Programming (RQP)}. (A) In RQP, the parameters of the quadratic optimization function and inequality constraints are drawn from
a random distribution. Effect of varying the ratio of constraints to variables $S/M$ on (B) the value of the optimization function $f({\mathbf R}_\mathrm{min})/M$, (C) the fraction of non-zero variables $\frac{M^*}{M}$ and (D) the fraction of active constraints $\frac{S^*}{S}$. Cavity solutions are solid lines and shaded region show $\pm1$ standard deviation from 50 independent optimizations of RQP using the CVXOPT package in Python 3 with $M=100$, $\mu_c=1$, $K=1$, $\sigma_K=1$, $m=1$, $\sigma_m=0.1$. Code is available
in supplementary files. }
\label{fig:fig2}
\end{figure}
It is possible to derive a mean-field theory (MFT) for the statistical properties of the optimal solution in the RQP -- or correspondingly the steady-states of the MCRM -- using the cavity method. The basic idea behind the cavity method is to derive self-consistency equations that relate the optimization problem (ecosystem) with $M+1$ variables (resources) and $S+1$ inequality constraints (species) to a problem where a constraint (species) and a variable (resource) have been removed: $(M+1, S+1) \rightarrow (M,S)$ \cite{advani2018statistical}. The need to remove both a constraint and a variable is important for keeping all order one terms in the thermodynamic limit \cite{mezard1989space, ramezanali2015critical}. In what follows, we focus on the replica-symmetric solution.
The cavity equations exploit the observation that the constraint function $\sum_{\alpha=1}^M c_{i \alpha} R_{\alpha}$ is a sum of many random variables $c_{i \alpha}$. When $M \gg 1$, due to the law of large numbers we can model such a sum by a random variable drawn from a Gaussian whose mean and variance involve the statistical quantities described above. Less obvious from the perspective of QP is that we need to introduce a second mean-field quantity $K_\alpha^{eff}$ (see Appendix and \cite{advani2018statistical}). After introducing the Lagrange multipliers that enforce the inequality constraints, the optimization function to be minimized takes the form
\begin{align}
\frac{1}{2} ||\mathbf{R}&-\mathbf{K}||^2 + \sum_j \lambda_j \Big(\sum_\alpha c_{j \alpha}R_\alpha-m_j\Big)\nonumber\\
&= {1 \over 2} \sum_{\alpha} \left\{ R_\alpha [R_\alpha-K_\alpha^{eff}(\lambda)] + K_\alpha [K_\alpha - R_\alpha]\right\} \nonumber,
\end{align}
where we have defined the mean-field variable
$$
K_\alpha^{eff}(\lambda) = K_\alpha -\sum_{j=1}^S \lambda_j c_{j \alpha}.
$$
Since $K_\alpha^{eff}(\lambda)$ is also a sum of many terms containing $c_{i \alpha}$, it can likewise be approximated as a random variable drawn from a Gaussian whose mean and variance are calculated self-consistently.
The full derivation of the replica symmetric mean-field equations is identical to that in \cite{advani2018statistical} and is given in the Appendix. The resulting self-consistent mean-field cavity equations can be solved numerically in Mathematica. Figure \ref{fig:fig2} shows the results of our mean-field equations and comparisons to numerics, where we directly optimize the RQP problem over many independent realizations using the CVXOPT package in Python \cite{andersen2013cvxopt}. Notice the remarkable agreement between our MFT and results from direct optimization even for moderate system sizes with $M=100$. In the Appendix, we show that the cavity solution can also accurately describe the dual MCRM.
Figure \ref{fig:fig2} also shows that the statistical properties of the QP solutions change as we vary the number of constraints $S$ and the variance of the constraint matrix $c_{i \alpha}$. When $S\ll M$, the expectation value of the optimization function $f({\mathbf R}_\mathrm{min})/M$ approaches zero -- the minimum for the unconstrained problem. In this limit, the few constraints that are present are also active. As $S/M$ is increased, the fraction of active constraints quickly drops and $f({\mathbf R}_\mathrm{min})/M$ quickly increases, after which both quantities reach a plateau where they vary very slowly with $S$. The value of the plateau depends on $\sigma_c$. Increasing the variance of the constraints results in more active constraints and a larger value of $f({\mathbf R}_\mathrm{min})$ at the optimum.
These results about RQP can be naturally understood using ideas from ecology. Intuitively, a smaller $\sigma_c$ means more ``redundant'' constraints. In ecology, this is the principle of limiting similarity: species with large niche overlaps (similar $c_{i \alpha}$ ) competitively exclude each other \cite{macarthur_limiting_1967, macarthur1970species, chesson_macarthurs_1990, chesson2000mechanisms, tilman_resource_1982}. In the language of optimization, this ecological intuition suggests that when constraints are similar enough, only the most stringent of these will be active due to an effective competitive exclusion between constraints. Thus, in RQP competitive exclusion becomes a statement about the geometry of how random planes in high dimension repel each other at the corners of simplices. In all cases, increasing $S$ increases the total number of active constraints (species) even though the fraction of active constraints decreases. For this reason, the optimization problem is more constrained for larger $S$ and $f({\mathbf R}_\mathrm{min})/M$ is larger. Finally the plateau in statistical quantities at large $S$ can be understood as arising from what in ecology has been called ``species packing'' -- there is a capacity to the number of distinct species that any ecosystem can typically support \cite{macarthur_limiting_1967, macarthur1970species}.
\section*{Discussion}
In this paper, we have derived a surprising duality between constrained optimization problems and ecologically inspired dynamical systems. We showed that QP (in any dimension) maps to one of the most famous models of ecological dynamics, MacArthur's Consumer Resource Model (MCRM) -- a system of ordinary differential equations describing how species compete for a pool of common resources. By combining this mapping with a recent `cavity solution' to the MCRM, we constructed a mean-field theory for the statistical properties of RQP that showed remarkable agreement with numerical simulations. Intuitions from ecology suggest that the geometry of constrained optimization can be described using a competitive exclusion between constraints, which in our case correspond to random high-dimensional hyperplanes. This work suggests that the deep connection between geometry, ecology, and high-dimensional random ecosystems is a generic property of a large class of generalized consumer resource models \cite{landmann2018phase}. Our work also gives a natural explanation of the existence of Lyapunov functions in these models.
\section{Acknowledgments}
The work was supported by NIH NIGMS grant 1R35GM119461, Simons Investigator in the Mathematical Modeling of Living Systems (MMLS) to PM, and the Scialog Program sponsored jointly by Research Corporation for Science Advancement (RCSA) and the Gordon and Betty Moore Foundation.
| {
"timestamp": "2018-09-13T02:05:56",
"yymm": "1809",
"arxiv_id": "1809.04221",
"language": "en",
"url": "https://arxiv.org/abs/1809.04221",
"abstract": "Quadratic programming (QP) is a common and important constrained optimization problem. Here, we derive a surprising duality between constrained optimization with inequality constraints -- of which QP is a special case -- and consumer resource models describing ecological dynamics. Combining this duality with a recent `cavity solution', we analyze high-dimensional, random QP where the optimization function and constraints are drawn randomly. Our theory shows remarkable agreement with numerics and points to a deep connection between optimization, dynamical systems, and ecology.",
"subjects": "Statistical Mechanics (cond-mat.stat-mech); Disordered Systems and Neural Networks (cond-mat.dis-nn); Optimization and Control (math.OC); Populations and Evolution (q-bio.PE)",
"title": "Constrained optimization as ecological dynamics with applications to random quadratic programming in high dimensions",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9898303401461468,
"lm_q2_score": 0.7981867729389246,
"lm_q1q2_score": 0.790069484958291
} |
https://arxiv.org/abs/2006.01679 | On a Shape Optimization Problem for Tree Branches | This paper is concerned with a shape optimization problem, where the functional to be maximized describes the total sunlight collected by a distribution of tree leaves, minus the cost for transporting water and nutrient from the base of the trunk to all the leaves. In the case of 2 space dimensions, the solution is proved to be unique, and explicitly determined. | \section{Introduction}
\label{s:0}
\setcounter{equation}{0}
In the recent papers \cite{BPS, BSu} two functionals were introduced,
measuring the amount of light collected by the leaves,
and the amount of water and nutrients collected by the roots of a tree.
In connection with a ramified transportation cost \cite{BCM, MMS, X03},
these lead to various optimization problems for tree shapes.
Quite often, optimal solutions to problems involving a
ramified transportation cost exhibit a fractal structure \cite{BCM1, BraS,
BW, DS, MoS, PSX, S1}. In the present note we analyze in more detail the
optimization
problem for tree branches proposed in \cite{BPS}, in the
2-dimensional case. In this simple setting, the unique
solution can be explicitly determined. Instead of being fractal,
its shape is reminiscent of a solar panel.
The present analysis was partially motivated
by the goal of understanding phototropism, i.e., the tendency of plant stems
to bend toward the source of light. Our results indicate that this behavior cannot be explained
purely in terms of maximizing the amount of light collected by the leaves (Fig.~\ref{f:pg45}).
Apparently, other factors must have played a role in the evolution of this trait,
such as the competition among different plants. See \cite{BGRR} for
some results in this direction.
The remainder of this paper is organized as follows. In Section~2 we review the two functionals
defining the shape optimization problem, and state the main results. Proofs are then worked out in
Sections 3 to 5.
\begin{figure}[ht]
\centerline{\hbox{\includegraphics[width=8cm]{pg45-eps-converted-to.pdf}}}
\caption{\small A stem $\gamma_1$ perpendicular to the sun rays is optimally shaped to collect the most
light. For the stem $\gamma_2$ bending toward the light source, the upper leaves put the lower ones in shade.
} \label{f:pg45}
\end{figure}
\vskip 1em
\section{Statement of the main results}
\label{s:1}
\setcounter{equation}{0}
We begin by reviewing the two functionals considered in \cite{BPS, BSu}.
\subsection{A sunlight functional}
Let $\mu$ be a positive, bounded Radon measure on
${\mathbb R}^d_+\doteq\{(x_1,x_2,\ldots, x_d)\,;~x_d\geq 0\}$.
Thinking of $\mu$ as the density of leaves on a tree,
we seek a functional ${\cal S}(\mu)$ describing
the total amount of sunlight absorbed by the leaves.
Fix a unit vector
$${\bf n}~\in~S^{d-1}~\doteq~\{ x\in {\mathbb R}^d\,;~~|x|=1\},$$
and assume that all
light rays come parallel to ${\bf n}$.
Call $E_{\bf n}^\perp$ the $(d-1)$-dimensional subspace perpendicular to ${\bf n}$
and let $\pi_{\bf n}:{\mathbb R}^d\mapsto E_{\bf n}^\perp$ be the perpendicular projection. Each point ${\bf x}\in {\mathbb R}^d$ can thus be expressed
uniquely as
\begin{equation}\label{perp}
{\bf x}~=~{\bf y} + s{\bf n}\end{equation}
with ${\bf y}\in E_{\bf n}^\perp$ and $s\in{\mathbb R}$.
On the perpendicular subspace $E_{\bf n}^\perp$ consider the projected measure $\mu^{\bf n}$, defined by setting
\begin{equation}\label{mupro}\mu^{\bf n}(A)~=~\mu\Big(\bigl\{ x\in{\mathbb R}^d\,;~~\pi_{\bf n}(x)\in A\bigr\}\Big).\end{equation}
Call $\Phi^{\bf n}$ the density of the absolutely continuous part of $\mu^{\bf n}$
w.r.t.~the $(d-1)$-dimensional Lebesgue measure on $E_{\bf n}^\perp$.
\vskip 1em
\begin{definition}
The total amount of sunlight from the direction ${\bf n}$ captured by a measure
$\mu$ on ${\mathbb R}^d$ is defined as
\begin{equation}\label{SSn}
{\cal S}^{\bf n}(\mu)~\doteq~
\int_{E_{\bf n}^\perp}\Big(1- \exp\bigl\{ - \Phi^{\bf n}(y)\bigr\}\Big)
\, dy\,.\end{equation}
More generally,
given an integrable function $\eta\in {\bf L}^1(S^{d-1})$,
the total sunshine absorbed by $\mu$ from all directions
is defined as
\begin{equation}\label{SS2}
{\cal S}^\eta(\mu)~\doteq~
\int_{S^{d-1}}\left(\int_{E_{\bf n}^\perp}\Big(1- \exp\bigl\{ - \Phi^{\bf n}(y)\bigr\}\Big)
\, dy\right) \eta({\bf n})\,d{\bf n}\,.\end{equation}
\end{definition}
\vskip 1em
In the formula (\ref{SS2}), $\eta({\bf n})$ accounts for the intensity of light
coming from the direction ${\bf n}$.
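In $d=2$, where $E_{\bf n}^\perp$ is a line, the functional (\ref{SSn}) is easy to evaluate numerically for an empirical measure; a minimal Python sketch (ours, with made-up leaf positions):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(0, 1, (500, 2))      # leaf positions
w = np.full(500, 0.01)                 # leaf masses

theta = np.pi / 3                      # n = (cos theta, sin theta)
perp = np.array([-np.sin(theta), np.cos(theta)])

y = pts @ perp                         # projection onto E_n^perp
edges = np.linspace(y.min(), y.max(), 101)
mass, _ = np.histogram(y, bins=edges, weights=w)
width = np.diff(edges)
phi = mass / width                     # binned density of mu^n
S = np.sum((1 - np.exp(-phi)) * width) # the integral in (SSn), binned
print(round(S, 4))
\end{verbatim}
Here the binned density stands in for the absolutely continuous part $\Phi^{\bf n}$.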
\begin{remark}\label{r:22} {\rm According to the above definition, the amount of sunlight
${\cal S}^{\bf n}(\mu)$
captured by the measure
$\mu$ only depends on its projection $\mu^{\bf n}$ on the subspace perpendicular to
${\bf n}$. In particular, if a second measure $\widetilde\mu$ is obtained from $\mu$
by shifting some of the mass in a direction parallel to ${\bf n}$, then
${\cal S}(\widetilde\mu) = {\cal S}(\mu)$.}
\end{remark}
\vskip 1em
\subsection{Optimal irrigation patterns}
Consider a positive Radon measure $\mu$ on ${\mathbb R}^d$ with total mass
$M=\mu({\mathbb R}^d)$, and let
$\Theta=[0,M]$.
We think of $\xi\in \Theta$ as a Lagrangian variable, labeling a water particle.
\begin{definition} A measurable map
\begin{equation}\label{iplan}
\chi:\Theta\times {\mathbb R}_+~\mapsto~ {\mathbb R}^d\end{equation}
is called an {\bf admissible irrigation plan}
if
\begin{itemize}
\item[(i)] For every $\xi\in \Theta$, the map
$t\mapsto \chi(\xi,t)$ is Lipschitz continuous.
More precisely, for each $\xi$ there exists a stopping time $T(\xi)$ such that, calling
$$\dot \chi(\xi,t)~=~{\partial\over\partial t} ~\chi(\xi,t)$$
the partial derivative w.r.t.~time, one has
\begin{equation}\label{stime}\bigl|\dot \chi(\xi,t)\bigr|~=~\left\{ \begin{array}{rl} 1\qquad &\quad\hbox{for a.e.}
~t\in \bigl[0, T(\xi)\bigr],\\[3mm]
0\qquad &\quad\hbox{for}
~t> T(\xi).\end{array}\right.\end{equation}
\item[(ii)] At time $t=0$ all particles are at the origin:
$\chi(\xi,0)={\bf 0}$ for all $\xi\in\Theta$.
\item[(iii)] The push-forward of the Lebesgue measure on $[0,M]$ through the map $\xi\mapsto
\chi(\xi,T(\xi))$ coincides with the measure $\mu$.
In other words, for every open set $A\subset{\mathbb R}^d$ there holds
\begin{equation}\label{chi1}\mu(A)~=~\hbox{\rm meas}\Big( \{ \xi\in \Theta\,;~~\chi(\xi,T(\xi))\in A\bigr\}\Big).\end{equation}
\end{itemize}
\end{definition}
One may think of $\chi(\xi,t)$ as the
position of the water particle $\xi$ at time $t$.
To define the corresponding transportation cost, we first compute
how many particles travel through a point $x\in{\mathbb R}^d$.
This is described by
\begin{equation}\label{chi}|x|_\chi~\doteq~\hbox{meas}\Big(\bigl\{\xi\in \Theta\,;~~\chi(\xi,t)= x~~~\hbox{for some}~~t\geq 0\bigr\}\Big).\end{equation}
We think of $|x|_\chi$ as the {\it total flux going through the
point $x$}. Following \cite{G, MMS}, we consider
\vskip 1em
\begin{definition}
{\bf (irrigation cost).}
For a given $\alpha\in [0,1]$,
the total cost of the irrigation plan $\chi$ is
\begin{equation}\label{TCg}
{\cal E}^\alpha(\chi)~\doteq~\int_\Theta\left(\int_0^{T(\xi)} \bigl|\chi(\xi,t)
\bigr|_\chi^{\alpha-1} \, dt\right)
d\xi.\end{equation}
The {\bf $\alpha$-irrigation cost} of a measure $\mu$
is defined as
\begin{equation}\label{Idef}{\cal I}^\alpha(\mu)~\doteq~\inf_\chi {\cal E}^\alpha(\chi),\end{equation}
where the infimum is taken over all admissible irrigation plans for the measure $\mu$.
\end{definition}
\begin{remark} {\rm Sometimes it is convenient to consider more general
irrigation plans where, in place of (\ref{stime}), for a.e.~$t\in [0,T(\xi)]$
the speed satisfies $|\dot\chi(\xi,t)|\leq 1$.
In this case, the cost (\ref{TCg}) is replaced by
\begin{equation}\label{TC2}
{\cal E}^\alpha(\chi)~\doteq~\int_\Theta\left(\int_0^{T(\xi)} \bigl|\chi(\xi,t)
\bigr|_\chi^{\alpha-1} \,|\dot\chi(\xi,t)|\, dt\right)
d\xi.\end{equation}
Of course, one can always re-parameterize each trajectory
$t\mapsto \chi(\xi,t)$ by arc-length, so that (\ref{stime}) holds.
This does not affect the cost (\ref{TC2}).
}
\end{remark}
\vskip 1em
\begin{remark} {\rm In the case $\alpha=1$, the expression (\ref{TCg}) reduces to
$$
{\cal E}^\alpha(\chi)~\doteq~\int_\Theta\left(\int_{{\mathbb R}_+} |\dot \chi(\xi,t)|\, dt\right)
d\xi~=~\int_\Theta[\hbox{total length of the path} ~\chi(\xi,\cdot)]\, d\xi\,.$$
Of course, this length is minimal if every path $\chi(\xi,\cdot)$
is a straight line, joining the origin with $\chi(\xi, T(\xi))$. Hence
$${\cal I}^\alpha(\mu)~\doteq~\inf_\chi {\cal E}^\alpha(\chi)~=~\int_\Theta |\chi(\xi, T(\xi))|\, d\xi~=~\int |x|\, d\mu\,.$$
On the other hand, when $\alpha<1$, moving along a path which is traveled by few other particles
comes at a high cost. Indeed, in this case the factor $\bigl|\chi(\xi,t)
\bigr|_\chi^{\alpha-1}$ becomes large. To reduce the total cost, it is thus convenient
that many particles travel along the same path.
}\end{remark}
\vskip 1em
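This trade-off is easy to observe numerically. In the sketch below (our illustration,
with arbitrary targets, masses, and trunk length), we use the fact that a pipe carrying
a constant flux $m$ over a length $L$ contributes $m^\alpha L$ to (\ref{TCg}), and
compare two separate straight paths from the origin with a Y-shaped plan sharing a trunk.
\begin{verbatim}
import numpy as np

# Illustration (ours): a pipe carrying flux m over length L contributes
# m^alpha * L to E^alpha(chi), since each of the m particles pays
# m^(alpha - 1) per unit length.  Compare straight paths with a Y-plan.

def cost(alpha, branch_pt, targets, masses):
    """Trunk from the origin to branch_pt, then a segment to each target."""
    b = np.asarray(branch_pt, float)
    trunk = sum(masses) ** alpha * np.linalg.norm(b)
    limbs = sum(m ** alpha * np.linalg.norm(np.asarray(tg) - b)
                for m, tg in zip(masses, targets))
    return trunk + limbs

targets, masses, alpha = [(1.0, 0.2), (1.0, -0.2)], [1.0, 1.0], 0.5
straight = cost(alpha, (0.0, 0.0), targets, masses)  # two separate paths
shared   = cost(alpha, (0.7, 0.0), targets, masses)  # shared trunk, then split
print(straight, shared)   # for alpha = 0.5 the shared trunk is cheaper
\end{verbatim}
Re-running the script with $\alpha=1$ makes the straight plan the cheaper one, in
agreement with the previous remark.
\vskip 1em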
For the basic theory of ramified transport we refer
to
the monograph \cite{BCM}.
For future use, we recall that optimal irrigation plans
satisfy
\vskip 1em
{\bf Single Path Property:} {\it If $\chi(\xi, \tau)=\chi(\xi',\tau')$ for some
$\xi, \xi'\in\Theta$ and
$0<\tau\leq \tau'$, then
\begin{equation}\label{SPP}\chi(\xi, t)~=~\chi(\xi', t)\qquad\hbox{for all}~ t\in [0, \tau].\end{equation}
}
\vskip 1em
\subsection{The general optimization problem for branches.}
Combining the two functionals (\ref{SS2}) and (\ref{Idef}),
one can formulate an optimization problem for the shape of branches:
\vskip 1em
\begin{itemize}
\item[{\bf (OPB)}] Given a light intensity function $\eta\in {\bf L}^1(S^{d-1})$
and two constants $c>0$, $\alpha\in [0,1]$, find a positive measure $\mu$
supported on ${\mathbb R}^d_+$ that maximizes the payoff
\begin{equation}\label{poff}{\cal S}^\eta(\mu)-c\,{\cal I}^\alpha(\mu).\end{equation}
\end{itemize}
\vskip 1em
\subsection{Optimal branches in dimension $d=2$.}
We consider here the optimization problem for branches, in the planar case $d=2$.
We assume that the sunlight comes from
a single direction ${\bf n}= (\cos\theta_0,
\sin\theta_0)$, so that the sunlight functional takes the form (\ref{SSn}).
Moreover, as irrigation cost we take (\ref{Idef}), for some fixed $\alpha\in \, ]0,1]$.
For a given constant $c>0$, this leads to the problem
\begin{equation}\label{maxb}
\hbox{maximize:}\quad {\cal S}^{\bf n}(\mu) -c{\cal I}^\alpha(\mu),\end{equation}
over all positive measures $\mu$ supported on the half space ${\mathbb R}^2_+\doteq
\{x=(x_1,x_2)\,;~~x_2\geq 0\}$.
To fix the ideas, we shall assume that $0<\theta_0<\pi/2$.
Our main goal is to prove that for this problem the ``solar panel" configuration shown in Fig.~\ref{f:ir100}
is optimal, namely:
\begin{theorem}\label{t:1} Assume that $0<\theta_0\leq \pi/2$ and $1/2 \leq \alpha \leq 1$. Then the optimization problem (\ref{maxb})
has a unique solution. The optimal measure is supported along two rays, namely
\begin{equation}\label{supm}
\hbox{\rm Supp}(\mu)~\subset~\Big\{ (r\cos\theta, r\sin\theta)\,;~~r\geq 0,~~
\hbox{either}~\theta=0 ~\hbox{or}~\theta =\theta_0 +{\pi\over 2}\Big\}~
\doteq~\Gamma_0\cup\Gamma_1\,.\end{equation}
When $0<\alpha<1/2$, the same conclusion holds provided that the angle $\theta_0$ satisfies
\begin{equation}\label{bigan} \cos\left( {\pi\over 2}-\theta_0\right) ~\geq~1-2^{2\alpha-1} .
\end{equation}
\end{theorem}
\vskip 1em
\begin{figure}[ht]
\centerline{\hbox{\includegraphics[width=8cm]{ir100-eps-converted-to.pdf}}}
\caption{\small When the light rays impinge from a fixed direction ${\bf n}$, the optimal distribution of leaves is supported on the two
rays $\Gamma_0$ and $\Gamma_1$. } \label{f:ir100}
\end{figure}
In the case $\alpha=1$ the result is straightforward.
Indeed, for any measure $\mu$ we
can consider its projection $\widetilde \mu$ on $\Gamma_0\cup\Gamma_1$,
obtained by shifting the mass in the direction parallel to the vector ${\bf n}$.
In other words, for $x\in {\mathbb R}^2$ call $\phi^{\bf n}(x)$ the unique point in
$\Gamma_0\cup\Gamma_1$ such that $\phi^{\bf n}(x)-x$ is parallel to ${\bf n}$.
Then let $\widetilde\mu$ be the push-forward of the measure $\mu$ w.r.t.~$\phi^{\bf n}$.
Since this projection satisfies $|\phi^{\bf n}(x)|\leq |x|$ for every $x\in {\mathbb R}^2_+$,
the transportation cost decreases. On the other hand, by Remark~\ref{r:22}
the sunlight captured remains the same.
We conclude that
$${\cal S}^{\bf n}(\widetilde \mu) - {\cal I}^1(\widetilde\mu)~\geq~{\cal S}^{\bf n}( \mu) - {\cal I}^1(\mu),$$
with strict inequality if $\mu$ is not supported on $\Gamma_0\cup\Gamma_1$.
In the case $0<\alpha<1$, the result is not so obvious.
Indeed, we do not expect that the conclusion
holds if the hypothesis (\ref{bigan}) is removed. A proof of Theorem~\ref{t:1}
will be worked out in Sections 3 and 4.
Having proved that the optimal measure $\mu$ is supported on the two rays
$\Gamma_0\cup\Gamma_1$, the density
of $\mu$ w.r.t.~one-dimensional measure can then be determined
using the necessary conditions derived in \cite{BGRR}.
Indeed, the density $u_1$ of $\mu$ along the ray $\Gamma_1$ provides a solution
to the scalar optimization problem
\begin{equation}\label{max1}
\hbox{maximize:}~~{\cal J}_1(u)~\doteq~\int_0^{+\infty} \bigl(1-e^{-u(s)}\bigr) \, ds - c \int_0^{+\infty}
\left(\int_s^{+\infty} u(r)\, dr\right)^\alpha ds\,,\end{equation}
among all non-negative functions $u:{\mathbb R}_+\mapsto{\mathbb R}_+$.
Here $s$ is the arc-length variable along $\Gamma_1$.
Similarly, the density
$u_0$ of $\mu$ along the ray $\Gamma_0$ provides a solution
to the problem
\begin{equation}\label{max0}
\hbox{maximize:}~~{\cal J}_0(u)~\doteq~ \int_0^{+\infty} \sin\theta_0\,\bigl(1-e^{-u(s)/\sin\theta_0}\bigr) \, ds - c \int_0^{+\infty}
\left(\int_s^{+\infty} u(r)\, dr\right)^\alpha ds\,.\end{equation}
We write (\ref{max1}) in the form
\begin{equation}\label{max11}
\hbox{maximize:}~~{\cal J}_1(u)~\doteq~\int_0^{+\infty} \Big[\bigl(1-e^{-u(s)}\bigr) -
c z^\alpha \Big]ds\,,\end{equation}
subject to
\begin{equation}\label{zdot}\dot z~=~-u, \qquad z(+\infty)\,=\,0.\end{equation}
The necessary conditions for optimality (see for example \cite{BP, Cesari})
now
yield
\begin{equation}\label{us}u(s)~=~\hbox{arg}\!\max_{\omega\geq 0} \Big\{- e^{-\omega} -
\,\omega q(s)\Big\}~=~-\ln q(s),\end{equation}
where the dual variable $q$ satisfies
\begin{equation}\label{qdot}\dot q~=~c\alpha z^{\alpha-1},\qquad \qquad q(0)=0.\end{equation}
Notice that, by (\ref{us}), $u>0$ only if $q<1$.
Combining (\ref{zdot}) with (\ref{qdot}) one obtains an ODE for the
function $q\mapsto z(q)$, with $q \in [0,1]$. Namely
\begin{equation}\label{dzq} {dz(q) \over dq}~ =~ {z^{1-\alpha} \ln{q} \over c \alpha }, \qquad
\qquad z(1) = 0. \end{equation}
This equation admits the explicit solution
\begin{equation}\label{zq} z(q) ~= ~c^{-1/\alpha} \left[ 1 + q \ln{q} - q \right]^{1/\alpha}. \end{equation}
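As a quick consistency check (ours, not part of the original derivation), one can
verify that (\ref{zq}) satisfies (\ref{dzq}) together with the terminal condition;
the short sympy sketch below evaluates the residual of the ODE at a few sample
values of $q$, with illustrative choices of $c$ and $\alpha$.
\begin{verbatim}
import sympy as sp

# Consistency check (ours): the explicit formula (zq) satisfies the ODE
# (dzq) and the terminal condition z(1) = 0.  We evaluate the residual
# dz/dq - z^{1-alpha} ln(q) / (c alpha) at a few sample points.
q, c, a = sp.symbols('q c alpha', positive=True)
z = c**(-1/a) * (1 + q*sp.log(q) - q)**(1/a)
residual = sp.diff(z, q) - z**(1 - a) * sp.log(q) / (c * a)

print(z.subs(q, 1))    # terminal condition: prints 0
for qv in (0.1, 0.5, 0.9):
    # each value is numerically zero (machine precision)
    print(residual.subs({q: qv, c: 2.0, a: 0.6}).evalf())
\end{verbatim}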
Inserting (\ref{zq}) in (\ref{qdot}), we obtain an implicit equation for
$q(s)$:
\begin{equation}\label{qs} s~=~ {1 \over \alpha c^{1/\alpha} }\int_0^{q(s)}
\left[ 1 + t \ln{t} - t \right]^{1-\alpha \over \alpha} dt. \end{equation}
In turn, the density $u(s)$ of the optimal
measure $\mu$ along $\Gamma_1$, as a function
of the arc-length $s$, is recovered from (\ref{us}).
Notice that this measure is supported only on an initial interval $[0,\ell_1]$,
determined by
$$\ell_1~=~{1 \over \alpha c^{1/\alpha} }\int_0^1
\left[ 1 + s \ln{s} - s \right]^{1-\alpha \over \alpha} ds. $$
The density of the optimal measure along the ray $\Gamma_0$ is computed
in an entirely similar way. In this case, the equations (\ref{us}) and (\ref{qs}) are replaced respectively by
$$ u(s) ~=~ -(\sin \theta_0) \ln q(s), $$
$$s~=~ {(\sin \theta_0)^{1-\alpha \over \alpha} \over \alpha c^{1/\alpha} }
\int_0^{ q(s)}
\left[ 1 + t \ln{t} - t \right]^{1-\alpha \over \alpha} dt.$$
Again, the condition $u(s)>0$ implies $q(s)<1$.
Along $\Gamma_0$, the optimal measure $\mu$ is supported on an initial
interval $[0, \ell_0]$, where
$$\ell_0~=~{(\sin \theta_0)^{1-\alpha \over \alpha} \over \alpha c^{1/\alpha} }\int_0^1
\left[ 1 + s \ln{s} - s \right]^{1-\alpha \over \alpha} ds. $$
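The above formulas are straightforward to evaluate numerically. The following sketch
(ours; the values of $\alpha$ and $c$ are illustrative) tabulates $s(q)$ from
(\ref{qs}) by cumulative trapezoidal quadrature and recovers the density
$u(s)=-\ln q(s)$ along $\Gamma_1$, together with the support length $\ell_1$; the
corresponding quantities along $\Gamma_0$ are obtained by inserting the $\sin\theta_0$
factors as indicated above.
\begin{verbatim}
import numpy as np

# Numerical sketch (ours; alpha, c are illustrative) of the optimal
# density along Gamma_1: tabulate s(q) from (qs), then u(s) = -ln q(s).
alpha, c = 0.7, 1.0

q = np.linspace(0.0, 1.0, 2001)
t = np.maximum(q, 1e-300)                    # avoid log(0) at t = 0
vals = (1.0 + t * np.log(t) - t) ** ((1 - alpha) / alpha)
vals[0] = 1.0                                # integrand -> 1 as t -> 0

pieces = (vals[1:] + vals[:-1]) / 2 * (q[1] - q[0])
s = np.concatenate(([0.0], np.cumsum(pieces))) / (alpha * c ** (1 / alpha))

ell1 = s[-1]                                 # mu is supported on [0, ell1]
u = -np.log(np.maximum(q, 1e-12))            # u(s) = -ln q(s)
print("ell_1 =", ell1)
print("u(ell_1/2) =", np.interp(ell1 / 2, s, u))
# For Gamma_0: multiply s by (sin theta_0)^((1-alpha)/alpha), u by sin theta_0.
\end{verbatim}
\vskip 1em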
\subsection{The case $\alpha=0$.}
In the analysis of the optimization problem {\bf (OPB)},
the case $\alpha =0$ stands apart. Indeed, the general theorem on the
existence of an optimal shape proved in \cite{BPS} does not cover this case.
When $\alpha=0$,
a measure $\mu$ is irrigable only if it is concentrated on a set
of dimension $\leq 1$. When this happens, in any dimension $d\geq 3$ we
have ${\cal S}^\eta(\mu)=0$
and the optimization problem is trivial.
The only case of interest occurs in dimension $d=2$.
In the following, $\langle\cdot, \cdot\rangle$ denotes the inner product in ${\mathbb R}^2$.
\vskip 1em
\begin{theorem}\label{t:2} Let $\alpha=0$, $d=2$.
Let $\eta\in {\bf L}^1(S^1)$ and
define
\begin{equation}\label{a9}K~\doteq~\max_{|{\bf w}| = 1} ~ \int_{{\bf n}\in S^1}
\Big|\langle {\bf w}, {\bf n}\rangle\Big|\, \eta({\bf n})\, d{\bf n}.\end{equation}
\begin{itemize}
\item[(i)] If $K>c$, then the optimization problem
{\bf (OPB)}
has no solution, because the supremum of all
possible payoffs is $+\infty$.
\item[(ii)] If $K\leq c$, then the maximum payoff is zero, which is trivially
achieved by the zero measure.
\end{itemize}
\end{theorem}
A proof will be given in Section~5.
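For orientation, the constant $K$ in (\ref{a9}) is easy to approximate numerically.
The sketch below (our example; the intensity $\eta$ is not taken from the text)
evaluates $K$ for uniform light of unit intensity from the upper half-circle, for
which one checks by hand that $K=2$ for every unit vector ${\bf w}$.
\begin{verbatim}
import numpy as np

# Sketch (ours) of the constant K in (a9) for an illustrative intensity:
# uniform light eta = 1 from the upper half-circle.
phis = np.linspace(0.0, np.pi, 2000)      # directions n = (cos phi, sin phi)
dphi = phis[1] - phis[0]
eta = np.ones_like(phis)

K = max(
    (np.abs(np.cos(phis - psi)) * eta * dphi).sum()  # int |<w,n>| eta dn
    for psi in np.linspace(0.0, np.pi, 500)          # w = (cos psi, sin psi)
)
print("K approx", K)   # close to the exact value 2
\end{verbatim}
\vskip 1em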
\section{Properties of optimal branch configurations}
\label{s:3}
\setcounter{equation}{0}
In this section we consider the optimization problem (\ref{maxb}) in dimension
$d=2$. As a step toward the proof of Theorem~\ref{t:1},
some properties of optimal branch configurations will be derived.
By the result in \cite{BPS} we know that an optimal measure
$\mu$ exists and has bounded support, contained in
${\mathbb R}^2_+~\doteq~\{(x_1,x_2)\,;~~x_2\geq 0\}$.
Call $M=\mu({\mathbb R}^2_+)$ the total mass of $\mu$ and let $\chi:[0,M]\times{\mathbb R}_+\mapsto{\mathbb R}^2_+$
be an optimal
irrigation plan for $\mu$.
\begin{figure}[ht]
\centerline{\hbox{\includegraphics[width=8cm]{ir115-eps-converted-to.pdf}}}
\caption{\small According to the definition~(\ref{cpm}),
the set $\chi^-(x)$ is a curve joining the origin to the point $x$. The set $\chi^+(x)$
is a subtree, containing all paths that start from $x$.}
\label{f:ir115}
\end{figure}
Next, consider the set of all branches, namely
\begin{equation}\label{brs}{\cal B}~\doteq~\{ x\in{\mathbb R}^2_+\,;~~|x|_\chi>0\}.\end{equation}
By the single path property, we can introduce a partial ordering among points in ${\cal B}$. Namely,
for any $x,y\in {\cal B}$ we say that
$x\preceq y$ if for any $\xi\in [0,M]$ we have the implication
\begin{equation}\label{prec}\chi(\xi,t)\,=\,y\qquad\Longrightarrow\qquad \chi(\xi,t')\,=\,x\qquad\hbox{for some}~~t'\in [0,t].\end{equation}
This means that all particles that reach the point $y$ pass through $x$ before
getting to $y$.
For a given $x\in {\cal B}$ the subsets of points $y\in {\cal B}$ that precede or follow $x$ are defined as
\begin{equation}\label{cpm}\chi^-(x)~\doteq~\{ y\in {\cal B}\,;~~y\preceq x\},\qquad\qquad \chi^+(x)~\doteq~\{y \in {\cal B}\,;~~x\preceq y\},\end{equation}
respectively (see Fig.~\ref{f:ir115}).
\begin{figure}[ht]
\centerline{\hbox{\includegraphics[width=12cm]{ir116-eps-converted-to.pdf}}}
\caption{\small If the set $\chi^+(x)$ is not contained in the slab $\Gamma_x$ (the shaded region),
by taking the
perpendicular projections $\pi^\sharp$ and $\pi^\flat$ we obtain another irrigation plan with strictly lower cost,
which irrigates a new measure $\widetilde\mu$ gathering exactly the same amount of sunlight.
Notice that here $P$ is the point in the closed set $\overline{\chi^+(x)}\cap{\mathbb R}{\bf e}_1$ which has the
largest inner product with ${\bf n}$. }
\label{f:ir116}
\end{figure}
\vskip 1em
We begin by deriving some properties of the sets $\chi^+(x)$.
Introducing the unit vectors
${\bf e}_1 = (1,0)$, ${\bf e}_2=(0,1)$, we denote by ${\mathbb R}{\bf e}_1$ the set of points on the $x_1$-axis.
As before, ${\bf n}= (\cos\theta_0, \sin\theta_0)$ denotes the unit vector in the direction of the sunlight.
Throughout the following, the closure of a set $A$ is denoted by $\overline A$, while
$\langle \cdot, \cdot\rangle$ denotes an inner product.
\begin{lemma}\label{l:2}
Let the measure $\mu$ provide an optimal solution to the problem (\ref{maxb}), and let
$\chi$ be an optimal irrigation plan for $\mu$. Then, for every $x\in {\cal B}$, one has
\begin{equation}\label{slab} \chi^+(x)~\subset~
\Gamma_x~\doteq~\Big\{ y\in {\mathbb R}^2_+\,;~~ \langle {\bf n}, y\rangle \,\in \,[a_x, b_x]\Big\},\end{equation}
where
$a_x~\doteq~ \langle {\bf n}, x\rangle$, while $b_x$ is defined as follows.
\begin{itemize}
\item If $\overline {\chi^+(x)}\cap {\mathbb R}{\bf e}_1=\emptyset$, then $b_x = a_x=\langle {\bf n}, x\rangle$.
\item If $\overline {\chi^+(x)}\cap {\mathbb R}{\bf e}_1\not=\emptyset$, then
$$b_x~=~\max~\{ a_x, b'_x\},\qquad\qquad b'_x~\doteq~\sup\Big\{ \langle {\bf n}, z\rangle\,;~~
z\in \overline{\chi^+(x)}\cap {\mathbb R}{\bf e}_1\Big\}.$$
\end{itemize}
\end{lemma}
{\bf Proof.} The right hand side of (\ref{slab}) is illustrated in Fig.~\ref{f:ir116}.
To prove the lemma, consider the set of all particles that pass through $x$, namely
$$\Theta_x~\doteq~\bigl\{ \xi\in [0,M]\,;~~\chi(\xi,\tau)=x~~\hbox{for some }~\tau\geq 0\bigr\}.$$
\vskip 1em
{\bf 1.}
We first show that, by the optimality of the solution,
\begin{equation}\label{bsu} \langle {\bf n}\,,\,\chi(\xi,t)\rangle~\geq~a_x\qquad\qquad\hbox{for all}~ ~\xi\in \Theta_x\,,~t\geq\tau.\end{equation}
Indeed, consider the perpendicular
projection on the half plane
$$\pi^\sharp:{\mathbb R}^2~\mapsto~ S^\sharp
~\doteq~\{y\in{\mathbb R}^2\,;~\langle {\bf n}, y\rangle~\geq~a_x\}.$$
Define the projected irrigation plan
$$\chi^\sharp(\xi,t)~\doteq~\left\{ \begin{array}{rl} \pi^\sharp\circ \chi(\xi,t)\qquad &\hbox{if} ~~\xi\in \Theta_x\,,~~t\geq \tau,
\\[3mm] \chi(\xi,t)\qquad &\hbox{otherwise.}\end{array}\right.
$$
Then the new measure $\mu^\sharp$ irrigated by $\chi^\sharp$ is still supported on ${\mathbb R}^2_+$ and
has exactly the same projection on
$E^\perp_{\bf n}$ as $\mu$. Hence it gathers the same amount of sunlight. However,
if the two irrigation plans do not coincide a.e., then the cost of $\chi^\sharp$ is strictly smaller than
the cost of $\chi$,
contradicting the optimality assumption.
\vskip 1em
{\bf 2.}
Next, we show that
\begin{equation}\label{bsw} \langle {\bf n}\,,\,\chi(\xi,t)\rangle~\leq~b_x\qquad\qquad\hbox{for all}~ ~\xi\in \Theta_x\,,~t\geq\tau.\end{equation}
Indeed, call
$$b''~\doteq~\sup~\Big\{ \langle {\bf n}, z\rangle\,;~~
z\in \chi^+(x)\Big\}.$$
If $b''\leq b_x$, we are done. In the opposite case, by a continuity and compactness argument
we can find $\delta>0$ such that the following holds.
Introducing the perpendicular
projection on the half plane
$$\pi^\flat:{\mathbb R}^2~\mapsto~ S^\flat
~\doteq~\{y\in{\mathbb R}^2\,;~\langle {\bf n}, y\rangle~\leq~b''-\delta\},$$
one has
\begin{equation}\label{ag}\bigl\{ \pi^\flat(y)\,;~~y\in \chi^+(x)\bigr\}
~\subseteq~{\mathbb R}^2_+\,.\end{equation}
Similarly as before, define the projected irrigation plan
$$\chi^\flat(\xi,t)~\doteq~\left\{ \begin{array}{rl} \pi^\flat\circ \chi(\xi,t)\qquad &\hbox{if} ~~\xi\in \Theta_x\,,~~t\geq \tau,
\\[3mm] \chi(\xi,t)\qquad &\hbox{otherwise.}\end{array}\right.
$$
Then the new measure $\mu^\flat$ irrigated by $\chi^\flat$ is supported on ${\mathbb R}^2_+\cap S^\flat$ and has exactly the same projection on
$E^\perp_{\bf n}$ as $\mu$. Hence it gathers the same amount of sunlight. However,
if the two irrigation plans do not coincide a.e., then the cost of $\chi^\flat$ is strictly smaller than
the cost of $\chi$,
contradicting the optimality assumption.
This completes the proof of the Lemma.
\hphantom{MM}\hfill\llap{$\square$}\goodbreak
\begin{figure}[ht]
\centerline{\hbox{\includegraphics[width=10cm]{ir123-eps-converted-to.pdf}}}
\caption{\small After a rotation of coordinates, the sunlight comes from the vertical direction.
Here the blue lines correspond to the set ${\cal B}^*$ in \eqref{B*}.}
\label{f:ir123}
\end{figure}
\vskip 1em
Based on the previous lemma, we now consider the set
\begin{equation}\label{B*}{\cal B}^*~\doteq~\{ x\in {\cal B}\,;~~\overline{\chi^+(x)}\cap{\mathbb R}{\bf e}_1\not=\emptyset\}.\end{equation}
It will be convenient to rotate coordinates by an angle of $\pi/2 - \theta_0$, and choose
new coordinates $(z_1, z_2)$ oriented as in Fig.~\ref{f:ir123}.
In these new coordinates, the direction of sunlight becomes vertical, while
the positive $x_1$-axis corresponds to the line
\begin{equation}\label{bfS} {\bf S}~\doteq~\bigl\{(z_1, z_2)\,;~~z_1\geq 0\,,\quad z_2 =
-\lambda z_1\bigr\},
\qquad\hbox{where}\quad \lambda = \tan\theta_0\,.\end{equation}
Calling $\bigl(z_1(\xi,t), z_2(\xi,t)\bigr)$ the corresponding coordinates of the point $\chi(\xi,t)$,
from Lemma~\ref{l:2} we immediately obtain
\begin{corollary}\label{c:2} Let $\chi$ be an optimal irrigation plan for a solution to (\ref{maxb}). Then
\begin{itemize}
\item[(i)] For every $\xi\in [0,M]$, the map $t\mapsto z_1(\xi,t)$ is non-decreasing.
\item[(ii)] If $\bar z=(\bar z_1,\bar z_2)\notin {\cal B}^*$, then $\chi^+(\bar z)$
is contained in a horizontal line. Namely,
\begin{equation}\label{hor}\chi^+(\bar z)\subset \{ (\bar z_1, s)\,;~~s\in{\mathbb R}\}.\end{equation}
\end{itemize}
\end{corollary}
To make further progress, we define
$$z_1^{\max}~\doteq~\sup\,\bigl\{ z_1\,;~(z_1,\,z_2)\in {\cal B}^*\bigr\}.$$
Moreover, on the interval $[0, z_1^{\max}[\,$ we consider the function
\begin{equation}\label{phid}
\varphi(z_1)~\doteq~\sup\,\bigl\{ s\,;~~(z_1,s)\,\in\,{\cal B}^*\bigr\}.\end{equation}
\begin{figure}[ht]
\centerline{\hbox{\includegraphics[width=12cm]{ir125-eps-converted-to.pdf}}}
\caption{\small The construction used in the proof of Lemma~\ref{l:33}. }
\label{f:ir125}
\end{figure}
\begin{lemma}\label{l:33}
For every $z_1\in [0, z_1^{\max}[\,$, the supremum $\varphi(z_1)$ is attained as a maximum.
\end{lemma}
{\bf Proof.}
{\bf 1.} Assume that, on the contrary, for some $\bar z_1$ the supremum is not a maximum.
In this case, as shown in Fig.~\ref{f:ir125}, there exists a sequence of points
$P_n\to P$ with $P_n = (\bar z_1, s_n)$, $P=(\bar z_1, \bar z_2)$, $s_n\uparrow \bar z_2$.
Here $P_n\in {\cal B}^*$ for every $n\geq 1$ but $P\notin {\cal B}^*$.
\vskip 1em
{\bf 2.} Choose two values $a,b$ such that
$$-\lambda \bar z_1~ <~b~<~a~<~\varphi(\bar z_1).$$
By construction, for every $n\geq 1$ the set $\overline {\chi^+(P_n)}$ intersects
${\bf S}$.
Therefore we can find points
$$P_n~\prec~A_n~\prec~B_n$$
all in ${\cal B}^*$, with
$$A_n~=~(t_n, a),\qquad B_n~=~(t'_n, b),
\qquad\qquad \bar z_1\leq t_n \leq t'_n \leq z_1^{max}\,.$$
\vskip 1em
{\bf 3.}
Since the branches $\chi^+(A_n)$ are all disjoint, we have
$$\sum_{n\geq 1} |A_n|_\chi~\leq~M~\doteq~\mu({\mathbb R}^2_+).$$
We can thus find $N$ large enough so that
\begin{equation}\label{eN}\varepsilon_N~\doteq~|A_N|_\chi~< ~\bigl(c\,(a-b)\bigr)^{1\over 1-\alpha}.\end{equation}
Consider the modified transport plan $\widetilde\chi$, obtained
from $\chi$ by removing all particles that go through the point $B_N$.
More precisely,
$\widetilde\chi$ is the restriction of $\chi$ to the domain
$$\widetilde\Theta~\doteq~\Theta\setminus \{ \xi\,;~~\chi(\xi,\tau)=B_N\quad
\hbox{for some}~\tau\geq 0\}.$$
Let $\widetilde\mu$ be the measure irrigated by $\widetilde\chi$.
Since $\widetilde\mu\leq\mu$, the total amount of sunlight
gathered by the measure $\widetilde\mu$ satisfies
\begin{equation}\label{e1}{\cal S}^{\bf n}(\mu)-{\cal S}^{\bf n}(\widetilde \mu)~\leq~(\mu-\widetilde\mu)({\mathbb R}^2) .\end{equation}
We now estimate the reduction in the transportation cost, achieved by replacing
$\mu$ with $\widetilde \mu$. Since all water particles reaching $B_N$ must pass through
$A_N$, they must cover a distance $\geq |B_N-A_N|\geq a-b$ traveling
along a path whose maximum flux is $\leq \varepsilon_N$.
The difference in the transportation costs can thus be estimated by
\begin{equation}\label{e2}{\cal I}^\alpha( \mu)-{\cal I}^\alpha(\widetilde \mu)~\geq~
(a-b)\cdot \varepsilon_N^{\alpha-1}\cdot (\mu-\widetilde\mu)({\mathbb R}^2).\end{equation}
If (\ref{eN}) holds, then $c\,(a-b)\cdot \varepsilon_N^{\alpha-1}>1$; hence, combining (\ref{e1})-(\ref{e2}) we obtain
$${\cal S}^{\bf n}(\mu) -c\,{\cal I}^\alpha( \mu)~<~{\cal S}^{\bf n}(\widetilde\mu) -c\,{\cal I}^\alpha( \widetilde\mu).$$
Hence the measure $\mu$ is not optimal. This contradiction proves the lemma.
\hphantom{MM}\hfill\llap{$\square$}\goodbreak
\vskip 1em
By the previous result, the graph of $\varphi$ is contained in one single maximal trajectory of the transport plan $\chi$. As in Figure~\ref{f:ir126}, we
let $s\mapsto \gamma(s)$ be the arc-length parameterization of this curve,
which provides the
left boundary of the set ${\cal B}^*$.
\begin{figure}[ht]
\centerline{\hbox{\includegraphics[width=12cm]{ir126-eps-converted-to.pdf}}}
\caption{\small The thick portions of the curve $\gamma$ are the only
points where a left bifurcation can occur.
If a horizontal branch $\sigma$ bifurcates from $C_j$, all the mass on this
branch can be shifted downward to another branch $\sigma^*$
bifurcating from $C_j^*$.
Furthermore, if some portion of the path $\gamma$ between $P^*$ and $Q$
lies above the segment $\gamma^*$ joining these two points, we can take a projection of $\gamma$ on $\gamma^*$.
In both cases, the transportation cost is strictly reduced.
}
\label{f:ir126}
\end{figure}
Along the curve $\gamma$, we now
consider the set of points $C_j = (z_{1,j}, z_{2,j})$ where some horizontal branch
bifurcates on the left. A property of such points is given below.
\begin{lemma}\label{l:34}
In the above setting, for every $j$, one has
\begin{equation}\label{phij}
\varphi(s)~<~z_{2,j}\qquad\hbox{for all}~ s<z_{1,j}\,.\end{equation}
\end{lemma}
{\bf Proof.}
If (\ref{phij}) fails, there exists another point $C_j^*= (z_{1,j}^*,z_{2,j})$
along the curve $\gamma$, with $z_{1,j}^*<z_{1,j}$.
We can now replace the measure $\mu$ by another measure $\widetilde\mu$ obtained
as follows. All the mass lying on the horizontal half-line
$\{(z_{1,j},s)\,;~~s\geq z_{2,j}\}$ is shifted downward on the half-line
$\{(z_{1,j}^*,s)\,;~~s\geq z_{2,j}\}$.
Since the functional ${\cal S}^{\bf n}$ is invariant under
vertical shifts, we have
${\cal S}^{\bf n}(\widetilde\mu) = {\cal S}^{{\bf n}}(\mu)$. However, the transportation cost
is strictly smaller: ${\cal I}^\alpha(\widetilde\mu) < {\cal I}^\alpha(\mu)$. This contradicts the optimality of $\mu$. \hphantom{MM}\hfill\llap{$\square$}\goodbreak
\vskip 1em
Next, as shown in Fig.~\ref{f:ir126}, we consider a point $P^*= (p_1^*, p_2^*)\in \gamma$ where the component
$z_2$ achieves its maximum, namely
\begin{equation}\label{z22}
p_2^*~=~\max\{z_2\,;~~(z_1, z_2)\in \gamma\}~\geq~0.\end{equation}
Notice that such a maximum exists because $\gamma$ is a continuous curve,
starting at the origin. If this maximum is attained at more than one point,
we choose the one with smallest $z_1$-coordinate, so that
\begin{equation}\label{z11} p_1^* ~=~\min\{z_1\,;~~(z_1, p_2^*)\in \gamma\}.\end{equation}
Moreover, call
$$q_2^*~\doteq~\inf\{ z_2\,;~~(z_1,z_2)\in \hbox{Supp}(\mu)\},$$
and let $Q^*= (q_1^*, q_2^*) \in {\bf S}$ be the point on the ray ${\bf S}$ whose
second coordinate is $q_2^*$. We observe that, by the optimality of the solution,
all paths of the irrigation plan $\chi$ must lie within the convex set
$$\Sigma^*~\doteq~\{(z_1,z_2)\,;~~z_1\in [0, q_1^*],\quad z_2\geq q_2^*\}.$$
Otherwise, calling $\pi^*:{\mathbb R}^2\mapsto \Sigma^*$ the perpendicular projection
on the convex set $\Sigma^*$, the composed plan
$$\chi^*(\xi,t)~\doteq~\pi^*\bigl(\chi(\xi,t)\bigr)$$
would satisfy
$${\cal S}^{\bf n}(\chi^*)~=~{\cal S}^{\bf n}(\chi),\qquad {\cal E}^\alpha(\chi^*)~<~
{\cal E}^\alpha(\chi),$$
contradicting the optimality assumption.
By a projection argument we now show that, in an optimal solution, all the particle paths remain below the
segment $\gamma^*$ with endpoints $P^*$ and $Q^*$.
\begin{lemma}\label{l:35}
In the above setting, let
$$\gamma^*~=~\bigl\{(z_1,z_2)\,;~z_1 = a+bz_2\,,\qquad z_2\in [q_2^*, p_2^*]
\bigr\}$$
be the segment with endpoints $P^*, Q^*$.
If
\begin{equation}\label{oip}(\xi,t)\mapsto \chi(\xi,t)~=~\bigl(z_1(\xi,t), z_2(\xi,t)\bigr)\end{equation}
is an optimal irrigation plan for the problem (\ref{maxb}), then
we have the implication
\begin{equation}\label{below}
z_2(\xi,t)\,\in\, [q_2^*, p_2^*]\qquad\Longrightarrow\qquad z_1(\xi,t)~\leq ~a+b\,z_2(\xi,t).
\end{equation}
\end{lemma}
\vskip 1em
{\bf Proof.} {\bf 1.}
It suffices to show that the maximal curve $\gamma$
lies below $\gamma^*$. If this is not the case,
consider the set of particles which go through the point $P^*$ and
then move to the right of $P^*$, namely
\begin{equation}\label{OS}
\Omega^*~=~\Big\{\xi\in [0,M]\,;~~\chi(\xi,t^*)= P^* ~~\hbox{for some $t^*\geq 0$},
~~~z_2(\xi, t)< p_2^*~~\hbox{for $t>t^*$}\Big\}.\end{equation}
\vskip 1em
{\bf 2.} Consider the convex region below $\gamma^*$, defined by
$$\Sigma~\doteq~\Big\{ (z_1,z_2)\,;~~0\le z_1\leq a+ bz_2\,,\qquad z_2\in
[q_2^*, p_2^*]\Big\}.$$
Let $\pi:{\mathbb R}^2\mapsto\Sigma$ be the perpendicular projection.
Then the irrigation plan
\begin{equation}\label{chid}\chi^\dagger(\xi,t)~\doteq~\left\{\begin{array}{rl} \pi \Big(\chi(\xi,t)\Big)\quad &\hbox{if}~~
\xi\in \Omega^*, ~t>t^*,\\[3mm]
\chi(\xi,t)\quad &\hbox{otherwise,}\end{array}\right.\end{equation}
has total cost strictly smaller than $\chi$.
Indeed, for all $x, \xi,t$ we have
\begin{equation}\label{equal}\bigl|\pi(x)\bigr|_{\chi^\dagger}~\geq~|x|_\chi\,,\qquad
\bigl|\dot \chi^\dagger(\xi,t)\bigr|~\leq~\bigl|\dot \chi(\xi,t)\bigr|.\end{equation}
Notice that, in (\ref{equal}), equality can hold for a.e.~$\xi,t$
only in the case where $\chi=\chi^\dagger$.
\vskip 1em
{\bf 3.} We now observe that the perpendicular projection on $\Sigma$
can decrease the $z_2$-component. As a consequence, the measures $\mu$ and $\mu^\dagger$ irrigated by $\chi$ and $\chi^\dagger$ may have different projections on the $z_2$-axis.
If this happens, we may have ${\cal S}^{\bf n}(\mu)\not={\cal S}^{\bf n}(\mu^\dagger)$.
To address this issue, we observe that all particles $\xi\in \Omega^*$
satisfy
$\chi^\dagger(\xi, t^*)= \chi(\xi,t^*)=P^*$. In terms of the $z_1,z_2$
coordinates, this implies
\begin{equation}\label{zp2}
z_2^\dagger (\xi,t^*) ~=~ z_2(\xi, t^*) ~= ~p_2^*,
\qquad z_2^\dagger(\xi, T(\xi))~ \leq ~z_2(\xi,T(\xi)) ~< ~p_2^*\,.\end{equation}
By continuity, for each $\xi\in\Omega^*$ we can find a stopping
time $\tau(\xi)\in [t^*, T(\xi)]$ such that
$$z_2^\dagger(\xi, \tau (\xi))~=~z_2(\xi,T(\xi)).$$
Call $\widetilde \chi$ the truncated irrigation plan, such that
\begin{equation}\label{cds}\widetilde \chi(\xi,t)~\doteq~
\left\{\begin{array}{cl} \chi^\dagger (\xi,t)\quad &\hbox{if}~~
\xi\in \Omega^*, ~t\leq \tau(\xi),\\[3mm]
\chi(\xi,\tau(\xi))\quad \quad &\hbox{if}~~
\xi\in \Omega^*, ~t\geq \tau(\xi),\\[3mm]
\chi(\xi,t)\quad &\hbox{if}~~\xi\notin\Omega^*.\end{array}\right.\end{equation}
By construction, the measures $\mu$ and $\widetilde \mu$ irrigated by $\chi$ and $\widetilde \chi$ have exactly the same projections on the $z_2$ axis.
Hence ${\cal S}^{\bf n}(\widetilde\mu)~=~{\cal S}^{\bf n}(\mu)$. On the other hand,
the corresponding costs satisfy
$${\cal E}^\alpha(\widetilde\chi)~\leq~{\cal E}^\alpha(\chi^\dagger)~<~{\cal E}^\alpha(\chi).$$
This contradicts optimality, thus proving the lemma.\hphantom{MM}\hfill\llap{$\square$}\goodbreak
\section{Proof of Theorem~\ref{t:1}}
\label{s:4}
\setcounter{equation}{0}
In this section we give a proof of Theorem~\ref{t:1}.
As shown in Fig.~\ref{f:ir126},
let $P^*=(p_1^*, p_2^*)$ be the point defined at (\ref{z22}).
We consider two cases:
\begin{itemize}
\item[(i)] $P^*=0\in {\mathbb R}^2$,
\item[(ii)] $P^*\not= 0$.
\end{itemize}
Assume that case (i) occurs.
Then, by Lemma~\ref{l:34},
the only branch that can bifurcate to the left of $\gamma$
must lie on the $z_2$-axis. Moreover, by Lemma~\ref{l:35},
the path $\gamma$ cannot lie above the segment with endpoints $P^*$, $Q^*$. Therefore, the restriction of the measure $\mu$ to the half space $\{z_2\leq 0\}$
is supported on the line ${\bf S}$.
Combining these two facts we achieve the conclusion of the theorem.
The remainder of the proof will be devoted to showing that the case (ii)
cannot occur, because it would contradict the optimality of the
solution.
\vskip 1em
To illustrate the heart of the matter, we first consider the elementary configuration
shown in Fig.~\ref{f:ir129}, left, where all trajectories are straight lines.
We call $\kappa$ the flux along the segment $P^*Q$
and $\sigma$ the flux along the
horizontal line bifurcating to the left of $P^*$.
As in Fig.~\ref{f:ir129}, right, we then
replace the segments $P P^*$ and $P^* Q$ by a single segment with endpoints $P,Q$.
To fix the ideas, the lengths of these two segments will be denoted by
\begin{equation}\label{lab}\ell_a~=~|P-P^*|,\qquad \qquad \ell_b~=~|Q-P^*|.\end{equation}
The angles between these segments and a horizontal line will be denoted by
$\theta_a,\theta_b$, respectively. Our main assumption is
\begin{equation}\label{abas} 0\,\leq \,\theta_a\,\leq\, {\pi\over 2},\qquad\quad
0\,\leq \,\theta_b\,<\, {\pi\over 2} - \theta_0\,.
\end{equation}
\begin{figure}[ht]
\centerline{\hbox{\includegraphics[width=12cm]{ir129-eps-converted-to.pdf}}}
\caption{\small The basic case: in a neighborhood of $P^*$ the
trajectories are straight lines. To show that the configuration on the left is not optimal, we replace the portion of the trajectory
between $P$ and $Q$ with a single segment. }
\label{f:ir129}
\end{figure}
Having performed this modification, the previous transportation cost along $PP^*$ and $P^*Q$
$$ (\kappa + \sigma)^\alpha \ell_a + \kappa^\alpha \ell_b $$
is replaced by
\begin{equation}\label{new} \kappa^\alpha \sqrt{ \ell_a^2 + \ell_b^2
- 2 \ell_a \ell_b \cos(\theta_a + \theta_b) }+ \sigma^\alpha \ell_a\cos\theta_a \,. \end{equation}
Notice that the last term in (\ref{new}) accounts for the fact that an amount $\sigma$ of particles
needs to cover a longer horizontal distance, reaching $P$ instead of $P^*$.
The difference in the cost is thus expressed by the function
$$ f(\ell_a,\ell_b) ~= ~(\kappa + \sigma)^\alpha \ell_a - \sigma^\alpha \ell_a\cos\theta_a
+ \kappa^\alpha \left[ \ell_b - \sqrt{ \ell_a^2 + \ell_b^2 - 2 \ell_a \ell_b \cos(\theta_a + \theta_b) } \right]. $$
Notice that this function is positively homogeneous of degree 1 w.r.t.~the variables
$\ell_a,\ell_b$.
We observe that, by choosing the angle $\theta_c$ between the
segment $PQ$ and a horizontal line to be just slightly larger than
$\theta_b$, we can render the ratio
$\ell_a/\ell_b$ as small as we like. Taking advantage of this fact, we set
$$\ell_a=\varepsilon\ell,\qquad \ell_b~=~\ell$$
for some $\varepsilon>0$ small.
By the homogeneity of $f$ it follows
$$ f(\varepsilon \ell, \ell)~ = ~\ell \left[ \varepsilon (\kappa + \sigma)^\alpha - \varepsilon \sigma^\alpha \cos\theta_a
+ \kappa^\alpha \Big( 1 - \sqrt{ 1 + \varepsilon^2 - 2 \varepsilon \cos(\theta_a + \theta_b) } \Big) \right].$$
This yields
\begin{equation}\label{pe2} \begin{array}{rl} \displaystyle{d\over d\ell} f(\varepsilon \ell, \ell) &=~ \varepsilon (\kappa + \sigma)^\alpha - \varepsilon \sigma^\alpha \cos\theta_a
+ \kappa^\alpha \Big( 1 - \sqrt{ 1 + \varepsilon^2 - 2 \varepsilon \cos(\theta_a + \theta_b) } \Big) \\[3mm]
&= ~\displaystyle \varepsilon \Big[ (\kappa + \sigma)^\alpha - \sigma^\alpha \cos\theta_a + \kappa^\alpha \cos(\theta_a + \theta_b) + {\cal O}(1)\cdot \varepsilon \Big] .\end{array} \end{equation}
Setting
$$\lambda ~=~{\sigma\over \kappa+\sigma}$$
we now study the function
\begin{equation}\label{Fdef}F(\lambda, \theta_a,\theta_b)~\doteq~1-
\lambda^\alpha \cos\theta_a +(1- \lambda)^\alpha \cos(\theta_a +\theta_b ),\end{equation}
and determine under which conditions on $\theta_b$
this function $F$ remains non-negative for all $\lambda\in [0,1]$, $\theta_a\in [0, \pi/2]$.
\begin{lemma} \label{l:41}
\begin{itemize}
\item[(i)]
For $\alpha\geq 1/2$ and any $\theta_a,\theta_b\in [0, \pi/2]$, we always have
$F(\lambda, \theta_a,\theta_b)\geq 0$.
\item[(ii)] When $0<\alpha<1/2$ we have
$F(\lambda, \theta_a,\theta_b)\geq 0$ for every $\theta_a,\theta_b\in [0,\pi/2]$
provided that $\theta_b$ satisfies the additional bound
\begin{equation}\label{tbb} \cos \theta_b ~\ge ~1- 2^{2\alpha-1}.\end{equation}
\end{itemize}
\end{lemma}
\vskip 1em
{\bf Proof.}
The function $F$ in (\ref{Fdef}) can be written in terms of an inner product:
\begin{equation}\label{fip} \begin{array}{rl} F(\lambda,\theta_a,\theta_b) &=~ 1 - \cos{\theta_a}\left[\lambda^\alpha - (1-\lambda)^\alpha \cos{\theta_b} \right] - \sin{\theta_a} (1-\lambda)^\alpha \sin{\theta_b} \\[4mm]
&= ~1 - \Big\langle \left(\cos{\theta_a}, \,\sin{\theta_a}\right)\,,~\Big( \lambda^\alpha - (1-\lambda)^\alpha \cos{\theta_b}~,~ (1-\lambda)^\alpha \sin{\theta_b} \Big)\Big\rangle.
\end{array}\end{equation}
To prove that $F\geq 0$ it thus suffices to show that the second vector on the right hand side of
(\ref{fip}) has length less than or equal to one. Namely
$$ \lambda^{2\alpha} + (1-\lambda)^{2\alpha}
- 2\lambda^\alpha (1-\lambda)^\alpha \cos{\theta_b} ~\le ~1. $$
This inequality holds provided that
\begin{equation}\label{c3} \cos{\theta_b} ~\ge ~{ \lambda^{2\alpha} + (1-\lambda)^{2\alpha} -1 \over 2 \lambda^\alpha (1-\lambda)^\alpha }\,. \end{equation}
In the case where $\alpha\geq 1/2$ we have
$$\lambda^{2\alpha} + (1-\lambda)^{2\alpha} ~\leq~1\qquad\hbox{for all}~\lambda\in [0,1],$$
hence (\ref{c3}) holds.
To study the case where $\alpha<1/2$,
consider the function
$$ g(\lambda)~ \doteq~ { \lambda^{2\alpha} + (1-\lambda)^{2\alpha} -1\over 2 \lambda^\alpha (1-\lambda)^\alpha }~=~1+ { \bigl( \lambda^{\alpha} - (1-\lambda)^{\alpha}\bigr)^2 -1\over 2 \lambda^\alpha (1-\lambda)^\alpha}\, . $$
We observe that, for $0 \le \alpha \le \tfrac12$, one has
\begin{equation}\label{gp}0~\leq ~g(\lambda)~\leq ~g\Big({1\over 2}\Big) ~=~1- 2^{2\alpha-1} ,\end{equation}
while
$$\lim_{\lambda\to 0+} g(\lambda)~=~\lim_{\lambda\to 1} g(\lambda)~=~0.$$
From (\ref{gp}) it now follows that the condition (\ref{tbb}) guarantees that
(\ref{c3}) holds, hence $F\geq 0$, as required.
\hphantom{MM}\hfill\llap{$\square$}\goodbreak
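\vskip 1em
Although not needed for the proof, Lemma~\ref{l:41} can be sanity-checked by brute
force. The following sketch (our illustration; grid resolutions and the tolerance
are arbitrary) evaluates $F$ on a grid of $(\lambda,\theta_a,\theta_b)$, in case (i)
for $\alpha\in\{0.5,\,0.8\}$ and in case (ii) for $\alpha=0.3$ with $\theta_b$
restricted by (\ref{tbb}).
\begin{verbatim}
import numpy as np

# Brute-force check (illustration only) of the lemma: F >= 0 on a grid,
# unconditionally for alpha >= 1/2, and for alpha < 1/2 under the bound
# cos(theta_b) >= 1 - 2^(2 alpha - 1) of (tbb).

def F(lam, ta, tb, alpha):
    return 1 - lam**alpha * np.cos(ta) + (1 - lam)**alpha * np.cos(ta + tb)

lam = np.linspace(0, 1, 201)[:, None, None]
ta  = np.linspace(0, np.pi / 2, 101)[None, :, None]

for alpha in (0.5, 0.8):                     # case (i): any theta_b
    tb = np.linspace(0, np.pi / 2, 101)[None, None, :]
    assert F(lam, ta, tb, alpha).min() >= -1e-12

alpha = 0.3                                  # case (ii): restricted theta_b
tb_max = np.arccos(1 - 2 ** (2 * alpha - 1))
tb = np.linspace(0, tb_max, 101)[None, None, :]
assert F(lam, ta, tb, alpha).min() >= -1e-12
print("F >= 0 on all grid points")
\end{verbatim}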
\begin{figure}[ht]
\centerline{\hbox{\includegraphics[width=10cm]{ir135-eps-converted-to.pdf}}}
\caption{\small A more general configuration, compared with the one in Fig.~\ref{f:ir129}.
}
\label{f:ir135}
\end{figure}
We now consider the more general configuration shown in Fig.~\ref{f:ir135}.
Water is transported along the path $\gamma$ up to the point $P^*$.
Then the flux is split into a finite number of straight paths.
One goes horizontally to the left, with flux $\sigma\geq 0$. The other
pipes go
to the right, with fluxes $\kappa_1,\ldots,\kappa_n >0$,
at angles
\begin{equation}\label{angles}
0~\leq ~\theta_n~<~\cdots~<~\theta_2~<~\theta_1~< ~{\pi\over 2}-\theta_0.\end{equation}
We compare this configuration with a modified irrigation plan,
where a ``bypass" is inserted along a segment $\widetilde\gamma$ with
endpoints $P$, $P_1$,
at an angle $\beta$ satisfying
\begin{equation}\label{beta} \theta_1~<~\beta~< ~{\pi\over 2}-\theta_0.\end{equation}
In this case, water particles travel along $\gamma$ until they reach $P$.
Then, an amount $\sigma$ of particles bifurcates to the left. All the remaining
particles are transported along the segment $\widetilde\gamma$,
until they reach the points $P_n,\ldots, P_1$ along the old pipes.
The next lemma estimates the saving in the irrigation cost
achieved by inserting the ``bypass" along the segment $PP_1$.
\begin{lemma}
\label{l:42} As in Theorem~\ref{t:1}, assume that either
$1/2\leq\alpha \leq 1$, or else (\ref{bigan}) holds.
In the above setting, one has
\begin{equation}\label{saving}
\hbox{\rm [old cost]} - \hbox{\rm [new cost]}~\geq~|P_1-P^*|\cdot \delta(\theta_1,\kappa),\end{equation}
where $\delta(\theta_1, \kappa)$ is a continuous function,
strictly positive
for $0\leq \theta_1< {\pi\over 2}-\theta_0$ and $\kappa=\kappa_1+\cdots+\kappa_n>0$.
\end{lemma}
{\bf Proof.}
{\bf 1.} As in the previous lemmas, we call $\theta_a$ the
angle between the segment $PP^*$ and a horizontal line.
The difference between the
old cost and the new cost can be expressed as
\begin{equation}\label{onc1} |P - P^*| \left(\sigma + \sum_{j=1}^{n}\kappa_j \right)^\alpha + \sum_{j=1}^{n}\kappa_j^\alpha |P^* - P_j|
- \sigma^\alpha \cos{\theta_a} |P-P^*| - \sum_{j=1}^{n}\left( \sum_{i=1}^{j}\kappa_i \right)^\alpha
|P_{j+1} -P_j|, \end{equation}
where, for notational convenience, we set $P_{n+1} \doteq P$.
According to (\ref{onc1}) we can write
\begin{equation}\label{ABn} \hbox{\rm [old cost]} - \hbox{\rm [new cost]}~=~A+ S_n\,,\end{equation}
where
\begin{equation}\label{Adef} A~\doteq~|P-P^*| \left[ \left(\sigma + \sum_{j=1}^{n}\kappa_j \right)^\alpha - \sigma^\alpha \cos{\theta_a} \right] +
\left( \sum_{j=1}^{n}\kappa_j \right)^\alpha \Big( |P^* - P_1| - |P - P_1| \Big) ,\end{equation}
\begin{equation}\label{Sn}
S_n~\doteq
~\sum_{j=1}^{n}\kappa_j^\alpha |P^* - P_j| -
\Big( \sum_{j=1}^{n}\kappa_j \Big)^\alpha
\Big( |P^* - P_1| - |P_{n+1} - P_1| \Big) - \sum_{j=1}^{n}\Big( \sum_{i=1}^{j}\kappa_i \Big)^\alpha
|P_{j+1} -P_j|.
\end{equation}
\vskip 1em
{\bf 2.}
Notice that the quantity $A$ in (\ref{Adef}) would
describe the difference in the costs
if all the mass $\kappa=\kappa_1+\cdots+\kappa_n$
were flowing through the point $P_1$. Using Lemma \ref{l:41},
we can thus choose $P$ and $P_1$ close enough to $P^*$
such that this difference is strictly positive.
More precisely, for a fixed $\kappa>0$, we claim that one can achieve the lower bound
\begin{equation}\label{lb1}\begin{array}{rl}A&\displaystyle \geq~ |P-P^*| \left[(\sigma + \kappa)^\alpha - \sigma^\alpha \cos{\theta_a} + \kappa^\alpha
\cos(\theta_a + \theta_1) - {\kappa^\alpha \over 2} {|P-P^*| \over |P_1 - P^*|}\, \right]
\\[3mm]
&\geq~|P_1-P^*|\cdot\delta(\theta_1,\kappa)~>~0.\end{array}
\end{equation}
Indeed, the last two terms within the square brackets in (\ref{lb1}) are derived from
$$ \begin{array}{rl} |P^*-P_1| - |P-P_1| &=~\displaystyle |P^*-P_1|\left[1 - \sqrt{1 - 2{|P-P^*| \over |P^*-P_1|}\cos(\theta_a+\theta_1)
+ {|P-P^*|^2 \over |P^*-P_1|^2} } \right] \\
&\geq~\displaystyle |P^*-P_1|\left[1 - \left(1 - {|P-P^*| \over |P^*-P_1|}\cos(\theta_a+\theta_1)
+ {|P-P^*|^2 \over 2|P^*-P_1|^2} \right) \right]. \end{array} $$
Moreover, since we have the strict inequalities
\begin{equation}\label{t1s}\left\{\begin{array}{rll}&\theta_1\,<\,{\pi\over 2} \qquad& \hbox{if}\quad \alpha\geq {1\over 2}\,,\\[4mm]
&\theta_1<{\pi\over 2}-\theta_0\qquad &\hbox{if}\quad \alpha< {1\over 2}\,,\end{array}
\right.\end{equation}
the same argument used in the proof of (\ref{c3}) in Lemma~\ref{l:41} now yields the
strict inequality
\begin{equation}\label{c33} \cos{\theta_1} ~> ~{ \lambda^{2\alpha} + (1-\lambda)^{2\alpha} -1 \over 2 \lambda^\alpha (1-\lambda)^\alpha }\,. \end{equation}
Given $\kappa>0$ and $P_1$,
we can then choose $P$ close enough to $P^*$
so that
\begin{itemize}
\item the term within the square brackets in (\ref{lb1}) is strictly positive,
\item the ratio $|P-P^*|/|P_1-P^*|$ is small but uniformly positive, as long as
$\theta_1$ remains bounded away from ${\pi\over 2}$ or from ${\pi\over 2}-\theta_0$
respectively, in the two cases considered in (\ref{t1s}).
\end{itemize}
This proves our claim (\ref{lb1}).
\vskip 1em
{\bf 3.} To complete the proof of the lemma, it remains to prove that $S_n \ge 0$.
This will be proved by induction on $n$. Starting from (\ref{Sn}) and using the inequalities
$$ |P_n - P_1|~ \leq~ |P^* - P_1|,\qquad\quad \Big( \sum_{i=1}^n\kappa_i \Big)^\alpha~\leq~\kappa_n^\alpha + \Big( \sum_{i=1}^{n-1}\kappa_i \Big)^\alpha,$$
we obtain
\begin{equation}\label{Sn1}
\begin{array}{rl} S_n&=\displaystyle~\sum_{j=1}^{n}\kappa_j^\alpha |P^* - P_j| -\Big( \sum_{j=1}^{n}\kappa_j \Big)^\alpha
\underbrace{\bigl( |P^* - P_1| - |P_n - P_1| \bigr) }_{\ge 0}
- \sum_{j=1}^{n-1}\Big( \sum_{i=1}^{j}\kappa_i \Big)^\alpha |P_{j+1} -P_j|\\[4mm]
&\displaystyle\geq ~\sum_{j=1}^{n-1}\kappa_j^\alpha |P^* - P_j| -\Big( \sum_{j=1}^{n-1}\kappa_j \Big)^\alpha\bigl( |P^* - P_1| - |P_{n-1} - P_1| \bigr)
- \sum_{j=1}^{n-2}\Big( \sum_{i=1}^{j}\kappa_i \Big)^\alpha |P_{j+1} -P_j|
\\[4mm]
&\qquad\displaystyle +\kappa_n^\alpha |P^*-P_n| - \kappa_n^\alpha
\Big( |P^* - P_1|- |P_n - P_1| \Big)+\Big( \sum_{i=1}^{n-1}\kappa_i \Big)^\alpha |P_{n} -P_{n-1}|
\\[4mm]
&= ~\displaystyle S_{n-1} + \kappa_n^\alpha\Big( |P^*-P_n| -|P^* - P_1|+ |P_n - P_1| \Big)
~\geq~S_{n-1}\,.
\end{array} \end{equation}
Repeating this same argument, by induction we obtain
$$S_n~\geq~S_{n-1}~\geq~\cdots~\geq S_1\,.$$
Observing that
$$ S_1 ~=~ \kappa_1^\alpha |P^* - P_1| - \kappa_1^\alpha \Big( |P^* - P_1| - |P_2 - P_1| \Big)- \kappa_1^\alpha
|P_2 - P_1|~ =~ 0, $$
we complete the proof of the lemma. \hphantom{MM}\hfill\llap{$\square$}\goodbreak
\vskip 1em
We now consider the most general situation, shown in
Fig.~\ref{f:ir131}. In contrast with the setting of Lemma~\ref{l:42}, several additional
scenarios must be considered.
\begin{itemize}
\item In addition to the horizontal path bifurcating to the left of $P^*$ with flux $\sigma$, there can be
countably many additional horizontal branches bifurcating
to the left of $\gamma$, below $P^*$.
We shall denote by $\sigma_n$, $n\geq 1$, the fluxes through these branches, at the bifurcation points.
\item There can be countably many distinct branches bifurcating
to the right of $P^*$, say with fluxes $\kappa^*_j$, $j\geq 1$.
\item Furthermore, there can be countably many additional branches
bifurcating to the right of $\gamma$, at points close to $P^*$.
We shall denote by $\kappa'_i$, $i\geq 1$, the fluxes through these branches, at the bifurcation points.
\item Finally, the measure $\mu$ could concentrate a positive mass
along the arc $PP^*$.
\end{itemize}
We observe that, by optimality, all particle trajectories to the right of $\gamma$
move rightward and downward. Namely, setting
$\chi(\xi,t)= (z_1(\xi,t), \, z_2(\xi,t))$, for these paths we have
$$\dot z_1(\xi,t)\,\geq\,0,\qquad \dot z_2(\xi,t)\,\leq\,0.$$
We now construct a ``bypass",
choosing a segment $PQ$ with endpoints
both lying on the curve $\gamma$, making an angle $\beta$ with the horizontal
direction such that
\begin{equation}\label{bb*}\beta^*~<~\beta~<~{\pi\over 2} - \theta_0\,.\end{equation}
Here $\beta^*$ denotes the angle between the segment $P^*Q^*$ and a horizontal line.
Given $\varepsilon>0$,
we can choose $N\geq 1$ large enough so that, among the
branches bifurcating from $P^*$, one has
\begin{equation}\label{sm1} \sum_{j>N} \kappa^*_j~<~\varepsilon.\end{equation}
Moreover, by choosing $Q$ sufficiently close to $P^*$, the following can be achieved:
\begin{itemize}
\item[(i)] The total flux along the horizontal branches bifurcating
to the left of $\gamma$ below $P^*$ satisfies
\begin{equation}\label{sm2}\sum_{n\geq 1} \sigma_n~<~\varepsilon.\end{equation}
\item[(ii)] The total flux along the branches bifurcating
to the right of $\gamma$ between $P$ and $P^*$, and between
$P^*$ and $Q$ satisfies
\begin{equation}\label{sm3}\sum_{i\geq 1} \kappa'_i~<~\varepsilon.\end{equation}
\item[(iii)] For each $j=1,\ldots,N$, there exists a path $\gamma_j$ connecting
$P^*$ with a point $P_j$ on the segment $PQ$, along which
the flux remains $\geq \kappa_j~\geq ~\kappa_j^*-(\varepsilon/N)$.
Here we denote by $\kappa_j$ the flux reaching $P_j$.
In other words, even if the $j$-th branch through $P^*$ further bifurcates,
most of the particles along this branch cross the segment $PQ$ at the same
point $P_j$.
\item[(iv)] The total mass of $\mu$ along $\gamma$, between
$P$ and $P^*$ is $<\varepsilon$.
\end{itemize}
\begin{figure}[ht]
\centerline{\hbox{\includegraphics[width=10cm]{ir131-eps-converted-to.pdf}}}
\caption{\small In the fully general situation, we have additional branches bifurcating
to the left of $\gamma$ between $P$ and $P^*$, and to the right of $\gamma$
at any point between $P$ and $Q$. In addition, there can be an additional absolutely
continuous source along the arc $PP^*$.}
\label{f:ir131}
\end{figure}
We now estimate the additional cost produced by these extra
branches in the modified plan.
Call $P=(p_1, p_2)$, $Q=(q_1, q_2)$.
\begin{itemize}
\item The additional mass on the left branches, together with the mass
of $\mu$ present between $P$ and $P^*$, now travels along
a horizontal line through $P$. By (i) and (iv) this mass is $<2\varepsilon$.
Hence:
\begin{equation}\label{e11}\hbox{[additional cost]} ~\leq~(2\varepsilon)^{1-\alpha} (z_2^*- z_2).\end{equation}
\item
The additional mass bifurcating to the right of $\gamma$, not crossing
the segment $PQ$ at one of the finitely many points
$P_1,\ldots, P_N$, is $< 3\varepsilon$. The additional
cost in transporting this mass from $P$ to some point between $P$ and $Q$
satisfies
\begin{equation}\label{e22}\hbox{[additional cost]} ~< \kappa_0^{\alpha-1} \cdot 3\varepsilon |P-Q|.
\end{equation}
\end{itemize}
We now use Lemma~\ref{l:42}. Combining (\ref{saving}) with
(\ref{e11})-(\ref{e22}) we obtain
\begin{equation}\label{sav}
\hbox{\rm [old cost]} - \hbox{\rm [new cost]}~\geq~|P_1-P^*|\cdot \delta(\theta_1,\kappa) -(2\varepsilon)^{1-\alpha} |P-P^*| -\kappa_0^{\alpha-1} \cdot 3\varepsilon |P-Q|. \end{equation}
By choosing $\varepsilon>0$ small enough, the right hand side of (\ref{sav}) is strictly positive. Hence the configuration with $P^*\not= 0$ is not optimal. This completes the proof
of Theorem~\ref{t:1}.
\hphantom{MM}\hfill\llap{$\square$}\goodbreak
\section{The case $d=2$, $\alpha=0$}
\label{s:6}
\setcounter{equation}{0}
We give here a proof of Theorem~\ref{t:2}.
\vskip 1em
{\bf 1.} Assume that there exists a unit vector ${\bf w}^*\in {\mathbb R}^2$ such that
$$K~=~\int_{{\bf n}\in S^1} \Big|
\langle {\bf w}^*, {\bf n}\rangle\Big|\, \eta({\bf n})\, d{\bf n}~>~c.$$
Let ${\bf v}= (\cos \beta, \sin\beta)$ be a unit vector perpendicular to ${\bf w}^*$,
with $\beta\in[0,\pi]$.
Let $\mu$ be the measure supported on the segment
$\{r{\bf v}\,;~r\in [0,\ell]\}$, with constant density $\lambda$ w.r.t.~1-dimensional Lebesgue
measure.
Then the payoff achieved by $\mu$ is estimated by
\begin{equation}\label{po8}\begin{array}{rl}\displaystyle{\cal S}^\eta(\mu)-c{\cal I}^0(\mu)&=~\displaystyle
\ell\cdot \int_{S^1} \left( 1- \exp\Big\{ - {\lambda\over \bigl|\langle {\bf w}^*, {\bf n}\rangle
\bigr|}\Big\}\right)\, \Big|
\langle {\bf w}^*, {\bf n}\rangle\Big|\, \eta({\bf n})\, d{\bf n} - c\,\ell\\[4mm]
&\displaystyle\geq~
\ell\cdot (1-e^{-\lambda})\, \int_{S^1}\Big| \langle {\bf w}^*, {\bf n}\rangle
\Big|\, \eta({\bf n})\, d{\bf n} - c\,\ell
\\[4mm]
&=~\Big[(1-e^{-\lambda})\,K- c\Big]\,\ell.\end{array}\end{equation}
By choosing $\lambda>0$ large enough, the first factor on the right hand side of
(\ref{po8}) is strictly positive. Hence, by increasing the length $\ell$, we can render
the payoff arbitrarily large.
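In concrete terms (with illustrative numbers of our own choosing, e.g.~the value
$K=2$ of the earlier sketch and $c=1.5$), the lower bound in (\ref{po8}) becomes
positive once $\lambda$ is moderately large, after which the payoff grows without
bound in $\ell$:
\begin{verbatim}
import numpy as np

# Illustration (ours) of step 1: the lower bound
# [(1 - e^{-lambda}) K - c] * ell from (po8) is positive for large lambda
# whenever K > c; the values of K and c below are illustrative.
K, c = 2.0, 1.5
for lam in (0.5, 2.0, 8.0):
    coeff = (1 - np.exp(-lam)) * K - c
    print(f"lambda = {lam}: payoff >= {coeff:+.3f} * ell")
\end{verbatim}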
\vskip 1em
{\bf 2.} Next, assume that $K\leq c$.
Consider any Lipschitz curve $s\mapsto \gamma(s)$, parameterized by
arc-length $s\in [0,\ell]$. Then, for any measure $\mu$ supported on $\gamma$,
the total amount of sunlight from the direction ${\bf n}$ captured by $\mu$
satisfies the estimate
$${\cal S}^{\bf n}(\mu)~\leq~\int_0^\ell
\Big|\langle \dot\gamma(s)^\perp ,\, {\bf n}\rangle\Big|\, ds.$$
Indeed, it is bounded by the length of the projection of $\gamma$ on the
line $E_{\bf n}^\perp$ perpendicular to ${\bf n}$.
Integrating over the various sunlight directions, one obtains
$${\cal S}^\eta(\mu)~\leq~\int_0^\ell \int_{S^1}
\Big|\langle \dot\gamma(s)^\perp ,\, {\bf n}\rangle\Big|\,\eta({\bf n})\, d{\bf n}\, ds~\leq~K\,\ell.$$
More generally, $\mu= \sum_i \mu_i$ can be the sum of countably many
measures supported on Lipschitz curves $\gamma_i$.
In this case, since the sunlight functional is sub-additive, one has
$${\cal S}^\eta(\mu)~\leq~\sum_i {\cal S}^\eta(\mu_i)~\leq~\sum_i K \ell_i\,.$$
Hence
$${\cal S}^\eta(\mu)-c{\cal I}^0(\mu)~\leq~\sum_i K\ell_i - c \sum_i\ell_i~\leq~0.$$
This concludes the proof of case (ii), and hence of Theorem~\ref{t:2}.
\hphantom{MM}\hfill\llap{$\square$}\goodbreak
\vskip 1em
{\bf Acknowledgments.}
The research of the first author was partially supported by NSF grant
DMS-1714237, ``Models of controlled biological growth''.
The research of the second author was partially supported by a grant from the
U.S.-Norway Fulbright Foundation.
\vskip 1em